KAFKA-20158: Add AggregationWithHeaders, serialization support and tests (1/N) #21511
bbejeck wants to merge 3 commits into apache:trunk from
Conversation
aliehsaeedii left a comment
Thanks @bbejeck. I left some minor comments.
 * <p>
 * This is used by KIP-1271 to deserialize aggregations with headers from session state stores.
 */
class AggregationWithHeadersDeserializer<AGG> implements WrappingNullableDeserializer<AggregationWithHeaders<AGG>, Void, AGG> {
This class is package-private, while AggregationWithHeadersSerializer is public.
Maybe both classes need to be package-private?
    return baos.toByteArray();
} catch (final IOException e) {
    throw new SerializationException("Failed to serialize AggregationWithHeaders", e);
Should we add the topic to the exception message for better debugging?
If so, we should also add it to ValueTimestampHeadersSerializer.
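As a rough illustration of that suggestion, the catch block could mention the topic along these lines. The enclosing class, method, and wire layout below are hypothetical sketches for illustration only, not the PR's actual code:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.kafka.common.errors.SerializationException;

final class TopicAwareSerializationSketch {

    // Illustrative only: writes a length-prefixed headers block followed by the aggregation bytes.
    static byte[] serialize(final String topic, final byte[] rawHeaders, final byte[] rawAggregation) {
        try (final ByteArrayOutputStream baos = new ByteArrayOutputStream();
             final DataOutputStream out = new DataOutputStream(baos)) {
            out.writeInt(rawHeaders.length);
            out.write(rawHeaders);
            out.write(rawAggregation);
            out.flush();
            return baos.toByteArray();
        } catch (final IOException e) {
            // Suggested change: include the topic so failures are easier to trace back.
            throw new SerializationException(
                "Failed to serialize AggregationWithHeaders for topic " + topic, e);
        }
    }
}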
 * @return the byte array containing the read bytes
 * @throws SerializationException if buffer doesn't have enough bytes or length is negative
 */
private static byte[] readBytes(final ByteBuffer buffer, final int length) {
This may not be directly related to this PR, but we could refactor the code so that this method lives in a shared place and can be reused by other classes as well.
Great idea! But we currently don't have a Utils class (at least not that I know of); can we defer this to a follow-up PR and maybe consider other code as well?
 * @throws SerializationException if buffer doesn't have enough bytes or length is negative
 */
private static byte[] readBytes(final ByteBuffer buffer, final int length) {
    if (length < 0) {
Thanks. I like this check. We don't have it in other places.
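For reference, a readBytes helper with the checks discussed above might look roughly like this; the error-message wording and the enclosing class are assumptions, not the PR's exact code:

import java.nio.ByteBuffer;

import org.apache.kafka.common.errors.SerializationException;

final class ReadBytesSketch {

    // Reads exactly `length` bytes from the buffer, rejecting negative lengths
    // and buffers with fewer remaining bytes than requested.
    static byte[] readBytes(final ByteBuffer buffer, final int length) {
        if (length < 0) {
            throw new SerializationException("Invalid negative length: " + length);
        }
        if (buffer.remaining() < length) {
            throw new SerializationException(
                "Buffer has only " + buffer.remaining() + " bytes remaining, but " + length + " were requested");
        }
        final byte[] bytes = new byte[length];
        buffer.get(bytes);
        return bytes;
    }
}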
final ByteBuffer buffer = ByteBuffer.wrap(rawAggregationWithHeaders);
final int headersSize = ByteUtils.readVarint(buffer);
final byte[] rawHeaders = readBytes(buffer, headersSize);
return HEADERS_DESERIALIZER.deserialize("", rawHeaders);
If you rebase the PR, you don't need the first input ("") anymore.
 * Extract aggregation from serialized AggregationWithHeaders.
 */
static <T> T aggregation(final byte[] rawAggregationWithHeaders, final Deserializer<T> deserializer) {
    if (rawAggregationWithHeaders == null) {
Not sure why we need this method, but we can go ahead and remove it if it's not needed.
final byte[] rawAggregation = readBytes(buffer, buffer.remaining());
final AGG aggregation = aggregationDeserializer.deserialize(topic, headers, rawAggregation);

return AggregationWithHeaders.make(aggregation, headers);
Here, if aggregation is null, then make returns null. I think this should not be the desired behaviour. Same for ValueTimestampDeserializer.deserialize. WDYT @frankvicky?
Hmmm...
Makes sense; even if we have a null value after deserialization, the headers might still be meaningful.
We should replace it with makeAllowNullable.
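A hypothetical sketch of the distinction being discussed, where make drops everything when the aggregation is null while makeAllowNullable keeps the possibly meaningful headers; the class shape, factory names, and signatures here are illustrative only, not the actual Streams API:

import org.apache.kafka.common.header.Headers;

final class AggregationWithHeadersSketch<AGG> {
    private final AGG aggregation;
    private final Headers headers;

    private AggregationWithHeadersSketch(final AGG aggregation, final Headers headers) {
        this.aggregation = aggregation;
        this.headers = headers;
    }

    // Returns null for a null aggregation, discarding the headers.
    static <AGG> AggregationWithHeadersSketch<AGG> make(final AGG aggregation, final Headers headers) {
        return aggregation == null ? null : new AggregationWithHeadersSketch<>(aggregation, headers);
    }

    // Keeps the wrapper even when the aggregation is null, so the headers survive.
    static <AGG> AggregationWithHeadersSketch<AGG> makeAllowNullable(final AGG aggregation, final Headers headers) {
        return new AggregationWithHeadersSketch<>(aggregation, headers);
    }
}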
frankvicky left a comment
Overall LGTM.
Could you please rebase on trunk?
There are some changes to the headers serializer/deserializer there.
23ff16b to 3110503
@aliehsaeedii @frankvicky all comments addressed, ready for another review.
This PR introduces AggregationWithHeaders and the serialization support introduced in KIP-1271 for storing session aggregations with headers.
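Based on the deserializer snippets quoted above, the serialized layout appears to be a varint headers length, the headers bytes, then the remaining bytes as the aggregation. A minimal sketch of splitting such a payload, assuming that layout (the helper class and method are illustrative, not part of the PR):

import java.nio.ByteBuffer;

import org.apache.kafka.common.utils.ByteUtils;

final class LayoutSketch {

    // Splits a serialized AggregationWithHeaders payload into its two raw parts.
    static byte[][] splitHeadersAndAggregation(final byte[] rawAggregationWithHeaders) {
        final ByteBuffer buffer = ByteBuffer.wrap(rawAggregationWithHeaders);
        final int headersSize = ByteUtils.readVarint(buffer);      // headers length prefix
        final byte[] rawHeaders = new byte[headersSize];
        buffer.get(rawHeaders);                                    // serialized headers
        final byte[] rawAggregation = new byte[buffer.remaining()];
        buffer.get(rawAggregation);                                // remaining bytes are the aggregation
        return new byte[][] {rawHeaders, rawAggregation};
    }
}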