Releases: buoyant-data/hotdog

Release v1.2.4

02 May 17:19

Remove hard-coded internal amount to buffer

This allows users to specify a flush interval, and as long as they have
enough memory, it will buffer as much as possible internally!
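A hypothetical sketch of what this looks like from the configuration side. The key names below are illustrative only and are not taken from hotdog's actual config schema; consult the project's documentation for the real keys.

```yaml
# Illustrative only — key names are hypothetical, not hotdog's real schema.
sinks:
  parquet:
    url: "s3://example-bucket/prefix"
    # With the hard-coded internal buffer cap removed, rows accumulate in
    # memory until the flush interval elapses, bounded only by available RAM.
    flush_interval: 60
```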

Release v1.2.3

02 May 16:36

Correctly handle writing larger batch sizes for the parquet sink

Inside Apache Arrow (Rust) there is a default batch size of 1024 rows,
which was clipping the amount of data being decoded when flushing large
parquet buffers
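The shape of the bug can be sketched in plain Rust, with a chunking iterator standing in for Arrow's record-batch decoder (which yields at most `batch_size` rows per batch). Consuming only one batch clips a large flush to the default 1024 rows; draining every batch writes the whole buffer.

```rust
// Illustrative sketch only: `chunks` stands in for Arrow's record-batch
// decoder, which yields at most BATCH_SIZE rows per batch.
const BATCH_SIZE: usize = 1024; // Arrow's default batch size

/// Split `rows` into decoder-style batches of at most BATCH_SIZE rows.
fn decode_batches(rows: &[u64]) -> impl Iterator<Item = &[u64]> {
    rows.chunks(BATCH_SIZE)
}

fn main() {
    let rows: Vec<u64> = (0..5000).collect();

    // Bug shape: taking only the first batch clips the flush to 1024 rows.
    let clipped = decode_batches(&rows).next().unwrap().len();
    assert_eq!(clipped, 1024);

    // Fix shape: drain every batch so the entire buffer is written out.
    let flushed: usize = decode_batches(&rows).map(|b| b.len()).sum();
    assert_eq!(flushed, 5000);
}
```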

Release v1.2.2

01 May 22:51

Remove the bounded channel for queueing into the parquet sink

Release v1.2.1

01 May 21:45

Make failing to infer the schema non-fatal

If a schema cannot be inferred it's important to log the error, but
crashing the entire process is not ideal
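The failure-handling shape can be sketched like so; `infer_schema` is a hypothetical stand-in for the real inference logic, and the point is matching on the `Result` instead of unwrapping it (which panics and takes the process down):

```rust
// Hypothetical stand-in for real schema inference: succeeds on a JSON
// object, fails on anything else.
fn infer_schema(payload: &str) -> Result<String, String> {
    if payload.trim_start().starts_with('{') {
        Ok("object".to_string())
    } else {
        Err(format!("cannot infer schema for payload: {payload:?}"))
    }
}

fn handle(payload: &str) -> Option<String> {
    match infer_schema(payload) {
        Ok(schema) => Some(schema),
        Err(e) => {
            // Log and keep going, instead of `.unwrap()`, which would
            // abort the whole process on one bad payload.
            eprintln!("schema inference failed: {e}");
            None
        }
    }
}

fn main() {
    assert_eq!(handle(r#"{"name":"hotdog"}"#), Some("object".to_string()));
    assert_eq!(handle("not json"), None); // logged, not fatal
}
```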

Release v1.2.0

29 Apr 18:51

Properly flush and exit on ctrl-c

Fixes #60

Release v1.1.0

25 Apr 20:49

Add support for defining schemas to be used by the sinks

Right now the Kafka sink does not support the use of the defined
schemas, but these allow for defining valid/acceptable schemas up front
for data written to specific topics.

What this will _not_ do however is any form of type coercion! Make sure
the schemas are the right types for the data coming in!
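A minimal sketch of what "no type coercion" means in practice, using hypothetical type and value enums (not hotdog's actual types): a value matches a schema field only if its type already matches, so a numeric string is rejected by an integer field rather than converted.

```rust
// Hypothetical types for illustration — not hotdog's actual schema API.
#[derive(PartialEq)]
enum FieldType { Str, Int }

enum Value { Str(String), Int(i64) }

/// A value is accepted only when its type already matches the schema;
/// no conversion is attempted.
fn matches_schema(expected: &FieldType, value: &Value) -> bool {
    match (expected, value) {
        (FieldType::Str, Value::Str(_)) => true,
        (FieldType::Int, Value::Int(_)) => true,
        _ => false, // no coercion: "42" does not become 42
    }
}

fn main() {
    assert!(matches_schema(&FieldType::Int, &Value::Int(42)));
    // A numeric string is NOT coerced into an integer:
    assert!(!matches_schema(&FieldType::Int, &Value::Str("42".into())));
}
```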

Release v1.0.2

23 Apr 20:59

Lowercase options before they're passed through to object store

This also introduces the S3_OUTPUT_URL environment variable
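The normalization itself is straightforward; a sketch in plain Rust of lowercasing option keys before handing them to an object-store client that expects lowercase configuration keys (the example key is illustrative):

```rust
use std::collections::HashMap;

/// Normalize option keys to lowercase so they match what the object
/// store client expects, regardless of how the user wrote them.
fn lowercase_keys(opts: HashMap<String, String>) -> HashMap<String, String> {
    opts.into_iter()
        .map(|(k, v)| (k.to_lowercase(), v))
        .collect()
}

fn main() {
    let mut opts = HashMap::new();
    // Illustrative key — users often supply env-var style uppercase names.
    opts.insert("AWS_ENDPOINT_URL".to_string(), "http://localhost:9000".to_string());
    let normalized = lowercase_keys(opts);
    assert!(normalized.contains_key("aws_endpoint_url"));
    assert!(!normalized.contains_key("AWS_ENDPOINT_URL"));
}
```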

Release v1.0.1

18 Apr 13:24

Minor updates with performance improvements

Release v1.0.0

14 Apr 21:57

Add parquet support, yay!

Release v0.5.1

26 May 20:03

Denote that simd_json is unsafe