Releases: buoyant-data/hotdog
Release v1.2.4
Remove the hard-coded internal buffering limit. This allows users to specify a flush interval, and as long as they have enough memory, hotdog will buffer as much as possible internally!
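A minimal sketch of what interval-only flushing looks like, assuming a tokio runtime; `flush_to_sink` and the message type are hypothetical stand-ins for hotdog's internals:

```rust
use std::time::Duration;
use tokio::sync::mpsc;

// Sketch: buffer messages without an internal cap and flush on a timer.
// Only the flush interval (and available memory) bounds the buffer.
async fn flush_loop(mut rx: mpsc::UnboundedReceiver<String>, flush_interval: Duration) {
    let mut buffer: Vec<String> = Vec::new();
    let mut ticker = tokio::time::interval(flush_interval);
    loop {
        tokio::select! {
            Some(line) = rx.recv() => buffer.push(line),
            _ = ticker.tick() => {
                if !buffer.is_empty() {
                    // Drain everything accumulated since the last tick.
                    flush_to_sink(buffer.split_off(0)).await;
                }
            }
        }
    }
}

async fn flush_to_sink(_batch: Vec<String>) { /* hypothetical sink write */ }
```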
Release v1.2.3
Correctly handle writing larger batch sizes for the parquet sink. Inside Apache Arrow (Rust) there is a default batch size of 1024 rows, which was clipping the amount of data being decoded when flushing massive parquet buffers.
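For reference, the knob involved, sketched with arrow-json's `ReaderBuilder` (an assumption about which decoder feeds the sink): decoded batches are capped at `batch_size` rows, so the 1024-row default has to be raised to drain a large flush in one batch.

```rust
use std::sync::Arc;
use arrow_json::ReaderBuilder;
use arrow_schema::{DataType, Field, Schema};

// Sketch: raise the decoder's batch size to match the number of rows
// being flushed, instead of relying on the 1024-row default.
fn build_decoder(rows_to_flush: usize) -> arrow_json::reader::Decoder {
    let schema = Arc::new(Schema::new(vec![Field::new(
        "msg",
        DataType::Utf8,
        true,
    )]));
    ReaderBuilder::new(schema)
        .with_batch_size(rows_to_flush) // override the 1024-row default
        .build_decoder()
        .expect("valid schema")
}
```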
Release v1.2.2
Remove the bounded channel for queueing into the parquet sink
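A hedged sketch of the shape of this change, assuming tokio mpsc channels (hotdog's actual channel types may differ): a bounded channel applies backpressure and can stall producers when the parquet sink falls behind, while an unbounded one never blocks a send and leaves memory as the only limit.

```rust
use tokio::sync::mpsc;

// Before (sketch): a bounded queue; `send` awaits when the sink lags,
// applying backpressure to producers.
// let (tx, rx) = mpsc::channel::<Vec<u8>>(1024);

// After (sketch): an unbounded queue into the parquet sink; sends never
// block, and memory is the only limit.
fn queue() -> (mpsc::UnboundedSender<Vec<u8>>, mpsc::UnboundedReceiver<Vec<u8>>) {
    mpsc::unbounded_channel::<Vec<u8>>()
}
```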
Release v1.2.1
Make failing to infer the schema non-fatal. If a schema cannot be inferred it's important to log the error, but crashing the entire process is not ideal.
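In spirit, the fix turns a hard failure into a logged, recoverable one; a sketch where `infer_schema` is a hypothetical stand-in:

```rust
// Sketch: treat schema inference failure as a logged, recoverable event
// rather than a reason to crash. `infer_schema` is a hypothetical helper.
fn handle_message(payload: &str) {
    match infer_schema(payload) {
        Ok(schema) => {
            // Proceed to write with the inferred schema.
            let _ = schema;
        }
        Err(e) => {
            // Log and skip the message instead of propagating a fatal error.
            eprintln!("failed to infer schema, skipping message: {e}");
        }
    }
}

fn infer_schema(_payload: &str) -> Result<String, String> {
    Err("unparseable".into())
}
```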
Release v1.2.0
Properly flush and exit on Ctrl-C. Fixes #60.
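The shape of the fix, sketched with tokio's Ctrl-C handling (assuming a tokio runtime); the point is that buffers get flushed before the process exits:

```rust
// Sketch: wait for Ctrl-C, flush, then exit cleanly instead of dying
// with data still buffered. `flush_all_sinks` is hypothetical.
#[tokio::main]
async fn main() {
    // ... spawn sinks and consumers here ...

    tokio::signal::ctrl_c()
        .await
        .expect("failed to install Ctrl-C handler");

    flush_all_sinks().await; // drain pending batches to the sinks
    std::process::exit(0);
}

async fn flush_all_sinks() { /* hypothetical flush of pending batches */ }
```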
Release v1.1.0
Add support for defining schemas to be used by the sinks. Right now the Kafka sink does not support the use of the defined schemas, but these allow for defining valid/acceptable schemas up front for data written to specific topics. What this will _not_ do, however, is any form of type coercion! Make sure the schemas are the right types for the data coming in!
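To illustrate the no-coercion point, a hypothetical strict check: a JSON string holding "42" fails validation against an integer field rather than being parsed into one.

```rust
use serde_json::Value;

// Sketch of strict validation with no type coercion: the value must
// already be the declared type. Type names here are hypothetical.
fn validate_no_coercion(value: &Value, declared: &str) -> Result<(), String> {
    let ok = match declared {
        "string" => value.is_string(),
        "integer" => value.is_i64(),
        "boolean" => value.is_boolean(),
        other => return Err(format!("unknown declared type: {other}")),
    };
    if ok {
        Ok(())
    } else {
        // A string "42" fails an "integer" field; nothing coerces it.
        Err(format!("value {value} does not match declared type {declared}"))
    }
}
```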
Release v1.0.2
Lowercase options before they're passed through to object store. This also introduces the S3_OUTPUT_URL environment variable.
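A sketch of the lowercasing, assuming the Rust object_store crate's URL/option parsing (which matches config keys such as aws_access_key_id in their lowercase form); the option source is hypothetical:

```rust
use object_store::parse_url_opts;
use url::Url;

// Sketch: lowercase option keys before handing them to object_store,
// which expects keys like "aws_access_key_id" in lowercase.
fn build_store(
    output_url: &str, // e.g. the value of S3_OUTPUT_URL
    options: Vec<(String, String)>,
) -> object_store::Result<Box<dyn object_store::ObjectStore>> {
    let url = Url::parse(output_url).expect("valid URL");
    let lowered = options.into_iter().map(|(k, v)| (k.to_lowercase(), v));
    let (store, _path) = parse_url_opts(&url, lowered)?;
    Ok(store)
}
```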
Release v1.0.1
Minor updates with performance improvements
Release v1.0.0
Add parquet support, yay!
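For context, the minimal shape of a parquet write in Rust using the parquet crate's ArrowWriter; a generic sketch, not hotdog's actual sink code:

```rust
use std::fs::File;
use std::sync::Arc;
use arrow_array::{ArrayRef, RecordBatch, StringArray};
use parquet::arrow::ArrowWriter;

// Generic sketch of a parquet write with the Rust parquet crate; hotdog's
// sink is more involved, but this is the core API in play.
fn write_parquet() -> Result<(), Box<dyn std::error::Error>> {
    let col: ArrayRef = Arc::new(StringArray::from(vec!["hello", "hotdog"]));
    let batch = RecordBatch::try_from_iter(vec![("msg", col)])?;

    let file = File::create("out.parquet")?;
    let mut writer = ArrowWriter::try_new(file, batch.schema(), None)?;
    writer.write(&batch)?;
    writer.close()?; // writes the parquet footer
    Ok(())
}
```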
Release v0.5.1
Denote that simd_json is unsafe