snappy compression for log files kept in S3 to be processed by Databricks #23950
Unanswered · amershareef0708-netizen asked this question in General · 0 replies
In our Vector configuration, under sinks, we currently have compression: gzip and encoding.codec: json. We want to keep the codec as json but switch the compression to snappy, so that snappy-compressed files land in S3 and are then processed by Databricks. How can we achieve this?
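A minimal sketch of the desired sink configuration, assuming the aws_s3 sink in your Vector version accepts snappy as a compression value (gzip, zstd, and none are the values I can confirm from the docs; snappy support should be verified for your release). Sink name, input name, bucket, and region below are placeholders:

```yaml
sinks:
  s3_logs:                  # placeholder sink name
    type: aws_s3
    inputs:
      - app_logs            # placeholder source/transform name
    bucket: my-log-bucket   # placeholder bucket
    region: us-east-1       # placeholder region
    encoding:
      codec: json           # keep newline-delimited JSON output
    compression: snappy     # desired change from gzip (verify this value is accepted)
```

One caveat worth checking on the Databricks side: Spark's transparent decompression of .snappy files relies on Hadoop's SnappyCodec block format, which is not the same as raw or framed Snappy output from other tools, so a quick spark.read.json test against a sample file is worth doing before switching away from gzip. Giving the objects a file extension that matches the codec Spark expects also helps it pick the right decompressor.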