Describe the bug
First of all: I'm not sure whether this is a bug report or a feature request. The problem we're currently having is that we have multiple versions of an Avro schema in which some type promotion has occurred: in version 1 of the schema a field was an int, while in version 2 it was promoted to a long. The column in the database was likewise changed from Int32 to Int64.
So far so good. Unfortunately, because of some other constraints, we cannot decode with the latest schema; we have to decode with the exact schema the message was produced with. In this example that means the value arrives in the sink connector as an Integer while the column expects a Long. This results in
java.lang.ClassCastException: class java.lang.Integer cannot be cast to class java.lang.Long
in:
clickhouse-kafka-connect/src/main/java/com/clickhouse/kafka/connect/sink/db/ClickHouseWriter.java
Line 680 in d9e5aee
BinaryStreamUtils.writeInt64(stream, (Long) value);
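For illustration, the failing cast and the `Number`-based widening can be reproduced in isolation. The class and method names below are ours, not the connector's; this is only a sketch of the behaviour described above:

```java
public class WideningDemo {
    // What the connector does today: a direct cast that only accepts Long.
    static long strictCast(Object value) {
        return (Long) value; // throws ClassCastException for an Integer
    }

    // The workaround from our fork: widen through the Number interface.
    static long widen(Object value) {
        return ((Number) value).longValue();
    }

    public static void main(String[] args) {
        Object v1Value = Integer.valueOf(42); // decoded with the v1 (int) schema
        try {
            strictCast(v1Value);
        } catch (ClassCastException e) {
            System.out.println("strict cast failed"); // Integer is not a Long
        }
        System.out.println(widen(v1Value)); // prints 42
    }
}
```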
Expected behaviour
We previously set bypassRowBinary to true, so the value is converted to JSON first, which makes it work. We would like to get rid of that, so for now we maintain our own fork of this connector, where we implemented this specific cast as
BinaryStreamUtils.writeInt64(stream, ((Number) value).longValue());
We were wondering whether type promotion/widening could be supported more broadly, maybe even behind some flag? The typical promotion cases (for Avro) are:
- int to long, float, or double
- long to float or double
- float to double
- string to bytes
- bytes to string
For us, at least the numeric cases should be supported.
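A possible shape for the numeric cases is sketched below. The flag name and helper methods are purely hypothetical, not part of the connector's API; the idea is that with widening enabled any boxed `Number` is accepted for a numeric column, while the current strict casts remain the default:

```java
public class NumericWidening {
    // Hypothetical flag mirroring the proposed behaviour; the name is an assumption.
    static boolean allowNumericWidening = true;

    // Convert a decoded Avro value to the long expected by an Int64 column.
    // With widening enabled, an Integer (v1 schema) or Long (v2 schema) both
    // pass; otherwise only an exact Long is accepted, as today.
    static long toInt64(Object value) {
        if (allowNumericWidening && value instanceof Number) {
            return ((Number) value).longValue();
        }
        return (Long) value; // current strict behaviour
    }

    // The same idea for Float64 columns, covering the int/long/float -> double
    // promotions from the Avro list above.
    static double toFloat64(Object value) {
        if (allowNumericWidening && value instanceof Number) {
            return ((Number) value).doubleValue();
        }
        return (Double) value;
    }

    public static void main(String[] args) {
        System.out.println(toInt64(Integer.valueOf(7)));  // prints 7
        System.out.println(toFloat64(Long.valueOf(3L)));  // prints 3.0
    }
}
```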