Description
Registry
Version: 2.4.7
Persistence type: sql
Environment
We are trying to replace Confluent Schema Registry with Apicurio Registry in a setup that uses Confluent's Amazon S3 Sink Connector plugin, in an environment where multiple versions of the same Avro schema appear on the Kafka topic.
To cope with this, we set the following S3 Connector property:
schema.compatibility=BACKWARD
With this setting, the S3 Sink uses schema evolution and writes the data according to the latest of the schemas. Without it, the sink writes one record, closes the file, writes the next record, closes the file, and so on, which is really bad from both a performance and a cost perspective.
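For context, here is a minimal sketch of the sink side of this setup (topic, bucket, region, and flush size are placeholder values, not our actual configuration):

```properties
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=example-topic
s3.bucket.name=example-bucket
s3.region=eu-west-1
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.avro.AvroFormat
flush.size=10000
# Without this, every schema version change forces a file rotation:
schema.compatibility=BACKWARD
```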
This works flawlessly with Confluent's key and value converters, but with the Apicurio AvroConverter it fails with:
org.apache.kafka.connect.errors.SchemaProjectorException: Schema version required for BACKWARD compatibility
My converter settings are as follows:
value.converter: io.apicurio.registry.utils.converter.AvroConverter
value.converter.apicurio.registry.url: https://schema.example.com/apis/registry/v2
value.converter.apicurio.auth.service.url: ${env:AUTH_SERVICE_URL}
value.converter.apicurio.auth.realm: ${env:AUTH_REALM}
value.converter.apicurio.auth.client.id: ${env:AUTH_CLIENT_ID}
value.converter.apicurio.auth.client.secret: ${env:AUTH_CLIENT_SECRET}
value.converter.apicurio.registry.use-id: contentId
value.converter.apicurio.registry.headers.enabled: false
value.converter.apicurio.registry.as-confluent: true
Digging into the source code of AvroConverter, I don't think it's too much of a guess that the problem is line 89, i.e. this nice little TODO:
Integer version = null; // TODO
A sneak peek at Confluent's AvroConverter shows that it saves the schema version and uses it in the toConnectData function. I'd guess something similar is needed here.
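To sketch what I mean (class and method names below are purely illustrative, not the actual Apicurio or Confluent API): the converter could cache the registry schema version per content/global ID when it first resolves a schema, and then attach that version to the Connect schema it builds in toConnectData, e.g. via Kafka Connect's `SchemaBuilder.version(...)`, instead of leaving it null.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Hypothetical sketch, not actual Apicurio code: cache the registry schema
 * version per content/global ID so that toConnectData() can set it on the
 * Connect schema it returns, rather than leaving version() null.
 */
public class SchemaVersionCache {
    private final Map<Long, Integer> versions = new ConcurrentHashMap<>();
    // The lookup would be a REST call to the registry in a real implementation.
    private final Function<Long, Integer> registryLookup;

    public SchemaVersionCache(Function<Long, Integer> registryLookup) {
        this.registryLookup = registryLookup;
    }

    /** Resolve the version for a registry ID, hitting the registry at most once per ID. */
    public Integer versionFor(long contentId) {
        return versions.computeIfAbsent(contentId, registryLookup);
    }
}
```

With the version cached like this, the converter could pass it to the Connect schema builder, and the S3 sink's BACKWARD compatibility check would have the version it needs to order schemas.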
Pretty big roadblock for us right now :-(