Is it feasible to fine-tune the sapienzanlp/relik-reader-deberta-v3-base-aida model?

I successfully trained the model with microsoft/deberta-v3-base as the transformer backbone. However, when I set the transformer model to sapienzanlp/relik-reader-deberta-v3-base-aida in reader/conf/model/base.yaml, training failed.
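For reference, this is a sketch of the change I attempted in reader/conf/model/base.yaml; the key name (transformer_model) is my assumption based on the working microsoft/deberta-v3-base setup, and the other fields are placeholders for whatever the rest of the config contains:

```yaml
# reader/conf/model/base.yaml (sketch; only the line I changed is shown)
model:
  # Worked: transformer_model: microsoft/deberta-v3-base
  # Failed during training:
  transformer_model: sapienzanlp/relik-reader-deberta-v3-base-aida
```

Is swapping in the already fine-tuned reader checkpoint this way supported, or is there a different intended path for continuing training from it?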