Hello,
I am currently pursuing my Master's Thesis at Sorbonne University, focusing on Explainable AI in the context of text classification. At present, I am using a Longformer model for text classification, and I am trying to explain its predictions using your explainability technique.
However, I've encountered a roadblock in adapting this code to the Longformer architecture. Specifically, I am seeking guidance on how to modify the existing code to work with the Longformer-based model I am using (https://huggingface.co/abazoge/DrLongformer). If you have any insights on how I can make this adaptation successfully, or if you can suggest alternative attention-based methods for explaining Longformer models, I would greatly appreciate your input.
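For reference, here is a minimal sketch of how I load the model and inspect its attention outputs. The example text is a placeholder, and I am assuming a fine-tuned classification head (loading the base DrLongformer checkpoint directly would give a randomly initialised head). I suspect the shape mismatch shown at the end is where the adaptation breaks:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint: in practice I load my own fine-tuned model;
# the base DrLongformer has no trained classification head.
model_name = "abazoge/DrLongformer"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, output_attentions=True
)
model.eval()

# Placeholder input text (DrLongformer is a French clinical model).
text = "Le patient présente une toux persistante depuis trois semaines."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Unlike BERT, Longformer does not return dense (seq_len x seq_len)
# attention maps. Each layer yields windowed local attention of shape
# (batch, num_heads, seq_len, x + attention_window + 1), plus separate
# global attention for the x globally-attended tokens (by default just
# the CLS token for sequence classification).
print(outputs.attentions[0].shape)
print(outputs.global_attentions[0].shape)
```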
Thank you in advance for any assistance you can provide.