Thank you very much for your work. This benchmark is highly intriguing.
After reading your paper, my understanding is that you intend to withhold the original documentation from the models/agents during the testing phase. Instead, agents are expected to generate and store their own "experiences" during training and retrieve them later.
However, how do you prevent "benchmark hacking" with respect to the granularity of the stored experiences? For instance, would it be permissible for an agent to simply ingest the entire documentation into its experience base? I am curious what constraints you have implemented to ensure the evaluation remains fair.