There is room to optimize how context is parsed and packaged into the prompts sent to the LLMs. In particular, adding large swaths of items to the context can produce prompts that exceed an LLM's token limit. One possibly worthwhile direction is setting up a "context database" and implementing a RAG (retrieval-augmented generation) system that searches it and appends only the data strictly relevant to the user's query, as sketched below.
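
As a rough illustration of that idea, the snippet below retrieves only the most relevant context items and trims them to a token budget before they would be appended to a prompt. It is a minimal sketch, not an implementation plan: the bag-of-words similarity is a stand-in for a real embedding model, the word-count token estimate is a stand-in for a real tokenizer, and the names (`embed`, `retrieve`, `token_budget`) are hypothetical rather than part of the existing codebase.

```python
import math
from collections import Counter

# Toy bag-of-words "embedding"; a real system would use a proper
# embedding model (e.g. a sentence-transformer) here instead.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, context_db: list[str],
             top_k: int = 3, token_budget: int = 512) -> list[str]:
    """Return the context items most relevant to the query,
    capped at top_k items and a rough token budget."""
    q = embed(query)
    ranked = sorted(context_db, key=lambda item: cosine(q, embed(item)),
                    reverse=True)
    selected, used = [], 0
    for item in ranked[:top_k]:
        cost = len(item.split())  # crude token estimate; use a real tokenizer in practice
        if used + cost > token_budget:
            break
        selected.append(item)
        used += cost
    return selected

if __name__ == "__main__":
    db = [
        "The scheduler retries failed jobs up to three times.",
        "User preferences are stored in a JSON config file.",
        "Jobs are queued in priority order before dispatch.",
    ]
    # Only the items most relevant to the query make it into the prompt.
    print(retrieve("How are failed jobs handled?", db, top_k=2))
```

The payoff of this shape is that prompt size grows with the query's needs rather than with the size of the context store, which is what keeps prompts under the model's token limit as the stored context grows.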