Description
Hello,
Thanks for developing EcoLogits; it is a very useful piece of software. However, there are use cases where the number of input tokens is highly variable and sometimes much larger than the number of output tokens. For example, in classification tasks with LLMs the context can be very large while the output is only a few tokens, and in RAG the retrieved context added to the prompt can be very large compared to the response. In these situations, it feels like EcoLogits might not yet be adequate for estimating carbon impacts. A minimal sketch of such a call is shown below.
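To make the scenario concrete, here is a rough sketch of how I use EcoLogits today for a classification-style call with a very long prompt and a one-word answer. The `EcoLogits.init()` call and the `response.impacts.*` attribute names are based on my reading of the documentation and may differ in your current version; the file name and model are just placeholders.

```python
# Sketch: classification call with a very large prompt and a tiny completion,
# tracked with EcoLogits. Attribute names below (impacts.energy / impacts.gwp)
# are assumptions based on the docs and may need adjusting per version.
from ecologits import EcoLogits
from openai import OpenAI

EcoLogits.init()  # enable impact tracking on supported providers

client = OpenAI()

# Hypothetical input: a long document (several thousand tokens of context).
with open("report.txt") as f:
    long_document = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Classify the sentiment of the document as "
                       "'positive' or 'negative'. Answer with one word.",
        },
        {"role": "user", "content": long_document},
    ],
)

print(response.usage.prompt_tokens)      # often thousands of input tokens
print(response.usage.completion_tokens)  # typically 1-2 output tokens
print(response.impacts.energy.value)     # estimated energy (kWh), assumed attribute
print(response.impacts.gwp.value)        # estimated GWP (kgCO2eq), assumed attribute
```

With calls like this, the prompt is often orders of magnitude longer than the completion, which is why I am wondering how much of the estimated impact actually reflects the processing of the input.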
Is accounting for input tokens on your roadmap? Did you already collect data regarding the impact of input tokens? Are there specific reasons why this might be difficult (KV cache, different attention mechanisms, model architecture, etc.)?
Thanks again