
Support for input tokens #183

@leokeba

Description

Hello,

Thanks for developing EcoLogits; it is a very useful piece of software. However, there are use cases where the number of input tokens varies widely and can be far higher than the number of output tokens. For example, in classification tasks with LLMs, the context can be very large while the output may be just a few tokens. RAG is another example where the context/prompt can be very large compared to the response. In these situations, it feels like EcoLogits might not yet be adequate for estimating carbon impacts.
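To make the asymmetry concrete, here is a rough back-of-the-envelope sketch using the common ~2 × active-parameters FLOPs-per-token approximation for a forward pass. The 70B parameter count and token counts are made-up illustrative numbers, and the estimate ignores attention cost, KV-cache reuse, batching and hardware utilisation, so it only shows the order of magnitude, not a real impact figure:

```python
# Rough prefill vs. decode compute for a RAG-style request.
# Assumes ~2 * active_parameters FLOPs per token for a forward pass;
# ignores attention cost, KV-cache, batching and hardware efficiency.

ACTIVE_PARAMS = 70e9   # hypothetical 70B dense model
INPUT_TOKENS = 8_000   # large retrieved context (RAG)
OUTPUT_TOKENS = 50     # short answer

prefill_flops = 2 * ACTIVE_PARAMS * INPUT_TOKENS   # processing the prompt
decode_flops = 2 * ACTIVE_PARAMS * OUTPUT_TOKENS   # generating the answer

print(f"prefill: {prefill_flops:.2e} FLOPs")                 # ~1.1e15
print(f"decode:  {decode_flops:.2e} FLOPs")                  # ~7.0e12
print(f"ratio:   {prefill_flops / decode_flops:.0f}x")       # ~160x
```

Under these (very simplified) assumptions, an estimate based only on output tokens would miss the bulk of the compute for this kind of request.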

Is this feature on your roadmap? Have you already collected data regarding the impact of input tokens? Are there specific reasons why this might be difficult (KV-cache, different attention mechanisms, model architecture, etc.)?

Thanks again
