Optimizing Context Token Overflows #164

@lk340

Description

There is room to optimize how context is parsed and packaged into the prompts sent to the LLMs. In particular, adding large swaths of items to the context can produce prompts that exceed an LLM's token limit. One worthwhile exploration may be setting up a "context database" and implementing a RAG system that searches it and appends only the data strictly relevant to the user's query, as sketched below.

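A minimal sketch of that retrieval step, assuming context items are plain strings. The bag-of-words scoring, whitespace token estimate, and `build_context` helper are illustrative stand-ins for a real embedding model, vector index, and tokenizer:

```python
import math
from collections import Counter

def score(query: str, item: str) -> float:
    # Cosine similarity over bag-of-words counts. A real system would
    # compare embedding vectors stored in the context database instead.
    q, d = Counter(query.lower().split()), Counter(item.lower().split())
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def build_context(query: str, items: list[str], token_budget: int) -> list[str]:
    # Greedily pack the highest-scoring items until the budget is spent.
    # Token counts are approximated by whitespace splitting; a real
    # tokenizer (e.g. tiktoken for OpenAI models) would be more accurate.
    selected, used = [], 0
    for item in sorted(items, key=lambda it: score(query, it), reverse=True):
        cost = len(item.split())
        if used + cost <= token_budget:
            selected.append(item)
            used += cost
    return selected

# Example: only the relevant item makes it into the prompt.
docs = [
    "The auth service issues JWTs that expire after 24 hours.",
    "Release notes for v2.3: dark mode, faster startup.",
]
print(build_context("why did my auth token expire", docs, token_budget=50))
```

Greedy packing by relevance keeps the prompt under the model's limit while preserving the highest-signal items; a production version would presumably precompute and cache embeddings in the context database rather than re-scoring every item per query.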