Responses are inaccurate due to high token usage limit #3

@suyash-purwar

Description

Every request has a default upper bound of 1000 tokens. Because of this, the AI model returns extra-long responses even for simple prompts: extra content or multiple answers get appended to prompts for which the user wouldn't expect a long response.

Solution:

Add three flags indicating the expected response length: short, medium, and long. Users can pick whichever flag matches the content length they expect, as in the sketch below.
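A minimal sketch of how the three flags could map to token budgets. The flag names (`--short`, `--medium`, `--long`), the specific token limits, and the helper names are assumptions for illustration only, not part of the project:

```typescript
// Hypothetical flag-to-budget mapping; the project may choose
// different names and limits.
type ResponseLength = "short" | "medium" | "long";

const TOKEN_LIMITS: Record<ResponseLength, number> = {
  short: 150,
  medium: 500,
  long: 1000, // matches the current default upper bound
};

// Parse a length flag like `--short` from the CLI arguments.
function parseLengthFlag(argv: string[]): ResponseLength | undefined {
  if (argv.includes("--short")) return "short";
  if (argv.includes("--medium")) return "medium";
  if (argv.includes("--long")) return "long";
  return undefined;
}

// Fall back to the existing 1000-token default when no flag is given.
function resolveMaxTokens(flag?: ResponseLength): number {
  return flag ? TOKEN_LIMITS[flag] : 1000;
}

const maxTokens = resolveMaxTokens(parseLengthFlag(process.argv.slice(2)));
console.log(`Requesting completion with max_tokens=${maxTokens}`);
```

With this approach, a prompt run with `--short` would be capped well below the current 1000-token ceiling, so easy prompts no longer get padded with extra answers.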

Labels: enhancement (New feature or request)