This repository was archived by the owner on Mar 1, 2023. It is now read-only.

Sometimes a return prompt from RAVEN appears to exceed the context window of GPT-3 #1

@Adrian-1234

Description


I occasionally receive this error:

GPT3 error: This model's maximum context length is 4097 tokens, however you requested 5513 tokens (4513 in your prompt; 1000 for the completion). Please reduce your prompt; or completion length.
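One way to avoid this is to count the prompt's tokens before calling the API and trim it so that prompt + completion fits inside the 4097-token window. A minimal sketch in Python, assuming the prompt is assembled as a single string and using the tiktoken package; the 1000-token completion budget and the model name are taken from the error above, and `trim_prompt` is a hypothetical helper, not part of RAVEN:

```python
import tiktoken

# Assumed values, taken from the error message above.
MAX_CONTEXT = 4097        # model's maximum context length
COMPLETION_TOKENS = 1000  # tokens reserved for the completion

def trim_prompt(prompt: str, model: str = "text-davinci-003") -> str:
    """Trim the prompt so prompt + completion fits in the context window."""
    enc = tiktoken.encoding_for_model(model)
    budget = MAX_CONTEXT - COMPLETION_TOKENS
    tokens = enc.encode(prompt)
    if len(tokens) <= budget:
        return prompt
    # Keep the most recent tokens (the tail of the prompt).
    return enc.decode(tokens[-budget:])
```

Keeping the tail rather than the head preserves the most recent conversation context, which is usually what a RAVEN-style loop needs; dropping older material first is an assumption about what matters least here.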
