
How do I configure a local Ollama model? #208

Closed
@CH-suping

Description

My locally running Ollama instance: http://127.0.0.1:11434/

.env

export LLM_API_KEY=""

export LLM_API_BASE="https://127.0.0.1:11434"
export PRIMARY_MODEL="Qwen/Qwen2.5"
export SECONDARY_MODEL="Qwen/Qwen2.5"

When I test it, the log shows the following:
INFO:openai._base_client:Retrying request to /chat/completions in 0.488746 seconds
INFO:openai._base_client:Retrying request to /chat/completions in 0.836849 seconds

This is probably a configuration problem. How should it be configured?
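The repeated `Retrying request to /chat/completions` messages suggest the OpenAI client cannot reach the endpoint. Two things stand out in the `.env` above: the base URL uses `https` while a local Ollama server serves plain `http`, and Ollama exposes its OpenAI-compatible API under the `/v1` path. Also, Ollama model names follow its own registry (whatever `ollama list` shows, e.g. `qwen2.5`) rather than the Hugging Face `Qwen/Qwen2.5` form. A sketch of a likely-working `.env`, assuming the model was pulled as `qwen2.5` (the exact tag depends on your local installation):

```shell
# Ollama's OpenAI-compatible endpoint is plain HTTP under /v1
export LLM_API_BASE="http://127.0.0.1:11434/v1"
# Ollama ignores the key, but many OpenAI clients reject an empty value,
# so set any non-empty placeholder
export LLM_API_KEY="ollama"
# Use the model tag as shown by `ollama list` (assumed here to be qwen2.5)
export PRIMARY_MODEL="qwen2.5"
export SECONDARY_MODEL="qwen2.5"
```

You can sanity-check the endpoint independently of the application with `curl http://127.0.0.1:11434/v1/models`, which should list the locally available models if the server is reachable.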
