
Image knowledge base error: code=105000030 message=parser parse failed: parse document failed, err: [ParseImage] model generate failed: error during Chat request: 400 Bad Request: illegal base64 data at input byte 4 #2491

@parkerisme

Description


The text knowledge base works fine, but the image knowledge base reports the following error:
code=105000030 message=parser parse failed: parse document failed, err: [ParseImage] model generate failed: error during Chat request: 400 Bad Request: illegal base64 data at input byte 4
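For what it's worth, `illegal base64 data at input byte 4` is the error Go's `encoding/base64` (which Coze Studio's backend and Ollama both use) produces when the image string starts with a `data:` URL scheme: the `:` at offset 4 of `data:image/...;base64,...` is not a base64 character. This is a guess at the cause, not confirmed from the report; a minimal sketch of that failure mode and the usual fix (stripping the prefix before sending the image):

```python
import base64
import binascii

payload = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode()  # valid base64 image bytes
data_url = "data:image/png;base64," + payload              # same bytes as a data URL

# A strict decoder (like Go's encoding/base64) rejects the data URL:
# the ':' at offset 4 ("data:") is not in the base64 alphabet.
try:
    base64.b64decode(data_url, validate=True)
    raise AssertionError("expected the data URL to be rejected")
except binascii.Error:
    pass

# Stripping everything up to the first ',' leaves decodable base64 again.
stripped = data_url.split(",", 1)[1]
assert base64.b64decode(stripped, validate=True) == b"\x89PNG\r\n\x1a\n"
```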

Key configuration:

Settings for Model

Model for agent & workflow

add suffix number to add different models

export MODEL_PROTOCOL_0="ollama" # protocol
export MODEL_OPENCOZE_ID_0="100001" # id for record
export MODEL_NAME_0="llava:7b" # model name for show
export MODEL_ID_0="llava:7b" # model name for connection
export MODEL_API_KEY_0="" # model api key
export MODEL_BASE_URL_0="http://host.docker.internal:11434" # model base url
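To rule Coze Studio in or out, the llava endpoint can be exercised directly. A sketch of the request Ollama's `/api/chat` expects (URL and model name mirror the config above); the key point is that `images` takes bare base64 strings, with no `data:` prefix:

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://host.docker.internal:11434/api/chat"  # from MODEL_BASE_URL_0
MODEL = "llava:7b"                                         # from MODEL_ID_0

def build_chat_payload(image_bytes: bytes, prompt: str) -> dict:
    """Build an Ollama /api/chat request body; `images` must be bare base64."""
    return {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": prompt,
            # Raw base64 only -- a "data:image/...;base64," prefix here
            # yields "illegal base64 data" on the Ollama side.
            "images": [base64.b64encode(image_bytes).decode()],
        }],
        "stream": False,
    }

def send(image_path: str) -> str:
    """Send a probe request to the running Ollama server (not called here)."""
    with open(image_path, "rb") as f:
        body = json.dumps(build_chat_payload(f.read(), "Describe this image.")).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req).read().decode()
```

If this request succeeds outside Coze Studio, the problem is in how the image is encoded before it reaches the model, not in the Ollama setup itself.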

Model for knowledge nl2sql, messages2query (rewrite), image annotation, workflow knowledge recall

add prefix to assign specific model, downgrade to default config when prefix is not configured:

1. nl2sql: NL2SQL_ (e.g. NL2SQL_BUILTIN_CM_TYPE)

2. messages2query: M2Q_ (e.g. M2Q_BUILTIN_CM_TYPE)

3. image annotation: IA_ (e.g. IA_BUILTIN_CM_TYPE)

4. workflow knowledge recall: WKR_ (e.g. WKR_BUILTIN_CM_TYPE)

supported chat model type: openai / ark / deepseek / ollama / qwen / gemini

export BUILTIN_CM_TYPE="ollama"

Explicitly assign Ollama to the image annotation task

export IA_BUILTIN_CM_TYPE="ollama"
export IA_BUILTIN_CM_OLLAMA_BASE_URL="http://host.docker.internal:11434" # replace with your own host IP
export IA_BUILTIN_CM_OLLAMA_MODEL="llava:7b"

type ollama

export BUILTIN_CM_OLLAMA_BASE_URL="http://host.docker.internal:11434"
export BUILTIN_CM_OLLAMA_MODEL="llava:7b"
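The prefix fallback described above (an `IA_BUILTIN_CM_*` variable wins, otherwise the bare `BUILTIN_CM_*` default applies) can be sanity-checked. A sketch of that resolution logic, assuming a simple "prefixed key if set, else bare key" lookup (an assumption about the mechanism, not taken from Coze Studio's source):

```python
import os

def resolve(key: str, prefix: str = "", default=None):
    """Return the prefixed env var if set, else the bare one, else default."""
    return os.environ.get(prefix + key, os.environ.get(key, default))

# Mirroring the exports above:
os.environ["BUILTIN_CM_TYPE"] = "ollama"
os.environ["IA_BUILTIN_CM_TYPE"] = "ollama"

assert resolve("BUILTIN_CM_TYPE", "IA_") == "ollama"
# nl2sql has no NL2SQL_ override here, so it falls back to the default:
assert resolve("BUILTIN_CM_TYPE", "NL2SQL_") == "ollama"
```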

Settings for Embedding

The Embedding model relied on by knowledge base vectorization does not need to be configured if the vector database comes with built-in Embedding functionality (such as VikingDB). Currently, Coze Studio supports four access methods: openai, ark, ollama, and custom http; choose one of them when configuring.

embedding type: ark / openai / ollama / gemini / http

export EMBEDDING_TYPE="ollama"
export EMBEDDING_MAX_BATCH_SIZE=100

ollama embedding

export OLLAMA_EMBEDDING_BASE_URL="http://host.docker.internal:11434" # (string, required) Ollama embedding base_url
export OLLAMA_EMBEDDING_MODEL="bge-m3" # (string, required) Ollama embedding model
export OLLAMA_EMBEDDING_DIMS="1024" # (int, required) Ollama embedding dimensions
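A mismatch between `OLLAMA_EMBEDDING_DIMS` and the model's actual output size is another common misconfiguration, so it is worth verifying against the running server. A hedged sketch using Ollama's `/api/embeddings` endpoint (URL, model, and dims taken from the config above):

```python
import json
import urllib.request

BASE_URL = "http://host.docker.internal:11434"  # OLLAMA_EMBEDDING_BASE_URL
MODEL = "bge-m3"                                # OLLAMA_EMBEDDING_MODEL
EXPECTED_DIMS = 1024                            # OLLAMA_EMBEDDING_DIMS

def check_dims(embedding: list, expected: int) -> bool:
    """True when the returned vector length matches the configured dims."""
    return len(embedding) == expected

def probe() -> bool:
    """Ask the live server for one embedding and compare (not called here)."""
    body = json.dumps({"model": MODEL, "prompt": "dimension probe"}).encode()
    req = urllib.request.Request(BASE_URL + "/api/embeddings", data=body,
                                 headers={"Content-Type": "application/json"})
    vec = json.loads(urllib.request.urlopen(req).read())["embedding"]
    return check_dims(vec, EXPECTED_DIMS)
```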
