
Why does mlx_vlm.server load nanoLLaVA-1.5-8bit after running python server.py? #79

@eruca

Description


System Info

M1 macstudio 32G Sequoia 15.7.3

Who can help?

(.venv-mlx) ➜ glm-ocr mlx_vlm.server --trust-remote-code
INFO: Will watch for changes in these directories: ['/Users/nick/Codings/glm-ocr']
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO: Started reloader process [27549] using StatReload
INFO: Started server process [27570]
INFO: Waiting for application startup.
INFO: Application startup complete.
Loading model from: mlx-community/nanoLLaVA-1.5-8bit
Downloading (incomplete total...): 0%| | 0.00/1.12G [00:00<?, ?B/s]
^C Cancellation requested; stopping current tasks. | 0/11 [00:00<?, ?it/s]
Fetching 11 files: 36%|████████████████████████████████████████████████▋ | 4/11 [00:02<00:04, 1.52it/s]
ERROR: Exception in ASGI application
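
From the log above, mlx_vlm.server appears to fall back to mlx-community/nanoLLaVA-1.5-8bit when no model is specified at startup or in the request. A minimal sketch of a client request that pins the model explicitly, assuming an OpenAI-style JSON body (the endpoint path and field names here are assumptions, not taken from the mlx-vlm documentation):

```python
import json

# Address where mlx_vlm.server is listening, per the Uvicorn log above.
BASE_URL = "http://0.0.0.0:8080"

# Hypothetical request payload: pin the model explicitly instead of
# relying on the server default, which in this case resolved to
# mlx-community/nanoLLaVA-1.5-8bit. Field names are assumptions.
payload = {
    "model": "mlx-community/nanoLLaVA-1.5-8bit",
    "messages": [{"role": "user", "content": "Describe this image."}],
}

# Serialize the body that would be POSTed to the server.
body = json.dumps(payload)
print(body)
```

If the server accepts a per-request model field like this, specifying it explicitly would at least make the model choice visible rather than implicit.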

Information

  • The official example scripts
  • My own modified scripts

Reproduction

I followed the M1 installation tutorial exactly.

Expected behavior

The server should start and run normally.
