```shell
cd ~/mlx_server
./start_mlx_server.sh
```

Seeing `Serving on http://127.0.0.1:8080` means the server started successfully. Add the following to OpenClaw's ~/.openclawcn/config.yaml:
```yaml
models:
  - id: local-qwen
    name: "Local Qwen2.5"
    provider: openai
    api_base: http://127.0.0.1:8080/v1
    api_key: mlx-local
    model: mlx-community/Qwen2.5-7B-Instruct-MLX
    max_tokens: 4096
    temperature: 0.7
```
The server exposes the OpenAI-compatible endpoints `POST http://127.0.0.1:8080/v1/chat/completions` and `GET http://127.0.0.1:8080/v1/models`. Add the following to your Claude Code settings:
```json
{
  "model": "mlx-community/Qwen2.5-7B-Instruct-MLX",
  "apiBase": "http://127.0.0.1:8080/v1"
}
```
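Beyond editor integrations, you can call the endpoint from any OpenAI-compatible client. As a minimal sketch, here is how a chat completion request body for this server could be assembled; the helper name `build_chat_request` is my own, and the defaults mirror the `max_tokens` and `temperature` values from the config above:

```python
import json

# Endpoint and model name as configured above
API_BASE = "http://127.0.0.1:8080/v1"
MODEL = "mlx-community/Qwen2.5-7B-Instruct-MLX"

def build_chat_request(prompt, max_tokens=4096, temperature=0.7):
    """Build an OpenAI-compatible chat completion payload (dict)."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# POST this JSON to f"{API_BASE}/chat/completions"
print(json.dumps(build_chat_request("Hello"), ensure_ascii=False))
```

The same payload shape works from curl or any HTTP library, since the server follows the OpenAI chat completions request format.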
| Item | Minimum requirement |
|---|---|
| macOS | 12.0+ (Apple Silicon recommended) |
| RAM | 16GB+ |
| Storage | 20GB+ free space |
| Python | 3.9+ |
```shell
# Start the server
./start_mlx_server.sh

# Run in the background
nohup ./start_mlx_server.sh > mlx_server.log 2>&1 &

# Stop the server
pkill -f mlx_lm.server

# View logs
tail -f mlx_server.log

# Test the API
curl http://127.0.0.1:8080/v1/models
```
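The commands above can also be exercised programmatically. Below is a sketch of a small stdlib-only client for the chat completions endpoint; the function name `chat` is my own, and sending the config's `api_key` as a Bearer token is an assumption (a purely local mlx_lm server may not check it):

```python
import json
import urllib.error
import urllib.request

API_BASE = "http://127.0.0.1:8080/v1"  # server address from this guide

def chat(prompt, model="mlx-community/Qwen2.5-7B-Instruct-MLX"):
    """Send one chat completion request and return the reply text."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # api_key from the config; possibly ignored by a local server
            "Authorization": "Bearer mlx-local",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Standard OpenAI-style response shape
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    try:
        print(chat("Hello"))
    except (urllib.error.URLError, OSError):
        print("Server not reachable; start it with ./start_mlx_server.sh")
```

Run it while the server is up to get a reply from the local model; if the server is down, it prints a hint instead of crashing.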