Troubleshooting a hung local Ollama: a minimal diagnostic path when `ollama launch openclaw` gets no response
Problem/scenario: a local Ollama model shows "typing" indefinitely in the OpenClaw UI but never returns. Prerequisites: access to the OpenClaw config, the Ollama service, and host process information. Steps: 1) compare results from a direct `ollama run` with calls made through OpenClaw; 2) verify the provider `baseUrl` and model ID; 3) reproduce the hang while recording CPU usage and logs; 4) stop the gateway and check whether the inference process lingers. Key commands: `ollama run`, `openclaw gateway stop`, etc. Verification: a simple input (ping) returns reliably within a reasonable time. Risks and limits: this is a regression report; the fixed version and root cause still need to be tracked in official updates. Source attribution: Issue #31577.
GitHub · Discovered 2026-03-08 · Author veekurz
Prerequisites
- Ollama daemon is running locally and target model can be loaded.
- You can inspect OpenClaw config and restart gateway safely.
Steps
- Run a baseline direct test: `ollama run <model>` and record first-token latency.
- Launch through OpenClaw (`ollama launch openclaw`) and reproduce with a simple `ping` prompt to compare behavior against the baseline.
- Verify provider config (`models.providers.ollama.baseUrl` without `/v1`, model id mapping, reasoning flags).
- If UI hangs, stop gateway and confirm whether Ollama inference process exits or leaks.
- Keep reproducible logs/config snapshot and track upstream issue for patched releases.
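The `baseUrl` check in the steps above catches a common misconfiguration: pointing OpenClaw's Ollama provider at an OpenAI-style `/v1` path. A minimal sketch of that check as a shell helper; the function name `check_base_url` is our own, not an OpenClaw or Ollama command:

```shell
# Flag an Ollama baseUrl that carries a trailing /v1 suffix
# (the provider should use the bare host:port, per the step above).
check_base_url() {
  case "$1" in
    */v1|*/v1/) echo "WARN: drop the /v1 suffix for Ollama: $1" ;;
    *)          echo "OK: $1" ;;
  esac
}

# Default Ollama port 11434 is an assumption; adjust to your setup.
check_base_url "http://localhost:11434"
check_base_url "http://localhost:11434/v1"
```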
Commands
ollama run qwen3:8b
ollama launch openclaw
openclaw gateway stop
openclaw gateway status
Verify
OpenClaw UI/TUI returns responses for local model prompts, and inference processes exit cleanly after the gateway is stopped.
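A minimal latency gate for the "ping" verification above, a sketch only: the 30-second threshold is an arbitrary assumption, and the placeholder `true` stands in for the real call (e.g. `ollama run <model> "ping"`):

```shell
# Time the round trip of a simple prompt and flag a suspected hang.
start=$(date +%s)
true   # placeholder for the actual model call, e.g. ollama run <model> "ping"
end=$(date +%s)
elapsed=$((end - start))
if [ "$elapsed" -le 30 ]; then
  echo "PASS: response in ${elapsed}s"
else
  echo "FAIL: took ${elapsed}s (hang suspected)"
fi
```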
Caveats
- Current data comes from a bug report; confirmation of the fix requires upstream release notes (needs verification).
- Do not kill random host processes while troubleshooting; isolate Ollama PID ownership first.
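To follow the PID-ownership caveat above, list only processes whose command line matches the Ollama daemon before touching anything. A sketch assuming a procps-style `pgrep` (Linux); `list_ollama_procs` is our own helper name:

```shell
# List Ollama daemon processes with full command lines, or report none.
list_ollama_procs() {
  pgrep -a -f "ollama serve" 2>/dev/null || echo "no ollama serve process found"
}

list_ollama_procs
```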
Source attribution
This tip is aggregated from community/public sources and preserved with attribution.