
Connectivity checklist for a local Ollama model that works in the CLI but not in the Dashboard

Problem/scenario: the local model responds in the terminal, but requests from the OpenClaw dashboard or the Discord channel fail. Prerequisites: a running local model service (e.g. Ollama) and a working cloud model as a control group. Approach: verify each layer in turn (model service → gateway → channel), then check the pairing token/endpoint mapping and network reachability. Key commands: `openclaw gateway status`, `openclaw gateway restart`. Verification: the same prompt returns local-model results in both the dashboard and the chat channel. Risk: differing host networks and reverse proxies introduce extra failure points.

Source: Reddit · Discovered 2026-02-15 · Author: u/nole_martley
Prerequisites
  • Local model endpoint is reachable from shell on the host (CLI test already passes).
  • OpenClaw gateway is running and at least one channel (dashboard/Discord) is connected.
Steps
  1. Use a cloud model as a control: confirm the same route works end-to-end, which isolates the failure to the local-model path.
  2. Validate gateway pairing/token and model endpoint mapping; ensure dashboard side uses the same runtime config as CLI tests.
  3. Check network scope (localhost vs LAN vs container bridge) and verify OpenClaw process can actually reach the local model port.
  4. Restart gateway after config fixes, then retest one prompt from dashboard and one from chat channel.
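The network-scope question in step 3 can often be answered from the endpoint URL alone: a dashboard running in a container cannot reach a model bound only to the host's localhost. A minimal sketch; the `classify_scope` helper and its address patterns are illustrative, not part of OpenClaw:

```shell
#!/bin/sh
# Classify which network scope a model endpoint URL targets, to spot
# localhost-vs-LAN-vs-container-bridge mismatches (hypothetical helper).
classify_scope() {
  case "$1" in
    *://localhost*|*://127.*)                                 echo "localhost" ;;
    *://host.docker.internal*|*://172.1[6-9].*|*://172.2[0-9].*|*://172.3[0-1].*)
                                                              echo "container-bridge" ;;
    *://10.*|*://192.168.*)                                   echo "lan" ;;
    *)                                                        echo "unknown" ;;
  esac
}

classify_scope "http://127.0.0.1:11434"     # → localhost: unreachable from a container
classify_scope "http://192.168.1.20:11434"  # → lan: reachable if the model binds 0.0.0.0
```

If the dashboard's configured endpoint classifies as `localhost` but the dashboard runs in a container or on another host, that mismatch alone explains the "CLI works, dashboard fails" symptom.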
Commands
openclaw gateway status
openclaw gateway restart
openclaw help
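The commands above can be folded into a layered probe that reports each link in the chain, so the first failing layer is obvious. A sketch, assuming Ollama's default `/api/tags` endpoint on port 11434; `probe_layer` is a hypothetical helper, not an OpenClaw command:

```shell
#!/bin/sh
# Run one check per layer and report ok/FAIL (hypothetical helper).
probe_layer() {
  # $1 = layer name, $2 = command whose exit status decides pass/fail
  if sh -c "$2" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "FAIL: $1"
    return 1
  fi
}

# "|| true" keeps the sketch going so every layer gets reported.
probe_layer "model service" "curl -sf http://localhost:11434/api/tags" || true
probe_layer "gateway"       "openclaw gateway status"                  || true
```

Run it once from the host shell and once from wherever the OpenClaw process actually executes (container, other machine); a layer that passes in the first run but fails in the second points at a network-scope problem rather than a config one.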
Verify

Both dashboard and channel requests return local-model responses with comparable latency across repeated tests.
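One way to make "comparable latency" concrete: record curl's `time_total` for a few repeated requests and check the max−min spread. The endpoint, model name (`llama3`), and 2-second tolerance are assumptions to adjust for your setup:

```shell
#!/bin/sh
# Collect per-request wall times (seconds) for repeated generations.
latencies=$(for i in 1 2 3; do
  curl -sf -o /dev/null -w '%{time_total}\n' \
    -d '{"model":"llama3","prompt":"ping","stream":false}' \
    http://localhost:11434/api/generate
done)

# Pure helper: succeed iff max-min spread is within tolerance (seconds).
spread_ok() {
  printf '%s\n' "$1" | awk -v tol="$2" '
    NR==1 { min=$1; max=$1 }
    { if ($1<min) min=$1; if ($1>max) max=$1 }
    END { exit (max-min > tol+0) }'
}

spread_ok "$latencies" 2.0 && echo "comparable latency" || echo "latency varies"
```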

Caveats
  • The community thread has few confirmed fixes; an endpoint/token mismatch is a common but not the only cause (unverified).
  • Large local models may still fail under sustained load even after connectivity is fixed.
Source attribution

This tip is aggregated from community/public sources and preserved with attribution.
