
Layered troubleshooting checklist for LM Studio disconnects after a restart

Fixes the "works on first launch, but no requests reach LM Studio after a restart" problem: use endpoint health checks plus gateway-restart regression tests to verify the local OpenAI-compatible chain.

GitHub · Discovered 2026-02-13 · Author: woodywangjs-mi
Prerequisites
  • LM Studio local server is enabled (e.g., `http://127.0.0.1:1234/v1`).
  • OpenClaw model endpoint is configured to the same local OpenAI-compatible URL.
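Before probing chat completions at all, it helps to confirm the LM Studio server itself is reachable. A minimal sketch of such a health check, using only the standard library; the base URL comes from the prerequisites above, while the function name and timeout are illustrative:

```python
import json
import urllib.request


def lm_studio_reachable(base: str = "http://127.0.0.1:1234/v1",
                        timeout: float = 5.0) -> bool:
    """Cheap liveness check: GET /v1/models on the local
    OpenAI-compatible server and confirm a well-formed reply."""
    try:
        with urllib.request.urlopen(f"{base}/models", timeout=timeout) as resp:
            body = json.load(resp)
            # The OpenAI-compatible models listing wraps results in "data".
            return resp.status == 200 and "data" in body
    except OSError:
        # Connection refused, timeout, or HTTP error: server not usable.
        return False
```

If this returns `False` while LM Studio's server toggle is on, fix that before looking at the gateway at all.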
Steps
  1. Before restart, send a short probe prompt and confirm LM Studio receives `POST /v1/chat/completions`.
  2. Restart only the OpenClaw gateway, keeping LM Studio running, then resend the exact same probe.
  3. If no request hits LM Studio after the restart, verify that the endpoint setting persisted in the runtime config and inspect the gateway boot logs.
  4. Add a post-restart smoke test to your ops routine so this regression is caught immediately.
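The steps above can be sketched as a small smoke test. This is illustrative, not the project's own tooling: the endpoint URL and the `openclaw gateway restart` command come from this checklist, while the function names, the `"local-model"` placeholder, and the token limit are assumptions:

```python
import json
import subprocess
import urllib.request

LM_STUDIO_URL = "http://127.0.0.1:1234/v1/chat/completions"


def probe(url: str = LM_STUDIO_URL, timeout: float = 10.0) -> bool:
    """Step 1/2: send a short completion request; True if LM Studio answers."""
    payload = json.dumps({
        "model": "local-model",  # placeholder; LM Studio serves whatever is loaded
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 4,
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # No connection, timeout, or HTTP error: the request never succeeded.
        return False


def smoke_test(restart=lambda: subprocess.run(
        ["openclaw", "gateway", "restart"], check=True)) -> bool:
    """Steps 1-3: probe, restart the gateway only, probe again."""
    if not probe():
        raise RuntimeError("baseline probe failed: check LM Studio itself first")
    restart()
    return probe()  # False here reproduces the post-restart regression
```

Wiring `smoke_test()` into the ops routine (step 4) turns the regression from a silent failure into an immediate alert.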
Commands
openclaw gateway status
openclaw gateway restart
openclaw logs --local-time
Verify

After every gateway restart, LM Studio receives completion requests and returns valid responses without manual reconfiguration.

Caveats
  • The issue demonstrates the symptom on a specific macOS + LM Studio build; reproduce it on your own stack before any broad rollout (needs verification).
  • Avoid mixing endpoint changes and restarts in one test cycle to keep diagnosis clean.
Source attribution

This tip is aggregated from community/public sources and preserved with attribution.
