
Troubleshooting blank Ollama output: normal replies restored after disabling `<final>` enforcement

Fixes frequent `(no output)` replies when connecting to Ollama. Prerequisite: the model is served through an OpenAI-compatible endpoint with streaming enabled. Steps: upgrade to a build containing `fix: stop enforcing <final> for ollama` → run regression conversations → compare the reasoning/content fields. Key point: do not treat Ollama as a provider that requires `<think>/<final>` tags. Verification: the same prompt no longer returns empty output. Risk: older custom templates may still carry the tag constraint.

Source: GitHub · Discovered 2026-02-14 · Author: steipete
Prerequisites
  • OpenClaw is configured to call an Ollama OpenAI-compatible endpoint.
  • You have reproducible prompts that previously returned `(no output)`.
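To reproduce failures against the endpoint directly, the request can be sketched in the standard OpenAI chat-completions shape that Ollama's compatibility layer accepts. This is a minimal sketch: the model name and prompt below are placeholders, not values from the original tip.

```python
import json

def make_chat_payload(model: str, prompt: str, stream: bool = True) -> dict:
    """Build an OpenAI-compatible chat-completions payload (the shape
    Ollama's /v1/chat/completions route expects)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

# Example: serialize a payload for a previously failing prompt.
payload = make_chat_payload("llama3", "Summarize this repo in one sentence.")
print(json.dumps(payload))
```

POST this body to Ollama's OpenAI-compatible route (by default served under `/v1/chat/completions` on the local Ollama port) with any HTTP client, once before and once after the upgrade, to get comparable captures.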
Steps
  1. Upgrade to a revision that includes commit `7d3e578` and the related Ollama fix chain.
  2. Re-run 5-10 previously failing prompts and capture outputs before and after the upgrade.
  3. Check that the content and reasoning fields are preserved without the `<final>` hard requirement.
  4. Retire legacy prompt snippets that still force tag wrappers on Ollama.
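Step 4 can be mechanized with a small helper that unwraps legacy `<think>`/`<final>` wrappers from stored templates or captured outputs. A minimal sketch, assuming the tags appear as literal XML-style wrappers; the function name is ours, not part of OpenClaw.

```python
import re

# Matches opening/closing <think> and <final> tags (hypothetical helper,
# not an OpenClaw API).
_TAG_RE = re.compile(r"</?(?:think|final)>", re.IGNORECASE)

def strip_legacy_tags(text: str) -> str:
    """Remove <think>/<final> wrappers that older templates forced on Ollama."""
    return _TAG_RE.sub("", text).strip()

print(strip_legacy_tags("<think>reasoning...</think><final>Hello!</final>"))
```

Running the cleaned snippets through the regression prompts in step 2 confirms the wrappers were the only difference.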
Commands
openclaw status
openclaw gateway restart
openclaw help
Verify

Previously failing Ollama prompts now return normal text output instead of `(no output)`.
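The verification pass can be sketched as a before/after comparison that flags any prompt still producing a blank reply. This assumes `(no output)` is the literal placeholder string rendered for empty replies; the helper names are ours.

```python
def is_blank_output(reply: str) -> bool:
    """True when a reply is empty, whitespace-only, or the literal
    '(no output)' placeholder (assumed rendering, see lead-in)."""
    return reply.strip() in ("", "(no output)")

def regressions(before: dict, after: dict) -> list:
    """Prompts that were blank before the fix and are still blank after it."""
    return [p for p, reply in after.items()
            if is_blank_output(before.get(p, "")) and is_blank_output(reply)]

# Hypothetical captures from the before/after runs in the steps above.
before = {"p1": "(no output)", "p2": "(no output)"}
after = {"p1": "Here is the summary.", "p2": "   "}
print(regressions(before, after))  # only p2 is still failing
```

An empty list means every previously failing prompt now returns normal text.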

Caveats
  • Mixed-model deployments should validate non-Ollama providers were not regressed.
  • Provider-specific reasoning chunk formats may still vary (needs verification).
Source attribution

This tip is aggregated from community/public sources and preserved with attribution.
