Stabilizing model aliases: a repeatable troubleshooting flow for /model short aliases resolving to the wrong provider
Scenario: after using a /model short alias, requests are routed to an unintended provider (e.g. `anthropic/<alias>`), causing failed calls or unexpected cost. Approach: first switch explicitly to a fully-qualified model name, then audit the alias mappings and run a regression test, and only then restore the short alias.
GitHub · Discovered 2026-02-18 · Author @wildemooney
Prerequisites
- You can run `/model` in the target session and inspect runtime model status.
- You have one known-good full provider/model identifier for rollback.
Steps
- Reproduce with a short alias in a non-critical session, and record the resolved provider/model shown by status output.
- Immediately switch to a fully-qualified model name (e.g. `openai-codex/gpt-5.3-codex`) to restore service continuity.
- Audit alias definitions in config/docs and remove ambiguous short names that can collide across providers.
- Run a fixed 3-5 prompt regression set and compare quality/cost before re-enabling short aliases.
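The alias-audit step above can be sketched as a small collision check. This is a minimal sketch under assumptions: the nested `{provider: {alias: full_model_id}}` config shape and the model IDs other than `openai-codex/gpt-5.3-codex` are hypothetical, not OpenClaw's actual schema, so adapt the loader to your real alias file.

```python
# Hypothetical alias audit: flag short aliases defined under more than one
# provider, since those are the ones /model can resolve ambiguously.
from collections import defaultdict


def find_ambiguous_aliases(alias_maps):
    """alias_maps: {provider: {alias: full_model_id}} -> {alias: [providers]}."""
    seen = defaultdict(list)
    for provider, aliases in alias_maps.items():
        for alias in aliases:
            seen[alias].append(provider)
    # Keep only aliases that collide across two or more providers.
    return {alias: providers for alias, providers in seen.items()
            if len(providers) > 1}


# Example config (assumed shape): "sonnet" collides across two providers
# and should be removed or fully qualified before short aliases are re-enabled.
config = {
    "anthropic": {"sonnet": "anthropic/claude-sonnet-4"},
    "openrouter": {"sonnet": "openrouter/anthropic/claude-sonnet-4"},
    "openai-codex": {"codex": "openai-codex/gpt-5.3-codex"},
}
print(find_ambiguous_aliases(config))  # → {'sonnet': ['anthropic', 'openrouter']}
```

Running this against your real alias definitions gives a concrete list of short names to delete or rename before restoring alias use.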
Commands
openclaw status
openclaw gateway status
openclaw gateway restart
Verify
`/model` resolves to expected provider/model across repeated turns, and no fallback-to-wrong-provider errors appear in logs.
Caveats
- The underlying issue report may be version-specific; confirm whether your current build already includes a fix (needs verification).
- Do not test alias changes directly on production-critical bots without a rollback model pinned.
Source attribution
This tip is aggregated from community/public sources and preserved with attribution.