Configure compaction per agent to avoid a one-size-fits-all context strategy
Addresses the scenario where a research agent and an executor agent share the same compaction strategy, leading to unbalanced context utilization: PR #14598 adds support for `agents.list[].compaction` overriding the global default.
GITHUB · Discovered 2026-02-12 · Author curtismercier
Prerequisites
- You use multi-agent config (`agents.list`) with distinct workloads (research, executor, chat, etc.).
- Runtime includes the per-agent compaction patch and allows config reload/restart.
Steps
- Set conservative defaults in `agents.defaults.compaction` first, then override only outlier agents in `agents.list[].compaction`.
- For long-context agents, raise `reserveTokensFloor` and optionally set a higher `maxHistoryShare`; for short-task agents, lower both.
- If you override only `memoryFlush.enabled`, watch the inherited prompt/systemPrompt behavior to avoid accidentally wiping memory.
- Run regression tests across all agents and compare compaction frequency, token use, and response quality.
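The defaults-plus-override resolution described in the steps above can be sketched as a per-field merge. This is a hypothetical model, not the actual PR #14598 implementation; the field names mirror the config snippet below, and the shallow-merge semantics are an assumption:

```typescript
// Sketch of per-agent compaction override resolution (assumed semantics).
type CompactionConfig = {
  mode?: string;
  reserveTokensFloor?: number;
  maxHistoryShare?: number;
  memoryFlush?: { enabled?: boolean };
};

function resolveCompaction(
  defaults: CompactionConfig,
  override?: CompactionConfig,
): CompactionConfig {
  // Shallow merge: any field set on the agent wins; unset fields inherit.
  // Note: a shallow merge replaces the whole memoryFlush object rather than
  // merging it field-by-field, which is exactly the kind of partial-merge
  // edge case flagged in the PR discussion.
  return { ...defaults, ...override };
}

const defaults: CompactionConfig = {
  mode: "default",
  reserveTokensFloor: 20000,
};
const researcher = resolveCompaction(defaults, {
  reserveTokensFloor: 40000,
  maxHistoryShare: 0.8,
});
// researcher inherits mode from defaults and overrides reserveTokensFloor.
console.log(researcher);
```

A deep merge for nested objects like `memoryFlush` would behave differently; whichever the runtime actually does, test it explicitly in staging.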
Commands
openclaw gateway status
openclaw gateway restart
npm run build
# config snippet:
agents:
  defaults:
    compaction:
      mode: default
      reserveTokensFloor: 20000
  list:
    - id: researcher
      compaction:
        reserveTokensFloor: 40000
        maxHistoryShare: 0.8
    - id: executor
      compaction:
        maxHistoryShare: 0.4
Verify
Different agents exhibit expected compaction behavior (e.g., researcher retains longer context, executor compacts earlier) with no startup schema errors.
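One way to sanity-check the expected behavior is to model when each agent would trigger compaction. The trigger semantics below are an assumption for illustration (compact when free context drops under `reserveTokensFloor`, or when history exceeds `maxHistoryShare` of the window), not confirmed from the PR:

```typescript
// Hypothetical model of the two knobs' trigger semantics (assumed).
function shouldCompact(
  historyTokens: number,
  contextWindow: number,
  cfg: { reserveTokensFloor?: number; maxHistoryShare?: number },
): boolean {
  const floor = cfg.reserveTokensFloor ?? 0;
  const share = cfg.maxHistoryShare ?? 1;
  const freeTokens = contextWindow - historyTokens;
  return freeTokens < floor || historyTokens > share * contextWindow;
}

// With 120k history in a 200k window, the researcher (floor 40000,
// share 0.8) holds out, while the executor (share 0.4) compacts:
const researcherCompacts = shouldCompact(120_000, 200_000, {
  reserveTokensFloor: 40_000,
  maxHistoryShare: 0.8,
});
const executorCompacts = shouldCompact(120_000, 200_000, {
  maxHistoryShare: 0.4,
});
console.log(researcherCompacts, executorCompacts); // false true
```

If observed compaction frequency contradicts a model like this, check which knob is actually winning before tuning further.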
Caveats
- The PR discussion includes an edge-case crash warning around partial `memoryFlush` merges; validate in staging before production (needs verification).
- Aggressive compaction tuning can hurt answer coherence even if token metrics improve.
Source attribution
This tip is aggregated from community/public sources and preserved with attribution.