This chapter explains Context in OpenClaw: what is sent to the model during a run, what consumes the context window, and how to inspect/control it.

Overview: this diagram frames context as one Active Context Window (sent per run) composed of three visible blocks:

  • System Prompt: Project Context, skills metadata, tool list/schemas, runtime metadata
  • Conversation History: user/assistant messages plus compaction summaries
  • Run Inputs & Outputs: current message, tool calls/results, attachments/transcripts

It also highlights hidden budget pressure from provider wrappers/headers and shows the operator control loop: inspect with /status and /context detail, then adjust using /context list, /usage tokens, and /compact.

Analysis

Use this as the mental model before touching tuning knobs.

| Concept | Why It Matters |
| --- | --- |
| Context | The model can only reason over what fits in the current context window |
| Memory | Can live on disk and be reloaded later; it is not always in the active window |
| Main risk | Token budget is dominated by hidden contributors (tool schemas, wrappers, attachments) |
| Main control surface | /status, /context list, /context detail, /usage tokens, /compact |

Core distinction:

  • Context = run-time window (now)
  • Memory = persisted knowledge (later)
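
The distinction can be sketched in code. This is a minimal illustration, not OpenClaw's implementation: the file name and helper functions are hypothetical; the point is that memory persists on disk across runs, while context is assembled fresh for each run.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical on-disk memory store

def save_memory(notes: dict) -> None:
    """Memory: persisted to disk, survives between runs."""
    MEMORY_FILE.write_text(json.dumps(notes))

def build_context(user_message: str, recall: list[str]) -> list[dict]:
    """Context: assembled for this run only; the model can reason
    over nothing but what is placed here."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    recalled = {k: notes[k] for k in recall if k in notes}  # reload selectively
    return [
        {"role": "system", "content": f"Known facts: {recalled}"},
        {"role": "user", "content": user_message},
    ]

save_memory({"user_name": "Ada", "project": "openclaw-docs"})
ctx = build_context("What project am I on?", recall=["project"])
```

Note that only the recalled keys enter the window; everything else stays on disk until explicitly reloaded.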

Plan

This chapter is structured in the same order you debug context issues in production.

  1. Define what counts toward the window
  2. Show quick inspection commands
  3. Explain prompt assembly and file injection limits
  4. Break down skills/tools overhead
  5. Explain directives/commands behavior
  6. Clarify persistence (sessions, compaction, pruning)
  7. Give an operator checklist

Quick Start (Inspect Context)

Run these commands in order when replies become weak/truncated or tool calls behave strangely.

| Command | What You Learn |
| --- | --- |
| /status | High-level window fullness + session settings |
| /context list | Injected files and rough size totals |
| /context detail | Per-file/per-skill/per-tool schema contributors |
| /usage tokens | Per-reply token usage footer |
| /compact | Summarize older history to free active window |

What Counts Toward the Context Window

Everything the model receives counts, including parts you may not see directly.

| Contributor | Counts? | Notes |
| --- | --- | --- |
| System prompt | Yes | Includes rules, tools, skills list, runtime metadata, injected files |
| Conversation history | Yes | User + assistant messages in session scope |
| Tool calls and results | Yes | Often large when command output/files are verbose |
| Attachments/transcripts | Yes | Images/audio/files and derived text |
| Compaction artifacts | Yes | Summaries and pruning metadata still consume budget |
| Provider wrappers/headers | Yes | Hidden transport/provider overhead |
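
The contributors above can be totaled with a rough accounting sketch. All numbers here are illustrative placeholders; real counts come from the provider's tokenizer and are surfaced by /usage tokens.

```python
# Rough budget accounting: every contributor counts, visible or not.
CONTEXT_LIMIT = 200_000  # hypothetical model window, in tokens

contributors = {
    "system_prompt": 6_500,
    "conversation_history": 42_000,
    "tool_calls_and_results": 18_000,
    "attachments": 9_000,
    "compaction_summaries": 1_200,
    "provider_wrappers": 800,  # hidden transport overhead still counts
}

used = sum(contributors.values())
print(f"used {used} / {CONTEXT_LIMIT} ({used / CONTEXT_LIMIT:.0%})")
# Rank contributors largest-first, the same triage order /context detail encourages
for name, tokens in sorted(contributors.items(), key=lambda kv: -kv[1]):
    print(f"  {name:<24} {tokens:>7}")
```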

How OpenClaw Builds the System Prompt

The system prompt is rebuilt each run and owned by OpenClaw.

| System Prompt Part | Typical Content |
| --- | --- |
| Tooling | Tool list + short descriptions |
| Skills list | Skill metadata (name/description/location) |
| Workspace info | Workspace location and runtime metadata |
| Time | UTC and user-local converted time (if configured) |
| Project Context | Injected bootstrap files from workspace |

Practical implication:

  • A “small” chat can still overflow if tooling/schema/context injection is heavy.
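
The rebuild-per-run idea can be sketched as follows. The part names follow the table above; the assembly function itself is hypothetical, not OpenClaw's actual code.

```python
def build_system_prompt(tools, skills, workspace, project_files):
    """Rebuilt from scratch on every run; owned by the framework,
    not by the conversation."""
    parts = [
        "## Tooling\n" + "\n".join(f"- {t}" for t in tools),
        "## Skills\n" + "\n".join(f"- {s}" for s in skills),
        f"## Workspace\n{workspace}",
        "## Project Context\n" + "\n\n".join(project_files.values()),
    ]
    return "\n\n".join(parts)

prompt = build_system_prompt(
    tools=["read_file: read a file", "exec: run a command"],
    skills=["summarize (skills/summarize)"],
    workspace="/home/ada/project",
    project_files={"AGENTS.md": "Be concise.", "USER.md": "User: Ada."},
)
# Even with a short chat, this entire string counts against the window.
```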

Injected Workspace Files (Project Context)

By default, OpenClaw injects a fixed bootstrap set when present.

| File | Purpose |
| --- | --- |
| AGENTS.md | Behavior and operating rules |
| SOUL.md | Persona/tone profile |
| TOOLS.md | Tool behavior notes/policies |
| IDENTITY.md | Agent identity and stance |
| USER.md | User profile/context |
| HEARTBEAT.md | Optional heartbeat state |
| BOOTSTRAP.md | First-run bootstrap material |

Limits:

  • Per-file injection cap: agents.defaults.bootstrapMaxChars (default 20000 chars)
  • Total bootstrap cap: agents.defaults.bootstrapTotalMaxChars (default 24000 chars)

/context shows raw vs injected sizes and whether truncation happened.
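
The two caps interact as a truncation pass, which can be sketched like this. The helper is hypothetical; only the default limits (20000 chars per file, 24000 chars total) come from the config keys above.

```python
PER_FILE_CAP = 20_000   # agents.defaults.bootstrapMaxChars (default)
TOTAL_CAP = 24_000      # agents.defaults.bootstrapTotalMaxChars (default)

def inject_bootstrap(files: dict[str, str]) -> dict[str, tuple[str, bool]]:
    """Return injected text per file plus a truncation flag,
    honoring both the per-file cap and the running total cap."""
    injected, budget = {}, TOTAL_CAP
    for name, raw in files.items():
        allowed = min(PER_FILE_CAP, budget)
        text = raw[:allowed]
        injected[name] = (text, len(text) < len(raw))  # True = TRUNCATED
        budget -= len(text)
        if budget <= 0:
            break
    return injected

# A 25k-char AGENTS.md hits the per-file cap; TOOLS.md then hits
# the remaining 4k of the total budget.
result = inject_bootstrap({"AGENTS.md": "x" * 25_000, "TOOLS.md": "y" * 8_000})
```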

Skills and Tools: The Two Biggest Hidden Costs

Use this table to understand why context can fill quickly even with short chat messages.

| Source | How It Adds Cost |
| --- | --- |
| Skills | Skills list text is injected into system prompt; full skill instructions are loaded on-demand |
| Tools (text) | Tool list/description section in prompt |
| Tools (schemas JSON) | Large schema payload sent for tool-calling; counts even if not shown as plain text |

Practical check:

  • Use /context detail to identify top skill entries and largest tool schemas.
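
The idea behind that check can be sketched by ranking tool schemas by serialized size. The schemas below are made up; real ones are the JSON payloads sent with every tool-capable run.

```python
import json

# Hypothetical tool schemas, standing in for real tool-calling payloads.
schemas = {
    "exec": {"type": "object", "properties": {
        "cmd": {"type": "string"},
        "timeout": {"type": "number"},
        "cwd": {"type": "string"}}},
    "read_file": {"type": "object", "properties": {
        "path": {"type": "string"}}},
}

def schema_costs(schemas: dict) -> list[tuple[str, int]]:
    """Approximate each schema's budget cost by its serialized length,
    largest first."""
    sizes = {name: len(json.dumps(s)) for name, s in schemas.items()}
    return sorted(sizes.items(), key=lambda kv: -kv[1])

for name, size in schema_costs(schemas):
    print(f"{name:<12} ~{size} chars")
```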

Commands, Directives, and Inline Shortcuts

Not every /... token reaches the model in the same way.

| Type | Behavior |
| --- | --- |
| Standalone command | Message containing only /... executes as gateway command |
| Directive (/think, /verbose, /reasoning, /elevated, /model, /queue) | Stripped before model input; can persist settings or act as per-message hint |
| Inline shortcut | Allowlisted senders can trigger certain /... tokens inside normal text; shortcut stripped before model sees remaining content |
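
The stripping behavior can be sketched as a pre-processing pass. The directive names come from the table above; the parser itself is a hypothetical simplification.

```python
DIRECTIVES = {"/think", "/verbose", "/reasoning", "/elevated", "/model", "/queue"}

def preprocess(message: str) -> tuple[set[str], str]:
    """Split a message into directives (handled by the gateway) and
    the remaining text the model actually sees."""
    found, kept = set(), []
    for token in message.split():
        base = token.split(":")[0]  # e.g. a "/model:..." form -> "/model"
        if base in DIRECTIVES:
            found.add(token)        # stripped before model input
        else:
            kept.append(token)
    return found, " ".join(kept)

directives, model_input = preprocess("/verbose please review the diff")
# The gateway sees the directive; the model sees only the remaining text.
```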

Sessions, Compaction, and Pruning

This is where persistence behavior is most often misunderstood.

| Mechanism | What Persists | What Changes |
| --- | --- | --- |
| Normal history | Transcript entries | Grows until compacted/pruned by policy |
| Compaction | Summary persisted into transcript | Older detail compressed; recent turns kept |
| Pruning | Transcript unchanged | Removes old tool results from in-memory prompt for a run |
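
The contrast between compaction and pruning can be sketched as follows. The transcript shape and summarizer are hypothetical; the key difference is whether the stored transcript is modified.

```python
def compact(transcript: list[dict], keep_recent: int, summarize) -> list[dict]:
    """Compaction: persists a summary INTO the transcript, replacing
    older detail. The change survives across runs."""
    old, recent = transcript[:-keep_recent], transcript[-keep_recent:]
    return [{"role": "system", "content": summarize(old)}] + recent

def prune_for_run(transcript: list[dict]) -> list[dict]:
    """Pruning: drops old tool results from THIS run's prompt only.
    The stored transcript is left unchanged."""
    return [m for m in transcript if m.get("role") != "tool"]

transcript = [
    {"role": "user", "content": "build it"},
    {"role": "tool", "content": "...5000 lines of build logs..."},
    {"role": "assistant", "content": "done"},
]
run_prompt = prune_for_run(transcript)  # tool result gone for this run only
assert len(transcript) == 3             # persisted transcript untouched
```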

What /context Actually Reports

/context prefers real run-built data when available.

| Mode | Meaning |
| --- | --- |
| System prompt (run) | Captured from last embedded tool-capable run and stored in session data |
| System prompt (estimate) | Computed on-demand when no run report exists |

Important:

  • /context reports sizes and top contributors.
  • It does not dump full system prompt text or full tool schemas.

Example (Interpreting Output)

Use this pattern to read output quickly:

  1. Check system prompt size first
  2. Check truncation flags in injected files (OK vs TRUNCATED)
  3. Check top tool schemas in /context detail
  4. Compare cached session tokens against model context limit
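
Step 4 amounts to a simple headroom check. The numbers below are illustrative, and the 80% threshold is an arbitrary example, not an OpenClaw default.

```python
MODEL_CONTEXT_LIMIT = 200_000    # hypothetical model window, in tokens
cached_session_tokens = 168_000  # illustrative value, as /status might report

headroom = MODEL_CONTEXT_LIMIT - cached_session_tokens
pct_used = cached_session_tokens / MODEL_CONTEXT_LIMIT
print(f"{pct_used:.0%} used, {headroom} tokens of headroom")
if pct_used > 0.8:  # example threshold for acting on context pressure
    print("history dominates the budget: consider /compact")
```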

Operator Troubleshooting Checklist

  1. Run /status and confirm context pressure
  2. Run /context detail and find top contributors (files/tools/skills)
  3. Trim oversized bootstrap files, especially TOOLS.md
  4. Run /compact when history dominates the budget
  5. Re-run and verify token usage with /usage tokens

Related Topics

  • Slash commands
  • Token use and costs
  • Compaction
  • Session pruning
  • System prompt