MAXIM
Prompt System & Tool Injection
How the LLM Sees the World
How the LLM sees the world: prompt construction, tool discovery, and the learning feedback loop.
Prompt Assembly Chain
The agent's system prompt is assembled by prompt_builder.py with sections at different priority levels. When the token budget is tight, lower-priority sections are dropped. This means the LLM's context varies based on conversation length.
Always Present
- instructions — Response format rules (JSON action schema)
- user_request — The triggering input/percept text
Almost Never Dropped
- identity — "You are Maxim, a robot assistant" + operational state + simulation/interactive instructions
- tools — Full available tools list with descriptions, params, and examples
Dropped Under Pressure
- acting_coach — Affordance exploration meta-prompting (B3, embodiment only)
- entity_context — SEM entity capabilities: affordance descriptions + failure triggers (E2, embodiment only)
- tool_guidance — Parameter examples + selection guidance
- datetime — Current date/time
- conversation — Conversation history
- context_pool — Memory agent context (recalled episodes, causal links)
- foundational — Constitution + agent behavioral rules
Frequently Dropped
- mode_context — "ACTIVE MODE" instructions (filesystem permissions, cognitive tools)
- observation, speech, agent_states — Environmental context
- memory sections — Extended memory summaries
Key insight: The mode_context section (which explains filesystem rules, cognitive tools, and behavioral guidelines) is at NICE_TO_HAVE priority and is frequently dropped. The agent often operates with just identity + tools + foundational rules.
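The priority-tiered budget behavior above can be sketched as a greedy keep-by-priority pass. This is a minimal illustration, not the actual prompt_builder.py logic; the class names, tier names, and the 4-chars-per-token estimate are assumptions.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical tiers mirroring the doc's "Always Present" / "Dropped Under
# Pressure" / "Frequently Dropped" groupings (real tiers live in prompt_builder.py).
class Priority(IntEnum):
    CRITICAL = 0       # instructions, user_request, identity, tools
    IMPORTANT = 1      # tool_guidance, conversation, foundational, ...
    NICE_TO_HAVE = 2   # mode_context, observation, memory sections

@dataclass
class Section:
    name: str
    text: str
    priority: Priority

def assemble(sections: list[Section], budget_tokens: int,
             count_tokens=lambda s: len(s) // 4) -> str:
    """Keep sections in priority order until the token budget is spent;
    CRITICAL sections are kept unconditionally."""
    kept, used = [], 0
    for sec in sorted(sections, key=lambda s: s.priority):
        cost = count_tokens(sec.text)
        if used + cost <= budget_tokens or sec.priority == Priority.CRITICAL:
            kept.append(sec)
            used += cost
    # Re-emit in original document order, not priority order.
    order = {id(s): i for i, s in enumerate(sections)}
    kept.sort(key=lambda s: order[id(s)])
    return "\n\n".join(s.text for s in kept)
```

Under a tight budget, a NICE_TO_HAVE section like mode_context is silently dropped while identity and tools survive, which is exactly the behavior the "Key insight" above warns about.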
Identity Section
The identity section (build_identity_section()) is at CRITICAL priority and is always present. It contains the core self-description ("You are Maxim, a robot assistant") plus operational state. The SIMULATION ENVIRONMENT block appears when _sim_active is True; the INTERACTIVE MODE block appears when InteractiveMode is ON. Both live in the identity section precisely so they are never budget-dropped.
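A rough sketch of how the conditional blocks could be composed; the function shape and flag parameters are assumptions, not the real build_identity_section() signature.

```python
# Hypothetical sketch of build_identity_section(); flag names are assumptions.
def build_identity_section(sim_active: bool, interactive_on: bool) -> str:
    parts = ["You are Maxim, a robot assistant."]
    if sim_active:
        # Placed inside the CRITICAL identity section so it is
        # never dropped under token-budget pressure.
        parts.append("SIMULATION ENVIRONMENT: you are running inside a simulation.")
    if interactive_on:
        parts.append("INTERACTIVE MODE: a human may respond to your messages.")
    return "\n\n".join(parts)
```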
Tool Injection Chain
Tool Description Sources
| Source | Priority | When Used |
|---|---|---|
| TOOL_DESCRIPTIONS dict (modes/definitions.py) | Primary | All built-in tools. Contains description, params, example, followup_type. |
| Tool.description + Tool.input_schema (tools/base.py) | Fallback | User-registered tools, SEM affordance tools, anything not in TOOL_DESCRIPTIONS. |
Tip: If a tool is registered but the LLM never calls it, check whether it has an entry in TOOL_DESCRIPTIONS. The fallback description from the Tool class is often too terse for the LLM to understand when to use it.
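The two-tier lookup can be sketched as follows. This is an illustrative analogue, assuming the shape of the TOOL_DESCRIPTIONS entries from the table above; the manifest_entry helper is hypothetical.

```python
# Hypothetical analogue of the curated dict in modes/definitions.py.
TOOL_DESCRIPTIONS = {
    "read": {"description": "Read a file from the sandbox.",
             "params": {"path": "absolute path to read"},
             "example": '{"tool": "read", "path": "/tmp/notes.txt"}',
             "followup_type": "result"},
}

class Tool:
    """Minimal stand-in for the Tool base class in tools/base.py."""
    def __init__(self, name: str, description: str, input_schema=None):
        self.name = name
        self.description = description
        self.input_schema = input_schema or {}

def manifest_entry(tool: Tool) -> dict:
    """Prefer the rich curated entry; fall back to the Tool's own fields."""
    if tool.name in TOOL_DESCRIPTIONS:
        return TOOL_DESCRIPTIONS[tool.name]       # primary: curated, has example
    return {"description": tool.description,      # fallback: often too terse
            "params": tool.input_schema}
```

The fallback branch is why a registered-but-undescribed tool often goes uncalled: the LLM sees only a one-line description with no usage example.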
Tool Learning Feedback Loop
The agent learns which tools work through the NAc causal learning system:
| Mechanism | What It Does | Where |
|---|---|---|
| Direct attribution | Records tool outcomes to NAc: (context, tool) → success/pain | ToolPainBridge |
| Causal links in prompt | LLM sees learned tool-outcome associations in context_pool | prompt_builder.py |
| Recent outcomes | Last N tool results shown in prompt for immediate context | agent_loop.py |
| Relevance filter | Learned index trims tool manifest to relevant subset (passive mode only) | LearnedToolIndex |
Tool learning is indirect — NAc learns outcome associations that appear in the prompt as context, but the LLM still sees the full tool manifest. Learning influences when to call a tool, not whether it appears.
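Direct attribution can be pictured as a (context, tool) outcome table whose entries are later rendered into the prompt. A minimal sketch, assuming a simple counter store; the real NAc learning and ToolPainBridge internals are certainly richer than this.

```python
from collections import defaultdict

class OutcomeStore:
    """Hypothetical sketch: accumulate success/pain per (context, tool) key
    and render learned associations as prompt lines (context_pool style)."""

    def __init__(self):
        self.counts = defaultdict(lambda: {"success": 0, "pain": 0})

    def record(self, context: str, tool: str, success: bool) -> None:
        # Direct attribution: one outcome event updates the (context, tool) cell.
        self.counts[(context, tool)]["success" if success else "pain"] += 1

    def causal_lines(self, context: str) -> list[str]:
        # These lines would appear in the prompt as context; the LLM still
        # sees the full tool manifest regardless.
        return [f"{tool}: {c['success']} successes, {c['pain']} failures"
                for (ctx, tool), c in self.counts.items() if ctx == context]
```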
Simulation vs Live: Tool Differences
| Tool Category | AUT (Agent Under Test) | Orchestrator |
|---|---|---|
| Filesystem (read, write, bash) | Yes (sandboxed) | No |
| Sim tools (send_message, observe_actions) | No | Yes |
| Introspection (memory_recall, causal_links) | Yes | No |
| Interactive (request_interaction, set_scene) | Yes | No |
| Robot (move, track_target) | Deregistered | Not added |
| Response (respond, speak) | Yes | sim_respond only |
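The table above amounts to role-dependent registry setup, which might be sketched like this; the wiring and the flat name sets are assumptions, not the actual simulation/orchestrator.py code.

```python
# Hypothetical sketch of per-role tool sets from the table above.
AUT_TOOLS = {"read", "write", "bash",                  # filesystem (sandboxed)
             "memory_recall", "causal_links",          # introspection
             "request_interaction", "set_scene",       # interactive
             "respond", "speak"}                       # response
ORCHESTRATOR_TOOLS = {"send_message", "observe_actions", "sim_respond"}

def tools_for_role(role: str) -> set[str]:
    """Return the tool names registered for a simulation role."""
    if role == "aut":
        # Robot tools (move, track_target) are deregistered for the AUT.
        return AUT_TOOLS
    if role == "orchestrator":
        return ORCHESTRATOR_TOOLS
    raise ValueError(f"unknown role: {role}")
```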
Adding a New Tool
- Create a `Tool` subclass in `src/maxim/tools/` with `name`, `description`, `input_schema`, and `execute()`
- Register it in `build_tool_registry()` (`runtime/bootstrap.py`)
- Add it to `TOOL_DESCRIPTIONS` (`modes/definitions.py`) with description, params, example, and followup_type
- Add it to the AUT's `allowed_tools` whitelist (`simulation/orchestrator.py`) if it should be usable in simulation
- If it uses a JSON Schema `input_schema`, verify that `_validate_input()` handles it (`tools/base.py` detects both formats)
- (0.7+) Scene-scoped tools: for tools tied to an entity/scene, use `registry.register_scene_tools(tools, scene_id="...")` instead of `register()`. Scene tools are deactivated on scene transition (not deleted) and can be re-activated. The executor gates on active status — deactivated tools return a descriptive error.
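The first three steps above can be sketched end to end. This is an illustrative analogue only: the Tool and ToolRegistry stand-ins mirror the doc's names, but the exact signatures in tools/base.py and runtime/bootstrap.py are assumptions, and WeatherTool is a made-up example tool.

```python
class Tool:
    """Minimal stand-in for the base class in tools/base.py."""
    name = ""
    description = ""
    input_schema: dict = {}
    def execute(self, **kwargs):
        raise NotImplementedError

class WeatherTool(Tool):  # step 1: subclass with name/description/schema/execute
    name = "weather"
    description = "Report the (fake) current weather for a city."
    input_schema = {"type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"]}
    def execute(self, city: str) -> str:
        return f"Sunny in {city}"

class ToolRegistry:
    """Stand-in for the registry built by build_tool_registry()."""
    def __init__(self):
        self._tools: dict[str, Tool] = {}
    def register(self, tool: Tool) -> None:   # step 2: register at bootstrap
        self._tools[tool.name] = tool
    def get(self, name: str) -> Tool:
        return self._tools[name]

# Step 3: curated manifest entry so the LLM gets more than the terse fallback.
TOOL_DESCRIPTIONS = {
    "weather": {"description": WeatherTool.description,
                "params": {"city": "city name"},
                "example": '{"tool": "weather", "city": "Oslo"}',
                "followup_type": "result"},
}

registry = ToolRegistry()
registry.register(WeatherTool())
```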
Common pitfall: A tool can be registered and allowed but the LLM never calls it because:
- No entry in `TOOL_DESCRIPTIONS` (fallback description too terse)
- System prompt context discourages it (e.g., "autonomous" mode + "ask user" tool)
- JSON Schema `input_schema` causes a silent validation failure in `_validate_input()`
Files Reference
| File | Role |
|---|---|
| agents/prompt_builder.py | Section assembly + budget management |
| agents/llm_worker.py | Delegates to prompt builder, submits to LLM |
| modes/definitions.py | Mode definitions + TOOL_DESCRIPTIONS dict |
| runtime/bootstrap.py | build_tool_registry() — canonical tool registration |
| runtime/autonomy.py | Execution-level tool gating (not prompt-level) |
| simulation/orchestrator.py | AUT vs orchestrator tool registry setup |
| tools/base.py | Tool base class + _validate_input (JSON Schema support) |
| api.py | register_tool() / @maxim.tool dynamic registration |