
MAXIM

Prompt System & Tool Injection

How the LLM Sees the World

Prompt construction, tool discovery, and the learning feedback loop.

Prompt Assembly Chain

The agent's system prompt is assembled by prompt_builder.py from sections at different priority levels. When the token budget is tight, lower-priority sections are dropped, so the LLM's effective context varies with token pressure: long conversations squeeze out lower-priority sections.
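A minimal sketch of this priority-based assembly. The Priority tier names mirror the labels below; the function name, tuple layout, and budget logic are illustrative assumptions, not the actual prompt_builder.py implementation.

```python
from enum import IntEnum

# Hypothetical sketch of budget-based section dropping; the real logic
# in agents/prompt_builder.py may differ.
class Priority(IntEnum):
    MANDATORY = 0      # never dropped
    CRITICAL = 1       # almost never dropped
    IMPORTANT = 2      # dropped under pressure
    NICE_TO_HAVE = 3   # frequently dropped

def assemble(sections, budget):
    """Keep sections in priority order until the token budget runs out.

    sections: list of (priority, name, text, token_cost) tuples.
    """
    kept, used = [], 0
    for prio, name, text, cost in sorted(sections, key=lambda s: s[0]):
        if prio == Priority.MANDATORY or used + cost <= budget:
            kept.append((name, text))
            used += cost
    return kept

sections = [
    (Priority.NICE_TO_HAVE, "mode_context", "...", 800),
    (Priority.MANDATORY, "instructions", "...", 300),
    (Priority.CRITICAL, "identity", "...", 400),
]
names = [n for n, _ in assemble(sections, budget=700)]
# MANDATORY always survives; NICE_TO_HAVE is dropped under a tight budget.
```

With a 700-token budget, instructions and identity survive while mode_context is dropped, which is exactly the failure mode the "Key insight" below warns about.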

MANDATORY

Always Present

  • instructions — Response format rules (JSON action schema)
  • user_request — The triggering input/percept text

CRITICAL

Almost Never Dropped

  • identity — "You are Maxim, a robot assistant" + operational state + simulation/interactive instructions
  • tools — Full available tools list with descriptions, params, and examples

IMPORTANT

Dropped Under Pressure

  • acting_coach — Affordance exploration meta-prompting (B3, embodiment only)
  • entity_context — SEM entity capabilities: affordance descriptions + failure triggers (E2, embodiment only)
  • tool_guidance — Parameter examples + selection guidance
  • datetime — Current date/time
  • conversation — Conversation history
  • context_pool — Memory agent context (recalled episodes, causal links)
  • foundational — Constitution + agent behavioral rules

NICE TO HAVE

Frequently Dropped

  • mode_context — "ACTIVE MODE" instructions (filesystem permissions, cognitive tools)
  • observation, speech, agent_states — Environmental context
  • memory sections — Extended memory summaries

Key insight: The mode_context section (which explains filesystem rules, cognitive tools, and behavioral guidelines) is at NICE_TO_HAVE priority and is frequently dropped. The agent often operates with just identity + tools + foundational rules.

Identity Section

The identity section (build_identity_section()) is at CRITICAL priority and always present. It contains:

You are Maxim, a robot assistant.

=== OPERATIONAL STATE ===
Mode: ACTIVE
Mode goal: Execute tasks and take actions within defined boundaries
Autonomy level: autonomous
Processing state: AWAKE

SIMULATION ENVIRONMENT: You are in a controlled simulation for testing and evaluation. Scenarios presented to you are simulated...

INTERACTIVE MODE: A human user is present and watching. You can and should ask them questions using request_interaction...

The SIMULATION ENVIRONMENT block appears when _sim_active is True. The INTERACTIVE MODE block appears when InteractiveMode.ON. Both are in the identity section so they're never budget-dropped.
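A sketch of how those conditional blocks might be spliced in. The function name matches build_identity_section() above, but the parameters and strings are illustrative; the real implementation reads _sim_active and InteractiveMode state rather than taking booleans.

```python
# Hypothetical sketch; the real build_identity_section() will differ.
def build_identity_section(sim_active: bool, interactive: bool) -> str:
    parts = [
        "You are Maxim, a robot assistant.",
        "=== OPERATIONAL STATE ===",
        "Mode: ACTIVE",
    ]
    if sim_active:  # corresponds to _sim_active is True
        parts.append("SIMULATION ENVIRONMENT: You are in a controlled simulation...")
    if interactive:  # corresponds to InteractiveMode.ON
        parts.append("INTERACTIVE MODE: A human user is present and watching...")
    return "\n".join(parts)
```

Because the whole string is emitted as one CRITICAL-priority section, both conditional blocks inherit its never-dropped status.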

Tool Injection Chain

  1. build_tool_registry() — always-registered: filesystem, display, response
  2. _inject_pending_tools() — user tools from @maxim.tool / register_tool()
  3. Introspection tools — memory_recall, causal_links, pain_history, etc.
  4. Narrative tools — say, think, examine (sim-only)
  5. Robot tools DEREGISTERED — focus_interests, track_target, move (no live robot)
  6. Scene-scoped activation (0.7+) — register_scene_tools / deactivate_scene / activate_scene; cap: 20 active scene tools (core tools exempt); auto-evicts oldest scene on overflow
  7. Mode filtering — active mode: all tools visible (no filter)
  8. registry.list() — returns ACTIVE tools only (deactivated scene tools excluded)
  9. Prompt builder — TOOL_DESCRIPTIONS dict OR dynamic Tool.description
 10. === Available Tools === — what the LLM actually sees in the prompt:
       - respond: Answer questions...
       - request_interaction: Ask the user...
       - memory_recall: Search episodic memory...
 11. LLM generates action JSON — {"tool_name": "respond", "params": {"message": "..."}}
 12. Autonomy policy — execution-level gate (allowed_tools whitelist)
 13. FearAgent review — independent safety gate (action content, not beliefs)
 14. Tool.execute() — runs the tool, returns ToolOutput
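The scene-scoped steps of the chain (cap, eviction, active-only listing) can be sketched as follows. The Registry class, its attribute names, and the eviction strategy are assumptions for illustration; only register_scene_tools, the 20-tool cap, and the "list() returns active only" behavior come from the text.

```python
from collections import OrderedDict

# Hypothetical sketch of scene-scoped activation; the real registry in
# runtime/bootstrap.py is more involved.
SCENE_CAP = 20

class Registry:
    def __init__(self):
        self.core = {}                  # always-active tools (exempt from cap)
        self.scenes = OrderedDict()     # scene_id -> {tool_name: tool}
        self.active_scenes = set()

    def register_scene_tools(self, tools, scene_id):
        self.scenes[scene_id] = tools
        self.active_scenes.add(scene_id)
        self._evict_if_over_cap()

    def _evict_if_over_cap(self):
        def active_count():
            return sum(len(self.scenes[s]) for s in self.active_scenes)
        # Deactivate (not delete) the oldest scene until under the cap.
        while active_count() > SCENE_CAP and len(self.active_scenes) > 1:
            oldest = next(s for s in self.scenes if s in self.active_scenes)
            self.active_scenes.discard(oldest)

    def list(self):
        # Only active tools are exposed to the prompt builder.
        tools = dict(self.core)
        for s in self.active_scenes:
            tools.update(self.scenes[s])
        return tools
```

Deactivated scenes stay in self.scenes, matching the note below that scene tools are deactivated on transition, not deleted, and can be re-activated later.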

Tool Description Sources

  • TOOL_DESCRIPTIONS dict (modes/definitions.py) — Primary. Used for all built-in tools. Contains description, params, example, followup_type.
  • Tool.description + Tool.input_schema (tools/base.py) — Fallback. Used for user-registered tools, SEM affordance tools, and anything not in TOOL_DESCRIPTIONS.

Tip: If a tool is registered but the LLM never calls it, check whether it has an entry in TOOL_DESCRIPTIONS. The fallback description from the Tool class is often too terse for the LLM to understand when to use it.
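The two-source lookup can be sketched as below. The TOOL_DESCRIPTIONS entry shown is a made-up example of the documented shape (description, params, example, followup_type); the describe() helper and StubTool are hypothetical.

```python
# Hypothetical sketch: the TOOL_DESCRIPTIONS dict (modes/definitions.py)
# wins; otherwise fall back to the Tool's own description field.
TOOL_DESCRIPTIONS = {
    "respond": {
        "description": "Answer questions and report results to the user.",
        "params": {"message": "The text of the answer."},
        "example": '{"tool_name": "respond", "params": {"message": "Done."}}',
        "followup_type": "none",
    },
}

class StubTool:
    def __init__(self, name, description):
        self.name = name
        self.description = description

def describe(tool):
    entry = TOOL_DESCRIPTIONS.get(tool.name)
    if entry:
        return entry["description"]
    return tool.description  # fallback — often too terse for the LLM
```

Note that for "respond" the curated entry shadows whatever terse string the Tool class carries, which is why adding a TOOL_DESCRIPTIONS entry is the first fix when a registered tool is never called.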

Tool Learning Feedback Loop

The agent learns which tools work through the NAc causal learning system:

  • Direct attribution — Records tool outcomes to NAc: (context, tool) → success/pain. (ToolPainBridge)
  • Causal links in prompt — LLM sees learned tool-outcome associations in context_pool. (prompt_builder.py)
  • Recent outcomes — Last N tool results shown in prompt for immediate context. (agent_loop.py)
  • Relevance filter — Learned index trims tool manifest to relevant subset (passive mode only). (LearnedToolIndex)

Tool learning is indirect — NAc learns outcome associations that appear in the prompt as context, but the LLM still sees the full tool manifest. Learning influences when to call a tool, not whether it appears.
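A very loose sketch of the direct-attribution idea: tally (context, tool) outcomes so learned associations can later be surfaced as prompt context. The OutcomeTable class and its summary format are entirely invented; the real NAc / ToolPainBridge machinery is far richer than a counter.

```python
from collections import defaultdict

# Hypothetical sketch only — not the actual NAc causal learning system.
class OutcomeTable:
    def __init__(self):
        self.counts = defaultdict(lambda: {"success": 0, "pain": 0})

    def record(self, context, tool, success):
        # Direct attribution: (context, tool) -> success/pain tally
        self.counts[(context, tool)]["success" if success else "pain"] += 1

    def summary(self, context, tool):
        # A line like this could be injected into context_pool as a causal link
        c = self.counts[(context, tool)]
        total = c["success"] + c["pain"]
        if total == 0:
            return None
        return f"{tool} in '{context}': {c['success']}/{total} successful"
```

The key property the sketch preserves: the output is prose injected into the prompt, not a filter on the tool manifest, so learning shapes when the LLM calls a tool rather than whether it is visible.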

Simulation vs Live: Tool Differences

  • Filesystem (read, write, bash) — AUT: yes (sandboxed); orchestrator: no
  • Sim tools (send_message, observe_actions) — AUT: no; orchestrator: yes
  • Introspection (memory_recall, causal_links) — AUT: yes; orchestrator: no
  • Interactive (request_interaction, set_scene) — AUT: yes; orchestrator: no
  • Robot (move, track_target) — AUT: deregistered; orchestrator: not added
  • Response (respond, speak) — AUT: yes; orchestrator: sim_respond only

Adding a New Tool

  1. Create a Tool subclass in src/maxim/tools/ with name, description, input_schema, and execute()
  2. Register in build_tool_registry() (runtime/bootstrap.py)
  3. Add to TOOL_DESCRIPTIONS (modes/definitions.py) with description, params, example, and followup_type
  4. Add to the AUT's allowed_tools whitelist (simulation/orchestrator.py) if it should be usable in simulation
  5. If it uses JSON Schema input_schema, verify _validate_input() handles it (tools/base.py detects both formats)
  6. (0.7+) Scene-scoped tools: For tools tied to an entity/scene, use registry.register_scene_tools(tools, scene_id="...") instead of register(). Scene tools are deactivated on scene transition (not deleted) and can be re-activated. The executor gates on active status — deactivated tools return a descriptive error.
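Step 1 might look like the sketch below. The Tool stand-in class, EchoTool, and its schema are all hypothetical; the real base class lives in tools/base.py and its interface may differ.

```python
# Hypothetical sketch of a Tool subclass (step 1 above).
class Tool:  # stand-in for the real base class in tools/base.py
    name = ""
    description = ""
    input_schema = {}

    def execute(self, **params):
        raise NotImplementedError

class EchoTool(Tool):
    name = "echo"
    description = "Repeat the given text back to the caller."
    input_schema = {  # JSON Schema form — verify _validate_input handles it (step 5)
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    }

    def execute(self, text: str):
        # The real execute() returns a ToolOutput; a dict stands in here.
        return {"ok": True, "output": text}
```

After this, steps 2–4 still apply: the class does nothing until it is registered in build_tool_registry(), described in TOOL_DESCRIPTIONS, and whitelisted in allowed_tools.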

Common pitfall: A tool can be registered and allowed but the LLM never calls it because:

  • No entry in TOOL_DESCRIPTIONS (fallback description too terse)
  • System prompt context discourages it (e.g., "autonomous" mode + "ask user" tool)
  • JSON Schema input_schema causes silent validation failure in _validate_input()
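The third pitfall comes from _validate_input() needing to detect both schema formats. A sketch of that dual-format detection, with the failure surfaced loudly instead of silently; the function body and the simple {param: description} fallback format are assumptions, not the tools/base.py implementation.

```python
# Hypothetical sketch of dual-format input validation.
def validate_input(schema, params):
    if schema.get("type") == "object" and "properties" in schema:
        # JSON Schema form: honor the "required" list
        missing = [k for k in schema.get("required", []) if k not in params]
    else:
        # Simple {param: description} form: treat every key as required
        missing = [k for k in schema if k not in params]
    if missing:
        # Raising here avoids the silent-failure pitfall described above
        raise ValueError(f"missing params: {missing}")
    return True
```

If validation fails silently instead (e.g. returns False and the action is dropped), the LLM never sees an error and simply appears to "ignore" the tool.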

Files Reference

  • agents/prompt_builder.py — Section assembly + budget management
  • agents/llm_worker.py — Delegates to prompt builder, submits to LLM
  • modes/definitions.py — Mode definitions + TOOL_DESCRIPTIONS dict
  • runtime/bootstrap.py — build_tool_registry(): canonical tool registration
  • runtime/autonomy.py — Execution-level tool gating (not prompt-level)
  • simulation/orchestrator.py — AUT vs orchestrator tool registry setup
  • tools/base.py — Tool base class + _validate_input (JSON Schema support)
  • api.py — register_tool() / @maxim.tool dynamic registration