Memory API Reference
API reference for the memory system that manages agent conversation history.
ConversationMemory
Stores conversation history as a list of Message objects.
Import
```python
from marsys.agents.memory import ConversationMemory
```
Constructor
```python
ConversationMemory(description: Optional[str] = None)
```
| Parameter | Type | Description | Default |
|---|---|---|---|
| description | str | Initial system message content | None |
Methods
| Method | Signature | Description |
|---|---|---|
| add | add(message=None, *, role=None, content=None, name=None, tool_calls=None, agent_calls=None, images=None, tool_call_id=None) -> str | Add message, returns message_id |
| update | update(message_id, *, role=None, content=None, ...) -> None | Update existing message by ID |
| retrieve_all | retrieve_all() -> List[Dict[str, Any]] | Get all messages as dicts |
| retrieve_recent | retrieve_recent(n: int = 1) -> List[Dict[str, Any]] | Get last n messages as dicts |
| get_messages | get_messages() -> List[Dict[str, Any]] | Get messages for LLM (same as retrieve_all) |
| retrieve_by_id | retrieve_by_id(message_id: str) -> Optional[Dict] | Get message by ID |
| retrieve_by_role | retrieve_by_role(role: str, n: Optional[int] = None) -> List[Dict] | Filter by role |
| remove_by_id | remove_by_id(message_id: str) -> bool | Delete message by ID |
| replace_memory | replace_memory(idx: int, ...) -> None | Replace message at index |
| delete_memory | delete_memory(idx: int) -> None | Delete message at index |
| reset_memory | reset_memory() -> None | Clear all (keeps system prompt) |
Example
```python
memory = ConversationMemory(description="You are helpful")

# Add messages
msg_id = memory.add(role="user", content="Hello")
memory.add(role="assistant", content="Hi there!")

# Retrieve messages
all_msgs = memory.retrieve_all()  # List[Dict]
recent = memory.retrieve_recent(5)

# Clear
memory.reset_memory()  # Keeps system prompt
```
ManagedConversationMemory
Conversation memory with automatic token management.
Import
```python
from marsys.agents.memory import ManagedConversationMemory, ManagedMemoryConfig
```
Constructor
```python
ManagedConversationMemory(
    config: Optional[ManagedMemoryConfig] = None,
    trigger_strategy: Optional[TriggerStrategy] = None,
    process_strategy: Optional[ProcessStrategy] = None,
    retrieval_strategy: Optional[RetrievalStrategy] = None,
    description: Optional[str] = None,
)
```
Additional Methods
| Method | Description |
|---|---|
| get_raw_messages() | Get full raw message history |
| get_cache_stats() | Get cache statistics |
| compact_for_payload_error(runtime=None) -> bool | Run payload-too-large recovery compaction. Returns True only when serialized payload bytes are reduced |
Payload Error Recovery
When an LLM/API call fails and is classified as payload-too-large, Agent.run_step() can call compact_for_payload_error to compact memory and retry.
- Splits messages into prefix and protected tail using assistant-round boundary semantics
- Runs summarization on the prefix using the same summarization config as normal compaction
- Re-appends the protected tail unchanged
- Preserves conversation structure (including leading system message)
- Retains only selected images with a provider-aware byte budget and optional max_retained_images cap
- Emits CompactionEvent lifecycle events (started, completed, failed)
ManagedMemoryConfig
```python
@dataclass
class ManagedMemoryConfig:
    threshold_tokens: int = 150_000
    image_token_estimate: int = 800
    trigger_events: List[str] = ["add", "get_messages"]
    cache_invalidation_events: List[str] = ["add", "remove_by_id", "delete_memory"]
    token_counter: Optional[Callable] = None
    active_context: ActiveContextPolicyConfig = ...

    @property
    def compaction_target_tokens(self) -> int:
        """Derived: threshold_tokens * (1 - min_reduction_ratio)."""
```
ActiveContextPolicyConfig
```python
@dataclass
class ActiveContextPolicyConfig:
    enabled: bool = True
    mode: str = "compaction"  # "compaction" | "sliding_window"
    processor_order: List[str] = ["tool_truncation", "summarization", "backward_packing"]
    excluded_processors: List[str] = []
    min_reduction_ratio: float = 0.4
    tool_truncation: ToolTruncationConfig = ...
    summarization: SummarizationConfig = ...
    backward_packing: BackwardPackingConfig = ...
```
SummarizationConfig
```python
@dataclass
class SummarizationConfig:
    enabled: bool = True
    grace_recent_messages: int = 1
    output_max_tokens: int = 6000
    include_original_instruction: bool = True
    include_image_payload_bytes: bool = False
    max_retained_images: Optional[int] = None  # Payload recovery defaults to 5 when unset
```
Summarization Notes
grace_recent_messages uses assistant-round semantics: messages from the N-th most recent assistant message onward are protected from summarization. max_retained_images caps the number of images retained in the summarization output.
KGMemory
Knowledge graph memory storing facts as (Subject, Predicate, Object) triplets.
Import
```python
from marsys.agents.memory import KGMemory
```
Constructor
```python
KGMemory(
    model: Union[BaseLocalModel, BaseAPIModel, LocalProviderAdapter],
    description: Optional[str] = None,
)
```
Additional Methods
| Method | Description |
|---|---|
| add_fact(role, subject, predicate, obj) | Add fact directly |
| extract_and_update_from_text(input_text, role) | Extract facts from text using model |
MemoryManager
Factory that creates appropriate memory type.
Import
```python
from marsys.agents.memory import MemoryManager
```
Constructor
```python
MemoryManager(
    memory_type: str = "conversation_history",
    description: Optional[str] = None,
    model: Optional[Union[BaseLocalModel, LocalProviderAdapter]] = None,
    memory_config: Optional[ManagedMemoryConfig] = None,
    token_counter: Optional[Callable] = None,
)
```
Parameters
| Parameter | Type | Description | Default |
|---|---|---|---|
| memory_type | str | "conversation_history", "managed_conversation", or "kg" | "conversation_history" |
| description | str | Initial system prompt | None |
| model | BaseLocalModel / LocalProviderAdapter | Required for "kg" type | None |
| memory_config | ManagedMemoryConfig | Config for managed memory | None |
| token_counter | Callable | Custom token counter | None |
Additional Methods
| Method | Description |
|---|---|
| set_event_context(agent_name, event_bus, session_id=None) | Enable memory event emission (MemoryResetEvent, CompactionEvent) |
| compact_if_needed(runtime=None) | Trigger token-threshold compaction when supported |
| compact_for_payload_error(runtime=None) -> bool | Trigger payload-too-large recovery compaction when supported |
| save_to_file(filepath, additional_state=None) | Save memory state to JSON |
| load_from_file(filepath) -> Optional[Dict] | Load memory state and return additional_state if present |
Example
```python
# Standard memory
manager = MemoryManager(
    memory_type="conversation_history",
    description="System prompt",
)

# With token management
manager = MemoryManager(
    memory_type="managed_conversation",
    memory_config=ManagedMemoryConfig(threshold_tokens=100000),
)

# Knowledge graph
manager = MemoryManager(memory_type="kg", model=your_model)

# Use it
manager.add(role="user", content="Hello")
msgs = manager.get_messages()
manager.save_to_file("memory.json")
```
MemoryResetEvent
Emitted when memory is cleared (e.g., reset_memory()), enabling planning state to auto-clear.
```python
from marsys.agents.memory import MemoryResetEvent
from marsys.coordination.event_bus import EventBus

bus = EventBus()
manager.set_event_context(agent_name="Researcher", event_bus=bus, session_id="run_123")
```
CompactionEvent
Emitted by managed memory compaction to report lifecycle progress to status channels.
```python
from marsys.coordination.status.events import CompactionEvent

CompactionEvent(
    session_id: str,
    agent_name: str,
    status: str,  # "started" | "completed" | "failed"
    pre_tokens: int = 0,
    post_tokens: int = 0,
    pre_messages: int = 0,
    post_messages: int = 0,
    duration: Optional[float] = None,
    stages_run: Optional[List[str]] = None,
)
```
Message
Single message in conversation.
Import
```python
from marsys.agents.memory import Message
```
Constructor
```python
Message(
    role: str,
    content: Optional[Union[str, Dict[str, Any], List[Dict[str, Any]]]] = None,
    message_id: str = auto_generated,
    name: Optional[str] = None,
    tool_calls: Optional[List[ToolCallMsg]] = None,
    agent_calls: Optional[List[AgentCallMsg]] = None,
    structured_data: Optional[Dict[str, Any]] = None,
    images: Optional[List[str]] = None,
    tool_call_id: Optional[str] = None,
    reasoning_details: Optional[List[Dict[str, Any]]] = None,
)
```
Reasoning Details
The reasoning_details field preserves model thinking/reasoning traces (e.g., Gemini 3 thought signatures). This is critical for multi-turn tool calling with models that use extended thinking.
Role Values
"system"- System instructions"user"- User input"assistant"- Agent response"tool"- Tool response
Methods
| Method | Description |
|---|---|
| to_llm_dict() | Convert to LLM API format dict |
| to_api_format() | Convert to OpenAI API format |
| from_response_dict(response_dict, ...) | Create from model response (classmethod) |
| from_harmonized_response(response, ...) | Create from HarmonizedResponse (classmethod) |
Example
```python
# Simple message
msg = Message(role="user", content="Hello")

# With tool calls
msg = Message(
    role="assistant",
    content=None,
    tool_calls=[
        ToolCallMsg(
            id="call_123",
            call_id="call_123",
            type="function",
            name="search",
            arguments='{"query": "AI"}',
        )
    ],
)

# Tool response
msg = Message(
    role="tool",
    content='{"result": "found"}',
    tool_call_id="call_123",
    name="search",
)

# With images
msg = Message(role="user", content="Describe this", images=["path/to/image.jpg"])
```
MessageContent
Structured content for agent action responses.
Import
```python
from marsys.agents.memory import MessageContent
```
Constructor
```python
MessageContent(
    thought: Optional[str] = None,
    next_action: Optional[str] = None,
    action_input: Optional[Dict[str, Any]] = None,
)
```
Valid next_action values:
"call_tool""invoke_agent""final_response"
ToolCallMsg
Tool call request in a message.
```python
from marsys.agents.memory import ToolCallMsg

ToolCallMsg(
    id: str,
    call_id: str,
    type: str,
    name: str,
    arguments: str,
)
```
AgentCallMsg
Agent invocation request.
```python
from marsys.agents.memory import AgentCallMsg

agent_call = AgentCallMsg(
    agent_name="DataProcessor",
    request="Process the sales data",
)
```
Related Documentation
- Agent API - Agent memory integration
- Memory Concepts - Memory usage patterns