🐾 claw-stack
Architecture Module · OpenClaw Ecosystem

Memory System

Intelligent persistent memory for AI agents: every session, every agent, forever.

Never explain the same thing twice. Your agents remember everything (project history, client preferences, past decisions), even after a restart.

💡 No more Goldfish Memory: agents that actually remember client preferences, project history, and past decisions.

Overview

Memory System fuses three proven memory paradigms into one unified layer for OpenClaw agents. Facts are automatically extracted from raw memory files, deduplicated, classified, and indexed β€” then surfaced via intelligent memory search and a compact INDEX.md injected at session start.

3 Memory Paradigms · 4 TTL Categories · ∞ Cross-Session Memory · >60% Dedup Threshold

Capabilities

Key Features

Six purpose-built capabilities that make your agents smarter over time.

Three-Paradigm Fusion

Combines Mem0 (fact extraction via LLM), Zep (temporal decay with TTL), and MemGPT (agent self-management) into one unified system.

Temporal Decay (TTL)

Four TTL categories: personal (∞), system (∞), agents (30 days), and tasks (7 days). Expired memories are summarized and archived, never deleted.
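The TTL policy above maps naturally to a small lookup table. The category names come from this document; the helper itself, its name, and the use of UTC timestamps are illustrative assumptions, not the actual implementation:

```python
from datetime import datetime, timedelta, timezone

# TTL per category as listed above; None means the fact never expires.
TTL_BY_CATEGORY = {
    "personal": None,            # permanent
    "system": None,              # permanent
    "agents": timedelta(days=30),
    "tasks": timedelta(days=7),
}

def expiry_for(category, now=None):
    """Return the expiry timestamp for a new fact, or None if permanent."""
    now = now or datetime.now(timezone.utc)
    ttl = TTL_BY_CATEGORY[category]
    return None if ttl is None else now + ttl
```

A fact's expiry is computed once at write time, so the decay engine only needs to compare stored timestamps against the clock.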

Instant Recall

Intelligent memory search with semantic re-ranking and automatic query expansion for concept-aware retrieval across all memory.
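Semantic re-ranking can be approximated with a toy bag-of-words cosine score; the real system presumably uses embeddings, so treat this purely as a sketch of the re-rank step over full-text hits:

```python
import math

def _vec(text):
    """Bag-of-words term counts for a short text."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def _cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(n * b.get(w, 0) for w, n in a.items())
    na = math.sqrt(sum(n * n for n in a.values()))
    nb = math.sqrt(sum(n * n for n in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rerank(query, hits):
    """Order full-text hits by similarity to the query, highest first."""
    q = _vec(query)
    return sorted(hits, key=lambda h: _cosine(q, _vec(h)), reverse=True)
```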

Auto Fact Extraction

Claude Haiku scans raw .md files, extracts structured facts, deduplicates by word overlap (>60% threshold), and classifies before storage.

Lessons Module

Automatically scans session transcripts, detects failure→retry→success patterns, and extracts structured lessons with two-layer deduplication.
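The failure → retry → success detection can be sketched as a single pass over an ordered transcript. The `(status, note)` event shape is a hypothetical simplification of real session transcripts:

```python
def find_lesson_episodes(events):
    """Scan ordered (status, note) transcript events and return
    (failure, success) pairs where a failed step was retried until it worked."""
    episodes = []
    pending_failure = None
    for status, note in events:
        if status == "error":
            pending_failure = note      # remember the most recent failure
        elif status == "success" and pending_failure is not None:
            episodes.append((pending_failure, note))
            pending_failure = None      # episode closed; start fresh
    return episodes
```

Each extracted pair is then a candidate lesson ("doing X failed; Y worked"), which the two-layer deduplication described above would filter before storage.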

Cross-Agent Sync

Shared memory layer accessible to all agents (main, researcher, coding, content, trader), with a compact INDEX.md (<2KB) injected at session start.

Architecture

How It Works

A five-stage pipeline transforms raw agent notes into a searchable, time-aware memory graph.

01

Raw Memory Files

Agent dumps .md files into the memory workspace. The organizer scans for changed files using MD5 hash checks.
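Change detection via MD5 can be sketched as a hash-per-file comparison against the previous scan. The `seen` state dict and function name are assumptions for illustration:

```python
import hashlib
from pathlib import Path

def changed_files(workspace, seen):
    """Return .md files whose MD5 differs from the last recorded digest.
    `seen` maps file path -> previous MD5 hex digest (persisted between scans)."""
    changed = []
    for path in sorted(Path(workspace).glob("*.md")):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        if seen.get(str(path)) != digest:
            changed.append(path)
            seen[str(path)] = digest   # record so an unchanged file is skipped next scan
    return changed
```

Only files returned here go on to the (comparatively expensive) LLM extraction stage.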

02

LLM Fact Extraction

Claude Haiku reads each changed file and extracts discrete, structured facts. Deduplication removes any facts with >60% word overlap.
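The >60% word-overlap deduplication can be sketched with a Jaccard-style comparison; the real tokenization and overlap measure may differ, so this is only an illustration of the threshold rule:

```python
def word_overlap(a, b):
    """Jaccard word overlap between two facts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def dedup(facts, threshold=0.6):
    """Keep a fact only if its overlap with every kept fact is <= threshold."""
    kept = []
    for fact in facts:
        if all(word_overlap(fact, k) <= threshold for k in kept):
            kept.append(fact)
    return kept
```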

03

Classify & Assign TTL

Each fact is categorized (personal / tasks / agents / system) and assigned a TTL. Facts are written to SQLite with FTS5 full-text index.
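The SQLite + FTS5 storage step might look like the following. Column and table names are assumptions, and FTS5 availability depends on how the local SQLite was compiled (it is enabled in most Python builds):

```python
import sqlite3

# Minimal sketch of the fact store as an FTS5 virtual table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE facts USING fts5(body, category, expires_at)"
)
conn.execute(
    "INSERT INTO facts VALUES (?, ?, ?)",
    ("User prefers dark mode in all editors", "personal", ""),
)
# FTS5 provides full-text MATCH queries across all columns out of the box.
rows = conn.execute(
    "SELECT body FROM facts WHERE facts MATCH 'dark'"
).fetchall()
```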

04

Search & Recall

Indexer generates INDEX.md (<2KB) and exports structured/*.md files. Intelligent memory search auto-indexes every 10 minutes.

05

Agent Context Injection

Agents read INDEX.md at session start. QMD auto-injects semantically relevant memories. Agents can call the search API for deep queries.

Raw .md files → LLM fact extraction → Dedup + classify → SQLite + FTS5 → INDEX.md + Search → Agent context

Coverage

What Gets Remembered

Six memory domains, each with its own TTL policy and retrieval strategy.

Personal Preferences

User identity, habits, preferences, family, and pets. Stored permanently: the things your agent should always know about you.

TTL: Permanent

Task History

Recent work sessions, debugging results, solutions found. Auto-expires after 7 days to keep the context fresh and relevant.

TTL: 7 days

Agent Learnings

How agents are configured, managed, and their current state. Refreshes on a 30-day cycle as your agent setup evolves.

TTL: 30 days

System Config

Tools, environment settings, API keys, file paths. Stored permanently, because core infrastructure rarely changes.

TTL: Permanent

Lessons Learned

Structured lessons extracted from error→retry→success episodes. Prevents the same mistakes from recurring across sessions and agents.

TTL: Permanent

Cross-Session Context

Facts that matter beyond a single session. Surfaced via QMD vector search and injected automatically into agent context as needed.

TTL: Dynamic

Decay Engine

Nothing Is Ever Deleted

When a memory's TTL expires, the decay engine kicks in. Claude summarizes the expired facts into consolidated lessons, then archives them permanently. Archived memories remain fully searchable via full-text and semantic search; they just no longer appear in the active INDEX.md, keeping agent context lean.

Expired → LLM summarize → Archive (still searchable). No data is ever lost.
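The decay flow above can be sketched as a single partition-and-archive pass. Here `summarize` stands in for the LLM summarization call, and the fact shape (`body`, `expires_at`) is a hypothetical simplification:

```python
def decay(facts, now, summarize):
    """Split facts into active vs. expired, summarize the expired ones
    into a consolidated lesson, and archive them instead of deleting.
    Each fact is a dict with 'body' and 'expires_at' (None = permanent)."""
    active, expired = [], []
    for fact in facts:
        if fact["expires_at"] is not None and fact["expires_at"] <= now:
            expired.append(fact)
        else:
            active.append(fact)
    archive = {
        "lesson": summarize([f["body"] for f in expired]) if expired else None,
        "facts": expired,          # still searchable, just out of INDEX.md
    }
    return active, archive
```

Because the archive keeps both the raw facts and the consolidated lesson, full-text and semantic search continue to work over expired material.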
∞ Archive Retention · FTS5 Full-Text Search · 7d Task Memory TTL · 30d Agent Memory TTL