Memory Providers

Hermes Agent ships with 8 external memory provider plugins that give the agent persistent, cross-session knowledge beyond the built-in MEMORY.md and USER.md. Only one external provider can be active at a time — the built-in memory is always active alongside it.

Quick Start

hermes memory setup     # interactive picker + configuration
hermes memory status    # check what's active
hermes memory off       # disable external provider

Or set manually in ~/.hermes/config.yaml:

memory:
  provider: openviking  # or honcho, mem0, hindsight, holographic, retaindb, byterover, supermemory

How It Works

When a memory provider is active, Hermes automatically:

  1. Injects provider context into the system prompt (what the provider knows)
  2. Prefetches relevant memories before each turn (background, non-blocking)
  3. Syncs conversation turns to the provider after each response
  4. Extracts memories on session end (for providers that support it)
  5. Mirrors built-in memory writes to the external provider
  6. Adds provider-specific tools so the agent can search, store, and manage memories

The built-in memory (MEMORY.md / USER.md) continues to work exactly as before. The external provider is additive.
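
The six steps above can be sketched as a provider hook interface. The sketch below is illustrative Python only; the class and method names are hypothetical, not Hermes's actual plugin API.

```python
# Hypothetical provider lifecycle, mirroring steps 1-6 above.
from abc import ABC, abstractmethod


class MemoryProvider(ABC):
    """One external provider is active at a time, alongside built-in memory."""

    @abstractmethod
    def system_context(self) -> str:
        """Step 1: text injected into the system prompt."""

    @abstractmethod
    def prefetch(self, query: str) -> list:
        """Step 2: background, non-blocking fetch of relevant memories."""

    @abstractmethod
    def sync_turn(self, user_msg: str, assistant_msg: str) -> None:
        """Step 3: push the finished turn to the provider."""

    def extract_on_session_end(self, transcript: list) -> None:
        """Step 4: optional; only providers that support extraction override this."""

    @abstractmethod
    def mirror_write(self, path: str, content: str) -> None:
        """Step 5: mirror a built-in MEMORY.md / USER.md write."""

    @abstractmethod
    def tools(self) -> list:
        """Step 6: names of provider-specific tools exposed to the agent."""
```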

Available Providers

Honcho

AI-native cross-session user modeling with dialectic Q&A, semantic search, and persistent conclusions.

Best for: Multi-agent systems with cross-session context, user-agent alignment
Requires: pip install honcho-ai + API key or self-hosted instance
Data storage: Honcho Cloud or self-hosted
Cost: Honcho pricing (cloud) / free (self-hosted)

Tools: honcho_profile (peer card), honcho_search (semantic search), honcho_context (LLM-synthesized), honcho_conclude (store facts)

Setup Wizard:

hermes honcho setup    # (legacy command)
# or
hermes memory setup    # select "honcho"

Config: $HERMES_HOME/honcho.json (profile-local) or ~/.honcho/config.json (global). Resolution order: $HERMES_HOME/honcho.json > ~/.hermes/honcho.json > ~/.honcho/config.json. See the config reference and the Honcho integration guide.
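
The resolution order reduces to a first-match search over the three candidate paths. A minimal sketch, assuming $HERMES_HOME is exposed as an environment variable; the real loader may differ:

```python
# First existing file wins: $HERMES_HOME/honcho.json, then
# ~/.hermes/honcho.json, then ~/.honcho/config.json.
import os
from pathlib import Path
from typing import Optional


def resolve_honcho_config() -> Optional[Path]:
    candidates = []
    hermes_home = os.environ.get("HERMES_HOME")
    if hermes_home:
        candidates.append(Path(hermes_home) / "honcho.json")    # profile-local
    candidates.append(Path.home() / ".hermes" / "honcho.json")
    candidates.append(Path.home() / ".honcho" / "config.json")  # global
    for path in candidates:
        if path.is_file():
            return path
    return None
```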

Key config options

Key | Default | Description
apiKey | -- | API key from app.honcho.dev
baseUrl | -- | Base URL for self-hosted Honcho
peerName | -- | User peer identity
aiPeer | host key | AI peer identity (one per profile)
workspace | host key | Shared workspace ID
recallMode | hybrid | hybrid (auto-inject + tools), context (inject only), tools (tools only)
observation | all on | Per-peer observeMe/observeOthers booleans
writeFrequency | async | async, turn, session, or integer N
sessionStrategy | per-directory | per-directory, per-repo, per-session, global
dialecticReasoningLevel | low | minimal, low, medium, high, max
dialecticDynamic | true | Auto-bump reasoning level with query length
messageMaxChars | 25000 | Max chars per message (chunked if exceeded)
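
Messages longer than messageMaxChars are chunked before sending. A naive fixed-size sketch of such chunking (Hermes's real splitter may respect word or message boundaries):

```python
def chunk_message(text: str, max_chars: int = 25000) -> list:
    """Split text into pieces no longer than max_chars (fixed-size sketch)."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
```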
Minimal honcho.json (cloud)
{
  "apiKey": "your-key-from-app.honcho.dev",
  "hosts": {
    "hermes": {
      "enabled": true,
      "aiPeer": "hermes",
      "peerName": "your-name",
      "workspace": "hermes"
    }
  }
}
Minimal honcho.json (self-hosted)
{
  "baseUrl": "http://localhost:8000",
  "hosts": {
    "hermes": {
      "enabled": true,
      "aiPeer": "hermes",
      "peerName": "your-name",
      "workspace": "hermes"
    }
  }
}
Migrating from hermes honcho

If you previously used hermes honcho setup, your config and all server-side data remain intact. Re-run the setup wizard or manually set memory.provider: honcho to reactivate the integration under the new system.

Multi-agent / Profiles:

Each Hermes profile gets its own Honcho AI peer while sharing the same workspace -- all profiles see the same user representation, but each agent builds its own identity and observations.

hermes profile create coder --clone   # creates honcho peer "coder", inherits config from default

What --clone does: creates a hermes.coder host block in honcho.json with aiPeer: "coder", shared workspace, inherited peerName, recallMode, writeFrequency, observation, etc. The peer is eagerly created in Honcho so it exists before first message.

For profiles created before Honcho was set up:

hermes honcho sync   # scans all profiles, creates host blocks for any missing ones

This inherits settings from the default hermes host block and creates new AI peers for each profile. Idempotent -- skips profiles that already have a host block.

Full honcho.json example (multi-profile)
{
  "apiKey": "your-key",
  "workspace": "hermes",
  "peerName": "eri",
  "hosts": {
    "hermes": {
      "enabled": true,
      "aiPeer": "hermes",
      "workspace": "hermes",
      "peerName": "eri",
      "recallMode": "hybrid",
      "writeFrequency": "async",
      "sessionStrategy": "per-directory",
      "observation": {
        "user": { "observeMe": true, "observeOthers": true },
        "ai": { "observeMe": true, "observeOthers": true }
      },
      "dialecticReasoningLevel": "low",
      "dialecticDynamic": true,
      "dialecticMaxChars": 600,
      "messageMaxChars": 25000,
      "saveMessages": true
    },
    "hermes.coder": {
      "enabled": true,
      "aiPeer": "coder",
      "workspace": "hermes",
      "peerName": "eri",
      "recallMode": "tools",
      "observation": {
        "user": { "observeMe": true, "observeOthers": false },
        "ai": { "observeMe": true, "observeOthers": true }
      }
    },
    "hermes.writer": {
      "enabled": true,
      "aiPeer": "writer",
      "workspace": "hermes",
      "peerName": "eri"
    }
  },
  "sessions": {
    "/home/user/myproject": "myproject-main"
  }
}

See the config reference and Honcho integration guide.


OpenViking

Context database by Volcengine (ByteDance) with filesystem-style knowledge hierarchy, tiered retrieval, and automatic memory extraction into 6 categories.

Best for: Self-hosted knowledge management with structured browsing
Requires: pip install openviking + running server
Data storage: Self-hosted (local or cloud)
Cost: Free (open-source, AGPL-3.0)

Tools: viking_search (semantic search), viking_read (tiered: abstract/overview/full), viking_browse (filesystem navigation), viking_remember (store facts), viking_add_resource (ingest URLs/docs)

Setup:

# Start the OpenViking server first
pip install openviking
openviking-server

# Then configure Hermes
hermes memory setup    # select "openviking"
# Or manually:
hermes config set memory.provider openviking
echo "OPENVIKING_ENDPOINT=http://localhost:1933" >> ~/.hermes/.env

Key features:

  • Tiered context loading: L0 (~100 tokens) → L1 (~2k) → L2 (full)
  • Automatic memory extraction on session commit (profile, preferences, entities, events, cases, patterns)
  • viking:// URI scheme for hierarchical knowledge browsing
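
The escalation from L0 to L2 can be pictured as a budget-driven tier choice. In the sketch below only the ~100-token and ~2k-token tier sizes come from the text; the cutoff values are assumptions for illustration:

```python
def pick_tier(token_budget: int) -> str:
    """Choose how much of a document to load for a given token budget."""
    if token_budget < 2_000:
        return "L0"  # abstract, roughly 100 tokens
    if token_budget < 10_000:  # assumed cutoff
        return "L1"  # overview, roughly 2k tokens
    return "L2"      # full content
```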

Mem0

Server-side LLM fact extraction with semantic search, reranking, and automatic deduplication.

Best for: Hands-off memory management — Mem0 handles extraction automatically
Requires: pip install mem0ai + API key
Data storage: Mem0 Cloud
Cost: Mem0 pricing

Tools: mem0_profile (all stored memories), mem0_search (semantic search + reranking), mem0_conclude (store verbatim facts)

Setup:

hermes memory setup    # select "mem0"
# Or manually:
hermes config set memory.provider mem0
echo "MEM0_API_KEY=your-key" >> ~/.hermes/.env

Config: $HERMES_HOME/mem0.json

Key | Default | Description
user_id | hermes-user | User identifier
agent_id | hermes | Agent identifier

Hindsight

Long-term memory with knowledge graph, entity resolution, and multi-strategy retrieval. The hindsight_reflect tool provides cross-memory synthesis that no other provider offers.

Best for: Knowledge graph-based recall with entity relationships
Requires: Cloud: pip install hindsight-client + API key. Local: pip install hindsight + LLM key
Data storage: Hindsight Cloud or local embedded PostgreSQL
Cost: Hindsight pricing (cloud) or free (local)

Tools: hindsight_retain (store with entity extraction), hindsight_recall (multi-strategy search), hindsight_reflect (cross-memory synthesis)

Setup:

hermes memory setup    # select "hindsight"
# Or manually:
hermes config set memory.provider hindsight
echo "HINDSIGHT_API_KEY=your-key" >> ~/.hermes/.env

Config: $HERMES_HOME/hindsight/config.json

Key | Default | Description
mode | cloud | cloud or local
bank_id | hermes | Memory bank identifier
budget | mid | Recall thoroughness: low / mid / high

Holographic

Local SQLite fact store with FTS5 full-text search, trust scoring, and HRR (Holographic Reduced Representations) for compositional algebraic queries.

Best for: Local-only memory with advanced retrieval, no external dependencies
Requires: Nothing (SQLite is always available); NumPy optional for HRR algebra
Data storage: Local SQLite
Cost: Free

Tools: fact_store (9 actions: add, search, probe, related, reason, contradict, update, remove, list), fact_feedback (helpful/unhelpful rating that trains trust scores)

Setup:

hermes memory setup    # select "holographic"
# Or manually:
hermes config set memory.provider holographic

Config: config.yaml under plugins.hermes-memory-store

Key | Default | Description
db_path | $HERMES_HOME/memory_store.db | SQLite database path
auto_extract | false | Auto-extract facts at session end
default_trust | 0.5 | Default trust score (0.0–1.0)

Unique capabilities:

  • probe — entity-specific algebraic recall (all facts about a person/thing)
  • reason — compositional AND queries across multiple entities
  • contradict — automated detection of conflicting facts
  • Trust scoring with asymmetric feedback (+0.05 helpful / -0.10 unhelpful)
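
The asymmetric feedback rule can be written directly from the deltas above; clamping to [0, 1] is an assumption to match the documented score range:

```python
def update_trust(score: float, helpful: bool) -> float:
    """Apply fact_feedback: +0.05 for helpful, -0.10 for unhelpful."""
    delta = 0.05 if helpful else -0.10
    return min(1.0, max(0.0, score + delta))
```

Because unhelpful feedback moves a score twice as fast as helpful feedback, a fact must earn back trust slowly after a bad rating.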

RetainDB

Cloud memory API with hybrid search (Vector + BM25 + Reranking), 7 memory types, and delta compression.

Best for: Teams already using RetainDB's infrastructure
Requires: RetainDB account + API key
Data storage: RetainDB Cloud
Cost: $20/month

Tools: retaindb_profile (user profile), retaindb_search (semantic search), retaindb_context (task-relevant context), retaindb_remember (store with type + importance), retaindb_forget (delete memories)

Setup:

hermes memory setup    # select "retaindb"
# Or manually:
hermes config set memory.provider retaindb
echo "RETAINDB_API_KEY=your-key" >> ~/.hermes/.env

ByteRover

Persistent memory via the brv CLI — hierarchical knowledge tree with tiered retrieval (fuzzy text → LLM-driven search). Local-first with optional cloud sync.

Best for: Developers who want portable, local-first memory with a CLI
Requires: ByteRover CLI (npm install -g byterover-cli or install script)
Data storage: Local (default) or ByteRover Cloud (optional sync)
Cost: Free (local) or ByteRover pricing (cloud)

Tools: brv_query (search knowledge tree), brv_curate (store facts/decisions/patterns), brv_status (CLI version + tree stats)

Setup:

# Install the CLI first
curl -fsSL https://byterover.dev/install.sh | sh

# Then configure Hermes
hermes memory setup    # select "byterover"
# Or manually:
hermes config set memory.provider byterover

Key features:

  • Automatic pre-compression extraction (saves insights before context compression discards them)
  • Knowledge tree stored at $HERMES_HOME/byterover/ (profile-scoped)
  • SOC2 Type II certified cloud sync (optional)

Supermemory

Semantic long-term memory with profile recall, semantic search, explicit memory tools, and session-end conversation ingest via the Supermemory graph API.

Best for: Semantic recall with user profiling and session-level graph building
Requires: pip install supermemory + API key
Data storage: Supermemory Cloud
Cost: Supermemory pricing

Tools: supermemory_store (save explicit memories), supermemory_search (semantic similarity search), supermemory_forget (forget by ID or best-match query), supermemory_profile (persistent profile + recent context)

Setup:

hermes memory setup    # select "supermemory"
# Or manually:
hermes config set memory.provider supermemory
echo 'SUPERMEMORY_API_KEY=***' >> ~/.hermes/.env

Config: $HERMES_HOME/supermemory.json

Key | Default | Description
container_tag | hermes | Container tag used for search and writes; supports {identity} template for profile-scoped tags
auto_recall | true | Inject relevant memory context before turns
auto_capture | true | Store cleaned user-assistant turns after each response
max_recall_results | 10 | Max recalled items to format into context
profile_frequency | 50 | Include profile facts on first turn and every N turns
capture_mode | all | Skip tiny or trivial turns by default
search_mode | hybrid | Search mode: hybrid, memories, or documents
api_timeout | 5.0 | Timeout (seconds) for SDK and ingest requests

Environment variables: SUPERMEMORY_API_KEY (required), SUPERMEMORY_CONTAINER_TAG (overrides config).
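
The profile_frequency cadence ("first turn and every N turns") reduces to a small predicate; 1-based turn numbering is an assumption in this sketch:

```python
def include_profile(turn: int, profile_frequency: int = 50) -> bool:
    """True on the first turn and on every profile_frequency-th turn after."""
    return turn == 1 or turn % profile_frequency == 0
```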

Key features:

  • Automatic context fencing — strips recalled memories from captured turns to prevent recursive memory pollution
  • Session-end conversation ingest for richer graph-level knowledge building
  • Profile facts injected on first turn and at configurable intervals
  • Trivial message filtering (skips "ok", "thanks", etc.)
  • Profile-scoped containers — use {identity} in container_tag (e.g. hermes-{identity}, which resolves to hermes-coder for the coder profile) to isolate memories per Hermes profile
  • Multi-container mode — enable enable_custom_container_tags with a custom_containers list to let the agent read/write across named containers. Automatic operations (sync, prefetch) stay on the primary container.
Multi-container example
{
  "container_tag": "hermes",
  "enable_custom_container_tags": true,
  "custom_containers": ["project-alpha", "shared-knowledge"],
  "custom_container_instructions": "Use project-alpha for coding context."
}

Support: Discord · support@supermemory.com


Provider Comparison

Provider | Storage | Cost | Tools | Dependencies | Unique Feature
Honcho | Cloud | Paid | 4 | honcho-ai | Dialectic user modeling
OpenViking | Self-hosted | Free | 5 | openviking + server | Filesystem hierarchy + tiered loading
Mem0 | Cloud | Paid | 3 | mem0ai | Server-side LLM extraction
Hindsight | Cloud/Local | Free/Paid | 3 | hindsight-client | Knowledge graph + reflect synthesis
Holographic | Local | Free | 2 | None | HRR algebra + trust scoring
RetainDB | Cloud | $20/mo | 5 | requests | Delta compression
ByteRover | Local/Cloud | Free/Paid | 3 | brv CLI | Pre-compression extraction
Supermemory | Cloud | Paid | 4 | supermemory | Context fencing + session graph ingest + multi-container

Profile Isolation

Each provider's data is isolated per profile:

  • Local storage providers (Holographic, ByteRover) use $HERMES_HOME/ paths which differ per profile
  • Config file providers (Honcho, Mem0, Hindsight, Supermemory) store config in $HERMES_HOME/ so each profile has its own credentials
  • Cloud providers (RetainDB) auto-derive profile-scoped project names
  • Env var providers (OpenViking) are configured via each profile's .env file

Building a Memory Provider

See the Developer Guide: Memory Provider Plugins for how to create your own.