Search & Hypergraph

Retrieve memories by meaning and discover relationships using semantic search and hypergraphs.

Overview

Semantic search retrieves memories based on meaning rather than exact keyword matches. When you search for "user's favorite programming language," you'll find memories about "prefers TypeScript" even without shared keywords.

How It Works

Embedding

When memories are stored with --embed true, their content is converted to vector embeddings that capture semantic meaning:

"User prefers TypeScript for new projects"
    ↓ embedding model
[0.234, -0.567, 0.891, ...]  (vector)

Query Processing

Your search query is embedded the same way, and memories are ranked by similarity.
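Conceptually, the ranking step compares the query vector against each memory's vector and sorts by similarity. The sketch below illustrates this with cosine similarity and made-up 3-dimensional vectors; the actual embedding model, dimensionality, and similarity metric are internal to ensue:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings (real vectors have hundreds of dimensions)
memories = {
    "prefs/language": [0.9, 0.1, 0.2],   # "User prefers TypeScript for new projects"
    "prefs/editor":   [0.2, 0.8, 0.1],   # "User's editor configuration"
}
query_vec = [0.85, 0.15, 0.25]           # "user's favorite programming language"

# Rank memories by similarity to the query, highest first
ranked = sorted(memories.items(),
                key=lambda kv: cosine_similarity(query_vec, kv[1]),
                reverse=True)
```

Because ranking is by vector similarity rather than keyword overlap, the language-preference memory scores highest even though the query shares no words with it.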

Searching Memories

Find memories using natural language queries:

ensue search_memories --query "what are the user's TypeScript preferences" --limit 10

Returns results ranked by semantic similarity, including:

  • Key name
  • Description
  • Similarity score

Options:

Option    Description
--query   Natural language search query (required)
--limit   Maximum results to return (default: 10, max: 100)

Note: Only searches memories that have embeddings enabled (--embed true when created).

Discovering Memories

Find related memories using semantic discovery on key names:

ensue discover_memories --query "coding patterns"

Use this when you don't remember exact key names but know the topic.

Query Optimization

Be Specific

More specific queries yield more relevant results:

# Better - specific
ensue search_memories --query "user's preferred code editor and configuration"

# Worse - vague
ensue search_memories --query "preferences"

Natural Language

Write queries as you would ask a colleague:

# Good - natural
ensue search_memories --query "what decisions have been made about the database?"

# Works but less effective
ensue search_memories --query "database decision"

Include Context

Add context to disambiguate:

# Better - includes context
ensue search_memories --query "Python version requirements for the ML pipeline project"

# Less precise
ensue search_memories --query "Python version"

Hypergraph Inference

The build_hypergraph tool analyzes memories and discovers semantic clusters: groups of related memories that form coherent topics.

Building a Hypergraph

ensue build_hypergraph --query "team collaboration project tasks" --limit 20 --output-key "analysis/team-hypergraph"

Options:

Option          Description
--query         Semantic query to find relevant memories (required)
--limit         Maximum memories to analyze (max: 50, required)
--output-key    Key where the hypergraph result is stored (required)
--model         Inference model (optional)
--within-days   Only analyze memories from the last N days (optional)

Hypergraph Format

The result is stored in a compact, pipe-delimited text format:

HG:query|17n|9e
N:1|team/members/sarah|Team lead Sarah focusing on database
N:2|team/tasks/migration|Database migration task
E:A|1,2,5|database-decision
E:B|3,4,7|api-architecture

  • Header: HG:<query>|<N>n|<E>e - the query used, followed by the node count and edge count
  • Nodes (N): individual memories, each with an ID, key name, and description
  • Edges (E): semantic clusters linking related node IDs under a shared label
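As a sketch of how a client might consume this format, the following parses the header, node, and edge lines shown above (the field names and the `parse_hypergraph` helper are illustrative, not part of the ensue API):

```python
def parse_hypergraph(text):
    """Parse the compact HG text format into a dict of nodes and edges."""
    result = {"query": None, "nodes": {}, "edges": {}}
    for line in text.strip().splitlines():
        line = line.strip()
        if line.startswith("HG:"):
            # Header: HG:<query>|<N>n|<E>e
            query, n_count, e_count = line[3:].split("|")
            result["query"] = query
            result["node_count"] = int(n_count.rstrip("n"))
            result["edge_count"] = int(e_count.rstrip("e"))
        elif line.startswith("N:"):
            # Node: N:<id>|<key>|<description>
            node_id, key, desc = line[2:].split("|", 2)
            result["nodes"][node_id] = {"key": key, "description": desc}
        elif line.startswith("E:"):
            # Edge: E:<id>|<comma-separated node ids>|<label>
            edge_id, members, label = line[2:].split("|", 2)
            result["edges"][edge_id] = {"nodes": members.split(","), "label": label}
    return result

sample = """HG:query|17n|9e
N:1|team/members/sarah|Team lead Sarah focusing on database
N:2|team/tasks/migration|Database migration task
E:A|1,2,5|database-decision
E:B|3,4,7|api-architecture"""

hg = parse_hypergraph(sample)
```

Note that an edge can reference several node IDs at once (e.g. `1,2,5`), which is what makes the structure a hypergraph rather than an ordinary graph.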

What Hypergraphs Discover

The inference discovers relationships that weren't explicitly stated:

  • Dependency chains: Who is blocked by whom
  • Collaboration patterns: Cross-team partnerships
  • Decision impact: How technical choices affect downstream work
  • Bottleneck detection: Common blockers appearing in multiple clusters

Use Cases

Project Management:

# Discover hidden dependencies
ensue build_hypergraph --query "project dependencies blockers" --limit 30 --output-key "analysis/dependencies"

Knowledge Management:

# Map expertise across team
ensue build_hypergraph --query "team expertise skills" --limit 50 --output-key "analysis/expertise-map"

Time-Based Analysis:

# Analyze last week's work
ensue build_hypergraph --query "sprint progress" --limit 20 --within-days 7 --output-key "analysis/weekly"

Multi-View Analysis:

# Build different views of the same data
ensue build_hypergraph --query "technical decisions" --limit 20 --output-key "views/technical"
ensue build_hypergraph --query "team dependencies" --limit 20 --output-key "views/dependencies"
ensue build_hypergraph --query "project risks" --limit 20 --output-key "views/risks"

Retrieving Hypergraph Results

After building, retrieve the hypergraph:

ensue get_memory --key-names '["analysis/team-hypergraph"]'

Available Models

Model                     Description
llama-3.3-70b-versatile   Best quality, slower
llama-3.1-8b-instant      Fast, good quality
mixtral-8x7b-32768        Large context window
qwen-3-32b                Balanced performance

Next Steps