
Memory Relevance & Retrieval

Semantic search, confidence scoring, and filters work together to surface validated knowledge — not just similar text. This is where Structured Agent Knowledge actively beats vector recall.

What Retrieval Returns

When you call retrieve_memories, LocusGraph runs three steps in order: filter the candidate set by scope, rank what is left by semantic similarity, then adjust ranking by confidence. The agent gets a small, validated payload — not a wall of similar snippets.

Semantic Search

Your query is embedded into a vector and matched against stored knowledge using semantic similarity. LocusGraph returns results ranked by relevance — not keyword matching, but meaning.

A query like "handling null values" can match a stored learning like "NullPointerException in user profile access" even though they share few exact words.

Limit Parameter

The limit parameter controls how many results come back. Defaults vary, but you should set it explicitly.

  • Start small (5-10). Most agents need a handful of validated facts, not a wall of text.
  • Increase when exploring. If you are surveying a broad topic, raise the limit to 20-30.
  • Decrease for precision tasks. When the agent needs one specific answer, set limit to 3-5.
```json
{
  "tool": "retrieve_memories",
  "arguments": {
    "query": "database connection pooling",
    "limit": 5
  }
}
```

Context Filters

Filters narrow the search space before semantic matching runs. This improves both speed and relevance. See Scoping Strategies for details on combining filters.

The order of operations:

  1. Filters remove non-matching contexts from the candidate set.
  2. Semantic search ranks the remaining candidates.
  3. Confidence scoring adjusts the final ranking.
  4. The top results, up to limit, are returned.
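The steps above can be seen in a scoped call. This sketch assumes a contexts filter parameter on retrieve_memories — the exact field name and shape are an assumption based on this page, so check the Retrieve Memories API reference for the real signature:

```json
{
  "tool": "retrieve_memories",
  "arguments": {
    "query": "database connection pooling",
    "contexts": ["backend"],
    "limit": 5
  }
}
```

Here the contexts value prunes the candidate set first, then semantic ranking and confidence adjustment run over what remains.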

Confidence and Ranking

Every locus in LocusGraph carries a confidence score. Confidence shifts over time, and it is how the system ensures that only validated knowledge rises to the top:

  • Reinforced knowledge ranks higher. When multiple events confirm the same fact, its confidence increases. Patterns rise. Skills graduate.
  • Contradicted knowledge drops. When new evidence contradicts existing knowledge, the original locus loses confidence and falls in ranking. It does not disappear — it simply becomes less prominent.
  • Fresh knowledge starts neutral. New loci begin with a baseline confidence. Repeated reinforcement or contradiction adjusts them over time.

You do not manage confidence manually. LocusGraph updates confidence automatically based on event links (reinforces, contradicts). Design your agent to emit these links and the graph handles the rest.
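As an illustration of emitting an event link, a submission might look like the sketch below. The record_event tool name and the links field are hypothetical — they are not documented on this page — so treat this as the shape of the idea, not the actual API:

```json
{
  "tool": "record_event",
  "arguments": {
    "summary": "Pool exhaustion resolved by raising max connections",
    "links": [
      { "type": "reinforces", "locus_id": "locus_abc123" }
    ]
  }
}
```

The point is that the agent only declares the relationship (reinforces or contradicts); LocusGraph adjusts the target locus's confidence on its own.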

Vector Memory vs. LocusGraph Retrieval

| Aspect | Vector memory | LocusGraph |
| --- | --- | --- |
| Match basis | Cosine similarity over text | Semantic similarity + structured filters |
| Ranking | Similarity score only | Similarity score + confidence + source weight |
| Decay | None — old text ranks the same forever | Contradictions lower ranking automatically |
| Contradictions | Both versions can be returned silently | Contradicted version drops in ranking |
| Output | "Similar snippets" | "Validated knowledge for this task" |

Retrieval Tuning Tips

  1. Be specific in queries. "React useEffect cleanup patterns" retrieves better results than "React stuff."
  2. Combine semantic queries with type filters. Search for "common mistakes" within error: contexts.
  3. Watch for noise. If irrelevant results appear, tighten your scope or lower the limit.
  4. Use include_ids for debugging. When results seem off, include locus IDs to inspect what is being returned and why it ranked where it did.
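Tips 2 and 4 combine naturally in one call. In this sketch, include_ids comes from the tip above, while the contexts filter field is an assumption about how type scoping is expressed — verify both against the API reference:

```json
{
  "tool": "retrieve_memories",
  "arguments": {
    "query": "common mistakes",
    "contexts": ["error:"],
    "limit": 5,
    "include_ids": true
  }
}
```

With IDs in the response, you can trace a surprising result back to its locus and see whether scope, similarity, or confidence put it there.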

Next

Context Windows
Keep your agent's context lean and effective.
Retrieve Memories API
Full API reference for retrieval.