# LangChain Integration
Connect structured agent knowledge to LangChain agents with `LocusGraphMemory` and `LocusGraphRetriever`. Both adapters are LLM-agnostic: you can swap models without losing the knowledge graph behind them.
## Installation
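The source leaves this section empty; a likely setup, assuming the npm package name shown in the imports below and a matching PyPI distribution named `locusgraph-client` (the distribution name is an assumption inferred from the import `locusgraph_client`):

```shell
# TypeScript / JavaScript
npm install @locusgraph/client langchain @langchain/openai

# Python (PyPI distribution name assumed)
pip install locusgraph-client langchain langchain-openai
```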
## LocusGraphMemory
LocusGraphMemory gives LangChain chains access to your structured agent knowledge as conversational context. It stores conversation events automatically and retrieves validated knowledge on each turn.
**TypeScript**

```typescript
import { LocusGraphClient, LocusGraphMemory } from '@locusgraph/client';
import { ConversationChain } from 'langchain/chains';
import { ChatOpenAI } from '@langchain/openai';

const client = new LocusGraphClient({
  agentSecret: process.env.LOCUSGRAPH_AGENT_SECRET,
});

const memory = new LocusGraphMemory(
  client,
  'default',     // graphId
  'my-agent',    // agentId
  'session-123', // sessionId
);

const chain = new ConversationChain({
  llm: new ChatOpenAI(),
  memory,
});

const response = await chain.call({ input: 'What do you know about my preferences?' });
```

**Python**
```python
from locusgraph_client import LocusGraphClient, LocusGraphMemory
from langchain.chains import ConversationChain
from langchain_openai import ChatOpenAI

client = LocusGraphClient(agent_secret="your-secret")

memory = LocusGraphMemory(
    client,
    "default",  # graph_id
    agent_id="my-agent",
    session_id="session-123",
)

chain = ConversationChain(llm=ChatOpenAI(), memory=memory)
response = chain.invoke({"input": "What do you know about my preferences?"})
```

### Memory Keys
`LocusGraphMemory` exposes three keys to your chain's prompt:
| Key | Description |
|---|---|
| `history` | Recent conversation turns from the current session |
| `memories` | Validated knowledge retrieved from LocusGraph |
| `memory_info` | Metadata about retrieved contexts (scores, types) |
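As a sketch of where each key lands in a chain's prompt, plain `str.format` stands in below for a LangChain `PromptTemplate`; all the sample values are invented for illustration:

```python
# Illustrative only: shows where each memory key would be interpolated.
template = (
    "Validated knowledge ({memory_info}):\n{memories}\n\n"
    "Conversation so far:\n{history}\n"
    "Human: {input}\nAI:"
)

prompt = template.format(
    memory_info="2 contexts, avg score 0.91",  # metadata supplied by the memory
    memories="- User prefers dark mode",       # validated knowledge from LocusGraph
    history="Human: hi\nAI: Hello!",           # recent turns in this session
    input="What theme should the app use?",
)
print(prompt)
```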
### Automatic Event Classification
When LocusGraphMemory stores conversation events, it classifies them automatically:
| Classification | Trigger |
|---|---|
| `fact` | User states preferences, personal details, or factual information |
| `action` | User requests a task or the agent performs one |
| `decision` | User makes a choice between alternatives |
| `feedback` | User expresses opinions or satisfaction |
| `observation` | Default for all other conversational content |
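The trigger column above can be read as rule-like heuristics. The toy classifier below illustrates that reading only; it is not LocusGraph's actual logic, and every keyword list is invented:

```python
# Toy illustration of the classification table (not the real implementation).
def classify_event(text: str) -> str:
    t = text.lower()
    if any(k in t for k in ("i prefer", "my name is", "i like")):
        return "fact"      # stated preferences or personal details
    if any(k in t for k in ("please", "can you", "run the")):
        return "action"    # task requests
    if any(k in t for k in ("let's go with", "i decided", "i'll choose")):
        return "decision"  # a choice between alternatives
    if any(k in t for k in ("great job", "that's wrong", "thanks")):
        return "feedback"  # opinions or satisfaction
    return "observation"   # default for everything else

print(classify_event("I prefer dark mode"))      # fact
print(classify_event("Please book the flight"))  # action
```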
## LocusGraphRetriever
`LocusGraphRetriever` implements LangChain's retriever interface, letting you plug your structured agent knowledge into any retrieval chain.
**TypeScript**

```typescript
import { LocusGraphClient, LocusGraphRetriever } from '@locusgraph/client';
import { RetrievalQAChain } from 'langchain/chains';
import { ChatOpenAI } from '@langchain/openai';

const client = new LocusGraphClient({
  agentSecret: process.env.LOCUSGRAPH_AGENT_SECRET,
});

const retriever = new LocusGraphRetriever({
  client,
  graphId: 'default',
  limit: 10,
});

const chain = RetrievalQAChain.fromLLM(new ChatOpenAI(), retriever);
const response = await chain.call({ query: 'Summarize user preferences' });
```

**Python**
```python
from locusgraph_client import LocusGraphClient, LocusGraphRetriever
from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI

client = LocusGraphClient(agent_secret="your-secret")

retriever = LocusGraphRetriever(
    client=client,
    graph_id="default",
    limit=10,
)

chain = RetrievalQA.from_llm(llm=ChatOpenAI(), retriever=retriever)
response = chain.invoke({"query": "Summarize user preferences"})
```

`LocusGraphRetriever` returns LangChain `Document` objects. Each document's `page_content` holds the context content, and `metadata` includes `context_id`, `context_type`, and `score`.