NeuralGraph is a context engine that stores knowledge as interconnected graphs and retrieves the right context at the right time. Not a chatbot framework. Not a RAG pipeline. A structured memory layer that any AI application can plug into.
Most AI memory systems store flat chunks of text and retrieve them with vector similarity. That works for simple recall, but it loses structure. NeuralGraph stores knowledge as a graph — nodes represent discrete pieces of knowledge, edges represent relationships between them.
A node can be anything: a fact, a person, a preference, a concept, an event. Each node has a type, importance score, and structured data. Edges carry meaning — RELATED_TO, CONTRADICTS, SUBTOPIC_OF, SUPERSEDES — so the system understands not just what it knows, but how things connect.
When new information arrives, the graph updates. If David changes jobs, the old WORKS_AT edge gets superseded — not deleted. The graph maintains history while keeping current state clear.
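The supersede-not-delete behavior can be sketched with a toy in-memory graph. Everything below — the `Edge`/`Graph` classes and the `assert_fact`/`current` helpers — is illustrative, not NeuralGraph's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    src: str             # source node id
    rel: str             # relationship type, e.g. "WORKS_AT"
    dst: str             # target node id
    active: bool = True  # superseded edges stay in the graph, marked inactive

@dataclass
class Graph:
    edges: list = field(default_factory=list)

    def assert_fact(self, src, rel, dst):
        """Add a new edge, superseding any active edge with the same src/rel."""
        for e in self.edges:
            if e.src == src and e.rel == rel and e.active:
                e.active = False  # keep history; just mark it superseded
        self.edges.append(Edge(src, rel, dst))

    def current(self, src, rel):
        """Return the current (non-superseded) target for src/rel."""
        return next((e.dst for e in self.edges
                     if e.src == src and e.rel == rel and e.active), None)

g = Graph()
g.assert_fact("David", "WORKS_AT", "Acme")
g.assert_fact("David", "WORKS_AT", "Globex")  # old edge superseded, not deleted
print(g.current("David", "WORKS_AT"))         # Globex
print(len(g.edges))                           # 2 -- history preserved
```

The key design choice: history is a property of edges, not a separate audit log, so "what did we believe before?" is answerable with the same traversal machinery as "what do we believe now?".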
Triggers are what set NeuralGraph apart from other graph memory systems. A trigger is a semantic hook — a short phrase extracted alongside each node that describes when that knowledge should surface. Triggers aren't keywords. They're intent signals.
When a user says "I'm stressed about the launch," the system doesn't just vector-search for similar text. It scans for triggers. The node [Q3 Launch] might have triggers like "launch timeline", "project deadline", "work pressure". The node [Stress] might have "feeling overwhelmed", "anxiety". Both fire. Both contribute context.
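A minimal sketch of that scan, assuming a flat trigger index and exact substring matching — both simplifications for illustration, not NeuralGraph's real data structures or matching logic:

```python
# Hypothetical trigger index: node name -> list of (trigger phrase, strength).
triggers = {
    "Q3 Launch": [("launch timeline", 0.9), ("project deadline", 0.7),
                  ("work pressure", 0.6)],
    "Stress":    [("feeling overwhelmed", 0.8), ("anxiety", 0.8)],
    "Hobbies":   [("weekend plans", 0.5)],
}

def fire_triggers(user_input: str) -> dict:
    """Return the nodes whose trigger phrases appear in the input,
    scored by the summed strength of the triggers that fired."""
    text = user_input.lower()
    scores = {}
    for node, phrases in triggers.items():
        hit = sum(s for phrase, s in phrases if phrase in text)
        if hit > 0:
            scores[node] = hit
    return scores

print(fire_triggers("I'm stressed about the launch timeline, feeling overwhelmed"))
# {'Q3 Launch': 0.9, 'Stress': 0.8}
```

Both nodes fire from different phrases, and both carry a strength into the final ranking — a production matcher would use fuzzy or embedding-based phrase matching rather than exact substrings.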
Trigger strength is adaptive. When the AI uses a piece of context and it's relevant, the triggers that surfaced it get stronger. When context is irrelevant, those triggers weaken. The graph learns what matters over time — no retraining, no manual tuning.
Knowledge in NeuralGraph lives in spaces — isolated graphs with their own schema, ingestion rules, and retrieval config. A space can be anything: a personal memory store, a domain knowledge base, a team wiki, a product catalog.
Each space has a space type defined in YAML that controls everything: what node types exist, what edges are allowed, how text is extracted into nodes, how nodes are scored during retrieval, and how knowledge decays over time.
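A space type definition might look something like this — the field names below are invented for illustration and are not NeuralGraph's actual schema:

```yaml
# Hypothetical space-type definition (illustrative field names).
space_type: personal_memory
node_types:
  - name: fact
  - name: person
  - name: preference
edges:
  allowed:
    - RELATED_TO
    - CONTRADICTS
    - SUPERSEDES
ingestion:
  extractor: conversational   # how raw text is turned into nodes
retrieval:
  trigger_weight: 1.0         # relative weight of each retrieval channel
  vector_weight: 0.8
decay:
  half_life_days: 30          # how quickly unused knowledge fades
```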
Spaces are isolated by default, composable at query time. A single hydration request can search across any combination of spaces — merging a scientist's research graph with their personal memory graph and an AI personality graph in one query. The results are scored and ranked together.
This is where it gets powerful. At query time, you pass a list of space_ids and NeuralGraph searches across all of them simultaneously. Each space contributes nodes, and they're scored together in a single ranked list.
Imagine a research assistant that knows both your personal context and a domain knowledge base. When you ask "What papers should I read next?", NeuralGraph can pull from your personality space (you prefer practical over theoretical), your memory space (you mentioned interest in reinforcement learning last week), and a research space (recent RLHF papers). All composed dynamically, no upfront merging needed.
The system prompt is built from the top-scoring nodes across all spaces. The AI gets exactly the context it needs — personal knowledge, domain knowledge, and behavioral guidelines — all in one call.
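The cross-space merge can be sketched like this — the `space_results` data and the `hydrate` function are invented for illustration, and real scoring is far richer than a single number per node:

```python
# Hypothetical per-space scored results: space_id -> {node: score}.
space_results = {
    "personality": {"prefers practical papers": 0.82},
    "memory":      {"interested in reinforcement learning": 0.91},
    "research":    {"recent RLHF papers": 0.77, "older survey": 0.40},
}

def hydrate(space_ids, top_k=3):
    """Merge results from the requested spaces into one ranked list."""
    merged = []
    for sid in space_ids:
        for node, score in space_results.get(sid, {}).items():
            merged.append((score, sid, node))
    merged.sort(reverse=True)  # nodes from all spaces compete in one ranking
    return merged[:top_k]

for score, sid, node in hydrate(["personality", "memory", "research"]):
    print(f"{score:.2f}  [{sid}]  {node}")
# 0.91  [memory]  interested in reinforcement learning
# 0.82  [personality]  prefers practical papers
# 0.77  [research]  recent RLHF papers
```

Because composition happens at query time, adding or dropping a space is just a change to the `space_ids` list — no re-indexing or graph merging.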
NeuralGraph doesn't rely on a single retrieval method. It runs three concurrent channels and merges the results:
| Channel | How It Works | What It Catches |
|---|---|---|
| Trigger Matching | Scans input for trigger phrases, weighted by strength | Contextually relevant nodes that should fire based on conversational patterns |
| Vector Search | Embeds the input and finds semantically similar nodes | Nodes with similar meaning that triggers might miss |
| Graph Expansion | Traverses edges from matched nodes to find connected knowledge | Related context that wouldn't match on text alone |
All three channels run without an LLM call. Retrieval is pure computation — trigger matching, embedding lookup, graph traversal. This means context retrieval takes milliseconds, not seconds. The only LLM call is the final response generation, which uses the context NeuralGraph assembled.
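A sketch of the concurrent fan-out and merge, with stub functions standing in for the real trigger, vector, and graph channels (the additive merge and all scores here are illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

# Each channel is a pure function from input text to {node: score}.
# These stubs stand in for trigger matching, vector search, and graph
# expansion -- note there is no LLM call anywhere in the retrieval path.
def trigger_channel(text):
    return {"Q3 Launch": 0.75}

def vector_channel(text):
    return {"Q3 Launch": 0.5, "Stress": 0.625}

def graph_channel(text):
    return {"Team Roster": 0.5}  # reached by traversing edges from matches

def retrieve(text):
    """Run all three channels concurrently and merge scores per node."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda ch: ch(text),
                           [trigger_channel, vector_channel, graph_channel])
    merged = {}
    for channel_scores in results:
        for node, score in channel_scores.items():
            merged[node] = merged.get(node, 0.0) + score  # simple additive merge
    return dict(sorted(merged.items(), key=lambda kv: -kv[1]))

print(retrieve("I'm stressed about the launch"))
# {'Q3 Launch': 1.25, 'Stress': 0.625, 'Team Roster': 0.5}
```

Nodes found by multiple channels (here, [Q3 Launch]) accumulate score, which is one way cross-channel agreement can push a node to the top of the ranking.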
Knowledge isn't static. NeuralGraph manages the full lifecycle of every node: creation at ingestion, reinforcement each time it's used, gradual importance decay when it isn't, and supersession when newer information replaces it.
The result is a graph that naturally reflects what matters now. Recent, frequently-referenced knowledge rises to the top. Old, unused knowledge fades. The system maintains itself without manual curation.
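One common way to model that fading is exponential decay with a configurable half-life — an illustrative choice here, not NeuralGraph's documented formula:

```python
def decayed_importance(base_importance, days_since_last_use, half_life_days=30.0):
    """Exponential decay: importance halves every `half_life_days` of disuse.
    Referencing a node resets the clock, so active knowledge keeps its
    weight while stale knowledge fades toward zero."""
    return base_importance * 0.5 ** (days_since_last_use / half_life_days)

print(decayed_importance(1.0, 0))   # 1.0   -- just used
print(decayed_importance(1.0, 30))  # 0.5   -- idle for one half-life
print(decayed_importance(1.0, 90))  # 0.125 -- idle for three half-lives
```

Because decay is a pure function of timestamps, it costs nothing to store — stale nodes simply score lower at retrieval time instead of being rewritten in place.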
After every response, the AI rates which context was relevant and which wasn't. These ratings flow back to NeuralGraph as relevance feedback, adjusting trigger strengths in real time.
Relevant context? Its triggers get stronger — that pattern will surface more easily next time. Irrelevant context? Its triggers weaken — the system learns not to surface it in similar situations. No retraining. No embeddings recomputed. Just edge weight adjustments in the graph.
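The update rule can be sketched as a bounded weight adjustment — the trigger store layout, learning rate, and clamping bounds below are all illustrative assumptions:

```python
# Hypothetical trigger store: (node, phrase) -> strength, clamped to [0.05, 1.0].
strengths = {
    ("Q3 Launch", "launch timeline"): 0.75,
    ("Hobbies", "weekend plans"):     0.75,
}

LEARNING_RATE = 0.125

def apply_feedback(fired, relevant):
    """Nudge the strength of each trigger that fired: up if the context it
    surfaced was rated relevant, down otherwise. No retraining, no new
    embeddings -- just a bounded weight update."""
    for node, phrase in fired:
        delta = LEARNING_RATE if relevant.get(node, False) else -LEARNING_RATE
        key = (node, phrase)
        strengths[key] = min(1.0, max(0.05, strengths[key] + delta))

# Both triggers fired, but only the launch context was actually used.
apply_feedback(
    fired=[("Q3 Launch", "launch timeline"), ("Hobbies", "weekend plans")],
    relevant={"Q3 Launch": True, "Hobbies": False},
)
print(strengths[("Q3 Launch", "launch timeline")])  # 0.875 -- strengthened
print(strengths[("Hobbies", "weekend plans")])      # 0.625 -- weakened
```

Clamping to a floor above zero keeps a weakened trigger recoverable: if the pattern becomes relevant again later, positive feedback can still pull it back up.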
| Approach | Retrieval | Structure | Adapts Over Time | Multi-Domain |
|---|---|---|---|---|
| RAG (vector only) | Semantic similarity | Flat text chunks | No | No |
| Entity Knowledge Graphs | Entity lookup | Entity-relation triples | No | No |
| Memory-augmented LLMs | Conversation window | Flat message history | Recency only | No |
| Graph Memory Systems | Graph relationships | Memory graph with updates/extends | Partial (rewrites) | No |
| NeuralGraph | Triggers + vectors + graph traversal (concurrent) | Typed nodes, weighted edges, configurable schemas | Adaptive triggers, relevance feedback, importance decay | Multi-space composition at query time |
Most existing approaches solve one piece of the problem. Vector RAG handles semantic recall but loses structure. Knowledge graphs preserve structure but can't adapt. Conversation windows are simple but forget everything outside the window. NeuralGraph combines structured storage, multi-channel retrieval, and continuous adaptation into a single platform — and lets applications compose across independent knowledge domains at query time.