NeuralGraph Code

Your codebase's institutional memory

Claude Code can read your files. It can grep your functions. What it can't do is remember why your code exists, what was tried before, or what your team decided last month. NeuralGraph Code gives every session the context that makes the difference.

The context gap in practice

AI coding assistants are powerful code generators. But they start every session blind to the accumulated knowledge of your project. These aren't hypothetical problems — they happen dozens of times a day on real codebases.

"I want to upload this file to S3"

Your codebase has three different S3 upload patterns. The original one in pkg/storage/ uses the low-level PutObject API. Six months ago, Sarah refactored the media pipeline to use multipart uploads because files grew past 5GB. Last month, David added a presigned URL flow for client-side uploads.

Claude reads the code and sees all three. Without context, it picks one — probably the first one it finds. Maybe it writes a fourth pattern. The team reviews the PR and says "we use the presigned URL flow for this, see the media pipeline."

Without NeuralGraph

Claude generates a new upload function using PutObject. PR rejected, 20 minutes wasted.

With NeuralGraph
## Project Memory

### Storage Patterns
- S3 upload: Use presigned URL flow for all new uploads (decided March 2026)
- Rejected: Direct PutObject (doesn't support >5GB), multipart (too complex for client)
- Implementation: pkg/storage/presigned.go, called from API handlers
- Legacy PutObject in pkg/storage/upload.go is deprecated, do not use for new code

Claude sees the decision history, uses the right pattern, and the PR is clean.

"Fix this flaky test"

A test in the payment service fails intermittently on CI. It passes locally every time. Three months ago, someone on the team spent two days debugging this exact class of failure and found it was a race condition in the test database setup — the fix was adding a transaction wrapper.

That debugging session happened in a Claude Code conversation that's long gone. The commit message says "fix flaky payment test" with no details. The new developer starts the investigation from scratch.

With NeuralGraph
### Known Issues
- Payment test flakiness: Race condition in DB setup between test cases
- Root cause: Parallel test execution shares connection pool, uncommitted writes leak
- Fix: Wrap each test case in a transaction that rolls back (see TestPaymentBase)
- Diagnosed by: Sarah, January 2026. Took 2 days to isolate.

Two days of debugging reduced to reading one paragraph.

"Add authentication to this new endpoint"

Your team has specific auth conventions — middleware ordering matters, certain endpoints need scope-based checks, the JWT validation library has a known quirk with clock skew. These aren't in a README. They're in the heads of the three engineers who built the auth layer.

With NeuralGraph
### Auth Conventions
- Middleware order: CORS first, then auth, then rate limit (auth failures return before rate limit counts)
- JWT clock skew: Allow 30s tolerance — nimbus-jose-jwt doesn't do this by default, set in JwtIssuer
- Scope-based routes: Use verifySpaceAccess() for any space-scoped endpoint
- Never put auth on /ping or /metrics (health checks must be unauthenticated)

"We need to migrate this service to Kotlin"

A cross-team initiative touches fifteen repositories. The architecture decisions, library choices, and migration patterns were discussed across dozens of conversations. A new engineer joins the project and needs to get up to speed.

With NeuralGraph
## Kotlin Migration
- All new services: Kotlin + Ktor + Koin. Existing Go services stay unless major refactor.
- Shared auth library: kotlin-auth-lib v2.3 required for all Kotlin services
- Database: Keep Postgres, use HikariCP pool, Flyway for migrations
- Rejected: Spring Boot (too heavy), Micronaut (team unfamiliar)
- Milestone: Core services migrated by Q3 2026

The new engineer clones the repo and immediately has the full project context.

What NeuralGraph remembers

NeuralGraph Code doesn't index your source files — Claude already reads those. It captures the knowledge that lives between the lines: the reasoning, the history, the decisions that shaped the code.

Decisions

Why Chi over Gin. Why Postgres over DynamoDB. What was considered and rejected, and what constraints drove the choice.

Patterns

How your team does error handling. Which testing patterns to follow. Where to put new endpoints. The conventions that aren't in a linter.

Issues & Workarounds

The SQLite deadlock that took two days to debug. The API quirk that requires a specific header. Problems solved once, never re-investigated.

Commands

How to run the integration tests. The deploy script flags. The docker-compose override for local dev. Commands that work.

Session History

What was worked on yesterday. What was accomplished in the last refactor. The narrative of how the codebase evolved, session by session.

Developer Preferences

Your coding style across all projects. Language preferences. How you like explanations. Carried between repos automatically.

Nothing to remember

Most memory tools ask you to do something. Tag this. Save that. "Hey AI, remember this for later." NeuralGraph Code asks nothing. You don't tell it what to remember. The LLM doesn't decide what to store. Nobody calls a "save" command.

The developer does nothing

You code. You talk to Claude. You commit. That's it. You never type "remember this." You never tag a conversation as important. You never organize your memories into folders. The system captures context from the natural lifecycle of your work — conversations that get long enough to compact, commits that land — without any action from you.

The LLM does nothing

Claude doesn't decide what to remember. It doesn't call a tool to store context. It doesn't choose whether to recall — context is injected before Claude even sees your message. The read path is a programmatic hook, not an LLM judgment call. This means retrieval is deterministic, fast, and never forgotten because the model "didn't think to look."

The system does everything

Hooks fire on lifecycle events that already happen. Conversation compacted? Knowledge extracted. Code committed? Context captured. New prompt? Relevant memories injected. The write path runs on events. The read path runs on every prompt. Both are automatic. The only manual action is the initial npx @neuralgraph/code init — after that, memory accumulates silently.
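To make the wiring concrete, this kind of setup maps naturally onto Claude Code's hook configuration. The sketch below is hypothetical: UserPromptSubmit and PreCompact are real Claude Code lifecycle events, but the `neuralgraph-code` subcommands are illustrative stand-ins for whatever init actually installs. Commit capture would live in a git post-commit hook rather than a Claude Code event.

```json
{
  "hooks": {
    "UserPromptSubmit": [
      { "hooks": [{ "type": "command", "command": "neuralgraph-code recall" }] }
    ],
    "PreCompact": [
      { "hooks": [{ "type": "command", "command": "neuralgraph-code ingest" }] }
    ]
  }
}
```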

How it works

NeuralGraph Code hooks into Claude Code's lifecycle events. Context flows in before every prompt, knowledge flows out on compaction and commit. No tool calls, no commands, no decisions.

1. Every prompt gets context

Before Claude sees your message, NeuralGraph searches your project's memory for relevant knowledge. Decisions about the module you're touching. Issues in the file you're editing. Patterns your team follows. Injected as context — Claude doesn't call a tool, it just knows.
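As a sketch of that read path, assume a minimal memory-node shape and a keyword-overlap scorer standing in for NeuralGraph's actual semantic retrieval. Every name here is illustrative, not the real API:

```typescript
// Hypothetical sketch of the read path: deterministic retrieval, no LLM call.
interface MemoryNode {
  kind: "decision" | "pattern" | "issue" | "command";
  text: string;
  triggers: string[]; // trigger terms matched against the prompt
  strength: number;   // reinforced on recall, decays when unused
}

// Prepend the most relevant memories to the prompt before Claude sees it.
function buildContext(prompt: string, memory: MemoryNode[], limit = 3): string {
  const words = new Set(prompt.toLowerCase().split(/\W+/).filter(Boolean));
  const scored = memory
    .map(node => ({
      node,
      score:
        node.strength *
        node.triggers.filter(t => words.has(t.toLowerCase())).length,
    }))
    .filter(s => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
  if (scored.length === 0) return prompt; // nothing relevant: pass through
  const context = scored.map(s => `- [${s.node.kind}] ${s.node.text}`).join("\n");
  return `## Project Memory\n${context}\n\n${prompt}`;
}
```

Because the scoring is a plain function of the prompt and the store, the same prompt against the same memory always yields the same injected context.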

2. Sessions become memory

When Claude Code compacts a long conversation, the summary is ingested into NeuralGraph. The extraction pipeline identifies decisions, explanations, issues, and patterns — structuring them as searchable knowledge nodes with semantic triggers.
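A deliberately simplified stand-in for that extraction pipeline: classify summary lines by marker words. The markers and names are illustrative; the real pipeline would use an LLM or richer heuristics:

```typescript
// Hypothetical sketch: turn a compaction summary into typed knowledge nodes.
function extractNodes(summary: string): { kind: string; text: string }[] {
  const markers: Record<string, RegExp> = {
    decision: /\b(decided|chose|we (?:will|use))\b/i,
    issue: /\b(bug|flaky|race condition|workaround)\b/i,
    pattern: /\b(convention|always|never)\b/i,
  };
  const nodes: { kind: string; text: string }[] = [];
  for (const line of summary.split("\n")) {
    for (const [kind, re] of Object.entries(markers)) {
      if (re.test(line)) {
        nodes.push({ kind, text: line.trim() });
        break; // first matching kind wins
      }
    }
  }
  return nodes;
}
```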

3. Commits capture context

Each git commit is a decision point. NeuralGraph captures the commit message and changed files, linking them to the session's reasoning. Future developers can trace not just what changed, but why.
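The capture itself needs nothing exotic: two git plumbing commands recover the message and the changed files. A hedged sketch of what a post-commit hook could collect, with `captureCommit` as an illustrative name rather than NeuralGraph's real API:

```typescript
import { execSync } from "node:child_process";

// Gather the pieces a post-commit hook would hand to the ingester.
function captureCommit(repoDir: string): { message: string; files: string[] } {
  const git = (args: string): string =>
    execSync(`git ${args}`, { cwd: repoDir, encoding: "utf8" }).trim();
  return {
    // why: the commit message, linked to the session's reasoning
    message: git("log -1 --pretty=%B"),
    // what: the files this decision point touched
    files: git("diff-tree --no-commit-id --name-only -r HEAD")
      .split("\n")
      .filter(Boolean),
  };
}
```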

4. The graph learns

Memories that keep surfacing get stronger. Memories that stop being relevant fade. The retrieval adapts to your actual workflow — no manual curation needed.
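One common way to implement this kind of adaptation is exponential decay plus recall reinforcement. A sketch under that assumption; the half-life and bump values are illustrative, not NeuralGraph's actual parameters:

```typescript
// Assumed half-life: an unused memory loses half its strength every 30 days.
const HALF_LIFE_DAYS = 30;

// Effective strength fades the longer a memory goes without being recalled.
function strengthAt(baseStrength: number, daysSinceLastRecall: number): number {
  return baseStrength * Math.pow(0.5, daysSinceLastRecall / HALF_LIFE_DAYS);
}

// Each time a memory surfaces and proves relevant, its base strength grows.
function reinforce(baseStrength: number): number {
  return baseStrength + 1;
}
```

Under this model, curation is implicit: frequently recalled memories dominate retrieval scores, and stale ones drift below the relevance threshold on their own.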

Three layers of memory

Context lives at different scopes. NeuralGraph Code maintains three memory layers, queried together on every prompt.

Repository Memory

One per git repo. Stores the institutional knowledge of that codebase — architecture decisions, coding patterns, known issues, useful commands. Shared by everyone who works on the repo. Committed to git via .neuralgraph/spaces.yaml so new engineers get context on clone.
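What that committed file contains is a mapping from the repo to its memory spaces. A hypothetical sketch; the real schema may differ:

```yaml
# .neuralgraph/spaces.yaml (hypothetical shape)
# Committing this file means new engineers inherit the repo's memory on clone.
spaces:
  repository:
    id: acme/payments-service       # illustrative space identifier
  projects:
    - id: acme/kotlin-migration     # cross-repo initiative this repo belongs to
```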

Project Memory

One per cross-repo initiative. When a feature touches multiple services, the decisions and requirements that span repos live here. "All new Kotlin services must use kotlin-auth-lib" appears in every repo that's part of the migration.

Developer Memory

One per developer, private and portable. Your coding style, language preferences, tool choices. Follows you across every project. "Prefers early returns" and "senior Go engineer, beginner at React" shape how Claude communicates with you specifically.

Built for teams

NeuralGraph Code is designed for engineering organizations, not just individual developers.

Google SSO

Engineers sign in with their corporate Google account. Domain-based tenant mapping means zero provisioning — if you're @company.com, you're in.

Data Isolation

Tenant boundaries are enforced on every API call. Your org's data never leaks to another tenant. Developer preferences are private even within a team.

Shared Repo Memory

Every engineer's sessions contribute to the collective knowledge of the codebase. New team members inherit months of context on day one.

Cross-Repo Context

Platform-wide decisions, architecture standards, and migration plans surface in every repo they affect. One source of truth.

Get Started

Install the Claude Code plugin and sign in. Context starts flowing automatically.

npx @neuralgraph/code init