CONTEXT INFRASTRUCTURE FOR AI AGENTS

Agents are only as good
as the context they receive.

Your AI hallucinates because it's missing relationships, not documents. We build hypergraph-based context infrastructure that preserves the structure of knowledge — so agents can reason over connections, track what's current, and know when to abstain.

Six problems with AI context today.

RAG retrieves documents. Agents need structured knowledge.

Isolated facts

AI stores facts but loses the relationships between them.

Relational storage that preserves connections.

No sense of time

Stale information gets treated the same as fresh.

Temporal reasoning that knows what supersedes what.

Everything remembered

Context windows fill with irrelevant history.

Automatic decay that forgets what's no longer relevant.

Scattered signals

The same fact mentioned 10 times stays as 10 separate entries.

Consolidation that merges repeated signals into beliefs.

Always answers

AI confidently makes up facts when it doesn't know.

Abstention that says 'I don't know' when uncertain.

Ungrounded claims

No way to verify whether retrieved context is actually correct.

Verification against authoritative sources.

Our thesis.

Three principles that guide everything we build.

Hypergraphs

Relationships are rarely binary.

Real-world facts involve multiple entities at once. 'Dr. Smith prescribed aspirin to Patient 123 for a headache at Mercy Hospital' is one event, not five separate edges. Hypergraphs store N-ary relationships atomically, preserving the structure that binary knowledge graphs destroy.
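The information loss from binary decomposition can be seen in a few lines of plain Python (no Hypabase required; entity names are illustrative):

```python
# Two distinct events, each stored atomically as one n-ary relation:
events = [
    ("dr_smith", "patient_123", "aspirin"),
    ("dr_jones", "patient_123", "ibuprofen"),
]

# The same facts flattened into binary edges, as a binary knowledge
# graph would store them:
binary = {pair for ev in events for pair in zip(ev, ev[1:])}

# From the n-ary form, "who prescribed aspirin?" has exactly one answer:
assert {ev[0] for ev in events if "aspirin" in ev} == {"dr_smith"}

# From the binary edges, the only link between doctors and drugs runs
# through patient_123, so both doctors look equally plausible:
doctors = {d for (d, p) in binary if p == "patient_123"}
assert doctors == {"dr_smith", "dr_jones"}  # attribution is lost
```

Once the event is split into pairwise edges, nothing records which doctor-drug pairs belonged to the same event; the hyperedge keeps them together.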

Human-inspired

Memory that works like yours.

Human memory consolidates repeated signals, decays irrelevant information, and knows when it doesn't know. We build on cognitive science — ACT-R memory models, complementary learning systems — to give AI agents the same capabilities.
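The decay behavior referenced above can be sketched with the ACT-R base-level learning equation, where a memory's activation is the log of summed power-law decays over its past retrievals (a minimal illustration, not the Hypabase implementation; the function name is ours):

```python
import math

def base_level_activation(retrieval_times, now, decay=0.5):
    """ACT-R base-level learning: B = ln(sum over uses of (now - t)^-d).

    Memories retrieved often and recently have high activation; unused
    memories decay toward forgetting. decay=0.5 is the standard ACT-R
    default.
    """
    return math.log(sum((now - t) ** -decay for t in retrieval_times))

# A fact retrieved repeatedly and recently stays strong...
recent = base_level_activation([1.0, 5.0, 9.0], now=10.0)
# ...while one last touched long ago fades.
stale = base_level_activation([1.0], now=10.0)
assert recent > stale
```

This single formula captures both properties named above: repetition raises activation (consolidation) and the passage of time lowers it (decay).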

Verification layer

Grounded in sources.

Every fact carries provenance: where it came from, how confident we are, when it was recorded. When knowledge conflicts, the system surfaces the discrepancy instead of silently picking one. Answers can be traced back to their origins.
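What a provenance-carrying fact and conflict surfacing can look like, as a hypothetical schema (not the Hypabase data model; all names and values here are ours):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    """Illustrative fact record: every claim carries its provenance."""
    subject: str
    value: str
    source: str
    confidence: float
    recorded_at: str  # ISO-8601 timestamp

def resolve(facts):
    """Return the agreed value, or surface the conflict rather than
    silently picking a winner."""
    values = {f.value for f in facts}
    if len(values) == 1:
        return {"value": values.pop(), "sources": [f.source for f in facts]}
    return {
        "conflict": sorted(values),
        "provenance": [(f.value, f.source, f.confidence) for f in facts],
    }

a = Fact("aspirin_dose", "100mg", "clinical_records", 0.95, "2024-01-02")
b = Fact("aspirin_dose", "150mg", "intake_form", 0.60, "2024-03-01")
assert "conflict" in resolve([a, b])  # discrepancy is surfaced, not hidden
```

The point of the sketch: because source, confidence, and timestamp travel with every value, a disagreement can be reported with its full lineage instead of being resolved invisibly.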

Open source foundation.

Two products, one mission: give AI agents the context layer they deserve.

Hypabase

Hypergraph engine for knowledge representation

Available

Hypergraphs preserve atomicity. Real-world events involve multiple entities at once. "Dr. Smith prescribed aspirin to Patient 123 for a headache at Mercy Hospital" is one event — not five binary edges that lose the connection between participants.

N-ary hyperedges store the complete event structure in a single retrievable unit. Provenance tracking (source + confidence) built in. SQLite persistence with O(1) vertex-set lookup.

GitHub
structure.py
from hypabase import Hypabase

hb = Hypabase("knowledge.db")

# One edge connecting five entities
hb.edge(
    ["dr_smith", "patient_123", "aspirin",
     "headache", "mercy_hospital"],
    type="treatment",
    source="clinical_records",
    confidence=0.95
)

# Query and traverse
hb.edges(containing=["patient_123"])
hb.paths("dr_smith", "mercy_hospital")

Hypabase Memory

Sanskrit-inspired memory for AI agents

Coming Soon

Sanskrit semantics meets modern AI. We use Karaka roles from Paninian grammar, one of the earliest known formal grammatical frameworks, to label semantic relationships, combined with Abstract Meaning Representation (AMR) and PENMAN notation for structured fact extraction from natural language.

Dual-arm retrieval combines graph traversal (QB-PPR) with semantic search via Reciprocal Rank Fusion. ACT-R-inspired memory strength with automatic decay and consolidation. Achieves 87.4% accuracy on LongMemEval (ICLR 2025).

memory.py
from hypabase import Memory

mem = Memory("agent.db")

# Store structured facts in PENMAN notation
mem.remember("""
(prefers :subject User
         :object morning_meetings
         :locus work
         :memory_type semantic)
""")

# Dual-arm recall: graph + semantic
memories = mem.recall(entity="User")

# Consolidate duplicates, decay old memories
mem.consolidate()

Why Gamgee.

Built by people who know what production looks like.

Enterprise DNA

Our team built data systems at JP Morgan, Slice, and Auquan. We know what enterprise compliance, scale, and reliability require.

Security-first

SOC 2 Type II compliant and ISO 27001 certified. Your data governance requirements are met from day one.

SOC 2 Type II
ISO 27001

Production-tested

We've shipped analytics systems that run every day, not just in demos. Error recovery, sandboxed execution, and observability are all built in.

Open source foundation

Hypabase, our hypergraph engine, is open source. Our core technology is auditable and our roadmap is public.

Built by engineers who've shipped.

Enterprise-hardened. Production-tested. Years building data and AI systems at JP Morgan, Slice, and Auquan.

Yash Goyal

CEO

Ex-Equity Research at JP Morgan. Built and scaled a credit product from zero at Slice Small Finance Bank. IIT Madras '19.

Harshid Wasekar

CTO

Ex-Founding Lead Engineer at Auquan. Built AI infrastructure for finance products used by MetLife, T. Rowe Price, and BC Partners. BITS Pilani '18.

Kevin Pandya

Head of Engineering

Ex-Quant and Infrastructure Engineer at an HFT firm, ex-Quant Researcher at JP Morgan. 5+ years building enterprise-grade infrastructure. IIT Guwahati '18.

Frequently asked questions.

Technical details for the curious.