# Mnemix - Full Text Knowledge Bundle
Generated: 2026-05-02T00:00:00.000Z
Canonical: https://mnemix.ai

---

## Canonical Summary

Mnemix is the memory + real-world enrichment API for AI voice agents. Same-number caller memory and available enrichment help agents start informed, with sub-300ms voice recall as a design target. Voice-first, API-native. Built for Vapi, Retell, Bland, LiveKit, and Twilio, with the same memory and enrichment API available to MCP tools, Claude Code CLI workflows, and backend agents via HTTP.

## What/Who/Price/How

- WHAT: Mnemix is a memory + real-world enrichment API for AI voice agents.
- WHO: For developers building AI voice agents on Vapi, Retell, Bland, LiveKit, or Twilio.
- PRICE: Hobby $0 (free tier · 50 sessions / 1,000 memory ops / 100 lookups). Starter, Pro, and Elite tiers — contact sales for pricing while billing is in private beta.
- HOW: One API call per turn — lookup, remember, recall. Sub-300ms voice recall is a design target at the Cloudflare edge.
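The per-turn loop above (lookup, remember, recall) can be sketched as follows. The real `@mnemix/client` surface isn't documented in this bundle, so the three functions here are illustrative stand-ins backed by an in-memory store, not the product API.

```typescript
type Fact = { key: string; value: string };

// In-memory stand-in for the durable memory store.
const store = new Map<string, Fact[]>();

// lookup: resolve caller identity before the first turn (stubbed).
function lookup(phone: string): { phone: string; known: boolean } {
  return { phone, known: store.has(phone) };
}

// remember: persist a fact observed during the turn.
function remember(sessionId: string, fact: Fact): void {
  const facts = store.get(sessionId) ?? [];
  facts.push(fact);
  store.set(sessionId, facts);
}

// recall: fetch facts to prime the next turn's prompt.
function recall(sessionId: string): Fact[] {
  return store.get(sessionId) ?? [];
}

// One simulated turn, keyed by caller number:
const session = "+15550100";
lookup(session);
remember(session, { key: "preferred_name", value: "Sam" });
const context = recall(session);
```

The point of the shape is that each turn costs one round trip per operation, so the recall path is the only call on the latency-critical prompt-assembly side.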

## Wedge

Mnemix is the memory API that lets callers arrive with same-number memory and available enrichment already joined to the agent's context. Real-world identity, intent, and history are shaped for voice workflows, with sub-300ms voice recall as a design target.

### Caller-ID enrichment, voice-native
Twilio Lookup, Trestle person/company resolution, and Baylio call-intent detection fan out in parallel before the first audio packet. The agent's first response is already informed.
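The fan-out pattern can be sketched like this. The three provider calls are stubbed with local async functions (the real Twilio/Trestle/Baylio clients are assumptions), and a latency budget drops any source that doesn't resolve in time rather than delaying the greeting.

```typescript
type Enrichment = { source: string; data: Record<string, string> };

// Stubbed providers; real calls would hit Twilio Lookup, Trestle, and Baylio.
const twilioLookup = async (phone: string): Promise<Enrichment> =>
  ({ source: "twilio", data: { lineType: "mobile" } });
const trestleResolve = async (phone: string): Promise<Enrichment> =>
  ({ source: "trestle", data: { name: "Sam Doe" } });
const baylioIntent = async (phone: string): Promise<Enrichment> =>
  ({ source: "baylio", data: { intent: "support" } });

// Fan out in parallel; keep whatever resolves inside the budget so a
// slow provider degrades gracefully instead of blocking the first turn.
async function enrich(phone: string, budgetMs = 250): Promise<Enrichment[]> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("budget exceeded")), budgetMs));
  const settled = await Promise.allSettled(
    [twilioLookup(phone), trestleResolve(phone), baylioIntent(phone)]
      .map((p) => Promise.race([p, timeout])));
  return settled
    .filter((r): r is PromiseFulfilledResult<Enrichment> => r.status === "fulfilled")
    .map((r) => r.value);
}
```

`Promise.allSettled` (rather than `Promise.all`) is the design choice worth noting: one failed or slow enrichment source should never cost the agent the results from the other two.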

### Sub-300ms voice recall as an edge design target
Cloudflare Workers across 5+ regions. The latency budget runs memory + enrichment in parallel under the 800ms voice ceiling; a measured public benchmark ships with v1.0.

### Bi-temporal session memory
Four timestamps per fact (valid_from, valid_to, observed_at, ingested_at). Built for voice flows where context shifts mid-call.
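The four-timestamp shape above can be sketched as a type plus an "as-of" query. Field names mirror the text (valid_from, valid_to, observed_at, ingested_at); the query function is an illustration of why bi-temporality matters for mid-call corrections, not the product API.

```typescript
interface BiTemporalFact {
  subject: string;
  value: string;
  valid_from: number;       // when the fact became true in the world
  valid_to: number | null;  // null = still true
  observed_at: number;      // when the agent heard it
  ingested_at: number;      // when the store wrote it
}

// "What did we believe was true at worldTime, given only what the
// store had ingested by knowledgeTime?"
function asOf(facts: BiTemporalFact[], subject: string,
              worldTime: number, knowledgeTime: number): BiTemporalFact | undefined {
  return facts
    .filter((f) => f.subject === subject &&
      f.valid_from <= worldTime &&
      (f.valid_to === null || f.valid_to > worldTime) &&
      f.ingested_at <= knowledgeTime)
    .sort((a, b) => b.observed_at - a.observed_at)[0];
}

// Mid-call correction: the caller updates their callback number at t=200.
// The old fact is closed out (valid_to) rather than overwritten.
const facts: BiTemporalFact[] = [
  { subject: "callback", value: "+15550100",
    valid_from: 0, valid_to: 200, observed_at: 10, ingested_at: 10 },
  { subject: "callback", value: "+15550199",
    valid_from: 200, valid_to: null, observed_at: 200, ingested_at: 201 },
];
```

Because the correction closes the old fact instead of deleting it, the store can answer both "what is the number now" and "what did we believe before the caller corrected it", which a single-timestamp store cannot.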

## Pricing

| Tier | Price | Sessions/mo | Memory ops/mo | Twilio Lookups | Enrichments | SLA |
|---|---|---|---|---|---|---|
| Hobby | $0 | 50 | 1,000 | 100 | 0 | Community Discord |
| Starter | Contact hello@mnemix.ai (private beta) | 500 | 25,000 | 2,000 | 200 | Email · 48h |
| Pro | Contact hello@mnemix.ai (private beta) | 5,000 | 250,000 | 25,000 | 2,500 | Email · 24h + Slack channel |
| Elite | Contact hello@mnemix.ai (private beta) | 25,000 | unlimited | unlimited | 25,000 | Slack DM · 4h + dedicated CSM |

## Integrations

### Bland
Bland.ai pathway integration with pre-call enrichment and structured task memory.
Quickstart: https://mnemix.ai/integrations/bland

## How Mnemix compares

### Mnemix vs Mem0

**Verdict:** Mem0's developer experience for chat is genuinely good and their vector-store breadth is unmatched. The moment you swap stdin for a Vapi webhook, you're outside their lane — and their independent LongMemEval result was 49% vs the 93.4% they self-report. Choose Mnemix if you're building voice.

Background: The mindshare leader in OSS memory. 54k stars, $24M raised, 19 vector backends — but no voice, no enrichment.

| Dimension | Mem0 | Mnemix |
|---|---|---|
| GitHub stars | 54,251 | n/a (closed alpha) |
| Funding | $24M (YC, Peak XV, Basis Set) | Bootstrapped |
| Voice integrations | ❌ | ✅ Twilio, Vapi, Retell, Bland |
| Caller-ID enrichment | ❌ | ✅ Twilio Lookup + Trestle + Baylio |
| Bi-temporal memory | ❌ | ✅ (4 timestamps per fact) |
| Edge runtime | ❌ Python server | ✅ Cloudflare Workers |
| Vector backends | 19 | 1 (Supabase pgvector) |
| Multilingual NER | ❌ English-only spaCy | ✅ libphonenumber + Trestle |
| LongMemEval (self-reported) | 93.4% | Coming May 2026 (public methodology) |
| LongMemEval (independent) | 49% (community replication) | Coming May 2026 |
| Compliance | SOC 2 Type 2, HIPAA | GDPR day 1; SOC 2 Year 1 |
| Starter price | $19 to $249 cliff (13.1x) | Hobby $0; Starter+ contact sales |

Full comparison: https://mnemix.ai/compare/mem0-vs-mnemix

### Mnemix vs Zep

**Verdict:** Zep's bi-temporal graph is the strongest research-cited memory architecture for chat agents and Graphiti is genuinely impressive engineering. But voice is a community demo, not a first-class integration, and the Mem0 paper's independent LOCOMO result (63.8%) sits well below Zep's self-reported 71.2%. Choose Mnemix if you're building voice.

Background: The research-cited choice. Bi-temporal knowledge graph (Graphiti), SOC 2, BYOC. Voice is a community demo only.

| Dimension | Zep | Mnemix |
|---|---|---|
| GitHub stars | 25,463 (graphiti) + 4,495 (zep) | n/a |
| Bi-temporal | ✅ (4 timestamps in graphiti_core/edges.py) | ✅ (session-scoped) |
| Voice integrations | 🟡 community demo only | ✅ Twilio, Vapi, Retell, Bland |
| Caller-ID enrichment | ❌ | ✅ Twilio Lookup + Trestle + Baylio |
| Edge runtime | ❌ Python on AWS | ✅ Cloudflare Workers |
| Claimed P95 retrieval | <200ms | designed for sub-300ms voice recall |
| LongMemEval (self) | 71.2% | Coming May 2026 |
| LongMemEval (independent) | 63.8% (Mem0 paper) | Coming May 2026 |
| Starter price | $25 to $125 (5x) | Hobby $0; Starter+ contact sales |

Full comparison: https://mnemix.ai/compare/zep-vs-mnemix

## FAQ

### What is Mnemix?
Mnemix is a memory + real-world enrichment API for AI voice agents. It joins identity, intent, and call history to your agent's memory at Cloudflare edge latency, with sub-300ms voice recall as a design target.

### Who is Mnemix for?
Developers building AI voice agents on platforms like Vapi, Retell AI, Bland AI, LiveKit, or directly on Twilio. Also chat developers who want phone-native identity.

### How is Mnemix different from Mem0?
Mem0 is a general-purpose memory layer with strong mindshare and 19 vector backends. Mnemix is voice-first with caller-ID enrichment built in (Twilio Lookup + Trestle + Baylio), runs on Cloudflare Workers at the edge, and ships first-class integrations with Vapi, Retell, and Bland. Choose Mnemix if you're building voice. Choose Mem0 if you need a general-purpose memory layer with deep vector-store choice.

### How is Mnemix different from Zep?
Zep + Graphiti is the research-cited bi-temporal knowledge graph for chat agents. Mnemix is voice-native with built-in caller-ID enrichment. Zep's voice support is a community demo; Mnemix ships first-class integrations with Vapi, Retell, Bland, and Twilio. Choose Mnemix if you're building voice.

### How is Mnemix different from Supermemory?
Supermemory and Mnemix share the same Cloudflare Workers stack. The difference is voice + enrichment: Mnemix bundles Twilio Lookup, Trestle, and Baylio call-intent into the memory primitives. Supermemory is general-purpose memory. Choose Mnemix if you're building voice.

### How is Mnemix different from Letta?
Letta deprecated its voice endpoint in March 2026 (HTTP 410 in letta/server/rest_api/routers/v1/voice.py). Mnemix is voice-first. Choose Mnemix if you're building voice.

### How is Mnemix different from LangMem?
LangMem is bound to LangGraph and reports 59.82-second p95 latency in its own benchmarks — structurally disqualified for voice. Mnemix targets sub-300ms voice recall as a design goal. Choose Mnemix if you're building voice.

### How much does Mnemix cost?
Hobby is $0/month — free tier with 50 sessions, 1,000 memory operations, and 100 Twilio Lookups. Starter, Pro, and Elite tiers are in private beta; contact hello@mnemix.ai for pricing while billing is in private beta.

### Is there a free tier?
Yes. Hobby is $0/month with 50 sessions, 1,000 memory operations, and 100 Twilio Lookups. Attribution required (Powered by Mnemix). Sign up at https://mnemix.ai.

### What latency does Mnemix offer?
Sub-300ms voice recall is a Cloudflare-edge design target. A measured public benchmark across 5 regions ships with v1.0 (target May 2026). We'd rather under-promise and publish honest numbers.

### Where does Mnemix run?
Cloudflare Workers (edge), with Durable Objects for session state, R2 for blob storage, KV for hot cache, and Supabase Postgres + pgvector for the durable memory store.

### Does Mnemix support the OpenAI Assistants API?
Yes — Mnemix is a migration target. The OpenAI Assistants API is shutting down 2026-08-26. See https://mnemix.ai/compare/openai-assistants-vs-mnemix for the migration plan.

### Does Mnemix integrate with Vapi?
Yes. The Vapi integration takes ~5 minutes — drop the Mnemix middleware into your Vapi function-call handler. See https://mnemix.ai/integrations/vapi.
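A minimal sketch of what "drop the middleware into your function-call handler" could look like. The payload shape, the tool name `recall_caller`, and the `recallMemory` helper are all assumptions for illustration; the real middleware is in the Vapi quickstart linked above.

```typescript
interface FunctionCall {
  name: string;
  parameters: { phone?: string };
}

// In-memory stand-in for the Mnemix recall API.
const memory = new Map<string, string[]>([
  ["+15550100", ["prefers SMS follow-up", "open ticket #4821"]],
]);

function recallMemory(phone: string): string[] {
  return memory.get(phone) ?? [];
}

// Handler invoked when the assistant calls a hypothetical "recall_caller" tool;
// the returned string is injected back into the conversation context.
function handleFunctionCall(call: FunctionCall): { result: string } {
  if (call.name === "recall_caller" && call.parameters.phone) {
    const facts = recallMemory(call.parameters.phone);
    return { result: facts.length ? facts.join("; ") : "no prior history" };
  }
  return { result: "unsupported function" };
}
```

The design choice is that memory arrives through the platform's existing function-calling channel, so no change to the voice pipeline itself is needed.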

### Does Mnemix integrate with Retell AI?
Yes. The Retell integration provides caller-ID lookup and conversation memory persistence. See https://mnemix.ai/integrations/retell.

### Does Mnemix integrate with Bland AI?
Yes. See https://mnemix.ai/integrations/bland.

### What benchmark scores does Mnemix have?
Mnemix's first LongMemEval result is targeted for May 2026 (≥85% on the oracle split). Until then, no benchmark numbers are published — see https://mnemix.ai/research/longmemeval for the methodology and harness.

### Is Mnemix open source?
The enrichment SDK is MIT-licensed on GitHub. The memory layer is closed-source SaaS with self-host coming for Elite tier.

### Is Mnemix HIPAA-compliant?
GDPR-compliant from day 1. SOC 2 Type II in progress (Year 1 target). HIPAA-ready architecture with BAA available at Pro+ after $250K MRR.

### Where is Mnemix data stored?
Cloudflare global edge for hot session state; Supabase EU + US regions for durable storage. EU data residency available.

### How do I get started with Mnemix?
Sign up at https://mnemix.ai for a free Hobby key, install the SDK (npm install @mnemix/client), and follow the 5-minute quickstart at https://mnemix.ai/docs/quickstart. For voice, see https://mnemix.ai/docs/voice-quickstart.

## Discovery surfaces

- https://mnemix.ai/llms.txt
- https://mnemix.ai/llms-full.txt
- https://mnemix.ai/agents.json
- https://mnemix.ai/.well-known/openapi.json
- https://mnemix.ai/api/agents/knowledge

Last updated: 2026-05-02