Privacy & your data
OpenMind's promise is in the tagline: Talk it. Map it. Own it. The 'own it' part is load-bearing — here's what that actually means.
What we store
Your messages, the extracted graph (nodes + edges + their embeddings), your project metadata, and authentication artefacts (your email, magic-link tokens). Nothing else. We don't profile you, we don't run analytics on your conversations, and we don't sell anything.
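As a rough sketch rather than the actual schema (field and type names here are illustrative assumptions, not our real table definitions), the stored data boils down to shapes like these:

```ts
// Illustrative only: approximate shapes, not OpenMind's actual tables.
interface StoredMessage {
  id: string;
  projectId: string;
  role: "user" | "assistant";
  content: string;            // the raw message text
  createdAt: string;
}

interface GraphNode {
  id: string;
  projectId: string;
  label: string;              // canonical key extracted from your messages
  embedding: number[];        // vector used for retrieval
}

interface GraphEdge {
  id: string;
  projectId: string;
  sourceId: string;           // GraphNode.id
  targetId: string;           // GraphNode.id
  relation: string;
}

interface AuthRecord {
  email: string;              // the only personal identifier we keep
  magicLinkToken: string;     // short-lived sign-in token
}
```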
Where it lives
On the hosted demo: a Supabase Postgres in EU-central. Row-level security ensures one user's project rows are invisible to every other user — enforced by the database itself, not by application code. On self-host: wherever you put it. That's the whole point.
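To make that concrete, here's a minimal client-side sketch, assuming a supabase-js client and a projects table (both names are assumptions, not necessarily our real ones): signed in as one user, a query simply never returns another user's rows, because the policy is evaluated inside Postgres.

```ts
import { createClient } from "@supabase/supabase-js";

// Hypothetical project URL, anon key, and table name.
const supabase = createClient("https://example.supabase.co", "public-anon-key");

// Signed in as user B, this query never returns user A's rows:
// the row-level security policy filters them out inside Postgres,
// so there is no application-side check to forget or bypass.
const { data, error } = await supabase.from("projects").select("*");
if (error) throw error;
console.log(data); // only the signed-in user's own projects, possibly []
```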
Who can read it
You. The maintainers can technically read raw rows on the hosted demo for debugging, but we don't, and we'd ask first. LLM providers (Anthropic, OpenAI, Ollama, LM Studio) see only the slice of context an extraction or RAG call needs — never your full history. If you bring your own API key, those calls go directly from our backend to the provider you chose.
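As a hedged illustration of what "only the slice it needs" means (the function and field names below are invented for this sketch, not our actual extraction code), a provider call gets roughly this much and no more:

```ts
// Illustrative sketch, not the actual extraction pipeline.
interface RetrievedNode {
  label: string;
  relation?: string;
}

// Only the current message plus a handful of retrieved graph nodes go out;
// the rest of the conversation history stays on the backend.
function buildExtractionContext(currentMessage: string, retrieved: RetrievedNode[]): string {
  const context = retrieved
    .slice(0, 8)
    .map((n) => `- ${n.label}${n.relation ? ` (${n.relation})` : ""}`)
    .join("\n");
  return `Context:\n${context}\n\nMessage:\n${currentMessage}`;
}
```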
Export everything
Settings → Account → 'Download my data' produces a single ZIP containing every project, every message, every node and edge, and the metadata row for every attachment. Everything is JSON, readable in any text editor. Use it to migrate to a self-hosted instance, archive your work, or just verify what we have.
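If you'd rather inspect the export programmatically than in a text editor, a sketch like this works; the file names inside the ZIP are assumptions, so check your own archive for the real layout:

```ts
import { readFile } from "node:fs/promises";
import JSZip from "jszip";

// Load the export and list everything in it.
const zip = await JSZip.loadAsync(await readFile("openmind-export.zip"));
for (const name of Object.keys(zip.files)) {
  console.log(name);
}

// "projects.json" is a guess at one entry's name; check your own archive.
const entry = zip.file("projects.json");
if (entry) {
  const projects = JSON.parse(await entry.async("string"));
  console.log(`${projects.length} projects in the export`);
}
```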
Delete your account
Settings → Account → 'Delete account' wipes your auth row and cascades through every project, conversation, message, node, edge, attachment, and embedding. You'll be asked to type your email to confirm, to prevent accidents. The deletion is immediate and irreversible: there's no soft-delete grace period.
Self-host if you want full control
OpenMind ships as a docker-compose stack you can run on a $5/month VPS or your own server. See the self-hosting guide. It's the same code as the hosted demo, and your data never leaves your infrastructure.
Training data — strictly opt-in, per project
OpenMind's roadmap includes a small fine-tuned extraction model (Phase 4). Training data can come only from projects whose owners explicitly opt in via the "Training data…" item in the project kebab menu; every project ships opted out by default. Even when a project is opted in, the corpus contains only anonymised triples (project-salted SHA-256 hashes of canonical keys, never raw labels or messages) and is assembled only when you run POST /api/v1/corpus yourself; the maintainers don't auto-pull from your projects.
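As an illustration of what "project-salted SHA-256 hashes of canonical keys" means in practice (the exact salt derivation and formatting below are assumptions, not the shipped implementation):

```ts
import { createHash } from "node:crypto";

// Illustrative only: the real salt derivation may differ.
// Because the salt is per-project, the same concept hashes to different
// values in different projects, so triples can't be joined across projects.
function anonymiseKey(projectSalt: string, canonicalKey: string): string {
  return createHash("sha256").update(`${projectSalt}:${canonicalKey}`).digest("hex");
}

// A triple like (coffee, improves, focus) leaves as three opaque hashes.
const salt = "example-per-project-salt";
console.log(["coffee", "improves", "focus"].map((k) => anonymiseKey(salt, k)));
```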