
Run OpenMind on your own server


OpenMind is Apache-2.0 licensed; the same Docker Compose stack that backs the hosted demo runs on a $5/month VPS, your laptop, or your air-gapped lab. Five steps take you from `git clone` to a working instance.

  1. Clone the repo

    `git clone https://github.com/Impetoast/Openmind` and `cd` into it. Everything downstream lives at the repo root — no need to dig into subdirectories unless you're modifying code.
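In shell form (the repo clones into a directory named after it):

```shell
# Clone the repository and enter it
git clone https://github.com/Impetoast/Openmind
cd Openmind
```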

  2. Set environment variables

Copy `.env.example` to `.env`. Fill in your Anthropic / OpenAI API keys (or skip them and use Ollama later). The defaults boot a working stack against the bundled Postgres; only the LLM keys are required for the extraction pipeline to actually run.
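A minimal sketch of this step, assuming the key names in `.env.example` follow the common `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` convention (check the file itself for the exact names):

```shell
# Create your local env file from the template
cp .env.example .env
# Then edit .env and add at least one LLM key, e.g.:
#   ANTHROPIC_API_KEY=sk-ant-...   # hypothetical variable name
#   OPENAI_API_KEY=sk-...          # hypothetical variable name
```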

  3. Boot the stack

    `docker compose up -d`. First boot pulls images from GitHub Container Registry and applies migrations against the bundled Postgres — about 90 seconds end-to-end on a typical VPS. The web app shows up at http://localhost:3000.
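The boot-and-verify loop looks like this; the log and status commands are standard Docker Compose and are just one way to confirm the stack came up:

```shell
# Start all services in the background
docker compose up -d
# Tail startup and migration logs
docker compose logs -f --tail=50
# List services and their current state
docker compose ps
```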

  4. Sign in

    Magic-link auth works locally too — the bundled Inbucket service captures every email at http://localhost:54324 so you can click your own magic link without a real SMTP provider. Production deploys configure a real provider via the `[auth.hook.send_email]` block in `supabase/config.toml`.
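Inbucket also exposes a REST API alongside its web UI, which is handy for scripting. Assuming the default port of 54324 and Inbucket's usual `/api/v1/mailbox/{name}` route (the mailbox name is the local part of the address you signed up with):

```shell
# List messages captured for user "me" (i.e. me@example.com)
curl -s http://localhost:54324/api/v1/mailbox/me
```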

  5. Bring your own LLM

    Optional: install Ollama on the same host, pull a small model (`ollama pull llama3.2`), and switch the project's provider in Settings → LLM provider. The result is fully offline; no cloud LLM calls leave the box. The same goes for LM Studio if you prefer its UI.
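On a Linux host the Ollama side of this step is three commands; the final `ollama run` is just a smoke test to confirm the model answers locally:

```shell
# Install Ollama via its convenience script
curl -fsSL https://ollama.com/install.sh | sh
# Pull a small model
ollama pull llama3.2
# Smoke test: the model should reply from the local host
ollama run llama3.2 "Say hello in one word."
```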

Full operator depth — every env var, every compose profile, every gotcha — lives in the contributor guide at docs/self-hosting.md (in the repo). This page is the elevator pitch; the docs are the manual.