Pick the right LLM provider
OpenMind talks to four LLM providers through a thin adapter layer. Choose one per project in Settings → LLM provider; you can switch at any time without losing the existing graph, because the stored data shape is provider-agnostic.
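The adapter boundary described above can be sketched roughly as follows. This is a minimal illustration, not OpenMind's actual code: the names `LLMAdapter`, `ExtractedEdge`, and `build_graph` are hypothetical, chosen only to show why the graph survives a provider switch.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical provider-agnostic result shape: whichever provider runs the
# extraction, the rest of the app only ever sees this structure.
@dataclass
class ExtractedEdge:
    source: str
    relation: str
    target: str

class LLMAdapter(Protocol):
    """One implementation per provider (Claude, GPT-4o, Ollama, LM Studio)."""
    def extract(self, text: str) -> list[ExtractedEdge]: ...

class FakeAdapter:
    """Stand-in adapter, used here only to demonstrate the boundary."""
    def extract(self, text: str) -> list[ExtractedEdge]:
        return [ExtractedEdge("OpenMind", "supports", "four providers")]

def build_graph(adapter: LLMAdapter, note: str) -> list[ExtractedEdge]:
    # The caller never knows which provider produced the edges, which is
    # what makes switching providers safe for existing graphs.
    return adapter.extract(note)
```

Because every adapter returns the same edge shape, swapping Claude for a local Ollama model changes extraction quality, not the data model.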
- Anthropic Claude Sonnet 4.6 (Cloud · default)
  The default. Chosen for extraction quality on conversational German and English. The fastest path to a working extraction loop: no setup beyond an API key.
- OpenAI GPT-4o (Cloud · alternative)
  A solid alternative when you already have OpenAI credit or your team is standardised on the OpenAI API. Its extraction style differs slightly, so run an A/B comparison per project to see which fits your domain.
- Ollama (Local · privacy-first)
  Runs end-to-end offline on your own machine. Smaller models (Llama 3.2, Phi-4-mini, Qwen 2.5) trade extraction quality for zero data leaving the box. Best for sensitive projects and self-host operators.
- LM Studio (Local · power-user)
  For those who prefer LM Studio's GUI for managing local models and quantisations. Same endpoint shape as Ollama: the adapter simply calls the OpenAI-compatible server LM Studio exposes on port 1234.
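Since both local options expose an OpenAI-compatible chat-completions server, talking to either one looks the same apart from the base URL. A minimal sketch using only the standard library; the ports are the tools' defaults (LM Studio on 1234, Ollama's `/v1` compatibility route on 11434) and may differ if you have changed them locally, and the model name is a placeholder:

```python
import json
from urllib import request

# Both local servers speak the OpenAI chat-completions shape; only the
# base URL differs. Ports shown are the tools' defaults.
BASE_URLS = {
    "lmstudio": "http://localhost:1234/v1",
    "ollama": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible route
}

def chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build an OpenAI-style chat-completions request for a local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# To actually send it, the local server must be running, e.g.:
# with request.urlopen(chat_request(BASE_URLS["lmstudio"], "qwen2.5", "Hi")) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```

This is why the adapter needs no LM Studio-specific code path: pointing the existing OpenAI-style client at a different base URL is the whole integration.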