Conclave is a native Windows app that combines a full IDE with a real debugger, a multi-provider AI chat, agent teams, persistent project memory, scheduled automations, and 140+ MCP plugins.
Free. Run local models with Ollama or free NVIDIA NIM — zero API cost. Or bring your own keys. Your conversations never touch our servers.
Cold-starts in under a second. Memory footprint that doesn't embarrass itself on an 8 GB machine. Behaves like a real Windows app — proper window snapping, multi-monitor, no Chromium baggage.
Claude, GPT, Gemini, Groq, Kimi, Copilot — all configured in one place. Sign in with Claude, ChatGPT, or Copilot directly, no API key required. Pin a model to an automation. Switch on the fly.
Run any Ollama model locally — Llama, Qwen, DeepSeek, Mistral — fully offline, zero API cost, your data never leaves the machine. Or use NVIDIA NIM's free hosted tier. Conclave treats them as equals to the paid providers.
Need a quick answer? Chat mode is your ChatGPT replacement. Building something? IDE mode gives you a multi-tab editor, integrated terminal, git source control, and a real debugger with breakpoints. One toggle. Same app.
Four layers of memory: per-project facts in .conclave/memory.md, a live code knowledge graph of every file and call, long-term archives you can semantically search, and shared team memory for Quorum. Your AI actually remembers your project.
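The per-project layer is just a Markdown file you can read and edit yourself. The headings and entries below are purely illustrative — Conclave doesn't require any particular schema — but a `.conclave/memory.md` might look like:

```markdown
# Project memory

## Facts
- Backend is ASP.NET Core 8; desktop client is WPF (MVVM).
- Database migrations live under src/Data/Migrations — never hand-edit them.

## Conventions
- Public APIs return Result<T> instead of throwing.
- Tests use xUnit, named MethodName_Scenario_ExpectedOutcome.
```

Because it's plain Markdown in your repo, it diffs, reviews, and versions like any other project file.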
F5, F9, F10, F11 — set breakpoints in the gutter, step through code, inspect state. Real Debug Adapter Protocol, real language servers for completions and diagnostics. Plus Open VSX extension marketplace for everything else.
Spin up multiple specialized agents (researcher, coder, reviewer) that work in parallel — same-provider runs collapse for speed, cross-provider runs fan out. Or run the same prompt across three models and compare answers side by side.
Built-in slash commands for the work you actually do: /health, /perf, /predict, /scan, /audit, /translate, /timemachine, /architect, /design, /resolve, /compare, and more.
Every risky turn snapshots your workspace before touching it. Diff preview with per-hunk accept/reject before applying. Five permission modes from Ask Permissions to Plan Mode to Autopilot. One-click undo, always.
Connect your AI to GitHub, Linear, Gmail, Stripe, Notion, your database, and dozens of other services. Curated MCP marketplace with OAuth handled for you. Per-server trust prompts. Install in one click.
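MCP servers are conventionally declared as a small JSON entry — a command to launch, arguments, and environment. Conclave's one-click install presumably writes the equivalent for you; this sketch follows the common MCP config convention (the GitHub server package shown is the reference implementation, and the token is a placeholder):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<token>" }
    }
  }
}
```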
"Every morning at 9, summarize my open PRs." "Every Friday, audit the codebase." Backed by Windows Task Scheduler — automations fire on schedule even when Conclave is closed. Results land in a tagged chat for review.
Kick off long-running tasks with /background and keep working. Autopilot loops through planning, tool use, and self-correction with live phase tracking. Auto-fix detects errors in tool output and offers one-click repair.
Conclave never sees your conversations. Your data goes directly from your machine to the AI provider you picked. API keys are DPAPI-encrypted and bound to your Windows account. Telemetry is opt-in. Export or delete all data anytime.
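Conclave's internals aren't shown here, but "DPAPI-encrypted and bound to your Windows account" corresponds to the standard .NET call with `DataProtectionScope.CurrentUser` — a sketch of what that looks like (the `apiKey` value is a placeholder):

```csharp
using System.Security.Cryptography;
using System.Text;

string apiKey = "sk-example"; // placeholder secret

// DPAPI, scoped to the current Windows account: only the same
// user on the same machine can decrypt.
byte[] encrypted = ProtectedData.Protect(
    Encoding.UTF8.GetBytes(apiKey),
    optionalEntropy: null,
    scope: DataProtectionScope.CurrentUser);

// Decryption succeeds only under the same Windows user profile.
byte[] decrypted = ProtectedData.Unprotect(
    encrypted,
    optionalEntropy: null,
    scope: DataProtectionScope.CurrentUser);
```

The upshot: keys stored this way are useless if copied to another machine or read by another account.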
Pre-built personas — Security Reviewer, REST API Builder, WPF Expert, and dozens more — that specialize the AI for the task. Or write your own skills and slash commands and reuse them across projects.
Push-to-talk voice input with Whisper. Paste or drop screenshots straight into chat. Built-in computer-use tools let agents take screenshots and drive your mouse and keyboard for desktop automation.
Most AI tools pick a side — Cursor is Claude+GPT, Codex is OpenAI-only, Claude Code is Claude-only. Conclave runs every major model in the same app. When the next one drops, you add a key and it works. No waiting for your tool to catch up.
Most people use ChatGPT Desktop for quick questions and Cursor or VS Code for coding — and lose context every time they switch. Conclave morphs between a simple chat view and a full IDE depending on what you're doing. Same conversations, same models, same everything.
Cursor, Windsurf, ChatGPT Desktop, Claude Desktop — all Electron. They run on Windows but were built for Mac and feel like it. Conclave is native WPF on .NET 8. It belongs on a Windows machine.
Bring your own API keys. Your conversations go directly from your machine to whichever AI provider you chose. Conclave never sees them. We don't have a server to subpoena, breach, or sell your data from — because we don't have a server.
Most "AI workstations" require a paid subscription before you can type a word. Conclave runs end-to-end on free options: install Ollama and pull Llama, Qwen, or DeepSeek for fully offline inference, or use NVIDIA NIM's free hosted models. No credit card, no usage caps, no telemetry on your prompts.
| Feature | Cursor | Windsurf | ChatGPT Desktop | Claude Desktop | Conclave |
|---|---|---|---|---|---|
| Native (not Electron) | ✗ | ✗ | ✗ | ✗ | ✓ |
| Multi-provider AI | partial | partial | ✗ | ✗ | ✓ All major |
| Full IDE with debugger | partial | partial | ✗ | ✗ | ✓ |
| Simple chat mode | partial | partial | ✓ | ✓ | ✓ |
| Agent orchestration | partial | partial | ✗ | ✗ | ✓ Team & Quorum |
| Persistent project memory | partial | partial | partial | partial | ✓ 4-layer |
| Code knowledge graph | ✗ | ✗ | ✗ | ✗ | ✓ |
| Plugin marketplace | ✗ | ✗ | ✗ | partial | ✓ 140+ MCP |
| Scheduled automations | ✗ | ✗ | ✗ | ✗ | ✓ |
| Auto-checkpoints & undo | partial | partial | ✗ | ✗ | ✓ |
| Local models (Ollama) | partial | partial | ✗ | ✗ | ✓ First-class |
| Runs free, no subscription | ✗ | ✗ | ✗ | ✗ | ✓ |
| BYOK (no vendor lock-in) | partial | partial | ✗ | ✗ | ✓ |
One native Windows app. Every major AI model. Run it free with local Ollama or NVIDIA NIM — or bring your own keys.
Get Early Access