Launching March 2026

GDPR-compliant RAG chatbot
you actually own

A self-hosted boilerplate for building AI chatbots over your documents. No US data transfers. No JavaScript frameworks. Deploy to a €5/mo Hetzner VPS in minutes.


Newsletter

Subscribe

We use Brevo as our marketing platform. By submitting this form you agree that the personal data you provided will be transferred to Brevo for processing in accordance with Brevo's Privacy Policy.

One-time purchase, from €49 · Full source code · Self-hosted · MCP-ready
FastAPI backend · HTMX frontend · Ollama / Mistral / OpenAI · ChromaDB vectors · MCP server · Docker deploy

The problem with AI chatbots in Europe

Most RAG solutions either send your data to the US, lock you into expensive SaaS, or require a full-stack JavaScript team to customize.

The problem

SaaS RAG tools route data through US servers. Your client data leaves the EU. Your DPO is unhappy. You need a Data Processing Agreement you'll never get.

EuroRAG

Everything runs on your server. Ollama for local inference, ChromaDB for local vectors. Data never leaves your VPS. Mistral (French) as the default cloud provider when you need one.

The problem

Paid boilerplates like ChatRAG and StartKit.AI are locked into Supabase, Pinecone, and OpenAI. You can't self-host the vector store. You can't run inference locally. Your GDPR story falls apart at the architecture level.

EuroRAG

EuroRAG is self-hosted top to bottom. ChromaDB runs on your machine. Ollama runs on your machine. No cloud dependency, no API keys required. Add Mistral or OpenAI when you choose to — with full data residency tracking.

The problem

Most RAG boilerplates are React + Node.js + Python + Docker orchestration. Your Python team now needs JavaScript expertise to customize the frontend.

EuroRAG

One language. Python handles the backend, Jinja2 handles the templates, HTMX handles interactivity. If you know Python, you can customize everything. No npm install required.

What's in the box

Everything you need to build a production RAG chatbot, nothing you don't.

🔒

GDPR by Architecture

Data residency tracking per provider (LOCAL, EU, US). Deletion cascades. Audit logs. Strict EU mode blocks non-EU providers entirely. Not a checkbox — a design principle.

🐍

Pure Python Stack

FastAPI + Jinja2 + HTMX. One language, one codebase, one container. No React, no Node.js, no webpack. Your Python team owns the entire stack.

🤖

Flexible Model Support

Ollama for fully local inference. Mistral API for EU-hosted cloud. OpenAI/Groq when you need it. Switch providers by changing one env variable.

🔌

MCP Server Built-In

Expose your private documents as an MCP tool. Claude Desktop, LangGraph, CrewAI, or any MCP client can query your knowledge base without custom integration.

📄

Document Connectors

Upload PDFs, DOCX, TXT, Markdown. Sync from local folders, crawl websites, connect Nextcloud. All self-hosted, no monthly connector fees.

💬

Streaming Chat + API

Real-time SSE streaming in the UI. Full OpenAI-compatible API at /v1/chat/completions so your existing tools just work.
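Because the endpoint mirrors OpenAI's chat completions schema, any client that speaks that schema can point at it. A minimal sketch using only the standard library; the base URL, port, and model name are assumptions for illustration:

```python
# Sketch: calling a self-hosted OpenAI-compatible endpoint.
# The URL, port, and model name below are illustrative assumptions --
# substitute the values from your own deployment.
import json
import urllib.request

payload = {
    "model": "mistral",  # whichever model your server is configured to serve
    "messages": [
        {"role": "user", "content": "What does our refund policy say?"}
    ],
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",  # your own server, not a US API
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment against a live deployment
```

The same shape works with the official OpenAI SDKs by overriding the base URL, which is what lets existing tools "just work."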

🔄

Update-Safe Customization

Core/custom separation architecture. Modify templates, add routes, swap services — your changes survive when you pull updates from the core engine.

💶

Runs on €5/mo

Designed for Hetzner CX22 (2 vCPU, 4GB RAM). Docker Compose up and you're running. No Kubernetes, no managed services, no surprise bills.

🛡️

Security Hardened

CSRF protection, rate limiting, path traversal prevention, session management, structured logging with request IDs. Audited and production-ready.

Deploy in 5 minutes

From purchase to running chatbot on your own server.

Get the source code

Purchase and download the complete codebase. Unzip it on your server or local machine.

Configure your environment

Copy .env.example to .env. Set your model provider (Ollama for local, Mistral for EU cloud), choose your vector store, done.
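As an illustration, a local-first configuration might look like this. The variable names below are assumptions for the sketch; the shipped `.env.example` documents the real ones:

```
# Illustrative .env sketch -- variable names are hypothetical,
# check the shipped .env.example for the actual keys.
LLM_PROVIDER=ollama          # or: mistral, openai
OLLAMA_MODEL=mistral
VECTOR_STORE=chromadb
STRICT_EU_MODE=true          # block non-EU providers entirely
```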

Deploy with Docker

Run docker compose up -d. The app, Ollama, and ChromaDB start together. Everything stays on your server.

Upload documents and chat

Drop your PDFs into the admin UI, wait for indexing, and start asking questions. Connect Nextcloud or a web crawler for ongoing sync.

Make it yours

Customize templates, add routes, change the branding. It's your code — build the product your clients need.

How EuroRAG compares

Compared against other paid RAG boilerplates and the leading open-source alternative.

|                              | EuroRAG                 | ChatRAG               | StartKit.AI           | AnythingLLM                 |
|------------------------------|-------------------------|-----------------------|-----------------------|-----------------------------|
| Type                         | Boilerplate             | Boilerplate           | Boilerplate           | Full app                    |
| Stack                        | Python + HTMX           | Next.js + React       | Node.js + React       | JS + Python                 |
| Fully self-hosted            | Yes, everything local   | No, requires Supabase | No, requires Pinecone | Yes                         |
| Local LLM inference          | Ollama built-in         | OpenAI only           | OpenAI primary        | Yes                         |
| GDPR data residency tracking | Built-in                | None                  | None                  | Manual                      |
| MCP server                   | Built-in                | No                    | No                    | No                          |
| Designed for customization   | Core/custom separation  | Fork and modify       | Fork and modify       | Full app, hard to rip apart |
| Runs on €5/mo VPS            | Yes                     | Needs Supabase        | Needs Pinecone        | Yes                         |
| Price                        | From €49                | $199–269              | $199+                 | Free / $40                  |

AnythingLLM is excellent if you want a ready-to-use app. ChatRAG and StartKit.AI are great if you're building on Next.js/Node.js. EuroRAG is for Python developers who want a self-hosted, GDPR-compliant starting point they fully control.

One price. Your code. Forever.

No subscriptions. No per-seat fees. No "contact sales." Buy it, own it, ship it.

Launching March 2026

Developer

€49
One-time payment
The RAG engine for agent builders. API + MCP server, no chat UI.
  • Core RAG engine
  • OpenAI-compatible API
  • MCP server endpoints
  • All document connectors
  • Data residency tracking
  • Docker Compose deployment
  • Community support
Get Notified
Launching March 2026

Starter

€89
One-time payment
The complete boilerplate. Everything you need for your own RAG chatbot.
  • Everything in Developer
  • Streaming chat UI
  • User authentication
  • Rate limiting
  • Citation display
  • Session management
  • Community support
Get Notified
Coming Soon

Agency

€349
One-time payment
Deploy for unlimited clients. White-label, multi-tenant, lifetime updates.
  • Everything in Starter
  • White-label rights
  • Multi-tenant workspaces
  • Priority email support
  • Lifetime updates
  • Deploy for unlimited clients
Get Notified

All tiers include full source code

Frequently asked questions

Is this actually GDPR-compliant out of the box?

EuroRAG gives you the technical infrastructure for GDPR compliance — data residency tracking, deletion flows, audit logs, consent management, and the ability to run everything locally or on EU servers. However, GDPR compliance also involves organizational measures (privacy policies, DPAs, etc.) that depend on your specific use case. We provide the tools; you'll still need legal review for your deployment.

What models can I use?

Any model that runs on Ollama (Mistral, Llama 3, Qwen, Gemma, etc.) for fully local inference. Any OpenAI-compatible API (Mistral API, OpenAI, Groq, Together AI) for cloud. Or self-hosted inference servers like vLLM or LocalAI. Switch providers by changing environment variables — no code changes needed.

What's the Developer tier for?

The Developer tier gives you the RAG engine, OpenAI-compatible API, and MCP server — without the chat UI or admin panel. It's for developers building agents or custom interfaces who want a GDPR-compliant private knowledge base they can query programmatically. Think of it as the backend for your own AI workflows.

I'm not a developer. Is this for me?

Probably not. EuroRAG is source code — a boilerplate for developers and agencies building RAG products. You need to be comfortable with Python, Docker, and basic server administration. If you want a ready-to-use app, AnythingLLM might be a better fit.

Why Python-only? Isn't React better for UIs?

React is powerful, but it means your team needs JavaScript expertise, a Node.js build pipeline, and a separate frontend deployment. EuroRAG uses Jinja2 templates + HTMX, which means the same Python developer who writes the API can also modify the UI. For a boilerplate you're meant to customize, this is a significant advantage.

How does it compare to ChatRAG or StartKit.AI?

ChatRAG ($199+) and StartKit.AI ($199+) are both excellent boilerplates, but they're built on Next.js/Node.js and depend on cloud services (Supabase, Pinecone, OpenAI). EuroRAG is fully self-hosted, runs entirely on your infrastructure with local inference via Ollama, and includes GDPR compliance tooling that neither competitor offers. If you're in the EU and care about data sovereignty, EuroRAG is the option that's GDPR-compliant by architecture.

Can it really run on a €5/month server?

Yes, with caveats. A Hetzner CX22 (2 vCPU, 4GB RAM) can run the app, ChromaDB, and small Ollama models for light usage (~10 concurrent users). For larger models or heavier loads, you'll want more RAM. The recommended setup is a CAX21 (4 ARM vCPU, 8GB) at €7.49/mo.

What's the MCP server for?

MCP (Model Context Protocol) is the open standard for connecting AI agents to data sources. With EuroRAG's MCP server, any MCP-compatible client — Claude Desktop, Cursor, LangGraph, CrewAI — can query your private documents without custom integration. Your documents stay on your server; agents just ask questions.

When does it launch?

March 2026. Leave your email above and you'll get a single notification when it's available — no drip campaigns, no spam.

Do you offer refunds?

No. Because you receive the complete source code immediately, all sales are final. If you have questions or would like to see code snippets before buying, get in touch.

Stop sending your clients' data to the US.

Get the source code. Deploy on your server. Own your AI stack.

Get Notified