Transparent pricing

A plan for every step of your journey

From POC on Free to team deployment on Scale. Native BYOK, zero markup, cancel anytime.

Pricing

Simple, transparent pricing

Start for free. Scale as you grow.

Free

€0/mo
Get started
  • 1 Collection
  • 50 Documents
  • 1 Workflow
  • 100 Runs/month
Most popular

Pro

€99/mo
Choose Pro
  • 15 Collections
  • 2,000 Documents
  • 20 Workflows
  • 5,000 Runs/month
  • 5 MCP Servers
  • All triggers
  • API + SDK access
  • 1 User (solo)

Scale

€399/mo
Choose Scale
  • 50 Collections
  • 10,000 Documents
  • 100 Workflows
  • 50,000 Runs/month
  • 50 MCP Servers
  • All triggers
  • API + SDK access
  • 15 Members
  • Priority support

Enterprise

Custom
Contact sales
  • Dedicated deployment
  • Azure & Microsoft 365 ingestion
  • Unlimited everything
  • BYOK (Bring Your Own Keys)
  • Guaranteed SLA
  • Dedicated support
  • Custom domain
  • SSO / SAML

All plans billed in euros. No commitment, cancel anytime. BYOK: you pay LLM providers directly, zero markup.

FAQ

Questions? Answers.

Everything you need to know.

1

How is this different from a classic chatbot?

A classic chatbot follows predefined scripts or responds from memory without sources. IgnitionRAG uses RAG: it searches your documents in real-time to answer with precise, sourced information. Beyond chat, you have autonomous agents, workflows, and an API to build complete AI systems.

2

How is this different from an agency contract like JAsk or Auria?

Agencies like JAsk or Auria sell custom projects at €50,000–€200,000, delivered over months. IgnitionRAG delivers the same outcome self-serve from €99/month, because the platform automates 90% of the work: ingestion, RAG pipeline, deployment, observability. You keep control of your data and your keys, and you can stop whenever you want.

3

How is this different from Langflow / Flowise / Dify?

Langflow and Flowise are visual builders for experimenting with LLM pipelines. They're great for POCs but often lack observability, scaling, and production tools (SDK, robust API, A/B testing). IgnitionRAG is designed to go from experimentation to production without rebuilding everything.

4

Do I need to know how to code?

Not to start. The no-code interface lets you create assistants and workflows in a few clicks. But if you code, you get access to a complete API, TypeScript/Python SDKs, and full pipeline control: the best of both worlds.

5

Can I use my own API keys?

Yes, it's native. BYOK (Bring Your Own Keys) lets you use your OpenAI, Anthropic, or other provider keys. You keep full control of costs and models. We don't take commission on your LLM usage.

6

Can I embed the widget on my site?

Yes, in 2 lines of code. The widget is embeddable on any site or app. Customizable styles, injectable context, and built-in analytics to track conversations.

7

What is RAG, concretely?

RAG stands for Retrieval-Augmented Generation. The idea: instead of asking a language model to answer "from memory" (which produces hallucinations), we first search your documents for relevant passages, then feed them to the model as context so it produces a grounded, sourced answer. Result: reliable, verifiable answers that evolve with your data, no retraining needed.
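The retrieve-then-generate loop above can be sketched in a few lines. This is a generic illustration, not IgnitionRAG's actual pipeline: a toy keyword-overlap scorer stands in for real embedding-based vector search, and the final LLM call is omitted.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    # A real RAG system uses embeddings and a vector index instead.
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    # The retrieved passages become the model's context,
    # so its answer is grounded in your documents.
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 14 days of the request.",
    "Our office is open Monday to Friday, 9am to 6pm.",
    "Refund requests must include the original invoice number.",
]
passages = retrieve("Do refund requests need the invoice?", docs)
print(build_prompt("Do refund requests need the invoice?", passages))
```

In production the prompt would be sent to the LLM of your choice (via your own API key in BYOK mode), and the cited passages let the user verify the answer against the source documents.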

8

Is my data kept private? Where is it stored?

Yes. Your documents and vectors are stored in your dedicated collection, never mixed with other customers'. We never train models on your data. LLMs (OpenAI, Anthropic) are called through your own keys in BYOK mode, so your queries don't pass through our accounts. Hosted in France, GDPR-compliant, with an on-premise option available for enterprise.

Ready to go from POC to production?

Start for free. No credit card required. BYOK with zero markup. Cancel anytime.