The manageable AI backend for your documents and agents

API for developers, dashboard for business teams, BYOK with zero markup. Go from POC to production in minutes.

Compatible with every LLM provider, zero markup

Native BYOK — your OpenAI, Anthropic, Mistral or Azure OpenAI keys. You pay the providers directly.

A complete RAG pipeline

Everything you need to go from POC to production

Multimodal ingestion, hybrid search, reranking, agents with tools — without assembling five frameworks.

Text, PDF, images, tables

Import PDFs, DOCX, PPTX, Excel, Parquet, JSON and images. OCR, figure extraction, contextual chunking — we handle it all.

Three personas, one platform

Built for devs, usable by business teams

Every persona gets their own tools — but everyone works on the same data.

agent.ts
// 1. Install
$ bun add @ignitionai/sdk

import { IgnitionAI } from "@ignitionai/sdk";

const ai = new IgnitionAI({
  apiKey: process.env.IGNITION_API_KEY, // your IgnitionRAG API key
});

const stream = await ai.chat.stream({
  collectionId: "docs",
  message: "How do I deploy?",
});

for await (const { delta } of stream) {
  process.stdout.write(delta);
}

For devs — API, SDK, MCP

REST API with OpenAPI, TypeScript and Python SDKs, native MCP server. Plug your agent tools in under 5 minutes. BYOK, zero markup on your LLM tokens.

For business teams — Dashboard, Widgets, Monitoring

Drag & drop imports, user feedback, widget deployment, cost and quality monitoring. Not a single line of code to write.

Collections
Import
support-kb
412 documents
Ready
product-specs
87 documents
Ready
onboarding
34 documents
Indexing

For decision makers — ROI, costs, audit

ROI dashboards, GDPR audit logs, RBAC governance, full traceability. Production metrics, not promises.

Usage & costs
30d
Requests
12,487
+18%
LLM cost
€84
BYOK
Satisfaction
94%
+2.1
Self-hostable infra (Docker Compose, VPS or on-premise)
Native multimodal: cross-modal text ↔ image search
Hybrid vector + BM25 search with reranking
Post-ingestion LLM enrichment (summaries, entities, tags)
Native MCP — give custom tools to your agents
GDPR compliant, data in France, mandatory BYOK
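The hybrid vector + BM25 search above can be pictured as simple score fusion: each document carries a dense (vector) score and a sparse (BM25) score, and the two are blended before reranking. A minimal sketch — the weights, types, and names here are illustrative assumptions, not IgnitionRAG's actual pipeline:

```typescript
// Hybrid-search sketch: fuse a dense (vector) score with a sparse
// (BM25) score per document, then rank by the fused score.
// The 0.5/0.5 default weighting is an illustrative assumption.

interface ScoredDoc {
  id: string;
  vectorScore: number; // cosine similarity, assumed normalized to [0, 1]
  bm25Score: number;   // BM25 score, assumed normalized to [0, 1]
}

function hybridRank(docs: ScoredDoc[], alpha = 0.5): ScoredDoc[] {
  return [...docs].sort((a, b) => {
    const fa = alpha * a.vectorScore + (1 - alpha) * a.bm25Score;
    const fb = alpha * b.vectorScore + (1 - alpha) * b.bm25Score;
    return fb - fa; // descending: best fused match first
  });
}

const ranked = hybridRank([
  { id: "faq.md", vectorScore: 0.91, bm25Score: 0.2 },
  { id: "pricing.md", vectorScore: 0.4, bm25Score: 0.95 },
  { id: "intro.md", vectorScore: 0.3, bm25Score: 0.1 },
]);
```

A keyword-heavy query can win on BM25 even when its vector score is modest — which is exactly why fusing the two beats either signal alone.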
Workflow Builder — no-code

Compose your agents without writing a single line of code

Drag and drop your steps, plug in your collections, test live. Your business teams build, your devs stay free to ship the rest.

support-agent.workflow
Live in prod
User question
Webhook / widget
RAG retrieval
support-kb · hybrid
Condition
if score > 0.7
LLM answer
GPT-4o · BYOK
Response
Widget / Slack
Drag & drop to edit
Visual drag & drop editor, zero code
Native conditions, branches and loops
Plug any LLM, tool or MCP server
One-click deploy as widget or API
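The support-agent workflow shown above (retrieve → score gate → answer or fallback) maps to straightforward logic. A hand-written sketch, where the in-memory knowledge base, step names, and fallback message are all assumptions for illustration:

```typescript
// Sketch of the support-agent workflow as plain code:
// retrieve, gate on retrieval confidence, then answer or fall back.

interface RetrievalResult {
  passage: string;
  score: number; // retrieval confidence in [0, 1]
}

// Stand-in for the "RAG retrieval" step (support-kb · hybrid).
function retrieve(question: string): RetrievalResult {
  const kb: Record<string, RetrievalResult> = {
    deploy: { passage: "Run the deploy command from your project root.", score: 0.92 },
  };
  const hit = Object.keys(kb).find((k) => question.toLowerCase().includes(k));
  return hit ? kb[hit] : { passage: "", score: 0 };
}

// The "Condition: if score > 0.7" gate, then the answer step.
function answer(question: string): string {
  const result = retrieve(question);
  if (result.score > 0.7) {
    // In the real workflow this passage is passed to the LLM (BYOK) as context.
    return `Based on our docs: ${result.passage}`;
  }
  return "I'm not sure — routing you to a human agent."; // fallback branch
}

const reply = answer("How do I deploy?");
```

The score gate is what keeps the agent honest: below the threshold, it escalates instead of guessing.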

A complete stack under one roof

Stop assembling seven tools just to run a RAG agent in production.

LLM
Embeddings
Vector DB
Reranking
Agents
Workflows
Widgets
Pricing

Simple, transparent pricing

Start for free. Scale as you grow.

Free

€0/mo
Get started
  • 1 Collection
  • 50 Documents
  • 1 Workflow
  • 100 Runs/month
Most popular

Pro

€99/mo
Choose Pro
  • 15 Collections
  • 2,000 Documents
  • 20 Workflows
  • 5,000 Runs/month
  • 5 MCP Servers
  • All triggers
  • API + SDK access
  • 1 User (solo)

Scale

€399/mo
Choose Scale
  • 50 Collections
  • 10,000 Documents
  • 100 Workflows
  • 50,000 Runs/month
  • 50 MCP Servers
  • All triggers
  • API + SDK access
  • 15 Members
  • Priority support

Enterprise

Custom
Contact sales
  • Dedicated deployment
  • Azure & Microsoft 365 ingestion
  • Unlimited everything
  • BYOK (Bring Your Own Keys)
  • Guaranteed SLA
  • Dedicated support
  • Custom domain
  • SSO / SAML

All plans billed in euros. No commitment, cancel anytime. BYOK: you pay LLM providers directly, zero markup.

FAQ

Questions? Answers.

Everything you need to know.

1

How is this different from a classic chatbot?

A classic chatbot follows predefined scripts or responds from memory without sources. IgnitionRAG uses RAG: it searches your documents in real-time to answer with precise, sourced information. Beyond chat, you have autonomous agents, workflows, and an API to build complete AI systems.

2

How is this different from an agency contract like JAsk or Auria?

Agencies like JAsk or Auria sell custom projects at €50,000 – €200,000, delivered over months. IgnitionRAG delivers the same outcome self-serve, from €99/month, because the platform automates 90% of the work: ingestion, RAG pipeline, deployment, observability. You keep control of your data and your keys, and you can stop whenever you want.

3

How is this different from Langflow / Flowise / Dify?

Langflow and Flowise are visual builders for experimenting with LLM pipelines. They're great for POCs but often lack observability, scaling, and production tools (SDK, robust API, A/B testing). IgnitionRAG is designed to go from experimentation to production without rebuilding everything.

4

Do I need to know how to code?

Not to start. The no-code interface lets you create assistants and workflows in a few clicks. But if you code, you get access to a complete API, TypeScript/Python SDKs, and full pipeline control — the best of both worlds.

5

Can I use my own API keys?

Yes, it's native. BYOK (Bring Your Own Keys) lets you use your OpenAI, Anthropic, or other provider keys. You keep full control of costs and models. We don't take commission on your LLM usage.

6

Can I embed the widget on my site?

Yes, in 2 lines of code. The widget is embeddable on any site or app. Customizable styles, injectable context, and built-in analytics to track conversations.

7

What is RAG, concretely?

RAG stands for Retrieval-Augmented Generation. The idea: instead of asking a language model to answer "from memory" (which produces hallucinations), we first search your documents for relevant passages, then feed them to the model as context so it produces a grounded, sourced answer. Result: reliable, verifiable answers that evolve with your data — no retraining needed.
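In code, that retrieve-then-generate loop looks roughly like this. A toy sketch that uses keyword overlap in place of real embeddings — the passages, function names, and scoring are all illustrative, not IgnitionRAG internals:

```typescript
// Toy RAG pipeline: score passages against the question, keep the
// best ones, and build a grounded prompt for the LLM. Real systems
// use embeddings + BM25 instead of this naive keyword overlap.

const passages = [
  "Refunds are processed within 5 business days.",
  "Our office is closed on public holidays.",
  "Refund requests must include the original order number.",
];

// Naive relevance: count meaningful question words found in the passage.
function score(question: string, passage: string): number {
  const words = question.toLowerCase().match(/\w+/g) ?? [];
  const text = passage.toLowerCase();
  return words.filter((w) => w.length > 3 && text.includes(w)).length;
}

// Retrieval step: rank passages, keep topK, inline them as numbered sources.
function buildPrompt(question: string, topK = 2): string {
  const context = [...passages]
    .sort((a, b) => score(question, b) - score(question, a))
    .slice(0, topK)
    .map((p, i) => `[${i + 1}] ${p}`)
    .join("\n");
  return `Answer using only the sources below.\n${context}\n\nQ: ${question}`;
}

const prompt = buildPrompt("How long do refunds take?");
```

Because the model only sees the retrieved sources, its answer stays grounded in your documents — and updating the answer is just updating the documents, no retraining.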

8

Is my data kept private? Where is it stored?

Yes. Your documents and vectors are stored in your dedicated collection — never mixed with other customers'. We never train models on your data. LLMs (OpenAI, Anthropic) are called through your own keys in BYOK mode, so your queries don't pass through our accounts. Hosted in France, GDPR-compliant, on-premise option available for enterprise.

Ready to go from POC to production?

Start for free. No credit card required. BYOK with zero markup. Cancel anytime.