The manageable AI backend for your documents and agents
API for developers, dashboard for business teams, BYOK with zero markup. Go from POC to production in minutes.
Compatible with every LLM provider, zero markup
Native BYOK — your OpenAI, Anthropic, Mistral or Azure OpenAI keys. You pay the providers directly.
Everything you need to go from POC to production
Multimodal ingestion, hybrid search, reranking, agents with tools — without assembling five frameworks.
Text, PDF, images, tables
Import PDFs, DOCX, PPTX, Excel, Parquet, JSON and images. OCR, figure extraction, contextual chunking — we handle it all.
Built for devs, usable by business teams
Every persona gets their own tools — but everyone works on the same data.
// 1. Install
$ bun add @ignitionai/sdk

// 2. Stream a grounded answer
import { IgnitionAI } from "@ignitionai/sdk";

const ai = new IgnitionAI({ apiKey });

const stream = await ai.chat.stream({
  collectionId: "docs",
  message: "How do I deploy?",
});

for await (const { delta } of stream) {
  process.stdout.write(delta);
}

For devs — API, SDK, MCP
REST API with OpenAPI, TypeScript and Python SDKs, native MCP server. Plug your agent tools in under 5 minutes. BYOK, zero markup on your LLM tokens.
For business teams — Dashboard, Widgets, Monitoring
Drag & drop imports, user feedback, widget deployment, cost and quality monitoring. Not a single line of code to write.
For decision makers — ROI, costs, audit
ROI dashboards, GDPR audit logs, RBAC governance, full traceability. Production metrics, not promises.
Compose your agents without writing a single line of code
Drag and drop your steps, plug in your collections, test live. Your business teams build, your devs stay free to ship the rest.
A complete stack under one roof
Stop assembling seven tools just to run a RAG agent in production.
Simple, transparent pricing
Start for free. Scale as you grow.
Pro
- 15 Collections
- 2,000 Documents
- 20 Workflows
- 5,000 Runs/month
- 5 MCP Servers
- All triggers
- API + SDK access
- 1 User (solo)
Scale
- 50 Collections
- 10,000 Documents
- 100 Workflows
- 50,000 Runs/month
- 50 MCP Servers
- All triggers
- API + SDK access
- 15 Members
- Priority support
Enterprise
- Dedicated deployment
- Azure & Microsoft 365 ingestion
- Unlimited everything
- BYOK (Bring Your Own Keys)
- Guaranteed SLA
- Dedicated support
- Custom domain
- SSO / SAML
All plans billed in euros. No commitment, cancel anytime. BYOK: you pay LLM providers directly, zero markup.
Questions? Answers.
Everything you need to know.
What's the difference with a classic chatbot?
A classic chatbot follows predefined scripts or responds from memory without sources. IgnitionRAG uses RAG: it searches your documents in real-time to answer with precise, sourced information. Beyond chat, you have autonomous agents, workflows, and an API to build complete AI systems.
How is this different from an agency contract like JAsk or Auria?
Agencies like JAsk or Auria sell custom projects at €50,000–€200,000, delivered over months. IgnitionRAG delivers the same outcome self-serve from €99/month, because the platform automates 90% of the work: ingestion, RAG pipeline, deployment, observability. You keep control of your data and your keys, and you can stop whenever you want.
What's the difference with Langflow / Flowise / Dify?
Langflow and Flowise are visual builders for experimenting with LLM pipelines. They're great for POCs but often lack observability, scaling, and production tools (SDK, robust API, A/B testing). IgnitionRAG is designed to go from experimentation to production without rebuilding everything.
Do I need to know how to code?
Not to start. The no-code interface lets you create assistants and workflows in a few clicks. But if you code, you get access to a complete API, TypeScript/Python SDKs, and full pipeline control — the best of both worlds.
Can I use my own API keys?
Yes, it's native. BYOK (Bring Your Own Keys) lets you use your OpenAI, Anthropic, or other provider keys. You keep full control of costs and models. We don't take commission on your LLM usage.
Can I embed the widget on my site?
Yes, in 2 lines of code. The widget is embeddable on any site or app. Customizable styles, injectable context, and built-in analytics to track conversations.
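Purely as an illustration, a script-tag embed of this kind typically looks like the snippet below. The URL and attributes here are placeholders, not the actual embed code for your widget.

```html
<!-- Hypothetical embed: the script URL and data attributes are placeholders. -->
<script src="https://cdn.example.com/ignition-widget.js" data-collection="docs"></script>
```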
What is RAG, concretely?
RAG stands for Retrieval-Augmented Generation. The idea: instead of asking a language model to answer "from memory" (which produces hallucinations), we first search your documents for relevant passages, then feed them to the model as context so it produces a grounded, sourced answer. Result: reliable, verifiable answers that evolve with your data — no retraining needed.
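The retrieve-then-generate loop can be sketched in a few lines. Everything below is illustrative: the toy documents, the keyword-overlap scoring (a stand-in for real embedding similarity), and the function names are assumptions for the sketch, not the IgnitionRAG API.

```typescript
// Toy sketch of the RAG loop. Keyword overlap stands in for vector
// similarity; a real pipeline uses embeddings and calls an LLM.
const docs = [
  { id: "deploy.md", text: "Deploy with the CLI: run ignite deploy from your project root." },
  { id: "billing.md", text: "Billing is monthly; BYOK usage is paid to the provider directly." },
];

// 1. Retrieval: score each document against the question, keep the top k.
function retrieve(question: string, k = 1) {
  const terms = question.toLowerCase().split(/\W+/).filter(Boolean);
  return docs
    .map((d) => ({
      ...d,
      score: terms.filter((t) => d.text.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

// 2. Augmentation: put the retrieved passages into the prompt as context.
function buildPrompt(question: string): string {
  const context = retrieve(question)
    .map((d) => `[${d.id}] ${d.text}`)
    .join("\n");
  return `Answer using only this context:\n${context}\n\nQuestion: ${question}`;
}

// 3. Generation: this prompt goes to the LLM, which answers from the
// supplied context instead of from memory, so the answer is sourced.
console.log(buildPrompt("How do I deploy?"));
```

Because the model only sees retrieved passages, updating the answers means updating the documents, with no retraining.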
Is my data kept private? Where is it stored?
Yes. Your documents and vectors are stored in your dedicated collection — never mixed with other customers'. We never train models on your data. LLMs (OpenAI, Anthropic) are called through your own keys in BYOK mode, so your queries don't pass through our accounts. Hosted in France, GDPR-compliant, on-premise option available for enterprise.
Ready to go from POC to production?
Start for free. No credit card required. BYOK with zero markup. Cancel anytime.