OpenRAG

Configure your LLM endpoint to power the RAG demo

OpenAI-compatible endpoint. Works with OpenAI, local servers (Ollama, LM Studio), or any compatible proxy.
Stored locally in your browser. Never sent anywhere except your configured endpoint.
Model identifier for your endpoint (e.g. gpt-4o-mini, llama3, mistral).
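Any OpenAI-compatible server accepts the same request shape, so the endpoint and model fields above are all the demo needs. A minimal sketch of the request they configure — the base URL, key, and model here are placeholder assumptions (an Ollama server on its default port), not values the demo ships with:

```python
import json

# Placeholder values -- substitute your own endpoint and model identifier.
base_url = "http://localhost:11434/v1"  # e.g. Ollama's OpenAI-compatible API
api_key = "sk-..."                      # many local servers ignore the key
model = "llama3"

# Standard OpenAI-compatible chat-completions request.
url = f"{base_url}/chat/completions"
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
body = json.dumps({
    "model": model,
    "messages": [{"role": "user", "content": "Hello"}],
})
```

Pointing `base_url` at api.openai.com/v1, a local server, or a proxy changes nothing else about the request.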
v0.9.2
Live Demo

Ask anything about your documents

RAG powered by your LLM · 6 source connectors · real answers

Load Knowledge Sources
Fetches the repo README by default, or specify a file path: owner/repo/docs/guide.md
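One way the path hint above can map to a fetchable URL — a sketch, not necessarily how the demo resolves it, and it assumes the default branch is `main`:

```python
def github_raw_url(spec: str, ref: str = "main") -> str:
    """Turn 'owner/repo' or 'owner/repo/path/to/file.md' into a raw-content URL.

    Assumes branch `ref`; a fuller implementation would resolve the
    default branch via the GitHub API first.
    """
    parts = spec.strip("/").split("/")
    owner, repo = parts[0], parts[1]
    path = "/".join(parts[2:]) or "README.md"  # no path given: fall back to README
    return f"https://raw.githubusercontent.com/{owner}/{repo}/{ref}/{path}"
```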
Generate at id.atlassian.com/manage-profile/security/api-tokens
JQL query to find issues. Returns the summary and description of the top 10 matches.
CQL query. Fetches page body content from the top 10 results.
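Both hints above correspond to the standard Atlassian Cloud REST search endpoints. A sketch of the URLs such queries translate to, assuming typical Jira and Confluence Cloud base URLs:

```python
from urllib.parse import urlencode

def jira_search_url(base: str, jql: str) -> str:
    # Jira REST search: summary + description of the top 10 matching issues.
    q = urlencode({"jql": jql, "maxResults": 10, "fields": "summary,description"})
    return f"{base}/rest/api/2/search?{q}"

def confluence_search_url(base: str, cql: str) -> str:
    # Confluence REST search: page bodies for the top 10 results.
    q = urlencode({"cql": cql, "limit": 10, "expand": "body.storage"})
    return f"{base}/rest/api/content/search?{q}"
```

Both endpoints take the API token from step one as HTTP basic-auth credentials alongside your Atlassian account email.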
Google Cloud API key with Drive API enabled. For public/shared files only.
The file ID from the URL (drive.google.com/file/d/{FILE_ID}/view), or paste the full link.
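Either input form can be normalized with a small regex — a sketch of the extraction the hint above describes:

```python
import re

def drive_file_id(link: str) -> str:
    """Accept a bare file ID or a full Drive link and return the file ID."""
    m = re.search(r"/file/d/([A-Za-z0-9_-]+)", link)
    return m.group(1) if m else link.strip()

# Public/shared files can then be fetched from the Drive v3 files endpoint
# with ?alt=media and the API key configured above.
```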
Fetches and extracts text content from any public web page.
📄
Tap to upload or drag & drop
.txt, .md, .json, .csv, .html, .xml, .log
Knowledge Base
Quick Queries
The RAG Pipeline
Live Status
Get started in 5 lines
✓ MIT Licensed

Free to use in commercial projects. No usage limits. Self-host anywhere.
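The RAG pipeline named above reduces to retrieve-then-generate: score the loaded chunks against the question, then prompt the configured LLM with only the top matches. A minimal illustration of that loop — toy keyword-overlap scoring stands in for real retrieval here, and none of these names are OpenRAG's actual interface:

```python
def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    # Toy relevance score: how many query words appear in each chunk.
    words = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(words & set(c.lower().split())))
    return scored[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    # Ground the model in the retrieved context before asking the question.
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The returned prompt is what gets posted to the chat-completions endpoint configured at the top of the page.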