The easiest way to use Agentic RAG in any enterprise.
As simple to configure as OpenAI's custom GPTs, but deployable in your own cloud infrastructure using Docker. Built using LlamaIndex.
Get Started · Endpoints · Deployment · Contact
To run RAGapp, start a Docker container with our image:
docker run -p 8000:8000 ragapp/ragapp
Then, access the Admin UI at http://localhost:8000/admin to configure your RAGapp.
You can use hosted AI models from OpenAI or Gemini, or run local models using Ollama.
Note: To avoid running into any errors, we recommend using the latest version of Docker and (if needed) Docker Compose.
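If you want to run local models, an Ollama server must be reachable from the RAGapp container. Here is a minimal sketch, assuming Ollama runs on the host machine; the model name is just an example, and the extra flag is only needed on Linux:

```shell
# Pull a local model for Ollama (llama3 is just an example; any model works).
ollama pull llama3

# Start RAGapp so the container can reach the host's Ollama server.
# --add-host is only needed on Linux; Docker Desktop (macOS/Windows)
# provides the host.docker.internal alias automatically.
docker run -p 8000:8000 \
  --add-host=host.docker.internal:host-gateway \
  ragapp/ragapp
```

In the Admin UI, you would then point the Ollama configuration at http://host.docker.internal:11434 (Ollama's default port).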
The Docker container exposes the following endpoints:
- Admin UI: http://localhost:8000/admin
- Chat UI: http://localhost:8000
- API: http://localhost:8000/docs
Note: The Chat UI and API are only functional once RAGapp has been configured in the Admin UI.
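The exact routes and request schemas are listed in the interactive API reference at http://localhost:8000/docs. As a rough sketch only, a chat request might look like the following; the /api/chat path and the payload shape are assumptions, so check /docs for the real ones:

```shell
# Hypothetical chat request -- the route and payload are assumptions;
# consult http://localhost:8000/docs for the actual API.
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is in my documents?"}]}'
```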
By design, the RAGapp container itself does not ship with an authentication layer. Providing one is the task of an API Gateway that routes traffic to RAGapp. How to set this up depends heavily on your cloud provider and the services you use. For a pure Docker Compose environment, you can look at our RAGapp with management UI deployment.
Later versions of RAGapp will support restricting access based on access tokens forwarded from an API Gateway or similar.
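To illustrate the gateway approach, here is a minimal sketch that puts nginx with HTTP basic auth in front of RAGapp via Docker Compose. The service names, file names, and credentials are assumptions made for this example, not part of RAGapp:

```shell
# Generate a credentials file (user "admin", password "changeme" -- replace both).
docker run --rm httpd:alpine htpasswd -nb admin changeme > .htpasswd

# Minimal nginx config: require basic auth, then proxy all traffic to RAGapp.
cat > nginx.conf <<'EOF'
server {
    listen 80;
    location / {
        auth_basic           "RAGapp";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://ragapp:8000;
    }
}
EOF

# Compose file: RAGapp publishes no host port and is only reachable
# through the gateway.
cat > docker-compose.yml <<'EOF'
services:
  gateway:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./.htpasswd:/etc/nginx/.htpasswd:ro
    depends_on:
      - ragapp
  ragapp:
    image: ragapp/ragapp
EOF

docker compose up -d
```

This protects every endpoint with the same credentials; a production setup would typically apply stricter rules to /admin than to the Chat UI.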
You can easily deploy RAGapp to your own infrastructure with one of these Docker Compose deployments:
RAGapp can also be deployed to your own cloud infrastructure on Kubernetes; customized K8S deployment descriptors are coming soon.
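Until those descriptors are available, one hedged way to try the single-container image on an existing cluster is plain kubectl, without any custom manifests:

```shell
# Run the image as a Deployment and tunnel to it from your machine.
kubectl create deployment ragapp --image=ragapp/ragapp --port=8000
kubectl port-forward deployment/ragapp 8000:8000
```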
To develop RAGapp locally, change to the src/ragapp directory and run the following commands:
export ENVIRONMENT=dev
poetry install --no-root
make build-frontends
make dev
Then, to check out the Admin UI, go to http://localhost:3000/admin.
Note: Make sure you have Poetry installed.
Questions, feature requests or found a bug? Open an issue or reach out to marcusschiesser.