Understand your data.
Completely private.

Nexus RAG is an enterprise-grade retrieval-augmented generation engine that runs entirely on your local machine. No data leaks, no subscriptions, just pure intelligence.

Why Nexus RAG?

Modern AI engineering for local-first data processing.

Multiple Notebooks

Organize your knowledge into separate notebooks. Isolated, secure, and always relevant.

🔒

100% Private

Powered by Ollama and HuggingFace, your data never leaves your hardware. Zero-cloud architecture.

⚡️

Fast Ingestion

Advanced document parsing and text chunking with LangChain, so every query gets optimal context.
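Conceptually, chunking splits each document into overlapping windows so no answer is cut off at a boundary. A minimal dependency-free stand-in for what LangChain's text splitters do (the sizes here are illustrative, not Nexus RAG's actual settings):

```python
# Simplified stand-in for LangChain-style text splitting: fixed-size
# chunks with overlap so context isn't lost at chunk boundaries.
# chunk_size and overlap values are assumptions, not the engine's config.
def chunk_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    chunks = []
    step = chunk_size - overlap  # advance less than a full chunk
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = ("word " * 100).strip()  # 499 characters of toy input
chunks = chunk_text(doc)
```

Because each chunk repeats the tail of the previous one, a sentence that straddles a boundary still appears whole in at least one chunk.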

💾

Persistence

Powered by ChromaDB, your vector stores are persisted locally for instant availability.
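Persistence means embeddings are written to disk once at ingestion and reloaded instantly on the next launch, with no re-embedding. ChromaDB manages this with its own on-disk store; the JSON layout below is a purely illustrative sketch of the idea:

```python
import json
import os
import tempfile

# Minimal stand-in for a persisted vector store. The file format is
# illustrative only; ChromaDB uses its own on-disk representation.
store_path = os.path.join(tempfile.mkdtemp(), "vectors.json")

# "Ingest": persist toy 3-dimensional document embeddings to disk.
store = {"doc-1": [0.1, 0.2, 0.3], "doc-2": [0.4, 0.5, 0.6]}
with open(store_path, "w") as f:
    json.dump(store, f)

# "Next launch": reload the store instantly, no re-embedding needed.
with open(store_path) as f:
    reloaded = json.load(f)
```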

Local Installation

Run Nexus RAG on your own hardware in less than 5 minutes.

1. Clone & Environment

git clone https://github.com/artjomartur/nexus-rag.git
cd nexus-rag
python -m venv venv
source venv/bin/activate  # macOS/Linux
venv\Scripts\activate     # Windows

2. Install Dependencies

pip install -r requirements.txt
cp .env.example .env

3. Start Ollama

Ensure Ollama is installed and running, then pull the model:

ollama pull llama3

4. Launch Nexus RAG

python main.py

Open http://localhost:8000 in your browser to start querying.

How it works

01

Upload. Drag and drop your PDFs into a notebook.

02

Index. The engine automatically embeds your data locally.

03

Query. Ask anything and get context-aware answers instantly.
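The three steps above can be sketched end to end. This toy retriever substitutes word-overlap scoring for the real local embeddings (Ollama/HuggingFace) and omits the LLM answer step, but the flow is the same and everything stays on the local machine:

```python
# Toy sketch of the upload -> index -> query flow. Word-overlap scoring
# stands in for real embedding similarity; documents are hypothetical.

def embed(text: str) -> set[str]:
    # Stand-in for a local embedding model: a bag of lowercase words.
    return set(text.lower().split())

# 1. Upload: documents dropped into a notebook.
notebook = [
    "ChromaDB persists vector stores locally.",
    "Ollama runs large language models on your own hardware.",
]

# 2. Index: embed every document locally.
index = [(doc, embed(doc)) for doc in notebook]

# 3. Query: retrieve the most relevant document for a question.
def query(question: str) -> str:
    q = embed(question)
    return max(index, key=lambda pair: len(q & pair[1]))[0]

answer_context = query("Which tool runs models on local hardware?")
```

In the real engine, the retrieved context is then passed to the local LLM to generate the final answer.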

Get in Touch

Interested in this project or looking for a developer? Send me a message.