AI RAG

Upload your documents, search semantically, and chat with your knowledge base. Retrieval-Augmented Generation made simple.

From documents to answers in seconds

Your files go through an automated pipeline — parsed, chunked, embedded, and indexed — so the AI can retrieve the right context for every query.

Upload
Parse
Chunk
Embed
Search

Supports PDF, DOCX, PPTX, XLSX, HTML, Markdown, TXT, CSV, JSON, EML, and more.
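The chunking step above can be sketched in a few lines of Python. This is a minimal fixed-size chunker with overlap — the chunk size, overlap, and function name are illustrative assumptions, not the platform's actual implementation:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split parsed text into overlapping chunks ready for embedding.

    Overlap keeps context that straddles a chunk boundary retrievable
    from both neighboring chunks.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Each chunk would then be embedded and written to the vector index.
```

In a real pipeline you would typically split on sentence or paragraph boundaries rather than raw character offsets, but the overlap idea is the same.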

Everything you need for knowledge retrieval

A complete RAG platform — from document ingestion to AI-powered answers.

Document Management

Upload and manage documents in datasets. Automatic parsing with MarkItDown converts PDF, DOCX, PPTX, and 10+ other formats to searchable text.

Hybrid Search

Three retrieval modes — semantic (vector), keyword (BM25), and hybrid. Weighted fusion combines both scores to surface the most relevant results.
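Weighted fusion can be sketched as a normalized weighted sum of the two score lists. The `alpha` weight, min-max normalization, and function names below are assumptions for illustration, not the platform's actual fusion formula:

```python
def hybrid_fuse(semantic: dict[str, float], keyword: dict[str, float],
                alpha: float = 0.7) -> list[tuple[str, float]]:
    """Fuse semantic (vector) and keyword (BM25) scores with a weighted sum.

    Scores are min-max normalized per retriever so the two scales are
    comparable; alpha weights the semantic side.
    """
    def normalize(scores: dict[str, float]) -> dict[str, float]:
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        if hi == lo:
            return {doc: 1.0 for doc in scores}
        return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

    sem, kw = normalize(semantic), normalize(keyword)
    fused = {doc: alpha * sem.get(doc, 0.0) + (1 - alpha) * kw.get(doc, 0.0)
             for doc in set(sem) | set(kw)}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

A document that scores well on both retrievers rises to the top even when neither retriever ranked it first on its own.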

AI Chat

Chat with your knowledge base. AI retrieves relevant context and generates grounded answers with streaming output support.

Multi-Model Support

Connect OpenAI, Anthropic, Google Gemini, Ollama, or any OpenAI-compatible endpoint. Switch models without changing code.
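"Switch models without changing code" means the provider is configuration, not logic. A minimal sketch of that idea — the config shape and preset values here are hypothetical, not the platform's actual schema (the Ollama URL assumes its local OpenAI-compatible endpoint):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelConfig:
    """Provider settings resolved at runtime (illustrative shape)."""
    provider: str
    model: str
    base_url: str

# Swapping providers is a config change, not a code change.
PRESETS = {
    "openai": ModelConfig("openai", "gpt-4o-mini", "https://api.openai.com/v1"),
    "ollama": ModelConfig("ollama", "llama3", "http://localhost:11434/v1"),
}

def resolve_model(name: str) -> ModelConfig:
    """Look up the active provider; calling code never hardcodes one."""
    return PRESETS[name]
```

Because all listed providers speak an OpenAI-compatible API, the calling code can stay identical while `base_url` and `model` vary.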

Multi-Tenant Workspaces

Isolated workspaces with role-based access. Owner, Admin, Member, and Viewer roles for secure team collaboration.
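The four roles form a strict privilege order, which makes permission checks a simple comparison. A minimal sketch, assuming a linear hierarchy (the numeric levels and helper name are illustrative, not the platform's actual access model):

```python
from enum import IntEnum

class Role(IntEnum):
    """Workspace roles ordered by privilege (ordering is illustrative)."""
    VIEWER = 1
    MEMBER = 2
    ADMIN = 3
    OWNER = 4

def can(role: Role, required: Role) -> bool:
    """A role satisfies any requirement at or below its own level."""
    return role >= required

# A Member can read like a Viewer, but cannot perform Admin actions.
```

Real systems often layer per-resource permissions on top, but a linear hierarchy covers the common case of gated workspace actions.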

Observability

Built-in Langfuse integration for LLM call tracing. Monitor latency, token usage, and retrieval quality in real time.

Dual API architecture

Console API for your dashboard. Service API for programmatic access.

Search your knowledge base

curl -X POST https://api.litestartup.com/api/v1/search/ \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "How do I reset my password?",
    "dataset_ids": ["ds-abc123"],
    "mode": "hybrid",
    "top_k": 5
  }'

Chat with context

curl -X POST https://api.litestartup.com/api/v1/chats/ \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Summarize our refund policy",
    "conversation_id": "conv-xyz789",
    "stream": true
  }'

Built on production-grade infrastructure

Modern stack designed for reliability and scale.

FastAPI

Backend framework

PostgreSQL

Relational database

Milvus

Vector database

Celery + Redis

Async task queue

React 19

Frontend UI

Tailwind CSS

Styling

Langfuse

LLM observability

Docker

Container deployment

Up and running in 3 steps

1

Connect your model

Add your LLM provider — OpenAI, Anthropic, Gemini, or Ollama. Configure your embedding and inference models.

2

Upload your documents

Create a dataset and upload files or paste text. Documents are parsed, chunked, and indexed automatically in the background.

3

Search and chat

Use semantic search to find relevant content, or start a conversation and let AI answer questions grounded in your data.

Built for real-world use cases

Internal Knowledge Base

Upload company docs, policies, and playbooks. Let employees search and get instant answers instead of digging through wikis.

Customer Support Bot

Feed your help center articles and FAQs into a dataset. Build a support chatbot that answers questions with accurate, sourced responses.

Research Assistant

Upload research papers, reports, and notes. Ask questions across your entire research library and get cited answers.

Product Documentation

Index your API docs, changelogs, and guides. Developers can ask natural language questions and get code-relevant answers.

Ready to build your knowledge base?

Start with the free tier. Upload your first documents in minutes.

Start Free Trial