I’m excited to introduce RAGs - create a “ChatGPT over your data” purely with natural language
Set up and use a full RAG pipeline over your files and web URLs in three simple steps:
RAG Builder: Describe your RAG pipeline in natural language (data sources, parameters like top-k, system prompt), and the builder agent will build it for you.
View Config: View the generated RAG configuration and make edits to it if you want to update the agent.
RAG Agent: Ask the agent anything over your data and it'll answer your question!
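For a sense of what the builder agent produces, here's a rough sketch of a generated config. The field names below are illustrative only, not the exact schema RAGs emits:

```yaml
# Hypothetical sketch of a generated RAG configuration --
# field names are illustrative, not the exact schema RAGs emits.
system_prompt: "You are a helpful assistant answering questions over the user's documents."
data:
  - type: file
    path: ./docs/report.pdf
  - type: web
    url: https://example.com/post
retrieval:
  top_k: 3
llm:
  model: gpt-3.5-turbo
embeddings:
  model: text-embedding-ada-002
```

In the View Config step you'd tweak values like `top_k` or the system prompt directly, and the agent picks up the changes.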
Of course, this is heavily inspired by GPTs from @OpenAI. The goal here was to build a similar natural language interface but tailored around RAG - retrieval parameters, prompts, data sources, all through conversation. Have an agent help you set up QA over your data instead of you writing code.
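To make "retrieval parameters" concrete, here's a toy sketch of the retrieval step a RAG agent performs: embed the query, rank document chunks by cosine similarity, and keep the top-k. It uses bag-of-words counts as stand-in "embeddings" purely for illustration; a real pipeline would call an OpenAI or Hugging Face embedding model instead.

```python
# Toy sketch of top-k retrieval in a RAG pipeline.
# The embed() function is a hypothetical stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words counts -- illustration only, not a real embedding.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    # Rank all chunks by similarity to the query; keep the best top_k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

chunks = [
    "RAGs lets you build a ChatGPT over your data.",
    "Streamlit powers the app's user interface.",
    "Retrieval pulls the top-k chunks for the LLM prompt.",
]
print(retrieve("how does retrieval pick chunks for the prompt?", chunks, top_k=1))
```

The retrieved chunks then get stuffed into the LLM prompt alongside the system prompt, which is why `top_k` is one of the knobs you can set in conversation.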
We built the app with @llama_index + @streamlit. It currently supports the following:
- LLMs: Supports our 20+ LLMs for the builder agent (OpenAI by default); the generated agent currently supports OpenAI (not too hard to extend).
- Embeddings: Supports OpenAI ada embeddings and @huggingface models.
There are so many things we want to do. In the meantime, if you have feedback or suggestions, let us know!
Check out the attached video, blog post, and repo.
Blog: Introducing RAGs: Your Personalized ChatGPT Experience Over Your Data
Repo: github.com/run-llama/rags