Question Answering RAG using LlamaIndex in agenta

This template is a question answering application with a RAG (retrieval-augmented generation) architecture built with LlamaIndex and OpenAI. It provides a playground for experimenting with different prompts and parameters in LlamaIndex and evaluating the results. It runs with Agenta, an open-source LLMOps platform that lets you 1) create a playground from the code of any LLM app to quickly experiment, version, and collaborate with your team, 2) evaluate LLM applications, and 3) deploy applications easily.

How to use

0. Prerequisites

  • Install the agenta CLI
pip install -U agenta

1. Clone the repository

git clone https://github.com/Agenta-AI/qa_llama_index_playground.git

2. Initialize the project

cd qa_llama_index_playground
agenta init

3. Set up your OpenAI API key

Create a .env file by copying the .env.example file, then add your OpenAI API key to it:

OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxx
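From the command line, this step can be sketched as follows (the key shown is a placeholder; substitute your real OpenAI API key):

```shell
# Create .env from the example file if it exists, otherwise start fresh
cp .env.example .env 2>/dev/null || touch .env

# Append the placeholder key (replace with your real OpenAI key)
echo "OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxx" >> .env
```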

4. Deploy the application to agenta

agenta variant serve app.py

5. Experiment with the prompts in a playground and evaluate different variants in agenta

Demo video: job_desc.mp4