RAG with LM Studio and local LLMs
This notebook performs the following operations:
- Reads all PDFs in a given folder
- Extracts text using GROBID
- Stores text elements in a SQLite3 database
- Splits the extracted text into recursive chunks
- Embeds the text chunks
- Vectorizes the embedded data into a searchable index
- Retrieval methods:
  - Standard retrieval
  - LangChain `MultiQueryRetriever`
- OpenAI-based chat using LM Studio
- Displays:
  - Query
  - Prompt information
  - Answer (in a dashboard browser tab)
- Retrieval- and QA-chain-based chat using LM Studio
  - Displays results in a new browser tab
- Additional requirements and usage details are described in Markdown cells within the notebook.
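The recursive-chunking step listed above can be sketched as a plain-Python splitter — a minimal stand-in for splitters such as LangChain's `RecursiveCharacterTextSplitter`; the separator order and chunk size here are illustrative assumptions, not the notebook's exact settings:

```python
def recursive_chunks(text, max_len=500, separators=("\n\n", "\n", ". ", " ")):
    """Recursively split `text` so each chunk is at most `max_len` characters,
    preferring the earliest separator in `separators` that yields a split."""
    if len(text) <= max_len:
        return [text] if text.strip() else []
    for sep in separators:
        if sep in text:
            parts = text.split(sep)
            chunks, current = [], ""
            for part in parts:
                candidate = part if not current else current + sep + part
                if len(candidate) <= max_len:
                    current = candidate
                else:
                    if current:
                        chunks.extend(recursive_chunks(current, max_len, separators))
                    current = part
            if current:
                chunks.extend(recursive_chunks(current, max_len, separators))
            return chunks
    # No separator applies: fall back to a hard character split.
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]
```

Each resulting chunk is what gets embedded and stored in the vector index.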
- Install LM Studio
  - Follow the official LM Studio installation guide for your operating system.
- Download an LLM model from Hugging Face
  - You can download a pre-trained model from the Hugging Face Model Hub; follow the instructions on their site to obtain the desired model.
- Install Docker for GROBID
  - Make sure Docker is installed on your machine; installation instructions are on the Docker website.
  - After installing Docker, pull the GROBID Docker image by running:
    `docker pull lfoppiano/grobid`
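Once the image is pulled, GROBID is typically started with `docker run --rm -p 8070:8070 lfoppiano/grobid` and serves a REST API on port 8070; its `processFulltextDocument` endpoint returns TEI XML for an uploaded PDF. The sketch below shows one way to call it and pull paragraph text from the response — the PDF file name is a placeholder, and using the third-party `requests` library for the upload is an assumption:

```python
import xml.etree.ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def grobid_tei(pdf_path, url="http://localhost:8070/api/processFulltextDocument"):
    """Send a PDF to a locally running GROBID container and return TEI XML.

    Requires the GROBID container to be up (docker run --rm -p 8070:8070 lfoppiano/grobid).
    """
    import requests  # third-party; a common choice for the multipart upload
    with open(pdf_path, "rb") as f:
        resp = requests.post(url, files={"input": f})
    resp.raise_for_status()
    return resp.text

def tei_paragraphs(tei_xml):
    """Extract the body paragraphs from GROBID's TEI output as plain strings."""
    root = ET.fromstring(tei_xml)
    body = root.find(".//tei:body", TEI_NS)
    if body is None:
        return []
    return ["".join(p.itertext()).strip()
            for p in body.iter("{http://www.tei-c.org/ns/1.0}p")]
```

`tei_paragraphs(grobid_tei("paper.pdf"))` would then yield the text elements that the notebook stores in SQLite3.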
- Clone the repository.
- Install the required dependencies.
- Run the `PDFs_RAG_with_LMstudio.ipynb` notebook to begin processing your PDFs.
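The chat step talks to the OpenAI-compatible API that LM Studio's local server exposes, by default at `http://localhost:1234/v1`. A minimal stdlib-only sketch of sending a question plus retrieved chunks to it — the prompt template and the `"local-model"` name are illustrative assumptions (LM Studio answers with whichever model is currently loaded):

```python
import json
import urllib.request

# LM Studio's default local server endpoint (OpenAI-compatible).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_rag_request(question, context_chunks, model="local-model"):
    """Assemble an OpenAI-style chat payload that stuffs retrieved chunks
    into the system prompt. The template is an illustrative assumption."""
    context = "\n\n".join(context_chunks)
    return {
        "model": model,  # placeholder; LM Studio uses the loaded model
        "messages": [
            {"role": "system",
             "content": "Answer using only the provided context.\n\nContext:\n" + context},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
    }

def ask(question, context_chunks):
    """POST the payload to the local LM Studio server and return the answer text.

    Requires LM Studio's server to be running with a model loaded."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_rag_request(question, context_chunks)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the server is OpenAI-compatible, the official `openai` client also works by pointing its `base_url` at `http://localhost:1234/v1`.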