Retrieval Augmented Generation with Seedly Articles

Outline

In this project, I built a RAG pipeline on top of the llama3-70b-8192 LLM, accessed through the Groq API. The documents used for retrieval were collected by web scraping the Seedly Blog, which publishes articles about personal finance; the articles I retrieved were mostly about purchasing property in Singapore and about insurance policies. The aim is to produce a language model that is more contextually aware and better able to answer personal finance questions in the Singapore context.
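One way to access the model is through LangChain's Groq integration. A minimal sketch, assuming the langchain-groq package and a GROQ_API_KEY environment variable (the repo may wire this up differently):

```python
from langchain_groq import ChatGroq

# ChatGroq reads the GROQ_API_KEY environment variable for authentication.
llm = ChatGroq(model="llama3-70b-8192", temperature=0)
```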

Method

I used Scrapy to scrape the articles, the all-MiniLM-L6-v2 model (run via FastEmbed) to convert text chunks into embeddings, and a FAISS vector store for storing and retrieving the chunks. Finally, I used LangChain to interface the different components, from retrieval of text chunks to prompt structuring and chaining, to achieve the desired output.
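A minimal sketch of the ingestion step, assuming the scraped articles sit in a single text file and using the langchain-community wrappers (the file path and chunk sizes are illustrative assumptions, not the repo's actual settings):

```python
from langchain_community.embeddings import FastEmbedEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the scraped article text (hypothetical path and format).
with open("articles.txt", encoding="utf-8") as f:
    raw_text = f.read()

# Split the articles into overlapping chunks for embedding.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
docs = splitter.create_documents([raw_text])

# FastEmbed runs all-MiniLM-L6-v2 locally to embed each chunk.
embeddings = FastEmbedEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector_store = FAISS.from_documents(docs, embeddings)

# The retriever returns the top 3 chunks most similar to a query.
retriever = vector_store.as_retriever(search_kwargs={"k": 3})
```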

The final output is generated by chaining two prompts: the first summarizes the retrieved context (the top 3 text chunks most similar to the user's question), and the second generates the actual response. This keeps the response-generation prompt from growing too long (as it would if it contained the full-length text chunks as context), so more context can be supplied to the LLM.
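A sketch of that two-step chain in LCEL, reusing the `llm` and `retriever` from the sketches above; the prompt wording here is illustrative, not the repo's actual prompts:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

summarize_prompt = ChatPromptTemplate.from_template(
    "Summarize the following articles, keeping details relevant to the question.\n\n"
    "Question: {question}\n\nArticles:\n{context}"
)
answer_prompt = ChatPromptTemplate.from_template(
    "Using the summary below, answer the question.\n\n"
    "Summary:\n{summary}\n\nQuestion: {question}"
)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Step 1: retrieve the top-3 chunks and condense them into a short summary.
summarize_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | summarize_prompt
    | llm
    | StrOutputParser()
)

# Step 2: answer from the summary, keeping the final prompt short.
rag_chain = (
    {"summary": summarize_chain, "question": RunnablePassthrough()}
    | answer_prompt
    | llm
    | StrOutputParser()
)

answer = rag_chain.invoke("What should I consider before buying a resale HDB flat?")
```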

The prompt chaining works as expected, with the summary of the text chunks successfully inserted into the second prompt. The pipeline's performance varies with the quality of the context documents retrieved.

Scripts

Reflections

Through this project, I learnt how to implement RAG using LangChain, how to interface with an LLM through prompt templates and prompt chaining using the LangChain Expression Language (LCEL), and how to use vector stores and embedding functions together with LangChain. I believe these foundational concepts of LLM application development will allow me to build more complex LLM applications in the future.
