Chat RAG can use tool functions and has better performance. ($300) #526
Comments
Hey, I would love to work on this issue. Please assign it to me :) Also, I am building a RAG webapp for my college; ask it anything about my college and it will answer. https://github.com/aialok/iiitr.insights Thank you!!

Assigning to @aialok for the next 2 days
Thanks @josancamon19.
I'm currently encountering an error while setting up the environment, and I have a few questions before I proceed:
I think there should be proper documentation for setting up the backend. For example, new contributors have no idea what the appropriate dimensions for our vector-embedding model would be. Edit: Thanks! I have resolved all the issues :)
@josancamon19 I need some more time. Thank you :)
Hey @josancamon19, I am quite familiar with RAG/langchain and am starting to work on this. Can you please assign this issue to me?
Hey @josancamon19! I will work on this issue; as I discussed with you already, I am done with some of the work.
Describe the feature
The current chat is just 2 prompts; see backend/utils/llm.py.
Chat should instead be a langchain agent that has a retrieval function with multiple options: topics, date-based, individual memories, etc.
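To make the retrieval options concrete, here is a minimal stdlib-only sketch of what the tool functions an agent could call might look like. All names, the `Memory` shape, and the in-memory store are hypothetical; the real implementation would live in the backend (e.g. alongside backend/utils/llm.py) and query the actual vector store, with a framework such as langchain binding these functions as agent tools.

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, Optional

@dataclass
class Memory:
    id: str
    topic: str
    created: date
    text: str

# Toy in-memory store standing in for the real vector DB (hypothetical data).
MEMORIES = [
    Memory("m1", "health", date(2024, 5, 1), "Went for a morning run."),
    Memory("m2", "work", date(2024, 5, 2), "Sprint planning meeting notes."),
]

def retrieve_by_topic(topic: str) -> list:
    """Tool option 1: topic-based retrieval."""
    return [m for m in MEMORIES if m.topic == topic]

def retrieve_by_date(start: date, end: date) -> list:
    """Tool option 2: date-range retrieval."""
    return [m for m in MEMORIES if start <= m.created <= end]

def retrieve_memory(memory_id: str) -> Optional[Memory]:
    """Tool option 3: fetch one memory, to chat with an individual memory."""
    return next((m for m in MEMORIES if m.id == memory_id), None)

# Registry the agent framework would expose as callable tools.
TOOLS: dict = {
    "retrieve_by_topic": retrieve_by_topic,
    "retrieve_by_date": retrieve_by_date,
    "retrieve_memory": retrieve_memory,
}
```

The point of the sketch is the interface: each retrieval option is its own tool function, so the agent (not a fixed prompt) decides which to call and with what arguments.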
I want much better chat performance, where performance refers to the capabilities of the chat retrieval.
Additionally, I want to be able to chat with individual memories.
This might include a better vectorization of the current memories structure.
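One way the vectorization could change to support this: embed each memory individually and attach metadata the retrieval tools can filter on. A minimal sketch, assuming a record shape like the one below (the record structure, function names, and the toy embedder are all hypothetical; the backend would call a real embedding model):

```python
from datetime import date

def memory_to_vector_record(memory_id: str, topic: str, created: date,
                            text: str, embed) -> dict:
    """Build one vector-store record per memory, with metadata that
    topic- and date-based retrieval tools can filter on."""
    return {
        "id": memory_id,
        "values": embed(text),  # embedding of the memory text
        "metadata": {
            "topic": topic,
            "created": created.isoformat(),  # enables date-range filters
        },
    }

# Stand-in embedder so the sketch runs; not a real embedding model.
def toy_embed(text: str) -> list:
    return [float(len(text)), float(text.count(" "))]

record = memory_to_vector_record(
    "m1", "health", date(2024, 5, 1), "Went for a morning run.", toy_embed
)
```

Storing one record per memory (rather than one blob per user) is what makes "chat with an individual memory" a simple ID lookup instead of a similarity search.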