[Enhancement]: Support langchain.schema.ChatGeneration #447
Comments
Do you mean that the gptcache in langchain can't work well now?
It works if you use this langchain implementation:

```python
import langchain
from langchain.cache import GPTCache  # langchain's GPTCache wrapper
from langchain.llms import OpenAI

# To make the caching really obvious, let's use a slower model.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)

langchain.llm_cache = GPTCache(init_gptcache)  # init_gptcache: your gptcache init function
llm("Tell me a joke")  # simple query to the llm
# This query uses/returns langchain.schema.Generation
```

but for more complex queries with prompt template usage:

```python
import langchain
from langchain.cache import GPTCache
from langchain.chat_models import ChatOpenAI

chat_chain = langchain.LLMChain(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0, cache=True),
    prompt=prompt_template,  # a PromptTemplate built earlier
    verbose=True,
)
langchain.llm_cache = GPTCache(init_gptcache)
chat_chain.predict(
    prompt_template_input_text_1="Tell me a joke",
    prompt_template_input_text_2=context_embeddings,  # more 'complex' query
)
# This query uses/returns langchain.schema.ChatGeneration
```

GPTCache does not support langchain.schema.ChatGeneration
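For context, here is a minimal sketch of the schema difference, assuming langchain.schema around the 0.0.2xx releases: ChatGeneration subclasses Generation but wraps a full message object rather than plain text, which is what a text-only cache can't round-trip.

```python
# Minimal sketch of the two generation schemas (assumes langchain ~0.0.2xx).
from langchain.schema import AIMessage, ChatGeneration, Generation

plain = Generation(text="Why did the chicken cross the road?")
chat = ChatGeneration(message=AIMessage(content="Why did the chicken cross the road?"))

print(type(plain).__name__)  # Generation
print(type(chat).__name__)   # ChatGeneration
print(chat.text)             # .text is derived from message.content
```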
Can other caches work well, like the SQLite cache?
I'm not sure whether langchain has changed something that caused the cache to stop working, or whether there is another reason. GPTCache implements its cache capability based on the interface langchain provided previously, so there may be an incompatibility now.
@SimFG For chat models, LangChain is inheriting from
@Sendery I would like to check whether this problem has been fixed or solved.
Is your feature request related to a problem? Please describe.
I'm using langchain and started a GPTCache integration. After a few attempts I managed to configure everything as I wished; then I started testing and got:
ValueError: GPTCache only supports caching of normal LLM generations, got <class 'langchain.schema.ChatGeneration'>
I understand that this is not a real error, but langchain has started suggesting the use of these schemas, which support mixed text and embeddings.
Describe the solution you'd like.
I tracked the error down to this function: RETURN_VAL_TYPE is a langchain.schema.Generation, while the return_val is not, so the code works as expected.
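For reference, the guard sits in langchain's GPTCache wrapper in langchain/cache.py. Below is a paraphrased sketch, not verbatim code: the helper name is made up here, and the strict type check is an assumption inferred from the error message, since ChatGeneration actually subclasses Generation.

```python
# Paraphrased sketch of the guard in langchain's GPTCache wrapper
# (it lives in langchain/cache.py); the exact check may differ.
from typing import Sequence

from langchain.schema import Generation

RETURN_VAL_TYPE = Sequence[Generation]

def ensure_plain_generations(return_val: RETURN_VAL_TYPE) -> None:
    """Reject anything that isn't a plain Generation before caching."""
    for gen in return_val:
        # A strict type check (assumed here) also rejects the subclass
        # ChatGeneration, which is what produces the ValueError above.
        if type(gen) is not Generation:
            raise ValueError(
                "GPTCache only supports caching of normal LLM generations, "
                f"got {type(gen)}"
            )
```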
I tried these changes:
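The exact diff isn't reproduced above. Purely as a hypothetical sketch of one direction such a change could take (not the patch actually tried), a ChatGeneration could be downcast to a plain Generation before caching and rebuilt on lookup:

```python
# Hypothetical sketch only -- not the change actually tried in this issue.
from langchain.schema import AIMessage, ChatGeneration, Generation

def to_cacheable(gen: Generation) -> Generation:
    """Downcast a ChatGeneration to a plain Generation for a text-only cache."""
    if isinstance(gen, ChatGeneration):
        # .text mirrors message.content, so nothing a text cache
        # could store is lost in this direction.
        return Generation(text=gen.text, generation_info=gen.generation_info)
    return gen

def from_cacheable(gen: Generation) -> ChatGeneration:
    """Rebuild a ChatGeneration on a cache hit so chat models get the type they expect."""
    return ChatGeneration(message=AIMessage(content=gen.text))
```

The lookup side is the awkward part: a chat model expects a ChatGeneration back on a cache hit, so the message has to be reconstructed from stored text, which may be why the wrapper rejects chat generations outright.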
Describe an alternate solution.
No response
Anything else? (Additional Context)
Thank you for your time and work!