[Bug]: GPT cache llama index integration #554
Comments
You only need to deal with pydantic's check of attributes in the class, and then you can use GPTCache naturally. Alternatively, you can build an OpenAI proxy service and use GPTCache inside that service.
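A minimal sketch of that first suggestion (not from this thread): keeping the cache in a pydantic private attribute sidesteps pydantic's field validation. The import path and the predict signature may differ between llama_index versions, and CachedLLMPredictor is just an illustrative name here.

```python
from typing import Any, Dict

from pydantic import PrivateAttr
from llama_index import LLMPredictor  # import path may differ by llama_index version


class CachedLLMPredictor(LLMPredictor):
    # Private attributes are excluded from pydantic's field checks,
    # unlike an undeclared self.cache assigned at runtime.
    _cache: Dict[str, Any] = PrivateAttr(default_factory=dict)

    def predict(self, prompt: Any, **prompt_args: Any) -> Any:
        cache_key = str(prompt)
        if cache_key not in self._cache:
            # Cache miss: fall through to the normal LLM call.
            self._cache[cache_key] = super().predict(prompt, **prompt_args)
        return self._cache[cache_key]
```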
I am not getting any pydantic error, but when I try to set or retrieve the cache key I get errors. The lines self.cache[cache_key] = result and return self.cache[cache_key] are throwing errors and it is not working. Can I get an example, based on the code above, of how to do that?
What is the error?
File "C:\AILatestClone\EconomistDigitalSolutions\openai-hack\app\service\CachedLLMPredictor.py", line 22, in predict File "C:\AILatestClone\EconomistDigitalSolutions\openai-hack\app\service\CachedLLMPredictor.py", line 26, in predict I think i am not accessing the cache correctly |
Were you able to solve this issue?
Current Behavior
I am trying to integrate GPTCache with LlamaIndex, but LLMPredictor does not accept a cache argument. To work around this, I created a CachedLLMPredictor class that extends LLMPredictor.
But the lines self.cache[cache_key] = result and return self.cache[cache_key] are throwing errors, and it is not working.
My actual problem is that I have to add GPTCache to the existing LlamaIndex calls. My existing implementation is as below:
```python
def load_index(self,
               tenant_index: Index,
               tenant_config: Config,
               model_name: Optional[str] = None):
    ...
```
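For the adapter-style route mentioned earlier in the thread (putting GPTCache in front of the OpenAI calls rather than inside the predictor), a rough standalone sketch using GPTCache's bundled OpenAI adapter is below. The model name and prompt are placeholders, wiring this into the load_index flow above is not shown, and exact APIs may vary across gptcache versions.

```python
from gptcache import cache
from gptcache.adapter import openai  # GPTCache's wrapper around the openai client

cache.init()            # default exact-match cache
cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

# Repeated identical prompts are served from the cache instead of the API.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "What is GPTCache?"}],
)
print(response["choices"][0]["message"]["content"])
```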
Expected Behavior
Need to implement GPTCache-based caching in the LLM calls.
Steps To Reproduce
Environment
No response
Anything else?
No response