[Bug]: TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases #533
Comments
This error seems to be caused by an incompatibility between langchain and gptcache. You can try using an older version of langchain.
Looking into it, the metaclasses of Langchain's LLM and pydantic's BaseModel are mismatched, so the class hierarchy of LangChainLLMs raises an error. I did some quick testing and simply removing BaseModel from the hierarchy fixes the issue. However, I don't have the big-picture knowledge needed to know the impacts of that change. Any chance of a fix?
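For context, the error in question is standard Python behavior and can be reproduced without langchain or pydantic installed at all: when two base classes carry unrelated metaclasses, Python cannot choose one for the derived class. A minimal stand-alone sketch (MetaA/MetaB/BaseA/BaseB are illustrative stand-ins, not real library classes):

```python
# Minimal reproduction of the same TypeError: two bases with unrelated
# metaclasses, analogous to `class LangChainLLMs(LLM, BaseModel)` where
# LLM is built on pydantic v1's metaclass and BaseModel on v2's.

class MetaA(type):
    pass

class MetaB(type):
    pass

class BaseA(metaclass=MetaA):
    pass

class BaseB(metaclass=MetaB):
    pass

try:
    class Derived(BaseA, BaseB):
        pass
except TypeError as exc:
    # "metaclass conflict: the metaclass of a derived class must be a
    # (non-strict) subclass of the metaclasses of all its bases"
    print(exc)
```

Since neither MetaA nor MetaB is a subclass of the other, class creation fails exactly as in the reported traceback.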
This seems to need to be fixed in the langchain repo.
This issue still seems to be present with Langchain v0.0.312 and GPTCache v0.1.42.
Thank you very much for your feedback; I will fix it when I have free time. Of course, if you know more about this kind of problem, I look forward to your fix PR.
This problem occurs when Pydantic v2 is installed. It is explained on the Langchain side and can be solved by using Pydantic v1 and v2 without mixing them (https://python.langchain.com/docs/guides/pydantic_compatibility). The following classes used in
Currently, the Langchain side explicitly uses Pydantic v1, so an error occurs when the GPTCache side inherits from v2. As already suggested, it would be best to remove the BaseModel inheritance from LangChainLLMs and LangChainChat. I am currently encountering this issue myself, and since I cannot lower the version of Pydantic or Langchain, I have added a workaround to my repository. May I submit this as a pull request?
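The suggested workaround amounts to inheriting from only one of the conflicting bases, so a single metaclass governs the adapter class. A hedged sketch with stand-in names (BaseA plays the role of langchain's LLM; this is an illustration, not the actual gptcache patch):

```python
# Stand-in for langchain's LLM, whose (pydantic v1) metaclass would
# conflict with a pydantic v2 BaseModel appearing in the same bases list.
class MetaA(type):
    pass

class BaseA(metaclass=MetaA):
    pass

# Before the workaround (raises TypeError when the second base carries an
# unrelated metaclass):
#     class LangChainLLMs(BaseA, BaseModel): ...
# After: drop the second base, leaving a single metaclass.
class LangChainLLMs(BaseA):
    pass

print(type(LangChainLLMs) is MetaA)  # True: one metaclass, no conflict
```

If both pydantic versions must coexist, pydantic v2 also ships a `pydantic.v1` compatibility namespace (`from pydantic.v1 import BaseModel`), which the linked langchain guide discusses; whether that fits gptcache's adapters is an assumption here, not something stated in this thread.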
Yes, you can do it.
Thank you very much, @yacchi. I've tried your forked repository and it works well!
Current Behavior
from gptcache.adapter.langchain_models import LangChainChat
Traceback (most recent call last):
File "/home/ld/miniconda3/envs/llm/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/ld/miniconda3/envs/llm/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/ld/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/home/ld/code/cache_t/gptcache_t.py", line 14, in <module>
from cache import MyLangChainChat
File "/home/ld/code/cache_t/cache/__init__.py", line 1, in <module>
from .langchainchat import MyLangChainChat
File "/home/ld/code/cache_t/cache/langchainchat.py", line 4, in <module>
from gptcache.adapter.langchain_models import LangChainChat
File "/home/ld/miniconda3/envs/llm/lib/python3.10/site-packages/gptcache/adapter/langchain_models.py", line 30, in <module>
class LangChainLLMs(LLM, BaseModel):
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
Expected Behavior
No response
Steps To Reproduce
No response
Environment
Anything else?
No response