-
I got your case to run fine on my end.
-
Using LangChain: the LangChain docs describe a very effective way to implement RAG, but the documented example uses the OpenAI API. If I swap it for the Tongyi (通义千问) interface, everything still works. But if I want to replace OpenAI or Tongyi with a model deployed locally, how should I go about it?
Part of the code is shown below:
```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import TensorflowHubEmbeddings
from langchain.vectorstores import FAISS
from transformers import AutoModelForCausalLM

# Load the document
file_path = "知识库.txt"
loader = TextLoader(file_path)
documents = loader.load()

# Split the text
text_splitter = CharacterTextSplitter(chunk_size=5000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# Create the embedding model
embeddings = TensorflowHubEmbeddings()
print(embeddings)

# Create the vector store
vdb = FAISS.from_documents(texts, embeddings)
print(vdb)

# Create the Tongyi (通义千问) model
# model = Tongyi()  # works fine with Tongyi(); the locally deployed model below is what errors out
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-1_8B-Chat", device_map="cpu", trust_remote_code=True
).eval()

while True:
    query = input("Please enter your question: ")
```
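The likely cause of the error is that a raw `transformers` model object does not implement LangChain's LLM interface, so it cannot stand in for `Tongyi()` directly. One common fix, sketched below under assumptions: wrap the local checkpoint in a standard `text-generation` pipeline and LangChain's `HuggingFacePipeline`, then wire it into a `RetrievalQA` chain. The chain wiring and the `qa_chain` name are illustrative, not from the original post; this also assumes `vdb` is the FAISS store built above and that the Qwen checkpoint works with the plain text-generation pipeline (Qwen chat models may additionally need their chat prompt format applied).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.chains import RetrievalQA

# Load the local Qwen checkpoint together with its tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-1_8B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-1_8B-Chat", device_map="cpu", trust_remote_code=True
).eval()

# Expose the model through a standard text-generation pipeline, then wrap it
# so it implements LangChain's LLM interface -- a drop-in replacement for Tongyi()
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512)
llm = HuggingFacePipeline(pipeline=pipe)

# Wire the wrapped LLM and the FAISS store (vdb, from the snippet above) into a
# retrieval QA chain; `qa_chain` is an illustrative name, not from the original code
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vdb.as_retriever())

while True:
    query = input("Please enter your question: ")
    print(qa_chain.run(query))
```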