Agent with Ollama

Introduction

By nature, LLMs are restricted to text generation. They have no access to the internet, your computer's files, or your internal network, so they cannot do anything useful with those. The information you can ask about is limited by the LLM's training cutoff date; everything that occurred after that date is a mystery to it.

It's easy to check. You would need these packages:

LangChain.Core
LangChain.Providers.Ollama

You also need Ollama running on your computer with the mistral model pulled.

We will start with a basic Ollama setup and a simple question to the LLM:

using LangChain.Providers;
using static LangChain.Chains.Chain;


// Connect to a local Ollama instance running the mistral model.
// Temperature = 0 makes the answers deterministic.
var model = new OllamaLanguageModelInstruction("mistral:latest",
    "http://localhost:11434",
    options: new OllamaLanguageModelOptions()
    {
        Temperature = 0
    }).UseConsoleForDebug(); // print the whole conversation to the console


// Build a simple chain: set the prompt, then feed it to the LLM.
var chain =
    Set("What is tryAGI/LangChain?")
    | LLM(model);

await chain.Run();

In the console you will see a pretty general answer:

tryAGI/LangChain is a research project focused on developing an advanced conversational AI system that can understand and generate human-like text in multiple languages. The goal is to create a versatile, multilingual language model that can engage in complex conversations, learn from its interactions, and adapt to new contexts.
...

So, basically, mistral guesses the answer. It has no idea what you are asking about.

ReAct

But how can we fix that? If you know about RAG, you know there are tricks to bring new abilities to an LLM. One of those tricks is ReAct prompting.

In simple words, ReAct forces the LLM to reflect on your question and injects responses as if the LLM had figured them out by itself. This allows you to connect any data source or tool to your LLM. Let's try to use ReAct and connect Google search to your LLM.
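Tools are not limited to the built-in ones. Below is a minimal sketch of what a custom tool could look like, assuming the library exposes an AgentTool base class with a ToolTask method (the exact namespace, constructor, and method signature are assumptions here and may differ between versions; check the library source). Such a tool would be attached with .UseTool(...) exactly like the Google search tool in the next section.

using System;
using System.Threading;
using System.Threading.Tasks;
using LangChain.Chains.StackableChains.Agents.Tools; // assumed namespace of AgentTool

// Hypothetical example tool: returns the current date and time.
public class CurrentTimeTool : AgentTool
{
    // Assumed base constructor: (name, description).
    public CurrentTimeTool()
        : base("time", "Returns the current UTC date and time. Input is ignored.")
    {
    }

    // The agent calls this when the model outputs `Action: time`.
    public override Task<string> ToolTask(string input, CancellationToken token = default)
    {
        return Task.FromResult(DateTime.UtcNow.ToString("u"));
    }
}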

Google custom search

LangChain has Google search built in. To use it you need to get a key and a cx from Google. Don't worry, it's free.

To get an API key, go here: https://developers.google.com/custom-search/v1/overview. To get the cx, you need to create a Programmable Search Engine.

Using ReAct with Google search

Now you should have everything necessary to connect your LLM to Google search:

using LangChain.Chains.StackableChains.Agents.Tools.BuiltIn;
using LangChain.Providers;
using static LangChain.Chains.Chain;


var model = new OllamaLanguageModelInstruction("mistral:latest",
    "http://localhost:11434",
    options: new OllamaLanguageModelOptions()
    {
        // Stop generation at `Observation` and `[END]` so the agent can take over
        // (set here explicitly as an additional safety measure).
        Stop = new[] { "Observation", "[END]" },
        Temperature = 0
    }).UseConsoleForDebug();


// Create a Google search tool.
var searchTool = new GoogleCustomSearchTool(key: "<your key>", cx: "<your cx>", resultsLimit: 1);

// Build the chain: the ReAct agent executor does the magic,
// and the Google search tool is attached to it.
var chain =
    Set("What is tryAGI/LangChain?")
    | ReActAgentExecutor(model)
        .UseTool(searchTool);

await chain.Run();

Let's run it and see the output. As you can see, instead of giving an answer right away, the model starts to think about it:

Question: What is tryAGI/LangChain?
Thought: I don't know much about tryAGI or LangChain, so I need to search for more information.
Action: google
Action Input: tryAGI LangChain

Observation:

Here is where the magic occurs. The model stops because of the stop word, and the ReAct agent kicks in. It sees that the model wants to use Google to search for "tryAGI LangChain", so it does exactly that and puts the search results back into the prompt (a conceptual sketch of this loop follows the transcript below):

Observation: tryAGI/LangChain: C# implementation of LangChain. We try ... - GitHub
C# implementation of LangChain. We try to be as close to the original as possible in terms of abstractions, but are open to new entities.
Source url: https://github.com/tryAGI/LangChain

Now the model sees the result of its action, so it continues its reflection:

Thought: Based on the observation, I now have a better understanding of what tryAGI/LangChain is.
Final Answer: tryAGI/LangChain is an open-source C# implementation of LangChain. It aims to be as close to the original abstractions as possible but is also open to new entities.
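To make the mechanics explicit, here is a rough conceptual sketch of the loop that a ReAct agent executor runs. This is not the library's actual code; callModel and the tools dictionary are hypothetical placeholders standing in for the LLM call and the registered tools.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class ReActLoopSketch
{
    public static async Task<string> RunAsync(
        string question,
        Func<string, Task<string>> callModel, // prompt -> completion, stops at "Observation"
        IReadOnlyDictionary<string, Func<string, Task<string>>> tools)
    {
        var prompt = $"Question: {question}\n";
        while (true)
        {
            var completion = await callModel(prompt);
            prompt += completion;

            // The model either gives a final answer...
            const string final = "Final Answer:";
            var index = completion.IndexOf(final, StringComparison.Ordinal);
            if (index >= 0)
                return completion[(index + final.Length)..].Trim();

            // ...or asks for a tool: parse `Action:` and `Action Input:`.
            var action = ExtractAfter(completion, "Action:");
            var actionInput = ExtractAfter(completion, "Action Input:");

            // Run the tool and inject its result as an `Observation`,
            // as if the model had figured it out by itself.
            var observation = await tools[action](actionInput);
            prompt += $"\nObservation: {observation}\nThought:";
        }
    }

    private static string ExtractAfter(string text, string marker)
    {
        var start = text.IndexOf(marker, StringComparison.Ordinal) + marker.Length;
        var end = text.IndexOf('\n', start);
        return (end < 0 ? text[start..] : text[start..end]).Trim();
    }
}

The real executor also handles prompt templates, multiple tools, and iteration limits; the point here is only the observe-and-continue cycle.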

The model's final answer above is actually correct. You can use

var res = await chain.Run("text");

to get the answer from the model and do something with it.
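For example, a minimal way to do something with it (assuming, as above, that Run("text") returns the agent's final answer):

var answer = await chain.Run("text");
Console.WriteLine($"Agent answer: {answer}");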