This project explores the use of vector databases, specifically focusing on pgvector and LangChain.
Vector databases allow you to store vectors and perform efficient nearest-neighbor searches. This project uses pgvector, a vector-similarity extension for PostgreSQL. It also uses LangChain, a framework for building applications on top of language models, which provides a pgvector integration.
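To make the moving parts concrete, here is a minimal sketch (not the repo's actual code) of how LangChain's `PGVector` store and Ollama embeddings fit together; the collection name, connection string, and sample text are placeholders:

```python
# Minimal sketch, not this repo's actual code: embed text with a local Ollama
# model and store/query it in Postgres via LangChain's PGVector integration.
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores.pgvector import PGVector

# Embeddings come from the local Ollama server set up in the steps below.
embeddings = OllamaEmbeddings(model="nomic-embed-text")

# Connection string for the Dockerized Postgres instance described below.
CONNECTION_STRING = "postgresql+psycopg2://postgres:<your_password>@localhost:5432/postgres"

store = PGVector(
    connection_string=CONNECTION_STRING,
    embedding_function=embeddings,
    collection_name="speeches",  # hypothetical collection name
)

# Index a sample text, then run a nearest-neighbor search over the stored vectors.
store.add_texts(["Brazil is the largest country in South America."])
for doc in store.similarity_search("speeches about Brazil", k=3):
    print(doc.page_content)
```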
For more detailed information, please refer to this blog post.
The main difference between this repo and the blog post is that we run Ollama locally to create the embeddings instead of using OpenAI, since embedding creation is not part of OpenAI's free offering.
This project requires Python 3.6 or later, and a local instance of Ollama. Here are the steps to set up the project:
- Clone the repository:

  ```bash
  git clone git@github.com:rodbv/pgvector-test.git
  cd pgvector-test
  ```
- Create a virtual environment and activate it:

  ```bash
  python3 -m venv .env
  source .env/bin/activate
  ```
- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Install Ollama locally.

  To install Ollama locally, please refer to their documentation (it's easy stuff): https://github.com/ollama/ollama.

  Once installed, you can start it by running:

  ```bash
  ollama serve
  ```

  ...and then run this command to pull the default model we're using:

  ```bash
  ollama pull nomic-embed-text
  ```
- Run Postgres with pgvector support from Docker.

  You can run a PostgreSQL instance with pgvector support using Docker. First, pull the pgvector image from Docker Hub:

  ```bash
  docker pull pgvector/pgvector:pg16
  ```

  Then, run a container from the pulled image. Replace `<your_password>` with your desired PostgreSQL password:

  ```bash
  docker run --name pgvector -e POSTGRES_PASSWORD=<your_password> -p 5432:5432 -d pgvector/pgvector:pg16
  ```

  This command starts a PostgreSQL server with pgvector support on port 5432. To connect to it, you can use any PostgreSQL client with the following connection details:

  - Host: `localhost`
  - Port: `5432`
  - User: `postgres`
  - Password: `<your_password>`
  Remember to replace `<your_password>` with the actual password you used when starting the container. A couple of optional sanity checks for the Ollama and Postgres steps are sketched right after this list.
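Before moving on, you can optionally sanity-check both services. Neither command is part of the repo; the port and container name below match the defaults used in the steps above, and the LangChain integration typically creates the `vector` extension on its own:

```bash
# Ask the local Ollama server for a test embedding (11434 is Ollama's default port).
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'

# Confirm the pgvector extension can be enabled in the running container.
docker exec -it pgvector psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS vector;"
```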
## Usage

After installation, you can run the main script, passing it a query:

```bash
python main.py --query "Find parts of the speech related to Brazil"
```
## Pulling Different Models with Ollama

Ollama allows you to pull different models for your project. Here's how you can do it:

- To pull a model, use the `ollama pull` command followed by the model name. For example, to pull the `orca-mini` model, you would run:

  ```bash
  ollama pull orca-mini
  ```
- The model will be downloaded and stored in a directory named `.ollama` in your home directory. You can then use it by passing the `--model` parameter, as shown below.
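For example, a hypothetical invocation combining the `--query` and `--model` flags (assuming `main.py` forwards the model name to Ollama as described above):

```bash
python main.py --query "Find parts of the speech related to Brazil" --model orca-mini
```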