new: Added multi-gpu example (#422)
* new: Added multi-gpu example

* improve: Updated multi-gpu example

* improve: Updated fastembed multi gpu docs example
hh-space-invader authored Dec 17, 2024
1 parent c8b1a18 commit 55b985c
Showing 1 changed file with 88 additions and 0 deletions.
88 changes: 88 additions & 0 deletions docs/examples/FastEmbed_Multi_GPU.ipynb
@@ -0,0 +1,88 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fastembed Multi-GPU Tutorial\n",
"This tutorial demonstrates how to leverage multi-GPU support in Fastembed. Fastembed supports embedding text and images utilizing modern GPUs for acceleration. Let's explore how to use Fastembed with multiple GPUs step by step."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Prerequisites\n",
"To get started, ensure you have the following installed:\n",
"- Python 3.9 or later\n",
"- Fastembed (`pip install fastembed-gpu`)\n",
"- Refer to [this](https://github.com/qdrant/fastembed/blob/main/docs/examples/FastEmbed_GPU.ipynb) tutorial if you have issues with GPU dependencies\n",
"- Access to a multi-GPU server"
]
},
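{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Optional: Verify That a GPU Is Visible\n",
"Before running the multi-GPU example, it can help to confirm that ONNX Runtime (the inference backend used by `fastembed-gpu`) can see your CUDA devices. The cell below is a minimal sanity check, not part of the original tutorial: if `CUDAExecutionProvider` does not appear in the output, revisit the GPU dependencies tutorial linked above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import onnxruntime as ort\n",
"\n",
"# \"CUDAExecutionProvider\" should be listed if the GPU dependencies are set up correctly\n",
"print(ort.get_available_providers())"
]
},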
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Multi-GPU using cuda argument with TextEmbedding Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastembed import TextEmbedding\n",
"\n",
"# define the documents to embed\n",
"docs = [\"hello world\", \"flag embedding\"] * 100\n",
"\n",
"# define gpu ids\n",
"device_ids = [0, 1]\n",
"\n",
"if __name__ == \"__main__\":\n",
" # initialize a TextEmbedding model using CUDA\n",
" text_model = TextEmbedding(\n",
" model_name=\"sentence-transformers/all-MiniLM-L6-v2\",\n",
" cuda=True,\n",
" device_ids=device_ids,\n",
" lazy_load=True,\n",
" )\n",
"\n",
" # generate embeddings\n",
" text_embeddings = list(text_model.embed(docs, batch_size=2, parallel=len(device_ids)))\n",
" print(text_embeddings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this snippet:\n",
"- `cuda=True` enables GPU acceleration.\n",
"- `device_ids=[0, 1]` specifies GPUs to use. Replace `[0, 1]` with available GPU IDs.\n",
"- `lazy_load=True`\n",
"\n",
"**NOTE**: When using multi-GPU settings, it is important to configure `parallel` and `lazy_load` properly to avoid inefficiencies:\n",
"\n",
"`parallel`: This parameter enables multi-GPU support by spawning child processes for each GPU specified in device_ids. To ensure proper utilization, the value of `parallel` must match the number of GPUs in device_ids. If using a single GPU, this parameter is not necessary.\n",
"\n",
"`lazy_load`: Enabling `lazy_load` prevents redundant memory usage. Without `lazy_load`, the model is initially loaded into the memory of the first GPU by the main process. When child processes are spawned for each GPU, the model is reloaded on the first GPU, causing redundant memory consumption and inefficiencies."
]
},
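{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Multi-GPU with the ImageEmbedding Model\n",
"The same `cuda`, `device_ids`, `lazy_load`, and `parallel` arguments can be used for image embeddings. The cell below is a minimal sketch rather than part of the original example: it assumes the `Qdrant/clip-ViT-B-32-vision` model is available in Fastembed and that `images` is replaced with paths to your own image files."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastembed import ImageEmbedding\n",
"\n",
"# replace with paths to your own images\n",
"images = [\"images/photo_0.png\", \"images/photo_1.png\"] * 100\n",
"\n",
"# define gpu ids\n",
"device_ids = [0, 1]\n",
"\n",
"if __name__ == \"__main__\":\n",
"    # initialize an ImageEmbedding model using CUDA on multiple GPUs\n",
"    image_model = ImageEmbedding(\n",
"        model_name=\"Qdrant/clip-ViT-B-32-vision\",\n",
"        cuda=True,\n",
"        device_ids=device_ids,\n",
"        lazy_load=True,\n",
"    )\n",
"\n",
"    # as with text, parallel should match the number of GPUs in device_ids\n",
"    image_embeddings = list(image_model.embed(images, batch_size=8, parallel=len(device_ids)))\n",
"    print(image_embeddings)"
]
}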
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10.15"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
