To configure the SentenceTransformerEmbeddingModel to use the GPU for the mxbai-embed-large-v1 model, you need to set the device parameter in the config dictionary to "cuda". Here's how you can modify your code:
```python
from paperqa import (  # assuming these classes are exposed at the paperqa package root
    HybridEmbeddingModel, SentenceTransformerEmbeddingModel, SparseEmbeddingModel,
)

sparse_embedding_model = SparseEmbeddingModel(ndim=4096)
sentence_transformer = SentenceTransformerEmbeddingModel(
    name="mixedbread-ai/mxbai-embed-large-v1",
    config={"device": "cuda"},  # set the device to GPU
)
hybrid_model = HybridEmbeddingModel(models=[sparse_embedding_model, sentence_transformer])
```
This configuration will ensure that the model uses the GPU for computations [1].
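If the same code needs to run on machines without a GPU, you can pick the device at runtime instead of hard-coding "cuda". This is a minimal sketch, assuming the standard PyTorch torch.cuda.is_available() check (PyTorch is already a dependency of sentence-transformers) and reusing the SentenceTransformerEmbeddingModel import from the snippet above:

```python
import torch

# Use the GPU when PyTorch can see one; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

sentence_transformer = SentenceTransformerEmbeddingModel(
    name="mixedbread-ai/mxbai-embed-large-v1",
    config={"device": device},
)
```

This avoids a hard CUDA error on CPU-only machines while still using the GPU whenever one is available.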
Hello,
I finally managed to use a local embedding model (mxbai-embed-large-v1) using the new SentenceTransformerEmbeddingModel class (thanks to the developer team for this work!!! ;-)).
It works fine, but it's very slow because it doesn't use the GPU...
Is there a way to force the model to use the GPU (in the config options, I guess...)?
Thanks for the help.
@dosu.