I've found that deletes don't fully apply to a hybrid index. Deleting embeddings by "id" only removes entries from the embeddings list; the "scoring" and "scoring.terms" files created alongside it are left untouched. I suspect this is what makes /add and /upsert so slow.
Using the /count API, I can see I have 30k+ embeddings. With a hybrid index, search is fast, but /upsert gets slower and slower as the data grows. I thought deleting some data would speed up upserts, but deleting embeddings doesn't reduce the size of the "scoring" and "scoring.terms" files in the index directory on disk.
Is there any way to safely access files like documents, scoring.terms and scoring and delete entries from them directly?
How can I speed up upsert as the document count increases?
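For reference, the workaround I'm considering is rebuilding the index from only the live documents, so the scoring files get recreated without the deleted entries. A minimal sketch below; the filter function runs as-is, while the txtai calls (assuming the `Embeddings` API with `hybrid=True`) are shown as comments since the model path, index path and ids here are hypothetical:

```python
# Sketch: compact a hybrid index by rebuilding it from the remaining
# documents, instead of deleting ids in place.

def remaining_documents(documents, deleted_ids):
    """Filter out deleted ids so a rebuilt index (and its scoring /
    scoring.terms files) only contains live documents."""
    deleted = set(deleted_ids)
    return [(uid, text, tags) for (uid, text, tags) in documents
            if uid not in deleted]

docs = [(0, "first document", None),
        (1, "second document", None),
        (2, "third document", None)]

live = remaining_documents(docs, deleted_ids=[1])
print(live)  # only ids 0 and 2 remain

# With txtai installed, a full rebuild would look roughly like
# (model path and save path are assumptions, not from the original setup):
#
# from txtai import Embeddings
# embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2",
#                         hybrid=True, content=True)
# embeddings.index(live)            # fresh index: scoring files shrink
# embeddings.save("index-compacted")
```

This trades one expensive full reindex for the per-call cost of upserting into an ever-growing scoring structure, but I'd prefer a supported way to prune the scoring files directly.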