Replies: 3 comments 1 reply
-
The snippets it uses are incorporated into the prompt behind the scenes, so they contribute to filling up the context. If you can, and the model allows it, increase your context size.
As long as a LocalDocs collection is active, every new prompt is used to find relevant snippets. You may lose the information from previous ones when the context is recalculated.
I'm not sure I understand this one. It already looks for several snippets for a single answer, depending on your settings.
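To make the context cost concrete, here is a minimal sketch of how retrieved snippets might be prepended to a prompt under a token budget. The `build_prompt` helper, the template text, and the ~4-characters-per-token heuristic are illustrative assumptions, not GPT4All's actual internals:

```python
def build_prompt(question: str, snippets: list[str], max_ctx_tokens: int) -> str:
    """Prepend retrieved snippets to the question, respecting a token budget."""
    prompt = "Use the following excerpts to answer.\n\n"
    for snip in snippets:
        candidate = prompt + snip + "\n---\n"
        # Rough budget: ~4 characters per token; reserve half the window
        # for chat history and the model's answer.
        if len(candidate) / 4 > max_ctx_tokens * 0.5:
            break
        prompt = candidate
    return prompt + "\nQuestion: " + question

snippets = ["First 512-character excerpt from the PDF...",
            "Second excerpt..."]
print(build_prompt("What does section 3 say?", snippets, max_ctx_tokens=2048))
```

The point of the budget check is exactly what the comment above describes: every snippet that gets injected leaves fewer tokens for the conversation itself.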
-
I see ... so the snippet size and snippets per prompt depend on the model?
-
Is the max context length read from the model specs? In a first test, a model accepted 32768; I chose 16384, by the way. Also, could someone program the model list so it isn't sorted with upper- and lowercase letters grouped separately? ^^
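On the sorting point: a case-insensitive sort is a one-line change in most languages. A Python sketch with made-up model names:

```python
# Case-insensitive sort: "Llama" and "llama" sort together instead of
# being grouped by case. The model names here are made-up examples.
models = ["mistral-7b", "Llama-3-8B", "all-MiniLM", "GPT4All-Falcon", "llama-2-7b"]
print(sorted(models, key=str.lower))
# ['all-MiniLM', 'GPT4All-Falcon', 'llama-2-7b', 'Llama-3-8B', 'mistral-7b']
```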
-
As long as this feature works . . .
So if the PDF file is indexed and embedded and I ask a question,
the PDF is searched and an attempt is made to find 2-4 snippets of 256/512 characters each ... (you can customize these parameters).
I have noticed that sometimes the answer is recalculated after a few lines ...
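As a rough illustration of that flow, here is a toy Python sketch: chunk the extracted text, embed each chunk and the query, and keep the top-k most similar chunks. The character-frequency "embedding" is purely illustrative; GPT4All uses a real embedding model, and `chunk_size`/`k` stand in for the snippet size and snippets-per-prompt settings mentioned above:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: character frequencies. A real setup would use an
    # embedding model here.
    return Counter(text.lower())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[ch] * b[ch] for ch in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_snippets(document: str, query: str, chunk_size: int = 512, k: int = 3) -> list[str]:
    # Split the document into fixed-size chunks, score each against the
    # query, and return the k best matches.
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

doc = ("Revenue grew in the third quarter. " * 30
       + "Expenses were flat year over year. " * 30)  # stand-in for PDF text
print(top_snippets(doc, "what does the report say about revenue?", chunk_size=512, k=3))
```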