"list index out of range" error #121
Comments
If the exception occurs in the block in thread_get_message, asyncio reports: ERROR:asyncio:Task exception was never retrieved
There is an error in the file "/llm telegram_bot/source/utils.py", line 67, so I deleted the code of the prepare_text function, because the LLM can chat without translation.
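For reference, below is a minimal sketch of how the translation step could be guarded instead of being deleted outright, so an empty or untranslatable reply falls back to the raw text. The name prepare_text comes from the comment above, but the signature, the translator parameter, and the body are assumptions for illustration, not the code actually in source/utils.py:

```python
import logging

def prepare_text(text: str, translator=None) -> str:
    """Translate the model reply chunk by chunk; fall back to the raw
    text whenever there is nothing to translate or translation fails."""
    if translator is None:
        return text
    try:
        # Split the reply into non-empty lines before translating.
        parts = [p for p in text.split("\n") if p.strip()]
        if not parts:
            # An empty list here is exactly the case that ends in
            # "list index out of range" when indexed blindly.
            return text
        return "\n".join(translator(part) for part in parts)
    except IndexError as err:
        logging.error("prepare_text failed, returning untranslated text: %s", err)
        return text
```

A quick check such as prepare_text("hello\nworld", translator=str.upper) exercises the normal path, while an empty reply or a translator that raises IndexError simply returns the original text instead of killing the bot thread.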
Hi, and thank you very much!
I added an admin ID and wrote to the bot directly - I get answers, everything is fine. ChatML preset.
Then I wrote to the bot directly from another user who is not in the admin list, and telegram_users.txt is empty. The bot starts responding ("X is typing" shows during generation), but I never see a reply.
The script logs the error ERROR:root:thread_get_messagelist index out of range('list index out of range',) and continues to work.
Where could I have made a mistake?
Or is it necessary to use only a group chat for different users?
Full log:
Dec 16 19:38:22 openchat run.sh[5771]: INFO:root:### TelegramBotWrapper INIT DONE ###
Dec 16 19:38:22 openchat run.sh[5771]: INFO:root:### !!! READY !!! ###
Dec 16 19:38:22 openchat run.sh[5771]: INFO:aiogram.dispatcher.dispatcher:Start polling.
Dec 16 19:39:30 openchat run.sh[5771]: llama_print_timings: load time = 21282.28 ms
Dec 16 19:39:30 openchat run.sh[5771]: llama_print_timings: sample time = 211.07 ms / 86 runs ( 2.45 ms per token, 407.44 tokens per second)
Dec 16 19:39:30 openchat run.sh[5771]: llama_print_timings: prompt eval time = 46708.79 ms / 1087 tokens ( 42.97 ms per token, 23.27 tokens per second)
Dec 16 19:39:30 openchat run.sh[5771]: llama_print_timings: eval time = 20697.02 ms / 85 runs ( 243.49 ms per token, 4.11 tokens per second)
Dec 16 19:39:30 openchat run.sh[5771]: llama_print_timings: total time = 67857.71 ms
Dec 16 19:39:30 openchat run.sh[5771]: Llama.generate: prefix-match hit
Dec 16 19:39:33 openchat run.sh[5771]: llama_print_timings: load time = 21282.28 ms
Dec 16 19:39:33 openchat run.sh[5771]: llama_print_timings: sample time = 29.15 ms / 12 runs ( 2.43 ms per token, 411.69 tokens per second)
Dec 16 19:39:33 openchat run.sh[5771]: llama_print_timings: prompt eval time = 1117.42 ms / 32 tokens ( 34.92 ms per token, 28.64 tokens per second)
Dec 16 19:39:33 openchat run.sh[5771]: llama_print_timings: eval time = 1610.27 ms / 11 runs ( 146.39 ms per token, 6.83 tokens per second)
Dec 16 19:39:33 openchat run.sh[5771]: llama_print_timings: total time = 2788.88 ms
Dec 16 19:39:34 openchat run.sh[5771]: ERROR:root:thread_get_messagelist index out of range('list index out of range',)
Dec 16 19:39:51 openchat run.sh[5771]: Llama.generate: prefix-match hit
Dec 16 19:39:59 openchat run.sh[5771]: llama_print_timings: load time = 21282.28 ms
Dec 16 19:39:59 openchat run.sh[5771]: llama_print_timings: sample time = 86.89 ms / 36 runs ( 2.41 ms per token, 414.30 tokens per second)
Dec 16 19:39:59 openchat run.sh[5771]: llama_print_timings: prompt eval time = 2698.25 ms / 24 tokens ( 112.43 ms per token, 8.89 tokens per second)
Dec 16 19:39:59 openchat run.sh[5771]: llama_print_timings: eval time = 5428.98 ms / 35 runs ( 155.11 ms per token, 6.45 tokens per second)
Dec 16 19:39:59 openchat run.sh[5771]: llama_print_timings: total time = 8316.26 ms
Dec 16 19:40:00 openchat run.sh[5771]: ERROR:root:thread_get_messagelist index out of range('list index out of range',)