I've been having this problem since yesterday. I thought my installation was outdated, so I installed the most recent version, but I still had the same problem. Since I was planning to reinstall Windows 10 anyway, I now have a fresh install of Windows, and I tried using TagGUI again — I'm still getting this exact same error. I've searched here in this repo and elsewhere on the internet and haven't found a solution. I don't understand the meaning of the output, so I can't really guess what the problem is.
Here is the output:
Loading llava-hf/llava-v1.6-mistral-7b-hf...
processor_config.json: 0%| | 0.00/176 [00:00<?, ?B/s]
processor_config.json: 100%|##########| 176/176 [00:00<?, ?B/s]
preprocessor_config.json: 0%| | 0.00/772 [00:00<?, ?B/s]
preprocessor_config.json: 100%|##########| 772/772 [00:00<?, ?B/s]
tokenizer_config.json: 0%| | 0.00/1.98k [00:00<?, ?B/s]
tokenizer_config.json: 100%|##########| 1.98k/1.98k [00:00<?, ?B/s]
tokenizer.model: 0%| | 0.00/493k [00:00<?, ?B/s]
tokenizer.model: 100%|##########| 493k/493k [00:00<00:00, 7.91MB/s]
tokenizer.json: 0%| | 0.00/1.80M [00:00<?, ?B/s]
tokenizer.json: 100%|##########| 1.80M/1.80M [00:00<00:00, 16.4MB/s]
added_tokens.json: 0%| | 0.00/41.0 [00:00<?, ?B/s]
added_tokens.json: 100%|##########| 41.0/41.0 [00:00<?, ?B/s]
special_tokens_map.json: 0%| | 0.00/552 [00:00<?, ?B/s]
special_tokens_map.json: 100%|##########| 552/552 [00:00<?, ?B/s]
Traceback (most recent call last):
File "auto_captioning\captioning_thread.py", line 145, in run
File "auto_captioning\captioning_thread.py", line 141, in run
File "auto_captioning\captioning_thread.py", line 88, in run_captioning
File "auto_captioning\auto_captioning_model.py", line 141, in load_processor_and_model
File "auto_captioning\models\llava_next.py", line 6, in get_processor
File "auto_captioning\auto_captioning_model.py", line 84, in get_processor
File "transformers\models\auto\processing_auto.py", line 316, in from_pretrained
File "transformers\processing_utils.py", line 468, in from_pretrained
File "transformers\processing_utils.py", line 393, in from_args_and_dict
TypeError: LlavaNextProcessor.__init__() got an unexpected keyword argument 'image_token'
Any help is welcome.
It seems like there was an update to llava-hf/llava-v1.6-mistral-7b-hf that made it incompatible with the Transformers version that TagGUI uses (v4.41.2).
Updating Transformers to the latest version (v4.46.3) fixes the problem for this model, but it breaks CogVLM. I think the best solution would be to patch the CogVLM code as described here to make it compatible with the new Transformers version. I will look into it when I have time.
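Until a fix lands, one generic stopgap for this class of error — assuming you can edit the loading code — is to filter the downloaded config against the processor constructor's signature before calling it, so newly added keys like `image_token` are dropped instead of crashing an older Transformers version. A minimal sketch of the idea (the `OldProcessor` class and the config dict are hypothetical stand-ins, not TagGUI or Transformers code):

```python
import inspect

def filter_kwargs(cls, config: dict) -> dict:
    """Keep only the config keys that cls.__init__ actually accepts."""
    params = inspect.signature(cls.__init__).parameters
    # If the constructor takes **kwargs, it accepts everything as-is.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(config)
    return {k: v for k, v in config.items() if k in params}

# Stand-in for a processor class from an older library version that
# does not know about the newer 'image_token' field.
class OldProcessor:
    def __init__(self, image_processor=None, tokenizer=None):
        self.image_processor = image_processor
        self.tokenizer = tokenizer

# Stand-in for a processor_config.json written by a newer model repo.
config = {"image_processor": "ip", "tokenizer": "tok", "image_token": "<image>"}

# 'image_token' is silently dropped instead of raising TypeError.
proc = OldProcessor(**filter_kwargs(OldProcessor, config))
```

This only papers over the mismatch — the extra key is discarded, so any behavior it controls is lost — which is why pinning compatible library and model versions is the more reliable fix.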