LlavaNextProcessor.__init__() got an unexpected keyword argument 'image_token' #303

Open
FugueSegue opened this issue Nov 21, 2024 · 1 comment
Labels
bug Something isn't working


@FugueSegue

I've been having this problem since yesterday. I thought my installation was outdated, so I installed the most recent version, but I still had the same problem. Since I was planning to reinstall Windows 10 anyway, I now have a fresh install of Windows, and I tried using TagGUI again. I'm still getting this exact same error. I've searched here in this repo and elsewhere on the internet and haven't found a solution. I don't understand the meaning of the output, so I can't really guess what the problem is.

Here is the output:

Loading llava-hf/llava-v1.6-mistral-7b-hf...
processor_config.json:   0%|          | 0.00/176 [00:00<?, ?B/s]
processor_config.json: 100%|##########| 176/176 [00:00<?, ?B/s]
preprocessor_config.json:   0%|          | 0.00/772 [00:00<?, ?B/s]
preprocessor_config.json: 100%|##########| 772/772 [00:00<?, ?B/s]
tokenizer_config.json:   0%|          | 0.00/1.98k [00:00<?, ?B/s]
tokenizer_config.json: 100%|##########| 1.98k/1.98k [00:00<?, ?B/s]
tokenizer.model:   0%|          | 0.00/493k [00:00<?, ?B/s]
tokenizer.model: 100%|##########| 493k/493k [00:00<00:00, 7.91MB/s]
tokenizer.json:   0%|          | 0.00/1.80M [00:00<?, ?B/s]
tokenizer.json: 100%|##########| 1.80M/1.80M [00:00<00:00, 16.4MB/s]
added_tokens.json:   0%|          | 0.00/41.0 [00:00<?, ?B/s]
added_tokens.json: 100%|##########| 41.0/41.0 [00:00<?, ?B/s]
special_tokens_map.json:   0%|          | 0.00/552 [00:00<?, ?B/s]
special_tokens_map.json: 100%|##########| 552/552 [00:00<?, ?B/s]
Traceback (most recent call last):
  File "auto_captioning\captioning_thread.py", line 145, in run
  File "auto_captioning\captioning_thread.py", line 141, in run
  File "auto_captioning\captioning_thread.py", line 88, in run_captioning
  File "auto_captioning\auto_captioning_model.py", line 141, in load_processor_and_model
  File "auto_captioning\models\llava_next.py", line 6, in get_processor
  File "auto_captioning\auto_captioning_model.py", line 84, in get_processor
  File "transformers\models\auto\processing_auto.py", line 316, in from_pretrained
  File "transformers\processing_utils.py", line 468, in from_pretrained
  File "transformers\processing_utils.py", line 393, in from_args_and_dict
TypeError: LlavaNextProcessor.__init__() got an unexpected keyword argument 'image_token'

Any help is welcome.

@jhc13
Owner

jhc13 commented Nov 22, 2024

It seems like there was an update to llava-hf/llava-v1.6-mistral-7b-hf that made it incompatible with the Transformers version that TagGUI uses (v4.41.2).

Updating Transformers to the latest version (v4.46.3) fixes the problem for this model, but it breaks CogVLM. I think the best solution would be to patch the CogVLM code as described here to make it compatible with the new Transformers version. I will look into it when I have time.
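Until the fix lands, the incompatibility can at least be detected up front rather than surfacing as an opaque `TypeError` mid-load. A minimal sketch of such a guard, assuming v4.46.3 (the version jhc13 reports as working for this model) as the minimum; the function names here are hypothetical illustrations, not part of TagGUI:

```python
# Sketch: fail fast if the installed Transformers version predates the one
# that accepts the 'image_token' processor kwarg. Treating 4.46.3 as the
# minimum is an assumption based on the version reported to work above.

def parse_version(v: str) -> tuple:
    """Turn a version string like '4.41.2' into (4, 41, 2) for comparison."""
    return tuple(int(part) for part in v.split(".")[:3])

MIN_SUPPORTED = "4.46.3"

def transformers_is_compatible(installed: str) -> bool:
    """Return True if the installed version is at least MIN_SUPPORTED."""
    return parse_version(installed) >= parse_version(MIN_SUPPORTED)

# The version bundled with TagGUI (4.41.2) fails the check:
print(transformers_is_compatible("4.41.2"))  # False
print(transformers_is_compatible("4.46.3"))  # True
```

In a real guard one would compare against `transformers.__version__` and raise a clear error telling the user to upgrade, rather than letting `from_pretrained` fail deep inside `processing_utils.py`.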

jhc13 added the bug (Something isn't working) label on Nov 22, 2024