Saved models out of Forge gives "'NoneType' object is not iterable" when used to generate in both A1111 and Comfy #129

Open
CCpt5 opened this issue Mar 18, 2024 · 5 comments
Labels: bug (Something isn't working)

Comments

CCpt5 commented Mar 18, 2024

Not sure when this started, but models saved out of Forge now fail with the error below. I get it when trying to use the saved model in both A1111 (main and Forge), and ComfyUI raises a similar error about the tokenizer. I tried saving to CKPT, unpruned, fp16 (and not), etc. I also downgraded diffusers and safetensors and rolled the extension back a few commits, but couldn't find a way to get it to work.

After a bunch of troubleshooting, I tried saving with the current version of regular A1111 instead, and that model works fine in other UIs.

rapped.transformer.text_model.encoder.layers.7.self_attn.out_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.7.self_attn.q_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.7.self_attn.q_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.7.self_attn.v_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.7.self_attn.v_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.layer_norm1.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.layer_norm1.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.layer_norm2.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.layer_norm2.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.mlp.fc1.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.mlp.fc1.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.mlp.fc2.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.mlp.fc2.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.k_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.k_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.out_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.out_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.q_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.q_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.v_proj.bias', 
'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.8.self_attn.v_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.layer_norm1.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.layer_norm1.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.layer_norm2.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.layer_norm2.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.mlp.fc1.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.mlp.fc1.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.mlp.fc2.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.mlp.fc2.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.k_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.k_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.out_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.out_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.final_layer_norm.bias', 'cond_stage_model.embedders.1.wrapped.transformer.text_model.final_layer_norm.weight'])
Loading VAE weights specified in settings: D:\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  22845.8193359375
[Memory Management] Model Memory (MB) =  1903.1046981811523
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  19918.714637756348
Moving model(s) has taken 0.27 seconds
Model loaded in 9.1s (unload existing model: 2.5s, calculate hash: 4.3s, load weights from disk: 0.3s, forge load real models: 1.5s, load VAE: 0.3s, calculate empty prompt: 0.3s).
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  21080.38623046875
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  15159.29973602295
Moving model(s) has taken 0.78 seconds
  0%|                                                                                                                                                                                                                                                             | 0/25 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "D:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "D:\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "D:\stable-diffusion-webui-forge\modules\processing.py", line 921, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\stable-diffusion-webui-forge\modules\processing.py", line 1273, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "D:\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\modules\sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "D:\stable-diffusion-webui-forge\modules_forge\forge_sampler.py", line 83, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "D:\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "D:\stable-diffusion-webui-forge\ldm_patched\modules\samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\stable-diffusion-webui-forge\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 849, in forward
    assert (y is not None) == (
AssertionError: must specify y if and only if the model is class-conditional
must specify y if and only if the model is class-conditional
*** Error completing request
*** Arguments: ('task(p4jno6xx41fihsc)', <gradio.routes.Request object at 0x000002003E399ED0>, 'Running ', '', [], 25, 'DPM++ 2M Karras', 1, 1, 6.5, 1000, 1048, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 
'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', <scripts.animatediff_ui.AnimateDiffProcess object at 0x000002003E399FF0>, False, 0.6, 0.9, 0.25, 1, True, False, False, 'sd_xl_base_0.9.safetensors', 'None', 5, '', {'save_settings': ['fp16', 'prune', 'safetensors'], 'calc_settings': ['GPU']}, True, False, False, False, False, 'z-mixer-2023-12-08-DaveMatthews 5 model merge - lots of realvision - good samples.fp16.safetensors', 'None', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, False, False, False, False, '', '', '', '', '', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', 
model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

---

ComfyUI error when trying to use a model saved with Model Mixer out of Forge:

Error occurred when executing CLIPTextEncodeSDXL:

'NoneType' object has no attribute 'tokenize'

File "D:\SDXL\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\SDXL\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\SDXL\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\SDXL\ComfyUI\comfy_extras\nodes_clip_sdxl.py", line 42, in encode
tokens = clip.tokenize(text_g)  
  

If you cannot reproduce and/or need more information (system settings, etc.), I can provide it later. Heading away from my PC for a bit.


CCpt5 commented Mar 18, 2024

Well, now I'm also getting an error in regular WebUI when just trying to merge/generate. Maybe it's something with the models? I'm attempting to merge "Lightning" models and also models trained with OneTrainer; not sure if either of those would cause problems. I have successfully merged Lightning models with non-Lightning models using the basic A1111 checkpoint-merger feature.

Error I'm seeing: `File does not contain tensor conditioner.embedders.0.transformer.text_model.embeddings.position_ids`
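This `position_ids` failure fits a known pattern: recent exporters often omit that tensor from the file (newer transformers versions reportedly register CLIP's `position_ids` as a non-persistent buffer), so a merge loop that calls `get_tensor` for every key taken from model A will crash when model B lacks one. A defensive merge step would check membership first and fall back to model A's tensor. This is a hedged sketch of the idea, not the extension's actual code; `merge_key` and its arguments are illustrative:

```python
def merge_key(theta_0, theta_1_keys, get_tensor, key, alpha):
    """Weighted-sum merge for one key, skipping tensors model B lacks.

    theta_0:      model A's state dict (key -> tensor)
    theta_1_keys: the set of keys present in model B's safetensors file
    get_tensor:   callable fetching a tensor from model B by key
    """
    if key in theta_1_keys:
        return (1 - alpha) * theta_0[key] + alpha * get_tensor(key)
    # Key (e.g. position_ids) absent from model B: keep model A's tensor
    # rather than raising SafetensorError mid-merge.
    return theta_0[key]
```

The same guard could equally skip the key entirely; keeping model A's copy just preserves a loadable state dict.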

Details

venv "D:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.8.0-222-g8ac4a207
Commit hash: 8ac4a207f3a57c31f7fe4dbae2f256eb39453169
Launching Web UI with arguments: --opt-sdp-attention --no-half-vae --opt-channelslast --skip-torch-cuda-test --skip-version-check --ckpt-dir e:\Stable Diffusion Checkpoints
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
ControlNet preprocessor location: D:\stable-diffusion-webui\extensions\3sd-webui-controlnet\annotator\downloads
2024-03-18 13:52:18,846 - ControlNet - INFO - ControlNet v1.1.441
2024-03-18 13:52:18,901 - ControlNet - INFO - ControlNet v1.1.441
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [bd1c246b5f] from e:\Stable Diffusion Checkpoints\SDXL\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors
Creating model from config: D:\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp-no-mem... done.
Model loaded in 2.9s (create model: 0.2s, apply weights to model: 2.3s, calculate empty prompt: 0.1s).
2024-03-18 13:52:23,063 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 12.2s (prepare environment: 2.4s, import torch: 2.1s, import gradio: 0.5s, setup paths: 0.4s, initialize shared: 0.1s, other imports: 0.2s, list SD models: 0.4s, load scripts: 1.4s, refresh VAE: 0.1s, create ui: 3.7s, gradio launch: 0.3s, app_started_callback: 0.4s).
Error parsing "sv_negative: "
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  4.72it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00,  2.39it/s]
debugs =  ['elemental merge']██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00,  8.23it/s]
use_extra_elements =  True
 - mm_max_models =  5
config hash =  5c74d0535e03eaba0ac50c02fcfb9ea6cac4b99984621bf327f902874788daa8
  - mm_use [True, True, True, True, False]
  - model_a SDXL\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors [bd1c246b5f]
  - base_model None
  - max_models 5
  - models ['SDXL\\2024-03-18 - Lightning 4 - PXR Artstyle added 3-16 full Pixar and top - 3000- Use 8steps -  - added.safetensors.safetensors [81ec90779e]', 'SDXL\\2024-03-16 - OT - Pxr artstyle (no captions now) - 7742-10-18.safetensors [1e2af43306]', 'SDXL\\2024-02-08 - OT - Dreamyvibes Artstyle - Using Old Save (Trained w captions then just dreamyvibes here) - 30epocs.safetensors', 'SDXL\\2024-03-15 - Topnotch Artstyle - 7e-6 - stochastic - b4 - 16img set -step00001000.safetensors [2fb7c48613]']
  - modes ['Sum', 'DARE', 'Sum', 'DARE']
  - calcmodes ['Normal', 'Normal', 'Normal', 'Normal']
  - usembws [[], [], [], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5, 0.5, 0.5]
  - adjust
  - use elemental [False, False, False, False]
  - elementals ['', '', '', '']
  - Parse elemental merge...
model_a = SDXL_2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models
Loading SDXL\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors [bd1c246b5f] from loaded model...
 - loading script.patches...
 - base lora_patch
Applying attention optimization: sdp-no-mem... done.
isxl = True , sd2 = False
compact_mode =  False
 - check possible UNet partial update...
 - partial changed blocks =  ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
 - UNet partial update mode
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-18 - Lightning 4 - PXR Artstyle added 3-16 full Pixar and top - 3000- Use 8steps -  - added.safetensors.safetensors...
mode = Sum, alpha = 0.5
Stage #1/5: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2263/2263 [00:04<00:00, 541.52it/s]
Check uninitialized #2/5: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2263/2263 [00:00<00:00, 565723.56it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-16 - OT - Pxr artstyle (no captions now) - 7742-10-18.safetensors...
mode = DARE, alpha = 0.5
Stage #3/5:  74%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████                                                           | 1676/2263 [00:13<00:04, 128.91it/s]
*** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 809, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3912, in before_process
        theta_1[key] = theta_1f.get_tensor(key)
    safetensors_rust.SafetensorError: File does not contain tensor conditioner.embedders.0.transformer.text_model.embeddings.position_ids

---
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  6.31it/s]
Total progress:  25%|█████████████████████████████████████████████████████████▎                                                                                                                                                                           | 4/16 [00:01<00:05,  2.13it/s]
debugs =  ['elemental merge']█████████████████████████████████████████████████▎                                                                                                                                                                           | 4/16 [00:01<00:01,  8.22it/s]
use_extra_elements =  True
 - mm_max_models =  5
config hash =  d9c1152ce8536725ebe1f3e35fc042dee858a0736af7fe2f3540bb87b11a9c54
  - mm_use [True, True, True, True, False]
  - model_a SDXL\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors [bd1c246b5f]
  - base_model None
  - max_models 5
  - models ['SDXL\\2024-03-18 - Lightning 4 - PXR Artstyle added 3-16 full Pixar and top - 3000- Use 8steps -  - added.safetensors.safetensors [81ec90779e]', 'SDXL\\2024-03-16 - OT - Pxr artstyle (no captions now) - 7742-10-18.safetensors [1e2af43306]', 'SDXL\\2024-02-08 - OT - Dreamyvibes Artstyle - Using Old Save (Trained w captions then just dreamyvibes here) - 30epocs.safetensors', 'SDXL\\2024-03-15 - Topnotch Artstyle - 7e-6 - stochastic - b4 - 16img set -step00001000.safetensors [2fb7c48613]']
  - modes ['Sum', 'Sum', 'Sum', 'Sum']
  - calcmodes ['Normal', 'Normal', 'Normal', 'Normal']
  - usembws [[], [], [], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5, 0.5, 0.5]
  - adjust
  - use elemental [False, False, False, False]
  - elementals ['', '', '', '']
  - Parse elemental merge...
model_a = SDXL_2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models
Loading SDXL\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors [bd1c246b5f] from loaded model...
 - base lora_patch
Applying attention optimization: sdp-no-mem... done.
isxl = True , sd2 = False
compact_mode =  False
 - check possible UNet partial update...
 - partial changed blocks =  ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
 - UNet partial update mode
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-18 - Lightning 4 - PXR Artstyle added 3-16 full Pixar and top - 3000- Use 8steps -  - added.safetensors.safetensors...
mode = Sum, alpha = 0.5
Stage #1/5: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2263/2263 [00:03<00:00, 709.33it/s]
Check uninitialized #2/5: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2263/2263 [00:00<00:00, 565521.33it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-16 - OT - Pxr artstyle (no captions now) - 7742-10-18.safetensors...
mode = Sum, alpha = 0.5
Stage #3/5:  74%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████                                                           | 1676/2263 [00:02<00:00, 714.53it/s]
*** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 809, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3912, in before_process
        theta_1[key] = theta_1f.get_tensor(key)
    safetensors_rust.SafetensorError: File does not contain tensor conditioner.embedders.0.transformer.text_model.embeddings.position_ids

---
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  6.33it/s]
Total progress:  25%|█████████████████████████████████████████████████████████▎                                                                                                                                                                           | 4/16 [00:01<00:05,  2.16it/s]
debugs =  ['elemental merge']█████████████████████████████████████████████████▎                                                                                                                                                                           | 4/16 [00:01<00:01,  8.28it/s]
use_extra_elements =  True
 - mm_max_models =  5
config hash =  c30ec25892516487fefab4419d42dfab247db59fee010eed4457092d3ee475b2
  - mm_use [True, False, True, True, False]
  - model_a SDXL\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors [bd1c246b5f]
  - base_model None
  - max_models 5
  - models ['SDXL\\2024-03-18 - Lightning 4 - PXR Artstyle added 3-16 full Pixar and top - 3000- Use 8steps -  - added.safetensors.safetensors [81ec90779e]', 'SDXL\\2024-02-08 - OT - Dreamyvibes Artstyle - Using Old Save (Trained w captions then just dreamyvibes here) - 30epocs.safetensors', 'SDXL\\2024-03-15 - Topnotch Artstyle - 7e-6 - stochastic - b4 - 16img set -step00001000.safetensors [2fb7c48613]']
  - modes ['Sum', 'Sum', 'Sum']
  - calcmodes ['Normal', 'Normal', 'Normal']
  - usembws [[], [], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5, 0.5]
  - adjust
  - use elemental [False, False, False]
  - elementals ['', '', '']
  - Parse elemental merge...
model_a = SDXL_2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models
Loading SDXL\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors [bd1c246b5f] from loaded model...
 - base lora_patch
Applying attention optimization: sdp-no-mem... done.
isxl = True , sd2 = False
compact_mode =  False
 - check possible UNet partial update...
 - partial changed blocks =  ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
 - UNet partial update mode
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-18 - Lightning 4 - PXR Artstyle added 3-16 full Pixar and top - 3000- Use 8steps -  - added.safetensors.safetensors...
mode = Sum, alpha = 0.5
Stage #1/4: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2263/2263 [00:02<00:00, 801.70it/s]
Check uninitialized #2/4: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2263/2263 [00:00<00:00, 452546.48it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-02-08 - OT - Dreamyvibes Artstyle - Using Old Save (Trained w captions then just dreamyvibes here) - 30epocs.safetensors...
Calculating sha256 for e:\Stable Diffusion Checkpoints\SDXL\2024-02-08 - OT - Dreamyvibes Artstyle - Using Old Save (Trained w captions then just dreamyvibes here) - 30epocs.safetensors: 44c76219ce05c878fe65a66cc985fc148e5d68991d500557466098af949585b6
mode = Sum, alpha = 0.5
Stage #3/4:  74%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████                                                           | 1676/2263 [00:02<00:00, 699.66it/s]
*** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 809, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3912, in before_process
        theta_1[key] = theta_1f.get_tensor(key)
    safetensors_rust.SafetensorError: File does not contain tensor conditioner.embedders.0.transformer.text_model.embeddings.position_ids

---
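The failing call in the traceback above is `theta_1[key] = theta_1f.get_tensor(key)`: newer `transformers` releases stopped serializing the non-trainable `position_ids` buffer, so checkpoints saved out through Forge can lack that key, and `get_tensor` then raises `SafetensorError`. A minimal defensive sketch of the pattern that avoids the crash by skipping absent keys (the helper name and the dict-backed stand-in below are hypothetical, not the extension's actual code):

```python
def copy_present_tensors(theta, wanted_keys, available_keys, get_tensor):
    """Copy only the tensors that actually exist in the source file.

    Returns the list of keys that were missing (e.g. the dropped
    ...text_model.embeddings.position_ids buffer) so the caller can
    log them instead of dying mid-merge.
    """
    missing = []
    for key in wanted_keys:
        if key in available_keys:
            theta[key] = get_tensor(key)
        else:
            missing.append(key)
    return missing


# Dict-backed stand-in for a safetensors file missing position_ids;
# with a real file you would use safetensors.safe_open and f.keys().
source = {
    "conditioner.embedders.0.transformer.text_model.embeddings.token_embedding.weight": "tensor_a",
}
wanted = list(source) + [
    "conditioner.embedders.0.transformer.text_model.embeddings.position_ids",
]
theta_1 = {}
missing = copy_present_tensors(theta_1, wanted, set(source), source.__getitem__)
print(missing)  # the absent position_ids key is reported, not fatal
```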
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  6.42it/s]
Total progress:  33%|████████████████████████████████████████████████████████████████████████████▎                                                                                                                                                        | 4/12 [00:01<00:03,  2.17it/s]
debugs =  ['elemental merge']
use_extra_elements =  True
 - mm_max_models =  5
config hash =  194ad1051a9ebcc6e691719bc9a6c3caf5e42bc797b43911cd1b116b2f7151c9
  - mm_use [True, False, False, True, False]
  - model_a SDXL\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors [bd1c246b5f]
  - base_model None
  - max_models 5
  - models ['SDXL\\2024-03-18 - Lightning 4 - PXR Artstyle added 3-16 full Pixar and top - 3000- Use 8steps -  - added.safetensors.safetensors [81ec90779e]', 'SDXL\\2024-03-15 - Topnotch Artstyle - 7e-6 - stochastic - b4 - 16img set -step00001000.safetensors [2fb7c48613]']
  - modes ['Sum', 'Sum']
  - calcmodes ['Normal', 'Normal']
  - usembws [[], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5]
  - adjust
  - use elemental [False, False]
  - elementals ['', '']
  - Parse elemental merge...
model_a = SDXL_2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models
Loading SDXL\2024-03-18 - Lightning 4 - Save 1 - Dreamyvibes - Topnotch - PXR - ETC - 4models.safetensors [bd1c246b5f] from loaded model...
 - base lora_patch
Applying attention optimization: sdp-no-mem... done.
isxl = True , sd2 = False
compact_mode =  False
 - check possible UNet partial update...
 - partial changed blocks =  ['BASE', 'IN00', 'IN01', 'IN02', 'IN03', 'IN04', 'IN05', 'IN06', 'IN07', 'IN08', 'M00', 'OUT00', 'OUT01', 'OUT02', 'OUT03', 'OUT04', 'OUT05', 'OUT06', 'OUT07', 'OUT08']
 - UNet partial update mode
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-18 - Lightning 4 - PXR Artstyle added 3-16 full Pixar and top - 3000- Use 8steps -  - added.safetensors.safetensors...
mode = Sum, alpha = 0.5
Stage #1/3: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2263/2263 [00:02<00:00, 800.22it/s]
Check uninitialized #2/3: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2263/2263 [00:00<00:00, 452611.22it/s]
Open state_dict from file e:\Stable Diffusion Checkpoints\SDXL\2024-03-15 - Topnotch Artstyle - 7e-6 - stochastic - b4 - 16img set -step00001000.safetensors...
mode = Sum, alpha = 0.5
Stage #3/3:  74%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████                                                           | 1676/2263 [00:03<00:01, 498.97it/s]
*** Error running before_process: D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 809, in before_process
        script.before_process(p, *script_args)
      File "D:\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 3912, in before_process
        theta_1[key] = theta_1f.get_tensor(key)
    safetensors_rust.SafetensorError: File does not contain tensor conditioner.embedders.0.transformer.text_model.embeddings.position_ids

---
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  6.30it/s]
Total progress:  50%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████                                                                                                                   | 4/8 [00:01<00:01,  2.17it/s]
Total progress:  50%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████                                                                                                                   | 4/8 [00:01<00:00,  8.28it/s]

@Enferlain

There was some discussion about key-naming differences across the various mergers (ComfyUI, A1111, etc.) in the Discord server for meh; you could try joining and asking there, as it may be related.

Alternatively, try some other SDXL models and see whether the result is the same.

@CCpt5
Author

CCpt5 commented Mar 19, 2024

Gotcha.

Well, for now, going back to A1111 from Forge and checking out the Jan 31 commit 1ed7346 has gotten me back to a point where I can merge and export checkpoints without any issues.

@CCpt5
Author

CCpt5 commented Mar 20, 2024

It's very likely related to this issue in Forge itself (I commented on it about a week ago): lllyasviel/stable-diffusion-webui-forge#505 (comment)

I didn't pick up on this earlier.

Edit:

Someone mentioned that rolling Forge back to this commit fixes that error in Forge (it's not specific to Model Mixer or saving out models), so an update is likely needed on the Forge side: lllyasviel/stable-diffusion-webui-forge@b59deaa

@wkpark wkpark self-assigned this Mar 20, 2024
@wkpark wkpark added the bug Something isn't working label Mar 20, 2024
@wkpark
Owner

wkpark commented Jul 6, 2024

Part of this issue is resolved by commit 1701dd3.
