
Issue where model is merged but not saved leads to "Cannot copy out of meta tensor; no data!" - persists through restart - had to disable extension to continue #1

Open
CCpt5 opened this issue Sep 5, 2023 · 15 comments
Assignees: wkpark
Labels: bug, help wanted, workaround


CCpt5 commented Sep 5, 2023

Hey, first off, congratulations on your extension. Having the option to merge up to 5 models, right on the txt2img page, is pretty cool.

I don't have time to troubleshoot this right now, but I did want to send in the error I received in case it makes sense to you. I had been merging 5 models and generating images for an hour or so, then left the PC. When I came back and tried to generate another image, I got the error "cannot copy out of meta tensor; no data!".

When I closed A1111 (latest version) and restarted it, it seemed to try to reload the last checkpoint, which is listed as a huge text string referencing the temporary merged model you use without saving. (By the way, I also noticed the string is huge in PNG Info and below the preview.) I'm not sure if there are too many characters to handle or if it's just upset that it can't find that file, but it seems to try to reload that "file" on restart and fail. It doesn't release that request either, so even changing models keeps the same error popping up. I restarted multiple times and wasn't able to get an image to generate using any model until I finally disabled your extension.
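My guess is the stuck reference is the persisted `sd_model_checkpoint` entry in webui's config.json; something like this shows what it keeps trying to reload on startup (just my guess at where the string lives):

```python
import json

# webui saves the last-selected checkpoint title in config.json; after a
# merge-without-save it holds the huge merged-model string instead of a file
with open("config.json", encoding="utf-8") as f:
    print(json.load(f)["sd_model_checkpoint"])
```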

One thing of note: my last prompt is persistently recalled into the txt2img input bar when I start A1111. In the past I used an extension called State that would restore your exact state from the last time you closed A1111. I don't have that installed anymore, since it stopped working with v1.6. However, prompts still seem to be recalled post-restart, so perhaps a remnant of that extension is involved.

Here's a copy-paste of a few starts and the resulting errors; hope it's insightful:



100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.53it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.62it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.68it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.69it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.68it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.69it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.61it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.60it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.43it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.45it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.49it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:07<00:00, 5.71it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████| 2840/2840 [10:11<00:00, 4.65it/s]
Unloading model 4 over the limit of 2: SDXL\2023-09-02 - Topnotch (Good Series) and #25 Gorge n1 supermerged - did lots of images.safetensors [b687629de1]
Unloading model 3 over the limit of 2: SDXL_2023-09-01 - Topnotch Artstyle - #2.5 - During Gorge N1 Stream - 14img - TXT ON - B8 - 1e5-step00001500 + SDXL_2023-08-27 - SDXL-Merge - Topnotch - 3 models (8-25 - 8-24 - 8-26) + SDXL_2023-08-24 - Topnotch Artstyle - 10img-TXT off - 1500 (Cont from 1k) + SDXL_2023-08-31 - Topnotch Artstyle - 12img - TXT on - 20rep - Batch 4 - bucket on -(Good Series) - 2000 steps + SDXL_2023-08-28 - SDXL Merge - 8k Topnotch 20 doubled dif smooth - Use .2 for weight then good.safetensors [6fc4c1bd77]
Reusing loaded model SDXL_2023-09-03 - Supermerge - add dif - 2 + SDXL_2023-08-27 - SDXL-Merge - Topnotch - 3 models (8-25 - 8-24 - 8-26) + SDXL_Topnotch Artstyle 20img-20rep-Txt-On-step00001500 + SDXL_2023-08-31 - Topnotch Artstyle (Mj greed theme park 3 - TXT enc on) - 12img-step00002000 + SDXL_2023-08-31 - Topnotch Artstyle - 12img - TXT on - 20rep - Batch 4 - bucket on -(Good Series) - 2000 steps + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001800.safetensors [d72e289c4d] to load SDXL\2023-09-04 - topnotch artstyle - 20img - TXT ON - B2 - 1e5-step00003200.safetensors
changing setting sd_model_checkpoint to SDXL\2023-09-04 - topnotch artstyle - 20img - TXT ON - B2 - 1e5-step00003200.safetensors: NotImplementedError
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\options.py", line 140, in set
    option.onchange()
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\initialize_util.py", line 170, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 738, in reload_model_weights
    send_model_to_cpu(sd_model)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
    m.to(devices.cpu)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!

Checkpoint SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29] not found; loading fallback 1- Good - Tuckercarlson - 2022-10-12T18-46-24_tuckercarlson_-2000_continued-16_changed_images-_default_reg_16_training_images_4000_max_training_steps_tuckercarlson_token_person_class_word-0047-0000-0396.safetensors [27411f7a80]
*** Error completing request
*** Arguments: ('task(whp4o86efmaabuf)', 'topnotch artstyle, location, HouseholdDevice', '', [], 40, 'DPM++ 2M Karras', 1, 1, 5, 1208, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001DE80617DF0>, 0, False, 'SDXL\sd_xl_refiner_1.0.safetensors [7440042bbd]', 0.8, -1, False, -1, 0, 0, 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE80616950>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE805ED270>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001DE46F2AA40>, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'SDXL_2023-09-02 - Topnotch Artstyle #26 - (15img top sdxl) - TXT ON - B4 - 1e5-step00001800 + SDXL_2023-09-01 - Topnotch Artstyle #25 - 25img - TXT ON - B4 - 1e5-step00001500 + SDXL_Alf Person - TXT encoder off-step00002500 + SDXL_2023-08-28 - Topnotch Artstyle - 20 new img - Reg-Txt on - 40repeats-step00008000 + SDXL_2023-08-25 - Topnotch Artstyle - 20img-20rep -TXT off-step00003000 + SDXL_2023-08-27 - SDXL Merge - Topnotch- Add Diff Smooth - mag graphs.safetensors [3c4b692f29]', 'None', 5, True, False, False, False, False, 'None', 'None', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], 
[], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', None, None, False, None, None, False, None, None, False, 50, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\processing.py", line 719, in process_images
    sd_models.reload_model_weights()
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
    m.to(devices.cpu)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!


changing setting sd_model_checkpoint to 2023-05-17 - Topnotch (Electronics Test 20 img) - [.50 Normal Flip] - 2500 - epoc.ckpt [3f056ed8bb]: NotImplementedError
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\options.py", line 140, in set
    option.onchange()
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\initialize_util.py", line 170, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
    m.to(devices.cpu)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!

[The same "not found; loading fallback" warning, "*** Error completing request" argument dump, and "NotImplementedError: Cannot copy out of meta tensor; no data!" traceback then repeat essentially verbatim for three more generation attempts.]

@wkpark wkpark self-assigned this Sep 5, 2023
wkpark (Owner) commented Sep 5, 2023

When I come across a similar fatal error, "Extensions tab" -> "Apply and restart UI" helps in some cases.

> Hey, first off, congratulations on your extension. Having the option to merge up to 5 models, right on the txt2img page, is pretty cool.
>
> I don't have time to troubleshoot this right now, but I did want to send in the error I received in case it makes sense to you. I had been merging 5 models and generating images for an hour or so, then left the PC. When I came back and tried to generate another image, I got the error "cannot copy out of meta tensor; no data!".
>
> When I closed A1111 (latest version) and restarted it, it seemed to try to reload the last checkpoint, which is listed as a huge text string referencing the temporary merged model you use without saving. [...] I restarted multiple times and wasn't able to get an image to generate using any model until I finally disabled your extension.

This is a known issue; similar problems arise from time to time.

I guess it is caused by a non-existent checkpoint (a "fake checkpoint") combined with the checkpoint cache.

> One thing of note: my last prompt is persistently recalled into the txt2img input bar when I start A1111. In the past I used an extension called State that would restore your exact state from the last time you closed A1111. I don't have that installed anymore, since it stopped working with v1.6. However, prompts still seem to be recalled post-restart, so perhaps a remnant of that extension is involved.

Thank you for the report; I will check it soon!
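(For illustration only, not the actual webui code: the crash above is the unconditional `m.to(devices.cpu)` in `send_model_to_cpu`. A guard along these lines, skipping the CPU move when the weights have ended up on the meta device, would at least keep one failed merge from wedging every later model switch.)

```python
import torch

def send_model_to_cpu_guarded(m):
    # Hypothetical guard: meta tensors carry no data, so m.to("cpu") raises
    # "Cannot copy out of meta tensor; no data!". Skipping the move lets the
    # caller fall through to loading real weights instead of crashing.
    if any(p.device.type == "meta" for p in m.parameters()):
        return
    m.to(torch.device("cpu"))
```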

catboxanon commented Sep 16, 2023

--disable-model-loading-ram-optimization disables the meta-device functionality, which should fix this. You might also need to disable the model cache entirely (set "Maximum number of checkpoints" to 1), because that also has some issues upstream currently.
AUTOMATIC1111/stable-diffusion-webui#12937
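(For context: the meta device stores only shapes and dtypes, with no backing data, so any later copy out of it fails. A minimal PyTorch repro of the exact error above:)

```python
import torch

net = torch.nn.Linear(4, 4)  # ordinary module with real weights
net = net.to("meta")         # parameters become shape-only meta tensors (no storage)
net.to("cpu")                # NotImplementedError: Cannot copy out of meta tensor; no data!
```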

wkpark (Owner) commented Sep 23, 2023

Please reopen it or open a new issue if you encounter similar problems.

@wkpark wkpark closed this as completed Sep 23, 2023
CCpt5 (Author) commented Sep 28, 2023

> --disable-model-loading-ram-optimization disables the meta-device functionality, which should fix this. You might also need to disable the model cache entirely (set "Maximum number of checkpoints" to 1), because that also has some issues upstream currently. AUTOMATIC1111/stable-diffusion-webui#12937

Re: the error "NotImplementedError: Cannot copy out of meta tensor; no data!"

Is this something that could be corrected on the A1111 side in the future? I've so far avoided turning off model caching, as it's a great speed boost if you have the resources (I have 64 GB RAM + 24 GB VRAM). I'll look into the first option, as I'm not sure what it entails.

That said, I still get this error almost every time I use Model Mixer and then try to use A1111 functions without it. I'm sure others get stuck on this issue too, and they likely don't visit GitHub to find out why their models aren't loading. I wish there were a clear fix that doesn't require disabling caching. Wouldn't flushing the model cache when the extension is untoggled be a potential workaround? Something like the sketch below is what I have in mind.
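(Just a sketch of the idea; I'm guessing at the 1.6 internals, `checkpoints_loaded` and `model_data.loaded_sd_models` in `modules/sd_models.py`, so the names may well be off:)

```python
# hypothetical cleanup Model Mixer could run when it is untoggled
from modules import sd_models

def flush_model_cache():
    # drop cached state dicts keyed by checkpoint titles, including the
    # giant unsaved merged-model title that no longer maps to a file
    sd_models.checkpoints_loaded.clear()
    # forget still-loaded models so nothing later tries to .to(cpu) a
    # model whose weights were left on the meta device
    sd_models.model_data.loaded_sd_models.clear()
```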

Thanks again for your time.

(Another copy/paste of a different console log, from just before posting this comment:)

*** Error running before_process: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\scripts.py", line 611, in before_process
        script.before_process(p, *script_args)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 1706, in before_process
        models['model_a'] = load_state_dict(checkpoint_info)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 1673, in load_state_dict
        sd_models.send_model_to_cpu(shared.sd_model)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
        m.to(devices.cpu)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
        return super().to(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
        return self._apply(convert)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      [Previous line repeated 1 more time]
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
        param_applied = fn(param)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
    NotImplementedError: Cannot copy out of meta tensor; no data!

---
*** Error completing request
*** Arguments: ('task(i4l7998doywsuib)', 'A music concert, grgamp artstyle, lcas artstyle  <lora:Lucasarts Artstyle - (Trigger is lcas artstyle):2> <lora:2023-09-03 - Gorge Grgamp Artstyle - 23img - TXT ON - B4 - 1e5-step00001600:1.5>', '', [], 35, 'DPM++ 2M Karras', 9, 1, 7.5, 1024, 1024, False, 0.2, 1.25, 'Lanczos', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000002884BB78C40>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Meli/GPT2-Prompt', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': 'davematthews person', 'ad_negative_prompt': '', 'ad_confidence': 0.49, 'ad_mask_k_largest': 1, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000028A57652710>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002884BB845B0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002884BB7B970>, False, 0, 1, 0, 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, True, 'SDXL\\2023-09-27 - Davematthews Person - 14 img (9-17 set) - 10repeat try - step 1500.safetensors [83aa2f73f5]', 'None', 5, '', True, True, False, False, False, 'SDXL\\Davematthews Person Try1 - 66img - TXT off-step00015000.safetensors [07aeccb843]', 'SDXL\\2023-09-27 - Davematthews Person - 14 img (9-17 set) -  40repeats step 1500.safetensors [e473db8d0b]', 'SDXL\\2023-09-17 - DaveMatthews person - 
14img - b2 - yes bucket - step00001200 - Pretty Good.safetensors [99ae5aafa0]', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, False, False, False, False, '', '', '', '', '', False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 719, in process_images
        sd_models.reload_model_weights()
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
        sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
        send_model_to_cpu(sd_model)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
        m.to(devices.cpu)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
        return super().to(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
        return self._apply(convert)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      [Previous line repeated 1 more time]
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
        param_applied = fn(param)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
    NotImplementedError: Cannot copy out of meta tensor; no data!

---

@wkpark wkpark reopened this Sep 28, 2023
catboxanon commented Sep 28, 2023

--disable-model-loading-ram-optimization will fix the issue you're having.

CCpt5 (Author) commented Oct 3, 2023

> --disable-model-loading-ram-optimization will fix the issue you're having.

Sadly, I've been trying this for a week and it doesn't fix the problem. At the moment, if I want to use Model Mixer, I have to plan on a restart afterward, as there is no way to change the model after using it. I'm not sure if something else I have installed is causing this, but I hope a future version will work.

I gave it another shot, and here is my log. I did a merge, then tried changing models, tried the settings menu's "unload checkpoint" button, and then tried running Model Mixer itself again. Everything errors until restart.


venv "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
Faceswaplab : Use GPU requirements
Checking faceswaplab requirements
0.0023378000005322974
Launching Web UI with arguments: --disable-model-loading-ram-optimization --opt-sdp-attention --no-half-vae --no-half --opt-channelslast --skip-torch-cuda-test --skip-version-check --ckpt-dir e:\Stable Diffusion Checkpoints
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
[-] ADetailer initialized. version: 23.9.3, num models: 9
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
2023-10-02 18:25:24,702 - ControlNet - INFO - ControlNet v1.1.410
ControlNet preprocessor location: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-10-02 18:25:24,795 - ControlNet - INFO - ControlNet v1.1.410
[sd-webui-freeu] Controlnet support: *enabled*
Loading weights [9e4453ecaa] from e:\Stable Diffusion Checkpoints\SDXL\SDXL Grab-bag - Dalle Style (dlle artstyle is trigger).fp16.ckpt
Creating model from config: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
Model loaded in 7.1s (load weights from disk: 1.7s, create model: 0.4s, apply weights to model: 0.9s, apply channels_last: 0.2s, move model to device: 3.2s, calculate empty prompt: 0.6s).
checkpoint title =  SDXL\SDXL Grab-bag - Dalle Style (dlle artstyle is trigger).fp16.ckpt [9e4453ecaa]
checkpoint title =  SDXL\SDXL Grab-bag - Dalle Style (dlle artstyle is trigger).fp16.ckpt [9e4453ecaa]
loading settings: JSONDecodeError
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\ui_loadsave.py", line 30, in __init__
    self.ui_settings = self.read_from_file()
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\ui_loadsave.py", line 132, in read_from_file
    return json.load(file)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2931 column 5 (char 168585)
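Side note: the JSONDecodeError above looks unrelated to the meta-tensor problem; it just means the UI settings file that `ui_loadsave.py` reads (`ui-config.json` in the webui root by default) has a syntax error around line 2931, typically a trailing comma. A quick way to confirm, assuming that path:

```python
import json

# Re-raises the same JSONDecodeError with the exact line/column of the bad entry
with open("ui-config.json", encoding="utf-8") as f:
    json.load(f)
```

Fixing or deleting the offending entry (or removing the file so it regenerates) should silence that traceback.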

Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 17.9s (prepare environment: 2.9s, import torch: 1.6s, import gradio: 0.6s, setup paths: 0.5s, initialize shared: 0.2s, other imports: 0.5s, load scripts: 3.3s, create ui: 7.8s, gradio launch: 0.3s).
Loading model SDXL\sd_xl_base_1.0.safetensors [31e35c80fc] (2 out of 3)
Loading weights [31e35c80fc] from e:\Stable Diffusion Checkpoints\SDXL\sd_xl_base_1.0.safetensors
Creating model from config: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
Model loaded in 7.3s (load weights from disk: 0.6s, create model: 0.3s, apply weights to model: 2.5s, apply channels_last: 0.2s, load VAE: 0.1s, move model to device: 3.5s).
debugs =  ['elemental merge']
use_extra_elements =  True
config hash =  c684d775414c1cbe07481c327bbd3dd54c17b27e69747859120de23e112de43d
  - mm_use [True, True, False, False, False]
  - model_a SDXL\sd_xl_base_1.0.safetensors [31e35c80fc]
  - base_model None
  - max_models 5
  - models ['SDXL\\2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200.safetensors [59f0e744f8]', 'SDXL\\2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800.safetensors']
  - modes ['Sum', 'Sum']
  - usembws [[], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5]
  - adjust
  - use elemental [False, False]
  - elementals ['', '']
  - Parse elemental merge...
model_a = SDXL_sd_xl_base_1.0
Loading SDXL\sd_xl_base_1.0.safetensors [31e35c80fc] from loaded model...
Applying attention optimization: sdp... done.
isxl = True
compact_mode =  False
Loading model SDXL_2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200.safetensors...
mode = Sum, alpha = 0.5
Stage #1/3: 100%|█████████████████████████████████████████████████████████████████| 2516/2516 [00:08<00:00, 302.83it/s]
Check uninitialized #2/3: 100%|███████████████████████████████████████████████| 2516/2516 [00:00<00:00, 1258091.19it/s]
Loading model SDXL_2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800.safetensors...
Calculating sha256 for e:\Stable Diffusion Checkpoints\SDXL\2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800.safetensors: 8d4b1657106b15c45de307c93562ba361b8df34ffbb55880d8ce0f2de8af19bc
mode = Sum, alpha = 0.5
Stage #3/3: 100%|█████████████████████████████████████████████████████████████████| 2516/2516 [00:07<00:00, 338.06it/s]
Save unchanged weights #3/3: 0it [00:00, ?it/s]
Creating model from config: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
Model loaded in 7.4s (unload existing model: 0.1s, create model: 0.3s, apply weights to model: 2.4s, apply channels_last: 0.2s, load VAE: 1.2s, move model to device: 3.1s).
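Incidentally, the "Sum" mode with alpha 0.5 shown above is just a chained weighted average over the checkpoints' state dicts; the unload message further down in the log spells out the exact formula. A hypothetical sketch (not the extension's actual code; `state_dicts` maps key -> tensor):

```python
# Chained "Sum" merge as the log's formula describes:
#   out = (A*(1 - alpha_0) + B*alpha_0) * (1 - alpha_1) + C*alpha_1
def sum_merge(state_dicts, alphas):
    merged = dict(state_dicts[0])
    for sd, a in zip(state_dicts[1:], alphas):
        merged = {k: merged[k] * (1 - a) + sd[k] * a for k in merged}
    return merged
```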
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 16 images in a total of 16 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 41/41 [00:12<00:00,  3.17it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 41/41 [00:12<00:00,  3.25it/s]
 98%|████████████████████████████████████████████████████████████████████████████████  | 40/41 [00:12<00:00,  3.23it/s]
Total progress:  19%|███████████▉                                                    | 122/656 [00:40<02:58,  2.99it/s]
Reusing loaded model SDXL\SDXL Grab-bag - Dalle Style (dlle artstyle is trigger).fp16.ckpt [9e4453ecaa] to load SDXL\juggernautXL_version5.safetensors [70229e1d56]
Loading weights [70229e1d56] from e:\Stable Diffusion Checkpoints\SDXL\juggernautXL_version5.safetensors
Loading VAE weights specified in settings: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
Weights loaded in 8.0s (send model to cpu: 2.4s, load weights from disk: 0.6s, apply weights to model: 2.4s, load VAE: 0.1s, move model to device: 2.5s).
debugs =  ['elemental merge']
use_extra_elements =  True
config hash =  c684d775414c1cbe07481c327bbd3dd54c17b27e69747859120de23e112de43d
  - mm_use [True, True, False, False, False]
  - model_a SDXL\sd_xl_base_1.0.safetensors [31e35c80fc]
  - base_model None
  - max_models 5
  - models ['SDXL\\2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200.safetensors [59f0e744f8]', 'SDXL\\2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800.safetensors']
  - modes ['Sum', 'Sum']
  - usembws [[], []]
  - weights ['0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5']
  - alpha [0.5, 0.5]
  - adjust
  - use elemental [False, False]
  - elementals ['', '']
  - Parse elemental merge...
model_a = SDXL_sd_xl_base_1.0
Loading from file e:\Stable Diffusion Checkpoints\SDXL\sd_xl_base_1.0.safetensors...
isxl = True
compact_mode =  False
Loading model SDXL_2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200.safetensors...
mode = Sum, alpha = 0.5
Stage #1/3: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2515/2515 [00:08<00:00, 284.20it/s]
Check uninitialized #2/3: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2515/2515 [00:00<00:00, 1257891.08it/s]
Loading model SDXL_2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800...
Loading from file e:\Stable Diffusion Checkpoints\SDXL\2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800.safetensors...
mode = Sum, alpha = 0.5
Stage #3/3: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2515/2515 [00:07<00:00, 342.67it/s]
Save unchanged weights #3/3: 0it [00:00, ?it/s]
Creating model from config: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: sdp... done.
Model loaded in 5.8s (unload existing model: 0.1s, create model: 0.4s, apply weights to model: 1.1s, apply channels_last: 0.2s, load VAE: 1.2s, move model to device: 2.7s).
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 41/41 [00:12<00:00,  3.21it/s]
Total progress: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 41/41 [00:13<00:00,  3.03it/s]
Unloading model 4 over the limit of 3: (((SDXL_sd_xl_base_1.0) x (1 - alpha_0) + (SDXL_2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200) x alpha_0)) x (1 - alpha_1) + (SDXL_2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800) x alpha_1(0.5),(0.5).safetensors [c684d77541]
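That "Unloading model 4 over the limit of 3" line looks like the trigger. Once the number of cached checkpoints exceeds `sd_checkpoints_limit` (Settings → "Maximum number of checkpoints loaded at the same time"), the webui evicts the oldest one, and if I'm reading `modules/sd_models.py` in v1.6 right, eviction sends the model to the meta device:

```python
# modules/sd_models.py (webui v1.6, from memory -- check your copy)
def send_model_to_trash(m):
    m.to(device="meta")  # drops the weights; tensors keep shape only
    devices.torch_gc()
```

So if the evicted entry is still referenced as the active model — which seems to happen when the mixed model is current — every later `.to(cpu)` / `.to(cuda)` hits "Cannot copy out of meta tensor". Raising that checkpoint limit might be a usable workaround until the cache bookkeeping is fixed.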
changing setting sd_model_checkpoint to SDXL\sd_xl_base_1.0.safetensors [31e35c80fc]: NotImplementedError
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\options.py", line 140, in set
    option.onchange()
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\initialize_util.py", line 170, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 685, in reuse_model_from_already_loaded
    send_model_to_device(already_loaded)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 560, in send_model_to_device
    m.to(shared.device)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
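This first traceback is the cache-reuse path (`reuse_model_from_already_loaded` → `send_model_to_device`) trying to move a model whose weights are already gone. A hypothetical guard (names are mine, not webui API) that would detect the corrupted entry before the move:

```python
import torch

def has_meta_params(model: torch.nn.Module) -> bool:
    # True if any parameter lost its storage (meta device);
    # model.to(device) would raise NotImplementedError for such a model
    return any(p.device.type == "meta" for p in model.parameters())
```

Checking something like this before `send_model_to_device(already_loaded)` and falling back to a disk load would avoid the "errors until restart" loop.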

  0%|                                                                                                                                                                                                                | 0/41 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(ys8dpxqlgkqlsa5)', 'testing ', '', [], 41, 'DPM++ 2M Karras', 1, 1, 8, 1024, 1024, False, 0.2, 1.25, 'Lanczos', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001F57EBBDF60>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Meli/GPT2-Prompt', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001F2AA22AEC0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001F2DD5DA2C0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001F2DD5DB160>, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'None', 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'None', 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'None', 
1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'CodeFormer', 1, 1, 'None', 1, 1, ['After Upscaling/Before Restore Face'], 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, False, 0, 1, 0, 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, False, 'SDXL\\sd_xl_base_1.0.safetensors [31e35c80fc]', 'None', 5, '', True, True, False, False, False, 'SDXL\\2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200.safetensors [59f0e744f8]', 'SDXL\\2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800.safetensors', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, False, False, False, False, '', '', '', '', '', False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models_xl.py", line 37, in apply_model
        return self.model(x, t, cond)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 984, in forward
        emb = self.time_embed(t_emb)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
        input = module(input)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 429, in network_Linear_forward
        return originals.Linear_forward(self, input)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

---
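The RuntimeError above ("cpu and cuda:0") is the same corruption seen from another angle: the failed move left the network split across devices. A quick way to see the split from a debugger, with `model` standing in for `shared.sd_model.model.diffusion_model`:

```python
from collections import Counter

# A healthy model reports a single device; here you'd see a mix
print(Counter(str(p.device) for p in model.parameters()))
```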
Reusing loaded model (((SDXL_sd_xl_base_1.0) x (1 - alpha_0) + (SDXL_2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200) x alpha_0)) x (1 - alpha_1) + (SDXL_2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800) x alpha_1(0.5),(0.5).safetensors [c684d77541] to load SDXL\2023-10-01 - Dalle-3 set of 41 - (dlle artstyle) - b4-step00004000.safetensors [d5fb9379f5]
changing setting sd_model_checkpoint to SDXL\2023-10-01 - Dalle-3 set of 41 - (dlle artstyle) - b4-step00004000.safetensors [d5fb9379f5]: NotImplementedError
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\options.py", line 140, in set
    option.onchange()
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\initialize_util.py", line 170, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 738, in reload_model_weights
    send_model_to_cpu(sd_model)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
    m.to(devices.cpu)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!

changing setting sd_model_checkpoint to 2023-09-18 - Dave Matthews - (Model Mixer) - Good Merge.fp16 - SD1.5 -.ckpt: NotImplementedError
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\options.py", line 140, in set
    option.onchange()
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\initialize_util.py", line 170, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
    m.to(devices.cpu)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!

Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 783, in unload_model_weights
    model_data.sd_model.to(devices.cpu)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 783, in unload_model_weights
    model_data.sd_model.to(devices.cpu)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
debugs =  ['elemental merge']
use_extra_elements =  True
config hash =  c684d775414c1cbe07481c327bbd3dd54c17b27e69747859120de23e112de43d
  - use current mixed model c684d775414c1cbe07481c327bbd3dd54c17b27e69747859120de23e112de43d
*** Error completing request
*** Arguments: ('task(x5mmatu0s1muquc)', 'testing ', '', [], 41, 'DPM++ 2M Karras', 1, 1, 8, 1024, 1024, False, 0.2, 1.25, 'Lanczos', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001F4468BDAB0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Meli/GPT2-Prompt', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001F4468BE440>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001F59DD92E60>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001F59DD91AB0>, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'None', 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'None', 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'None', 
1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'CodeFormer', 1, 1, 'None', 1, 1, ['After Upscaling/Before Restore Face'], 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, False, 0, 1, 0, 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, True, 'SDXL\\sd_xl_base_1.0.safetensors [31e35c80fc]', 'None', 5, '', True, True, False, False, False, 'SDXL\\2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200.safetensors [59f0e744f8]', 'SDXL\\2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800.safetensors', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, False, False, False, False, '', '', '', '', '', False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 856, in process_images_inner
        p.setup_conds()
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 1309, in setup_conds
        super().setup_conds()
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 469, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 455, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\prompt_parser.py", line 189, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models_xl.py", line 31, in get_learned_conditioning
        c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 141, in forward
        emb_out = embedder(batch[embedder.input_key])
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_hijack_clip.py", line 349, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=self.wrapped.layer == "hidden")
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 734, in forward
        causal_attention_mask = _make_causal_mask(input_shape, hidden_states.dtype, device=hidden_states.device)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 684, in _make_causal_mask
        mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
    NotImplementedError: Could not run 'aten::_local_scalar_dense' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_local_scalar_dense' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

    CPU: registered at aten\src\ATen\RegisterCPU.cpp:31034 [kernel]
    CUDA: registered at aten\src\ATen\RegisterCUDA.cpp:43986 [kernel]
    BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
    Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:144 [backend fallback]
    FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:491 [backend fallback]
    Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:280 [backend fallback]
    Named: fallthrough registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:11 [kernel]
    Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
    Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
    ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
    ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:63 [backend fallback]
    AutogradOther: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradCPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradCUDA: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradHIP: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradXLA: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradMPS: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradIPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradXPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradHPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradVE: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradLazy: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradMeta: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradMTIA: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradPrivateUse1: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradPrivateUse2: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradPrivateUse3: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradNestedTensor: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    Tracer: registered at ..\torch\csrc\autograd\generated\TraceType_2.cpp:16726 [kernel]
    AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:487 [backend fallback]
    AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:354 [backend fallback]
    FuncTorchBatched: registered at ..\aten\src\ATen\functorch\BatchRulesDynamic.cpp:64 [kernel]
    FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
    Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1073 [backend fallback]
    VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
    FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:210 [backend fallback]
    PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:152 [backend fallback]
    FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:487 [backend fallback]
    PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:148 [backend fallback]


---
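And the `aten::_local_scalar_dense` variant is the text encoder hitting the same wall: meta tensors can flow through shape-only ops, so the CLIP forward pass runs until `_make_causal_mask` builds its mask on `hidden_states.device` (now `meta`) and tries to read a scalar out of a meta tensor, which has no kernel. Two lines reproduce it:

```python
import torch

torch.empty(1, device="meta").item()  # NotImplementedError: 'aten::_local_scalar_dense', Meta backend
```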
*** Error completing request
*** Arguments: ('task(8s0k6560p1hyhuo)', 'testing ', '', [], 41, 'DPM++ 2M Karras', 1, 1, 8, 1024, 1024, False, 0.2, 1.25, 'Lanczos', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001F2F8D9CFD0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Meli/GPT2-Prompt', '', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001F2F8D9EE60>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001F59DD93340>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001F59DD920B0>, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'None', 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'None', 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'None', 
1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, 'CodeFormer', 1, 1, 'None', 1, 1, ['After Upscaling/Before Restore Face'], 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, False, 0, 1, 0, 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, False, 'SDXL\\sd_xl_base_1.0.safetensors [31e35c80fc]', 'None', 5, '', True, True, False, False, False, 'SDXL\\2023-10-02 - Topnotch Artstyle - 26img (half 1024 resized) - b4-step00003200.safetensors [59f0e744f8]', 'SDXL\\2023-09-22 - Topnotch Artstyle - 42img - b3-step00001800.safetensors', 'None', 'None', 'None', 'Sum', 'Sum', 'Sum', 'Sum', 'Sum', 0.5, 0.5, 0.5, 0.5, 0.5, True, True, True, True, True, [], [], [], [], [], [], [], [], [], [], '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', '0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', False, False, False, False, False, '', '', '', '', '', False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 856, in process_images_inner
        p.setup_conds()
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 1309, in setup_conds
        super().setup_conds()
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 469, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\processing.py", line 455, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\prompt_parser.py", line 189, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models_xl.py", line 31, in get_learned_conditioning
        c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 141, in forward
        emb_out = embedder(batch[embedder.input_key])
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_hijack_clip.py", line 349, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=self.wrapped.layer == "hidden")
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 734, in forward
        causal_attention_mask = _make_causal_mask(input_shape, hidden_states.dtype, device=hidden_states.device)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 684, in _make_causal_mask
        mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
    NotImplementedError: Could not run 'aten::_local_scalar_dense' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_local_scalar_dense' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

    CPU: registered at aten\src\ATen\RegisterCPU.cpp:31034 [kernel]
    CUDA: registered at aten\src\ATen\RegisterCUDA.cpp:43986 [kernel]
    BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
    Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:144 [backend fallback]
    FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:491 [backend fallback]
    Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:280 [backend fallback]
    Named: fallthrough registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:11 [kernel]
    Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
    Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
    ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
    ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:63 [backend fallback]
    AutogradOther: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradCPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradCUDA: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradHIP: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradXLA: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradMPS: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradIPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradXPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradHPU: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradVE: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradLazy: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradMeta: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradMTIA: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradPrivateUse1: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradPrivateUse2: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradPrivateUse3: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    AutogradNestedTensor: registered at ..\torch\csrc\autograd\generated\VariableType_2.cpp:17472 [autograd kernel]
    Tracer: registered at ..\torch\csrc\autograd\generated\TraceType_2.cpp:16726 [kernel]
    AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:487 [backend fallback]
    AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:354 [backend fallback]
    FuncTorchBatched: registered at ..\aten\src\ATen\functorch\BatchRulesDynamic.cpp:64 [kernel]
    FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
    Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1073 [backend fallback]
    VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
    FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:210 [backend fallback]
    PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:152 [backend fallback]
    FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:487 [backend fallback]
    PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:148 [backend fallback]


---
changing setting sd_model_checkpoint to SDXL\2023-10-01 - Dalle-3 set of 41 - (dlle artstyle) - b4-step00002800.safetensors [cd31b2a25b]: NotImplementedError
Traceback (most recent call last):
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\options.py", line 140, in set
    option.onchange()
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\initialize_util.py", line 170, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 732, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 681, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 544, in send_model_to_cpu
    m.to(devices.cpu)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!



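For context, this NotImplementedError is what PyTorch raises whenever code tries to move a module whose parameters live on the "meta" device, i.e. shape-only tensors with no backing storage. A minimal reproduction with stock PyTorch, independent of webui:

import torch

# Parameters created on the meta device have shapes and dtypes but no data.
net = torch.nn.Linear(4, 4, device="meta")

# Moving them to a real device would require copying data that does not exist:
net.to("cpu")  # NotImplementedError: Cannot copy out of meta tensor; no data!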

yoyoinneverland commented Oct 6, 2023

Think I fixed this with another issue. Basically I stopped webui from carrying data over from one model to the next.
AUTOMATIC1111/stable-diffusion-webui#13516
In my case it's an SD 1.5 problem, so my fix was editing line 673 of sd_models.py to read

if len(model_data.loaded_sd_models) > 0: 

This will empty the model container, and then I deleted

if shared.opts.sd_checkpoints_keep_in_cpu:
    send_model_to_cpu(sd_model)
    timer.record("send model to cpu")

from the same file. It's probably coded in a very similar way for SDXL.

You might only need to alter the second bit, maybe.
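For anyone applying this by hand, here is a rough sketch of what the edited region of reuse_model_from_already_loaded() in modules/sd_models.py ends up looking like, pieced together from the description above and the diff wkpark posts later in this thread. Line numbers and surrounding code differ between webui builds, so treat it as illustrative rather than exact:

# In modules/sd_models.py, inside reuse_model_from_already_loaded()
# (sketch only; adapt to your build).

# 1) Edited condition - it originally compared against
#    shared.opts.sd_checkpoints_limit; with "> 0" cached checkpoints are
#    evicted rather than reused:
if len(model_data.loaded_sd_models) > 0:
    loaded_model = model_data.loaded_sd_models.pop()
    send_model_to_trash(loaded_model)
    timer.record("send model to trash")

# 2) Deleted block - it could call send_model_to_cpu() on a model whose
#    weights had already been moved to the meta device, which appears to be
#    what trips "Cannot copy out of meta tensor; no data!":
#
#    if shared.opts.sd_checkpoints_keep_in_cpu:
#        send_model_to_cpu(sd_model)
#        timer.record("send model to cpu")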

CCpt5 commented Oct 6, 2023

Think I fixed this with another issue. […]

Wow, thanks! I'm not a coder, but I'll give it a try and use ChatGPT-4 for backup.

Appreciate you commenting!!

yoyoinneverland commented

So with the suggestion above I was able to get beyond this. I used the new "Advanced Data Analysis" plugin for ChatGPT-4 and uploaded this full thread (I tried an HTML download and a PDF print, but ultimately copy/pasting all the text into a .txt file and uploading that let it read everything). I also gave it the .py files for the extension and the sd_models.py mentioned above.

It gave me a few versions of sd_models.py and eventually got it going! After the first fix the model no longer got stuck; the error that remained, once the initial warning was disabled, was:

*** Error running before_process: D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py
    Traceback (most recent call last):
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\scripts.py", line 615, in before_process
        script.before_process(p, *script_args)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 1706, in before_process
        models['model_a'] = load_state_dict(checkpoint_info)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 1673, in load_state_dict
        sd_models.send_model_to_cpu(shared.sd_model)
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 562, in send_model_to_cpu
        if not isinstance(m, torch.nn.Module) or any(p.is_meta() for p in m.parameters()):
      File "D:\Stable-Diffusion-Webui-Dev\sdxl\stable-diffusion-webui\modules\sd_models.py", line 562, in <genexpr>
        if not isinstance(m, torch.nn.Module) or any(p.is_meta() for p in m.parameters()):
    TypeError: 'bool' object is not callable
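The TypeError at the end is easy to explain: torch.Tensor.is_meta is a plain boolean attribute, not a method, so the generated check p.is_meta() ends up calling a bool. A minimal demonstration with stock PyTorch:

import torch

t = torch.empty(2, 2, device="meta")
print(t.is_meta)  # True - is_meta is a bool attribute, not a method
t.is_meta()       # TypeError: 'bool' object is not callable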

So this is where you're stuck at, now?

yoyoinneverland commented Oct 7, 2023

So this is where you're stuck at, now?

No, I continued with ChatGPT-4 (with the file-upload analyzer) and it tweaked that sd_models.py file a few more times. I think it's working now without any problems. I used it for 30 minutes switching models, merge combinations, etc. and didn't get any errors. Very happy!

I'd upload the modified file, but just yesterday I pulled the latest development branch of A1111, and per ChatGPT there were differences (from 1.6 to the current dev build). I uploaded both versions to it, and it compared them and worked from the conversation in this thread. That said, if the developer wants any caps/logs of that chat, let me know and I'll post whatever.

Thanks again for the tip on where the error was coming from!

I'm glad you got it working then. Have fun.

PhreakHeaven commented

Think I fixed this with another issue. […]

This solution worked for me; thanks so much!

wkpark (Owner) commented Oct 10, 2023

AUTOMATIC1111/stable-diffusion-webui#13582

There are two issues.

One is the issue already mentioned in this thread: the keep-in-cpu block was nested inside the eviction loop, so send_model_to_cpu() could be called on a model that had just been sent to trash (moved to the meta device).

diff --git a/modules/sd_models.py b/modules/sd_models.py
index 0f1fb265..e466ef95 100644
--- a/modules/sd_models.py
+++ b/modules/sd_models.py
@@ -758,12 +758,13 @@ def reuse_model_from_already_loaded(sd_model, checkpoint_info, timer):
             send_model_to_trash(loaded_model)
             timer.record("send model to trash")

-        if shared.opts.sd_checkpoints_keep_in_cpu:
-            send_model_to_cpu(sd_model)
-            timer.record("send model to cpu")
+    if sd_model and shared.opts.sd_checkpoints_keep_in_cpu:
+        send_model_to_cpu(sd_model)
+        timer.record("send model to cpu")

     if already_loaded is not None:
         send_model_to_device(already_loaded)

The other issue is that, when an already-loaded model is reused, the hijack() and apply_unet() calls were missing before the model was returned:

diff --git a/modules/sd_models.py b/modules/sd_models.py
index d2ab060e..0f1fb265 100644
--- a/modules/sd_models.py
+++ b/modules/sd_models.py
@@ -815,6 +815,8 @@ def reload_model_weights(sd_model=None, info=None):

     sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
     if sd_model is not None and sd_model.sd_checkpoint_info.filename == checkpoint_info.filename:
+        sd_hijack.model_hijack.hijack(sd_model)
+        sd_unet.apply_unet()
         return sd_model

AUTOMATIC1111/stable-diffusion-webui#13582

yoyoinneverland commented

There are two issues. […]

Aha! Excellent work.

wkpark (Owner) commented Oct 12, 2023

The main cause of the NotImplementedError: Cannot copy out of meta tensor; no data! error has been fixed by PR #29.

wkpark (Owner) commented Oct 15, 2023

AUTOMATIC1111/stable-diffusion-webui#13582
The PR was closed, but the bugs have been fixed - the main cause of the bug was fixed upstream by the author, @AUTOMATIC1111.
Thanks @AUTOMATIC1111, and thanks to everyone for reporting bugs and being patient!
