Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm) #54

Open
CmoneBK opened this issue Dec 22, 2024 · 20 comments

Comments

CmoneBK commented Dec 22, 2024

I am getting the error below.
Everything seems to be up to date, and the problem appears to be linked to ComfyUI-PuLID-Flux-Enhanced.

Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

It was also reported, unsolved, here: comfyanonymous/ComfyUI#5763
and here: comfyanonymous/ComfyUI#5862

A similar issue was already reported in #35, but there are differences:
the message names mat2 rather than weight, and
wrapper_CUDA_mm rather than wrapper_CUDA__native_layer_norm.
I tried the solutions from that thread, but they didn't work.
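For context, this class of error means one operand of a matrix multiply (here "mat2", i.e. the weight of a Linear layer) is on the CPU while the other is on cuda:0. A minimal sketch of the failure mode and the generic workaround, assuming (this is an assumption, not confirmed by the traceback alone) that the PuLID cross-attention blocks were left or offloaded to the CPU while the Flux model runs on the GPU:

```python
import torch
import torch.nn as nn

# Pick the GPU if available, otherwise fall back to CPU so the sketch
# still runs on machines without CUDA.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

proj = nn.Linear(64, 64)                      # imagine this block stayed on the CPU
latents = torch.randn(1, 8, 64, device=device)  # while the activations are on cuda:0

# Generic workaround: move the module onto the input's device before calling it.
# (Calling proj(latents) without this raises the "same device" RuntimeError
# whenever latents is on cuda:0 and proj's weight is on the CPU.)
proj = proj.to(latents.device)
out = proj(latents)
assert out.device == latents.device
```

The same pattern applies whichever side is misplaced: either move the module to the tensor's device or the tensor to the module's device with `.to(...)`.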

Error Report

ComfyUI Error Report

Error Details

  • Node ID: 589
  • Node Type: SamplerCustomAdvanced
  • Exception Type: RuntimeError
  • Exception Message: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

Stack Trace

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "D:\Automatic1111 2\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 633, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 100, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\k_diffusion\sampling.py", line 1058, in sample_deis
    denoised = model(x_cur, t_cur * s_in, **extra_args)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 175, in sampling_function
    out = orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)

  File "D:\Automatic1111 2\ComfyUI\comfy\model_base.py", line 145, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\ldm\flux\model.py", line 184, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py", line 136, in forward_orig
    img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\encoders_flux.py", line 57, in forward
    q = self.to_q(latents)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
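The last frames point at `q = self.to_q(latents)` in encoders_flux.py, so the CPU-resident tensor is the `to_q` weight of a `pulid_ca` block. A hedged, self-contained sketch of a defensive fix at that call site (a hypothetical patch for illustration, not the maintainers' actual code; the class and dimensions are simplified):

```python
import torch
import torch.nn as nn

class PerceiverAttentionSketch(nn.Module):
    """Simplified stand-in for the PuLID cross-attention block in encoders_flux.py."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # If ComfyUI partially offloaded this block ("loaded partially" in the
        # logs), its weights may sit on the CPU while latents arrive on cuda:0.
        # Aligning the block with the incoming tensor avoids the mat2 mismatch.
        if self.to_q.weight.device != latents.device:
            self.to_q.to(latents.device)
        return self.to_q(latents)

attn = PerceiverAttentionSketch()
x = torch.randn(2, 8, 64)
q = attn(x)
```

Note that the "loaded partially" lines in the log below suggest low free VRAM (about 1.2 GB free of 12 GB at report time) is forcing partial model offload, which is a plausible trigger for this mismatch.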

System Information

  • ComfyUI Version: v0.3.4
  • Arguments: main.py
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.1.1+cu121

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3080 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 12884377600
    • VRAM Free: 1224219096
    • Torch VRAM Total: 10066329600
    • Torch VRAM Free: 66308568

Logs

2024-12-22T11:00:50.868237 -  [DONE]2024-12-22T11:00:50.869230 - 
2024-12-22T11:00:50.873737 - Install model '4x_foolhardy_Remacri' into 'D:\Automatic1111 2\ComfyUI\models\upscale_models\4x_foolhardy_Remacri.pth'2024-12-22T11:00:50.873737 - 
2024-12-22T11:00:52.486759 - Downloading https://cdn-lfs.hf-mirror.com/repos/ec/ee/eceeed2a0e8a9141e6d7535f06f502877d9c21e33ed536ec902a38f876756416/e1a73bd89c2da1ae494774746398689048b5a892bd9653e146713f9df8bca86a?response-content-disposition=inline%3B+filename*%3DUTF-8%27%274x_foolhardy_Remacri.pth%3B+filename%3D%224x_foolhardy_Remacri.pth%22%3B&Expires=1735120190&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczNTEyMDE5MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5oZi5jby9yZXBvcy9lYy9lZS9lY2VlZWQyYTBlOGE5MTQxZTZkNzUzNWYwNmY1MDI4NzdkOWMyMWUzM2VkNTM2ZWM5MDJhMzhmODc2NzU2NDE2L2UxYTczYmQ4OWMyZGExYWU0OTQ3NzQ3NDYzOTg2ODkwNDhiNWE4OTJiZDk2NTNlMTQ2NzEzZjlkZjhiY2E4NmE%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qIn1dfQ__&Signature=eUnPOUsT8WVf67iU-Glhy3MCMg%7E7mkxf7aG6UWe1Watz%7EItBZ8KTyrlktTMuqMvryYe31actOFRmTUgsu0C75CwWSdQhDIrJJEJg0kz3UC6DlBHW8%7E9wCbwwbwxk-Mcdrm0vbJSIsaN9TrH1MDCooEVtareA1TzBuksX1UlDVQZ-opHwXHKxwMQ%7E1hIWH6WNnUusd%7Eo3Ohcmdc%7EKlyw95TUDq4FAtqJ3l8IhtXjc3ggku5oCBuqFhX5xuO9whjBw5lS4ZoUOkaj1TYIHAUFVNwp44UzD-rnbrmpXZHpE4HCn2gvFElAcDVOAAvVr0NYMecd0lejNB5hI6yEgXpR1%7EA__&Key-Pair-Id=K3RPWS32NSSJCE to D:\Automatic1111 2\ComfyUI\models\upscale_models\4x_foolhardy_Remacri.pth2024-12-22T11:00:52.486759 - 
2024-12-22T11:00:55.235550 - 
 97%|████████████████████████████████████████████████████████████████████▏ | 65241088/67025055 [00:02<00:00, 24901625.10it/s]2024-12-22T11:00:55.303664 - 
100%|██████████████████████████████████████████████████████████████████████| 67025055/67025055 [00:02<00:00, 24849563.19it/s]2024-12-22T11:00:55.303664 - 
2024-12-22T11:01:01.000597 - got prompt
2024-12-22T11:01:01.011075 - Failed to validate prompt for output 358:
2024-12-22T11:01:01.011075 - * UNETLoader 762:
2024-12-22T11:01:01.011075 -   - Value not in list: unet_name: 'flux1-dev.safetensors' not in ['flux1-dev-fp8.safetensors', 'flux1-dev.sft', 'flux1-schnell-fp8.safetensors', 'flux\\flux1-dev-fp8-e4m3fn.safetensors']
2024-12-22T11:01:01.011075 - * ControlNetLoader 647:
2024-12-22T11:01:01.011075 -   - Value not in list: control_net_name: 'Flux-Controlnet-Union.safetensors' not in (list of length 70)
2024-12-22T11:01:01.011075 - Output will be ignored
2024-12-22T11:01:01.011075 - Failed to validate prompt for output 756:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 140:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 258:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 84:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 299:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 179:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 138:
2024-12-22T11:01:01.013075 - Output will be ignored
2024-12-22T11:01:01.013075 - Failed to validate prompt for output 354:
2024-12-22T11:01:01.013075 - Output will be ignored
2024-12-22T11:01:01.013075 - Failed to validate prompt for output 300:
2024-12-22T11:01:01.013075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 301:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 758:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 757:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 146:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 141:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 346:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 584:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 145:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 755:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.015578 - Failed to validate prompt for output 637:
2024-12-22T11:01:01.015578 - Output will be ignored
2024-12-22T11:01:01.015578 - Failed to validate prompt for output 147:
2024-12-22T11:01:01.015578 - Output will be ignored
2024-12-22T11:01:01.015578 - Failed to validate prompt for output 440:
2024-12-22T11:01:01.016582 - Output will be ignored
2024-12-22T11:01:01.016582 - Failed to validate prompt for output 447:
2024-12-22T11:01:01.016582 - Output will be ignored
2024-12-22T11:01:01.016582 - Failed to validate prompt for output 433:
2024-12-22T11:01:01.016582 - Output will be ignored
2024-12-22T11:01:01.017255 - Failed to validate prompt for output 356:
2024-12-22T11:01:01.017255 - Output will be ignored
2024-12-22T11:01:01.017255 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-12-22T11:01:11.689591 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json2024-12-22T11:01:11.690598 - 2024-12-22T11:01:11.741164 -  [DONE]2024-12-22T11:01:11.741164 - 
2024-12-22T11:01:11.757675 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ControlNet-LLLite-ComfyUI/models2024-12-22T11:01:11.757675 - 
2024-12-22T11:01:11.759676 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/pfg-ComfyUI/models2024-12-22T11:01:11.761185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/pfg-ComfyUI/models2024-12-22T11:01:11.761185 - 
2024-12-22T11:01:11.761185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_FaceAnalysis/dlib2024-12-22T11:01:11.761185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/pfg-ComfyUI/models2024-12-22T11:01:11.761185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_FaceAnalysis/dlib2024-12-22T11:01:11.762185 - 
2024-12-22T11:01:11.762185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI-YoloWorld-EfficientSAM2024-12-22T11:01:11.762185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI-YoloWorld-EfficientSAM2024-12-22T11:01:11.762185 - 
2024-12-22T11:01:11.762185 - 
2024-12-22T11:01:11.762185 - 
2024-12-22T11:01:11.763185 - 
2024-12-22T11:01:11.763185 - 
2024-12-22T11:01:11.763185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_ID_Animator/models/animatediff_models2024-12-22T11:01:11.763185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_ID_Animator/models/image_encoder2024-12-22T11:01:11.763185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_CustomNet/pretrain2024-12-22T11:01:11.763185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_ID_Animator/models2024-12-22T11:01:11.764184 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI-ToonCrafter/ToonCrafter/checkpoints/tooncrafter_512_interp_v12024-12-22T11:01:11.764184 - 
2024-12-22T11:01:11.764184 - 
2024-12-22T11:01:11.764184 - 
2024-12-22T11:01:11.764184 - 
2024-12-22T11:01:11.765184 - 
2024-12-22T11:01:11.766184 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/comfyui-SegGPT2024-12-22T11:01:11.766184 - 
2024-12-22T11:02:59.785023 - got prompt
2024-12-22T11:02:59.795530 - Failed to validate prompt for output 358:
2024-12-22T11:02:59.795530 - * UNETLoader 762:
2024-12-22T11:02:59.795530 -   - Value not in list: unet_name: 'flux1-dev.safetensors' not in ['flux1-dev-fp8.safetensors', 'flux1-dev.sft', 'flux1-schnell-fp8.safetensors', 'flux\\flux1-dev-fp8-e4m3fn.safetensors']
2024-12-22T11:02:59.795530 - * BasicGuider 618:
2024-12-22T11:02:59.795530 -   - Required input is missing: model
2024-12-22T11:02:59.795530 - Output will be ignored
2024-12-22T11:02:59.796528 - Failed to validate prompt for output 756:
2024-12-22T11:02:59.796528 - Output will be ignored
2024-12-22T11:02:59.796528 - Failed to validate prompt for output 140:
2024-12-22T11:02:59.796528 - Output will be ignored
2024-12-22T11:02:59.796528 - Failed to validate prompt for output 258:
2024-12-22T11:02:59.796528 - Output will be ignored
2024-12-22T11:02:59.796528 - Failed to validate prompt for output 84:
2024-12-22T11:02:59.796528 - Output will be ignored
2024-12-22T11:02:59.796528 - Failed to validate prompt for output 299:
2024-12-22T11:02:59.796528 - Output will be ignored
2024-12-22T11:02:59.797528 - Failed to validate prompt for output 179:
2024-12-22T11:02:59.797528 - Output will be ignored
2024-12-22T11:02:59.797528 - Failed to validate prompt for output 138:
2024-12-22T11:02:59.797528 - Output will be ignored
2024-12-22T11:02:59.798528 - Failed to validate prompt for output 354:
2024-12-22T11:02:59.799528 - Output will be ignored
2024-12-22T11:02:59.799528 - Failed to validate prompt for output 300:
2024-12-22T11:02:59.799528 - Output will be ignored
2024-12-22T11:02:59.799528 - Failed to validate prompt for output 301:
2024-12-22T11:02:59.801542 - Output will be ignored
2024-12-22T11:02:59.801542 - Failed to validate prompt for output 758:
2024-12-22T11:02:59.801542 - Output will be ignored
2024-12-22T11:02:59.801542 - Failed to validate prompt for output 757:
2024-12-22T11:02:59.801542 - Output will be ignored
2024-12-22T11:02:59.801542 - Failed to validate prompt for output 146:
2024-12-22T11:02:59.801542 - Output will be ignored
2024-12-22T11:02:59.802542 - Failed to validate prompt for output 141:
2024-12-22T11:02:59.802542 - Output will be ignored
2024-12-22T11:02:59.802542 - Failed to validate prompt for output 346:
2024-12-22T11:02:59.802542 - Output will be ignored
2024-12-22T11:02:59.802542 - Failed to validate prompt for output 584:
2024-12-22T11:02:59.802542 - Output will be ignored
2024-12-22T11:02:59.802542 - Failed to validate prompt for output 145:
2024-12-22T11:02:59.802542 - Output will be ignored
2024-12-22T11:02:59.802542 - Failed to validate prompt for output 755:
2024-12-22T11:02:59.802542 - Output will be ignored
2024-12-22T11:02:59.803542 - Failed to validate prompt for output 637:
2024-12-22T11:02:59.803542 - Output will be ignored
2024-12-22T11:02:59.803542 - Failed to validate prompt for output 147:
2024-12-22T11:02:59.803542 - Output will be ignored
2024-12-22T11:02:59.803542 - Failed to validate prompt for output 440:
2024-12-22T11:02:59.803542 - Output will be ignored
2024-12-22T11:02:59.803542 - Failed to validate prompt for output 447:
2024-12-22T11:02:59.803542 - Output will be ignored
2024-12-22T11:02:59.804542 - Failed to validate prompt for output 433:
2024-12-22T11:02:59.804542 - Output will be ignored
2024-12-22T11:02:59.804542 - Failed to validate prompt for output 356:
2024-12-22T11:02:59.804542 - Output will be ignored
2024-12-22T11:02:59.804542 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-12-22T11:04:01.203325 - got prompt
2024-12-22T11:04:01.212751 - Failed to validate prompt for output 358:
2024-12-22T11:04:01.212751 - * BasicGuider 618:
2024-12-22T11:04:01.213752 -   - Required input is missing: model
2024-12-22T11:04:01.213752 - Output will be ignored
2024-12-22T11:04:01.213752 - Failed to validate prompt for output 756:
2024-12-22T11:04:01.213752 - Output will be ignored
2024-12-22T11:04:01.213752 - Failed to validate prompt for output 140:
2024-12-22T11:04:01.213752 - Output will be ignored
2024-12-22T11:04:01.214751 - Failed to validate prompt for output 258:
2024-12-22T11:04:01.214751 - Output will be ignored
2024-12-22T11:04:01.214751 - Failed to validate prompt for output 84:
2024-12-22T11:04:01.214751 - Output will be ignored
2024-12-22T11:04:01.214751 - Failed to validate prompt for output 299:
2024-12-22T11:04:01.214751 - Output will be ignored
2024-12-22T11:04:01.214751 - Failed to validate prompt for output 179:
2024-12-22T11:04:01.214751 - Output will be ignored
2024-12-22T11:04:01.214751 - Failed to validate prompt for output 138:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 354:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 300:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 301:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 758:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 757:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 146:
2024-12-22T11:04:01.216750 - Output will be ignored
2024-12-22T11:04:01.216750 - Failed to validate prompt for output 141:
2024-12-22T11:04:01.216750 - Output will be ignored
2024-12-22T11:04:01.216750 - Failed to validate prompt for output 346:
2024-12-22T11:04:01.216750 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 584:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 145:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 755:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 637:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 147:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 440:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.219254 - Failed to validate prompt for output 447:
2024-12-22T11:04:01.219254 - Output will be ignored
2024-12-22T11:04:01.219254 - Failed to validate prompt for output 433:
2024-12-22T11:04:01.219254 - Output will be ignored
2024-12-22T11:04:01.219254 - Failed to validate prompt for output 356:
2024-12-22T11:04:01.219254 - Output will be ignored
2024-12-22T11:04:01.219254 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-12-22T11:04:15.735177 - got prompt
2024-12-22T11:04:15.744696 - Failed to validate prompt for output 358:
2024-12-22T11:04:15.745695 - * BasicGuider 618:
2024-12-22T11:04:15.745695 -   - Required input is missing: model
2024-12-22T11:04:15.745695 - Output will be ignored
2024-12-22T11:04:15.745695 - Failed to validate prompt for output 756:
2024-12-22T11:04:15.745695 - Output will be ignored
2024-12-22T11:04:15.746696 - Failed to validate prompt for output 140:
2024-12-22T11:04:15.746696 - Output will be ignored
2024-12-22T11:04:15.746696 - Failed to validate prompt for output 258:
2024-12-22T11:04:15.746696 - Output will be ignored
2024-12-22T11:04:15.746696 - Failed to validate prompt for output 84:
2024-12-22T11:04:15.746696 - Output will be ignored
2024-12-22T11:04:15.747694 - Failed to validate prompt for output 299:
2024-12-22T11:04:15.747694 - Output will be ignored
2024-12-22T11:04:15.747694 - Failed to validate prompt for output 179:
2024-12-22T11:04:15.747694 - Output will be ignored
2024-12-22T11:04:15.747694 - Failed to validate prompt for output 138:
2024-12-22T11:04:15.747694 - Output will be ignored
2024-12-22T11:04:15.748694 - Failed to validate prompt for output 354:
2024-12-22T11:04:15.748694 - Output will be ignored
2024-12-22T11:04:15.748694 - Failed to validate prompt for output 300:
2024-12-22T11:04:15.748694 - Output will be ignored
2024-12-22T11:04:15.748694 - Failed to validate prompt for output 301:
2024-12-22T11:04:15.748694 - Output will be ignored
2024-12-22T11:04:15.748694 - Failed to validate prompt for output 758:
2024-12-22T11:04:15.748694 - Output will be ignored
2024-12-22T11:04:15.749698 - Failed to validate prompt for output 757:
2024-12-22T11:04:15.749698 - Output will be ignored
2024-12-22T11:04:15.749698 - Failed to validate prompt for output 146:
2024-12-22T11:04:15.749698 - Output will be ignored
2024-12-22T11:04:15.749698 - Failed to validate prompt for output 141:
2024-12-22T11:04:15.749698 - Output will be ignored
2024-12-22T11:04:15.749698 - Failed to validate prompt for output 346:
2024-12-22T11:04:15.749698 - Output will be ignored
2024-12-22T11:04:15.749698 - Failed to validate prompt for output 584:
2024-12-22T11:04:15.749698 - Output will be ignored
2024-12-22T11:04:15.750894 - Failed to validate prompt for output 145:
2024-12-22T11:04:15.750894 - Output will be ignored
2024-12-22T11:04:15.750894 - Failed to validate prompt for output 755:
2024-12-22T11:04:15.750894 - Output will be ignored
2024-12-22T11:04:15.750894 - Failed to validate prompt for output 637:
2024-12-22T11:04:15.750894 - Output will be ignored
2024-12-22T11:04:15.750894 - Failed to validate prompt for output 147:
2024-12-22T11:04:15.750894 - Output will be ignored
2024-12-22T11:04:15.751893 - Failed to validate prompt for output 440:
2024-12-22T11:04:15.751893 - Output will be ignored
2024-12-22T11:04:15.751893 - Failed to validate prompt for output 447:
2024-12-22T11:04:15.751893 - Output will be ignored
2024-12-22T11:04:15.752894 - Failed to validate prompt for output 433:
2024-12-22T11:04:15.752894 - Output will be ignored
2024-12-22T11:04:15.752894 - Failed to validate prompt for output 356:
2024-12-22T11:04:15.752894 - Output will be ignored
2024-12-22T11:04:15.752894 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-12-22T11:04:29.666931 - got prompt
2024-12-22T11:04:32.346987 - Requested to load FluxClipModel_
2024-12-22T11:04:32.346987 - Loading 1 new model
2024-12-22T11:04:44.268525 - loaded completely 0.0 4777.53759765625 True
2024-12-22T11:04:44.739906 - Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
2024-12-22T11:04:47.510680 - Loading PuLID-Flux model.
2024-12-22T11:04:54.507511 - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-12-22T11:04:54.514762 - model_type FLUX
2024-12-22T11:07:38.488513 - C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4

2024-12-22T11:07:40.603622 - Requested to load Flux
2024-12-22T11:07:40.603622 - Loading 1 new model
2024-12-22T11:07:45.251781 - loaded partially 4364.382933044433 4361.963928222656 0
2024-12-22T11:07:45.322728 - 
  0%|                                                                                                 | 0/25 [00:00<?, ?it/s]2024-12-22T11:07:45.353635 - Requested to load AutoencodingEngine
2024-12-22T11:07:45.353635 - Loading 1 new model
2024-12-22T11:07:47.546526 - loaded completely 0.0 159.87335777282715 True
2024-12-22T11:07:47.881637 - Requested to load Flux
2024-12-22T11:07:47.881637 - Loading 1 new model
2024-12-22T11:07:49.486162 - loaded partially 5757.105179595947 5757.078186035156 0
2024-12-22T11:08:36.390379 - 
  0%|                                                                                                 | 0/25 [00:51<?, ?it/s]2024-12-22T11:08:36.390379 - 
2024-12-22T11:08:36.390379 - Processing interrupted
2024-12-22T11:08:36.396887 - Prompt executed in 246.71 seconds
2024-12-22T11:10:26.555383 - got prompt
2024-12-22T11:10:27.047238 - Requested to load Flux
2024-12-22T11:10:27.047238 - Loading 1 new model
2024-12-22T11:10:27.101210 - loaded partially 5821.078186035156 5820.086975097656 0
2024-12-22T11:10:27.120301 - 
  0%|                                                                                                 | 0/25 [00:00<?, ?it/s]2024-12-22T11:10:27.166668 - Requested to load AutoencodingEngine
2024-12-22T11:10:27.166668 - Loading 1 new model
2024-12-22T11:10:45.757388 - loaded completely 0.0 159.87335777282715 True
2024-12-22T11:10:49.102077 - loaded completely 8385.545609283447 6297.982421875 True
2024-12-22T11:10:49.647333 - loaded partially 2340.0543983459474 2336.3047485351562 0
2024-12-22T11:17:27.993365 - 
 40%|███████████████████████████████████▏                                                    | 10/25 [07:00<08:51, 35.45s/it]2024-12-22T11:17:31.963662 - 
 40%|███████████████████████████████████▏                                                    | 10/25 [07:04<10:37, 42.48s/it]2024-12-22T11:17:31.963662 - 
2024-12-22T11:17:31.976176 - !!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
2024-12-22T11:17:31.980417 - Traceback (most recent call last):
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Automatic1111 2\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 633, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 100, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\k_diffusion\sampling.py", line 1058, in sample_deis
    denoised = model(x_cur, t_cur * s_in, **extra_args)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 175, in sampling_function
    out = orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\Automatic1111 2\ComfyUI\comfy\model_base.py", line 145, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\ldm\flux\model.py", line 184, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py", line 136, in forward_orig
    img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\encoders_flux.py", line 57, in forward
    q = self.to_q(latents)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

2024-12-22T11:17:31.981974 - Prompt executed in 425.07 seconds

Workflow
https://openart.ai/workflows/cat_untimely_42/flux-consistent-charactersinput-image/2uYrZP7Lq7A15lyXL2op

@kubilaykilinc

Same problem

@broken-rotor
Contributor

broken-rotor commented Dec 22, 2024

@CmoneBK are you sure you have the patch applied? From your stack trace:

File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py", line 136, in forward_orig
img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)

But the fix changes that line to:
img = img + node_data['weight'] * self.pulid_ca[ca_idx].to(device)(node_data['embedding'], img)

(the .to(device) is the critical bit, though you'll need the whole patch for device to be defined there)
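To illustrate why the `.to(device)` matters, here is a minimal, hypothetical sketch of the same pattern (the names `to_q` and `embedding` mirror the patch, but none of this is the actual ComfyUI code): the attention module's weights start on the CPU, and they must be moved to the device of the incoming activations before the matmul, or `F.linear` raises exactly this "two devices" RuntimeError.

```python
import torch
from torch import nn

class TinyCA(nn.Module):
    """Toy stand-in for PerceiverAttentionCA: one linear projection."""
    def __init__(self, dim=8):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)

    def forward(self, latents, img):
        # F.linear fails here if self.to_q.weight and latents
        # live on different devices (e.g. cpu vs cuda:0).
        return img + self.to_q(latents)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
ca = TinyCA()  # parameters are created on the CPU
img = torch.randn(1, 4, 8, device=device)
embedding = torch.randn(1, 4, 8, device=device)

# The critical bit: move the module to the inputs' device before calling it.
out = ca.to(device)(embedding, img)
```

On a CPU-only machine both devices coincide and nothing changes; on a CUDA machine the `.to(device)` is what prevents the mismatch.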

@CmoneBK
Author

CmoneBK commented Dec 22, 2024

@broken-rotor I used ComfyUI Manager to update to the version from 2024-11-23, which, as far as I can tell, is the current version of ComfyUI-PuLID-Flux-Enhanced.

Following @iltondf 's advice in #35 (comment)
I just changed the code of the

ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py 

into:

Code:
import torch
from torch import nn, Tensor
from torchvision import transforms
from torchvision.transforms import functional
import os
import logging
import folder_paths
import comfy.utils
from comfy.ldm.flux.layers import timestep_embedding
from insightface.app import FaceAnalysis
from facexlib.parsing import init_parsing_model
from facexlib.utils.face_restoration_helper import FaceRestoreHelper

import torch.nn.functional as F

from .eva_clip.constants import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD
from .encoders_flux import IDFormer, PerceiverAttentionCA

INSIGHTFACE_DIR = os.path.join(folder_paths.models_dir, "insightface")

MODELS_DIR = os.path.join(folder_paths.models_dir, "pulid")
if "pulid" not in folder_paths.folder_names_and_paths:
    current_paths = [MODELS_DIR]
else:
    current_paths, _ = folder_paths.folder_names_and_paths["pulid"]
folder_paths.folder_names_and_paths["pulid"] = (current_paths, folder_paths.supported_pt_extensions)

from .online_train2 import online_train

class PulidFluxModel(nn.Module):
    def __init__(self):
        super().__init__()

        self.double_interval = 2
        self.single_interval = 4

        # Init encoder
        self.pulid_encoder = IDFormer()

        # Init attention
        num_ca = 19 // self.double_interval + 38 // self.single_interval
        if 19 % self.double_interval != 0:
            num_ca += 1
        if 38 % self.single_interval != 0:
            num_ca += 1
        self.pulid_ca = nn.ModuleList([
            PerceiverAttentionCA() for _ in range(num_ca)
        ])

    def from_pretrained(self, path: str):
        state_dict = comfy.utils.load_torch_file(path, safe_load=True)
        state_dict_dict = {}
        for k, v in state_dict.items():
            module = k.split('.')[0]
            state_dict_dict.setdefault(module, {})
            new_k = k[len(module) + 1:]
            state_dict_dict[module][new_k] = v

        for module in state_dict_dict:
            getattr(self, module).load_state_dict(state_dict_dict[module], strict=True)

        del state_dict
        del state_dict_dict

    def get_embeds(self, face_embed, clip_embeds):
        return self.pulid_encoder(face_embed, clip_embeds)

def forward_orig(
    self,
    img: Tensor,
    img_ids: Tensor,
    txt: Tensor,
    txt_ids: Tensor,
    timesteps: Tensor,
    y: Tensor,
    guidance: Tensor = None,
    control=None,
    transformer_options={}
) -> Tensor:
    device = img.device  # Make sure everything is on the same device
    patches_replace = transformer_options.get("patches_replace", {})

    if img.ndim != 3 or txt.ndim != 3:
        raise ValueError("Input img and txt tensors must have 3 dimensions.")

    img = self.img_in(img)
    vec = self.time_in(timestep_embedding(timesteps.to(device), 256).to(img.dtype))
    if self.params.guidance_embed:
        if guidance is None:
            raise ValueError("Didn't get guidance strength for guidance distilled model.")
        vec = vec + self.guidance_in(timestep_embedding(guidance.to(device), 256).to(img.dtype))

    vec = vec + self.vector_in(y.to(device))
    txt = self.txt_in(txt)

    ids = torch.cat((txt_ids, img_ids), dim=1)
    pe = self.pe_embedder(ids.to(device))

    ca_idx = 0
    blocks_replace = patches_replace.get("dit", {})

    for i, block in enumerate(self.double_blocks):
        if ("double_block", i) in blocks_replace:
            def block_wrap(args):
                out = {}
                out["img"], out["txt"] = block(
                    img=args["img"], txt=args["txt"], vec=args["vec"], pe=args["pe"])
                return out

            out = blocks_replace[("double_block", i)](
                {"img": img, "txt": txt, "vec": vec, "pe": pe}, {"original_block": block_wrap})
            txt = out["txt"]
            img = out["img"]
        else:
            img, txt = block(img=img, txt=txt, vec=vec, pe=pe)

        if control is not None:  # Controlnet
            control_i = control.get("input")
            if i < len(control_i):
                add = control_i[i]
                if add is not None:
                    img += add.to(device)

        if self.pulid_data:
            if i % self.pulid_double_interval == 0:
                for _, node_data in self.pulid_data.items():
                    condition_start = node_data['sigma_start'] >= timesteps
                    condition_end = timesteps >= node_data['sigma_end']
                    condition = torch.logical_and(condition_start, condition_end).all()

                    if condition:
                        img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)

                ca_idx += 1

    img = torch.cat((txt, img), 1)

    for i, block in enumerate(self.single_blocks):
        img = block(img, vec=vec, pe=pe)

        if control is not None:  # Controlnet
            control_o = control.get("output")
            if i < len(control_o):
                add = control_o[i]
                if add is not None:
                    img[:, txt.shape[1]:, ...] += add.to(device)

        if self.pulid_data:
            real_img, txt = img[:, txt.shape[1]:, ...], img[:, :txt.shape[1], ...]
            if i % self.pulid_single_interval == 0:
                for _, node_data in self.pulid_data.items():
                    condition_start = node_data['sigma_start'] >= timesteps
                    condition_end = timesteps >= node_data['sigma_end']
                    condition = torch.logical_and(condition_start, condition_end).all()

                    if condition:
                        real_img = real_img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], real_img)
                ca_idx += 1
            img = torch.cat((txt, real_img), 1)

    img = img[:, txt.shape[1]:, ...]
    img = self.final_layer(img, vec)  # (N, T, patch_size ** 2 * out_channels)
    return img

def tensor_to_image(tensor):
    image = tensor.mul(255).clamp(0, 255).byte().cpu()
    image = image[..., [2, 1, 0]].numpy()
    return image

def image_to_tensor(image):
    tensor = torch.clamp(torch.from_numpy(image).float() / 255., 0, 1)
    tensor = tensor[..., [2, 1, 0]]
    return tensor

def resize_with_pad(img, target_size): # image: 1, h, w, 3
    img = img.permute(0, 3, 1, 2)
    H, W = target_size
    
    h, w = img.shape[2], img.shape[3]
    scale_h = H / h
    scale_w = W / w
    scale = min(scale_h, scale_w)

    new_h = int(min(h * scale,H))
    new_w = int(min(w * scale,W))
    new_size = (new_h, new_w)
    
    img = F.interpolate(img, size=new_size, mode='bicubic', align_corners=False)
    
    pad_top = (H - new_h) // 2
    pad_bottom = (H - new_h) - pad_top
    pad_left = (W - new_w) // 2
    pad_right = (W - new_w) - pad_left
    img = F.pad(img, pad=(pad_left, pad_right, pad_top, pad_bottom), mode='constant', value=0)
    
    return img.permute(0, 2, 3, 1)

def to_gray(img):
    x = 0.299 * img[:, 0:1] + 0.587 * img[:, 1:2] + 0.114 * img[:, 2:3]
    x = x.repeat(1, 3, 1, 1)
    return x

"""
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""

class PulidFluxModelLoader:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {"pulid_file": (folder_paths.get_filename_list("pulid"), )}}

    RETURN_TYPES = ("PULIDFLUX",)
    FUNCTION = "load_model"
    CATEGORY = "pulid"

    def load_model(self, pulid_file):
        model_path = folder_paths.get_full_path("pulid", pulid_file)

        # Also initialize the model, takes longer to load but then it doesn't have to be done every time you change parameters in the apply node
        model = PulidFluxModel()

        logging.info("Loading PuLID-Flux model.")
        model.from_pretrained(path=model_path)

        return (model,)

class PulidFluxInsightFaceLoader:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "provider": (["CPU", "CUDA", "ROCM"], ),
            },
        }

    RETURN_TYPES = ("FACEANALYSIS",)
    FUNCTION = "load_insightface"
    CATEGORY = "pulid"

    def load_insightface(self, provider):
        model = FaceAnalysis(name="antelopev2", root=INSIGHTFACE_DIR, providers=[provider + 'ExecutionProvider',]) # alternative to buffalo_l
        model.prepare(ctx_id=0, det_size=(640, 640))

        return (model,)

class PulidFluxEvaClipLoader:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {},
        }

    RETURN_TYPES = ("EVA_CLIP",)
    FUNCTION = "load_eva_clip"
    CATEGORY = "pulid"

    def load_eva_clip(self):
        from .eva_clip.factory import create_model_and_transforms

        model, _, _ = create_model_and_transforms('EVA02-CLIP-L-14-336', 'eva_clip', force_custom_clip=True)

        model = model.visual

        eva_transform_mean = getattr(model, 'image_mean', OPENAI_DATASET_MEAN)
        eva_transform_std = getattr(model, 'image_std', OPENAI_DATASET_STD)
        if not isinstance(eva_transform_mean, (list, tuple)):
            model["image_mean"] = (eva_transform_mean,) * 3
        if not isinstance(eva_transform_std, (list, tuple)):
            model["image_std"] = (eva_transform_std,) * 3

        return (model,)

class ApplyPulidFlux:
    @classmethod
    def INPUT_TYPES(s):  
        return {
            "required": {
                "model": ("MODEL", ),
                "pulid_flux": ("PULIDFLUX", ),
                "eva_clip": ("EVA_CLIP", ),
                "face_analysis": ("FACEANALYSIS", ),
                "image": ("IMAGE", ),
                "weight": ("FLOAT", {"default": 1.0, "min": -1.0, "max": 5.0, "step": 0.05 }),
                "start_at": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001 }),
                "end_at": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.001 }),
                "fusion": (["mean","concat","max","norm_id","max_token","auto_weight","train_weight"],),
                "fusion_weight_max": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 20.0, "step": 0.1 }),
                "fusion_weight_min": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 20.0, "step": 0.1 }),
                "train_step": ("INT", {"default": 1000, "min": 0, "max": 20000, "step": 1 }),
                "use_gray": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}),
            },
            "optional": {
                "attn_mask": ("MASK", ),
                "prior_image": ("IMAGE",), # for train weight, as the target
            },
            "hidden": {
                "unique_id": "UNIQUE_ID"
            },
        }

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "apply_pulid_flux"
    CATEGORY = "pulid"

    def __init__(self):
        self.pulid_data_dict = None

    def apply_pulid_flux(self, model, pulid_flux, eva_clip, face_analysis, image, weight, start_at, end_at, prior_image=None,fusion="mean", fusion_weight_max=1.0, fusion_weight_min=0.0, train_step=1000, use_gray=True, attn_mask=None, unique_id=None):
        device = comfy.model_management.get_torch_device()
        # Why should I care what args say, when the unet model has a different dtype?!
        # Am I missing something?!
        #dtype = comfy.model_management.unet_dtype()
        dtype = model.model.diffusion_model.dtype
        # For 8bit use bfloat16 (because ufunc_add_CUDA is not implemented)
        if dtype in [torch.float8_e4m3fn, torch.float8_e5m2]:
            dtype = torch.bfloat16

        eva_clip.to(device, dtype=dtype)
        pulid_flux.to(device, dtype=dtype)

        # TODO: Add masking support!
        if attn_mask is not None:
            if attn_mask.dim() > 3:
                attn_mask = attn_mask.squeeze(-1)
            elif attn_mask.dim() < 3:
                attn_mask = attn_mask.unsqueeze(0)
            attn_mask = attn_mask.to(device, dtype=dtype)

        if prior_image is not None:
            prior_image = resize_with_pad(prior_image.to(image.device, dtype=image.dtype), target_size=(image.shape[1], image.shape[2]))
            image=torch.cat((prior_image,image),dim=0)
        image = tensor_to_image(image)

        face_helper = FaceRestoreHelper(
            upscale_factor=1,
            face_size=512,
            crop_ratio=(1, 1),
            det_model='retinaface_resnet50',
            save_ext='png',
            device=device,
        )

        face_helper.face_parse = None
        face_helper.face_parse = init_parsing_model(model_name='bisenet', device=device)

        bg_label = [0, 16, 18, 7, 8, 9, 14, 15]
        cond = []

        # Analyse multiple images at multiple sizes and combine largest area embeddings
        for i in range(image.shape[0]):
            # get insightface embeddings
            iface_embeds = None
            for size in [(size, size) for size in range(640, 256, -64)]:
                face_analysis.det_model.input_size = size
                face_info = face_analysis.get(image[i])
                if face_info:
                    # Only use the maximum face
                    # Removed the reverse=True from original code because we need the largest area not the smallest one!
                    # Sorts the list in ascending order (smallest to largest),
                    # then selects the last element, which is the largest face
                    face_info = sorted(face_info, key=lambda x: (x.bbox[2] - x.bbox[0]) * (x.bbox[3] - x.bbox[1]))[-1]
                    iface_embeds = torch.from_numpy(face_info.embedding).unsqueeze(0).to(device, dtype=dtype)
                    break
            else:
                # No face detected, skip this image
                logging.warning(f'Warning: No face detected in image {str(i)}')
                continue

            # get eva_clip embeddings
            face_helper.clean_all()
            face_helper.read_image(image[i])
            face_helper.get_face_landmarks_5(only_center_face=True)
            face_helper.align_warp_face()

            if len(face_helper.cropped_faces) == 0:
                # No face detected, skip this image
                continue

            # Get aligned face image
            align_face = face_helper.cropped_faces[0]
            # Convert bgr face image to tensor
            align_face = image_to_tensor(align_face).unsqueeze(0).permute(0, 3, 1, 2).to(device)
            parsing_out = face_helper.face_parse(functional.normalize(align_face, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]))[0]
            parsing_out = parsing_out.argmax(dim=1, keepdim=True)
            bg = sum(parsing_out == i for i in bg_label).bool()
            white_image = torch.ones_like(align_face)
            # Only keep the face features
            if use_gray:
                _align_face = to_gray(align_face)
            else:
                _align_face = align_face
            face_features_image = torch.where(bg, white_image, _align_face)

            # Transform img before sending to eva_clip
            # Apparently MPS only supports NEAREST interpolation?
            face_features_image = functional.resize(face_features_image, eva_clip.image_size, transforms.InterpolationMode.BICUBIC if 'cuda' in device.type else transforms.InterpolationMode.NEAREST).to(device, dtype=dtype)
            face_features_image = functional.normalize(face_features_image, eva_clip.image_mean, eva_clip.image_std)

            # eva_clip
            id_cond_vit, id_vit_hidden = eva_clip(face_features_image, return_all_features=False, return_hidden=True, shuffle=False)
            id_cond_vit = id_cond_vit.to(device, dtype=dtype)
            for idx in range(len(id_vit_hidden)):
                id_vit_hidden[idx] = id_vit_hidden[idx].to(device, dtype=dtype)

            id_cond_vit = torch.div(id_cond_vit, torch.norm(id_cond_vit, 2, 1, True))

            # Combine embeddings
            id_cond = torch.cat([iface_embeds, id_cond_vit], dim=-1)

            # Pulid_encoder
            cond.append(pulid_flux.get_embeds(id_cond, id_vit_hidden))

        if not cond:
            # No faces detected, return the original model
            logging.warning("PuLID warning: No faces detected in any of the given images, returning unmodified model.")
            return (model,)

        # fusion embeddings
        if fusion == "mean":
            cond = torch.cat(cond).to(device, dtype=dtype) # N,32,2048
            if cond.shape[0] > 1:
                cond = torch.mean(cond, dim=0, keepdim=True)
        elif fusion == "concat":
            cond = torch.cat(cond, dim=1).to(device, dtype=dtype)
        elif fusion == "max":
            cond = torch.cat(cond).to(device, dtype=dtype)
            if cond.shape[0] > 1:
                cond = torch.max(cond, dim=0, keepdim=True)[0]
        elif fusion == "norm_id":
            cond = torch.cat(cond).to(device, dtype=dtype)
            if cond.shape[0] > 1:
                norm=torch.norm(cond,dim=(1,2))
                norm=norm/torch.sum(norm)
                cond=torch.einsum("wij,w->ij",cond,norm).unsqueeze(0)
        elif fusion == "max_token":
            cond = torch.cat(cond).to(device, dtype=dtype)
            if cond.shape[0] > 1:
                norm=torch.norm(cond,dim=2)
                _,idx=torch.max(norm,dim=0)
                cond=torch.stack([cond[j,i] for i,j in enumerate(idx)]).unsqueeze(0)
        elif fusion == "auto_weight": # 🤔
            cond = torch.cat(cond).to(device, dtype=dtype)
            if cond.shape[0] > 1:
                norm=torch.norm(cond,dim=2)
                order=torch.argsort(norm,descending=False,dim=0)
                regular_weight=torch.linspace(fusion_weight_min,fusion_weight_max,norm.shape[0]).to(device, dtype=dtype)

                _cond=[]
                for i in range(cond.shape[1]):
                    o=order[:,i]
                    _cond.append(torch.einsum('ij,i->j',cond[:,i,:],regular_weight[o]))
                cond=torch.stack(_cond,dim=0).unsqueeze(0)
        elif fusion == "train_weight":
            cond = torch.cat(cond).to(device, dtype=dtype)
            if cond.shape[0] > 1:
                if train_step > 0:
                    with torch.inference_mode(False):
                        cond = online_train(cond, device=cond.device, step=train_step)
                else:
                    cond = torch.mean(cond, dim=0, keepdim=True)

        sigma_start = model.get_model_object("model_sampling").percent_to_sigma(start_at)
        sigma_end = model.get_model_object("model_sampling").percent_to_sigma(end_at)

        # Patch the Flux model (original diffusion_model)
        # Nah, I don't care for the official ModelPatcher because it's undocumented!
        # I want the end result now, and I don’t mind if I break other custom nodes in the process. 😄
        flux_model = model.model.diffusion_model
        # Let's see if we already patched the underlying flux model, if not apply patch
        if not hasattr(flux_model, "pulid_ca"):
            # Add perceiver attention, variables and current node data (weight, embedding, sigma_start, sigma_end)
            # The pulid_data is stored in Dict by unique node index,
            # so we can chain multiple ApplyPulidFlux nodes!
            flux_model.pulid_ca = pulid_flux.pulid_ca
            flux_model.pulid_double_interval = pulid_flux.double_interval
            flux_model.pulid_single_interval = pulid_flux.single_interval
            flux_model.pulid_data = {}
            # Replace model forward_orig with our own
            new_method = forward_orig.__get__(flux_model, flux_model.__class__)
            setattr(flux_model, 'forward_orig', new_method)

        # Patch is already in place, add data (weight, embedding, sigma_start, sigma_end) under unique node index
        flux_model.pulid_data[unique_id] = {
            'weight': weight,
            'embedding': cond,
            'sigma_start': sigma_start,
            'sigma_end': sigma_end,
        }

        # Keep a reference for destructor (if node is deleted the data will be deleted as well)
        self.pulid_data_dict = {'data': flux_model.pulid_data, 'unique_id': unique_id}

        return (model,)

    def __del__(self):
        # Destroy the data for this node
        if self.pulid_data_dict:
            del self.pulid_data_dict['data'][self.pulid_data_dict['unique_id']]
            del self.pulid_data_dict


NODE_CLASS_MAPPINGS = {
    "PulidFluxModelLoader": PulidFluxModelLoader,
    "PulidFluxInsightFaceLoader": PulidFluxInsightFaceLoader,
    "PulidFluxEvaClipLoader": PulidFluxEvaClipLoader,
    "ApplyPulidFlux": ApplyPulidFlux,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "PulidFluxModelLoader": "Load PuLID Flux Model",
    "PulidFluxInsightFaceLoader": "Load InsightFace (PuLID Flux)",
    "PulidFluxEvaClipLoader": "Load Eva Clip (PuLID Flux)",
    "ApplyPulidFlux": "Apply PuLID Flux",
}
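As a sanity check (a hypothetical diagnostic, not part of the node), listing which parameters still sit off the sampling device can pinpoint the exact module that `F.linear` trips over:

```python
import torch
from torch import nn

def find_offdevice_params(module: nn.Module, expected: torch.device):
    """Return names of parameters whose device type differs from `expected`."""
    return [name for name, p in module.named_parameters()
            if p.device.type != expected.type]

# Toy stand-in for the patched flux model; on a CUDA box, pulid_ca
# attached as a plain attribute can stay on the CPU while the rest
# of the model is moved to the GPU, and it would show up here.
model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 4))
stray = find_offdevice_params(model, torch.device("cpu"))
print(stray)  # [] means every parameter matches the expected device
```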

And it still throws:

Error Report

Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

ComfyUI Error Report

Error Details

  • Node ID: 82
  • Node Type: UltimateSDUpscale
  • Exception Type: RuntimeError
  • Exception Message: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

Stack Trace

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\nodes.py", line 151, in upscale
    processed = script.run(p=self.sdprocessing, _=None, tile_width=self.tile_width, tile_height=self.tile_height,

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 565, in run
    upscaler.process()

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 138, in process
    self.image = self.redraw.start(self.p, self.image, self.rows, self.cols)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 245, in start
    return self.linear_process(p, image, rows, cols)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 180, in linear_process
    processed = processing.process_images(p)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\modules\processing.py", line 173, in process_images
    samples = sample(p.model, p.seed, p.steps, p.cfg, p.sampler_name, p.scheduler, positive_cropped,

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\modules\processing.py", line 109, in sample
    (samples,) = common_ksampler(model, seed, steps, cfg, sampler_name,

  File "D:\Automatic1111 2\ComfyUI\nodes.py", line 1424, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
    raise e

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.

  File "D:\Automatic1111 2\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 117, in KSampler_sample
    return orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample
    return orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 855, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 135, in sample
    return orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 753, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 100, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 175, in sampling_function
    out = orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)

  File "D:\Automatic1111 2\ComfyUI\comfy\model_base.py", line 145, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\ldm\flux\model.py", line 184, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py", line 136, in forward_orig

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\encoders_flux.py", line 57, in forward
    q = self.to_q(latents)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)

System Information

  • ComfyUI Version: v0.3.4
  • Arguments: main.py
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.1.1+cu121

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3080 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 12884377600
    • VRAM Free: 823156756
    • Torch VRAM Total: 9932111872
    • Torch VRAM Free: 70963220

Logs

2024-12-22T21:45:14.073477 - WAS Node Suite: `CLIPTextEncode (BlenderNeko Advanced + NSP)` node enabled under `WAS Suite/Conditioning` menu.
2024-12-22T21:45:14.876528 - WAS Node Suite: OpenCV Python FFMPEG support is enabled
2024-12-22T21:45:14.876528 - WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `D:\Automatic1111 2\ComfyUI\custom_nodes\was-node-suite-comfyui\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
2024-12-22T21:45:15.678014 - WAS Node Suite: Finished. Loaded 221 nodes successfully.
2024-12-22T21:45:15.678014 - 
	"Art is the stored honey of the human soul." - Theodore Dreiser
2024-12-22T21:45:15.679015 - 
2024-12-22T21:45:15.695942 - 
Import times for custom nodes:
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\image_to_mask_node.py
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\invert_mask_node.py
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\image_to_contrast_mask_node.py
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\sharpness_ally.py
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\Pseudo_HDR_ally.py
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\vae_decode_preview.py
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\histogram_equalization.py
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\mosaic_node.py
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\SDXLMixSampler.py
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\LatentByRatio.py
2024-12-22T21:45:15.695942 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\imageflip_ally.py
2024-12-22T21:45:15.696943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\node_text_to_speech.py
2024-12-22T21:45:15.696943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\brightness_contrast_ally.py
2024-12-22T21:45:15.696943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\crop_node.py
2024-12-22T21:45:15.696943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\monocromatic_clip_node.py
2024-12-22T21:45:15.696943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\gaussian_blur_node.py
2024-12-22T21:45:15.696943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\nodes_phi_3_contitioning.py
2024-12-22T21:45:15.696943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\saturation_ally.py
2024-12-22T21:45:15.696943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\image2halftone.py
2024-12-22T21:45:15.696943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\gaussian_blur_ally.py
2024-12-22T21:45:15.696943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\animated_rotation_zoom.py
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\invert_image_node.py
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\websocket_image_save.py
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\clip_text_encode_split.py
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\EXT_AudioManipulation.py
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\LoadLoraWithTags
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-mxToolkit
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui_lora_tag_loader
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\cg-use-everywhere
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_Noise
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUi_NNLatentUpscale
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\sd-dynamic-thresholding
2024-12-22T21:45:15.697943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\stability-ComfyUI-nodes
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-SAI_API
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\efficiency-nodes-comfyui
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\cg-image-picker
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_Cutoff
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\Comfyui_TTP_Toolset
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfy_clip_blip_node
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Miaoshouai-Tagger
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_TiledKSampler
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-post-processing-nodes
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2024-12-22T21:45:15.698943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Logic
2024-12-22T21:45:15.699944 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Lora-Auto-Trigger-Words
2024-12-22T21:45:15.699944 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_experiments
2024-12-22T21:45:15.699944 -    0.0 seconds (IMPORT FAILED): D:\Automatic1111 2\ComfyUI\custom_nodes\save_image_to_davinci.py
2024-12-22T21:45:15.699944 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Impact-Subpack
2024-12-22T21:45:15.699944 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui_controlnet_aux
2024-12-22T21:45:15.699944 -    0.0 seconds (IMPORT FAILED): D:\Automatic1111 2\ComfyUI\custom_nodes\save_audio_to_davinci.py
2024-12-22T21:45:15.699944 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_SeeCoder
2024-12-22T21:45:15.699944 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_JPS-Nodes
2024-12-22T21:45:15.700943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui-portrait-master
2024-12-22T21:45:15.700943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Loopchain
2024-12-22T21:45:15.700943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion
2024-12-22T21:45:15.700943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui-animatediff
2024-12-22T21:45:15.700943 -    0.0 seconds (IMPORT FAILED): D:\Automatic1111 2\ComfyUI\custom_nodes\EXT_VariationUtils.py
2024-12-22T21:45:15.700943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyMath
2024-12-22T21:45:15.700943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Video-Matting
2024-12-22T21:45:15.700943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui-various
2024-12-22T21:45:15.700943 -    0.0 seconds (IMPORT FAILED): D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
2024-12-22T21:45:15.700943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\virtuoso-nodes
2024-12-22T21:45:15.700943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfy-image-saver
2024-12-22T21:45:15.701943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_essentials
2024-12-22T21:45:15.701943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_Jags_VectorMagic
2024-12-22T21:45:15.701943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-12-22T21:45:15.701943 -    0.0 seconds (IMPORT FAILED): D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
2024-12-22T21:45:15.701943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Chibi-Nodes
2024-12-22T21:45:15.701943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2024-12-22T21:45:15.701943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ControlAltAI-Nodes
2024-12-22T21:45:15.701943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
2024-12-22T21:45:15.701943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Sn0w-Scripts
2024-12-22T21:45:15.701943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
2024-12-22T21:45:15.701943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Frame-Interpolation
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Florence2
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\rgthree-comfy
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-GGUF
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui-prompt-reader-node
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-SaveImageWithMetaData
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\x-flux-comfyui
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-ImageMetadataExtension
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\aegisflow_utility_nodes
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\animated_offset_pad.py
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-MagickWand
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui-dream-project
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-RvTools
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes
2024-12-22T21:45:15.702943 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-KJNodes
2024-12-22T21:45:15.703944 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_LayerStyle
2024-12-22T21:45:15.703944 -    0.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfy_mtb
2024-12-22T21:45:15.703944 -    0.1 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack
2024-12-22T21:45:15.703944 -    0.1 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui-tensorops
2024-12-22T21:45:15.703944 -    0.1 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\PuLID_ComfyUI
2024-12-22T21:45:15.703944 -    0.1 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_TensorRT
2024-12-22T21:45:15.703944 -    0.1 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-AdvancedLivePortrait
2024-12-22T21:45:15.703944 -    0.2 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced
2024-12-22T21:45:15.703944 -    0.2 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui-propost
2024-12-22T21:45:15.703944 -    0.2 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_Fill-Nodes
2024-12-22T21:45:15.703944 -    0.2 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\clipseg.py
2024-12-22T21:45:15.703944 -    0.3 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Crystools
2024-12-22T21:45:15.703944 -    0.3 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_Jags_Audiotools
2024-12-22T21:45:15.703944 -    0.4 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2024-12-22T21:45:15.703944 -    0.4 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-22T21:45:15.703944 -    0.7 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui-art-venture
2024-12-22T21:45:15.704943 -    1.4 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_Custom_Nodes_AlekPet
2024-12-22T21:45:15.704943 -    1.6 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\was-node-suite-comfyui
2024-12-22T21:45:15.704943 -    2.4 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\comfyui-mixlab-nodes
2024-12-22T21:45:15.704943 -   13.0 seconds: D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-MotionDiff
2024-12-22T21:45:15.704943 - 
2024-12-22T21:45:15.721472 - Starting server

2024-12-22T21:45:15.722471 - To see the GUI go to: http://127.0.0.1:8188
2024-12-22T21:46:18.844318 - got prompt
2024-12-22T21:46:19.571929 - Using pytorch attention in VAE
2024-12-22T21:46:19.572435 - Using pytorch attention in VAE
2024-12-22T21:46:20.278135 - Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
2024-12-22T21:46:32.572031 - clip missing: ['text_projection.weight']
2024-12-22T21:46:32.983769 - Requested to load FluxClipModel_
2024-12-22T21:46:32.984773 - Loading 1 new model
2024-12-22T21:46:33.856044 - loaded completely 0.0 4777.53759765625 True
2024-12-22T21:46:35.255723 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-12-22T21:46:35.335664 - find model: D:\Automatic1111 2\ComfyUI\models\insightface\models\antelopev2\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
2024-12-22T21:46:35.406753 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-12-22T21:46:35.410753 - find model: D:\Automatic1111 2\ComfyUI\models\insightface\models\antelopev2\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
2024-12-22T21:46:35.468071 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-12-22T21:46:35.470071 - find model: D:\Automatic1111 2\ComfyUI\models\insightface\models\antelopev2\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
2024-12-22T21:46:36.028081 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-12-22T21:46:36.166573 - find model: D:\Automatic1111 2\ComfyUI\models\insightface\models\antelopev2\glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
2024-12-22T21:46:36.268151 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-12-22T21:46:36.268151 - find model: D:\Automatic1111 2\ComfyUI\models\insightface\models\antelopev2\scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
2024-12-22T21:46:36.269150 - set det-size: (640, 640)
2024-12-22T21:46:36.269150 - Loaded EVA02-CLIP-L-14-336 model config.
2024-12-22T21:46:36.286814 - Shape of rope freq: torch.Size([576, 64])
2024-12-22T21:46:40.217074 - Loading pretrained EVA02-CLIP-L-14-336 weights (eva_clip).
2024-12-22T21:46:40.860821 - incompatible_keys.missing_keys: ['visual.rope.freqs_cos', 'visual.rope.freqs_sin', 'visual.blocks.0.attn.rope.freqs_cos', 'visual.blocks.0.attn.rope.freqs_sin', 'visual.blocks.1.attn.rope.freqs_cos', 'visual.blocks.1.attn.rope.freqs_sin', 'visual.blocks.2.attn.rope.freqs_cos', 'visual.blocks.2.attn.rope.freqs_sin', 'visual.blocks.3.attn.rope.freqs_cos', 'visual.blocks.3.attn.rope.freqs_sin', 'visual.blocks.4.attn.rope.freqs_cos', 'visual.blocks.4.attn.rope.freqs_sin', 'visual.blocks.5.attn.rope.freqs_cos', 'visual.blocks.5.attn.rope.freqs_sin', 'visual.blocks.6.attn.rope.freqs_cos', 'visual.blocks.6.attn.rope.freqs_sin', 'visual.blocks.7.attn.rope.freqs_cos', 'visual.blocks.7.attn.rope.freqs_sin', 'visual.blocks.8.attn.rope.freqs_cos', 'visual.blocks.8.attn.rope.freqs_sin', 'visual.blocks.9.attn.rope.freqs_cos', 'visual.blocks.9.attn.rope.freqs_sin', 'visual.blocks.10.attn.rope.freqs_cos', 'visual.blocks.10.attn.rope.freqs_sin', 'visual.blocks.11.attn.rope.freqs_cos', 'visual.blocks.11.attn.rope.freqs_sin', 'visual.blocks.12.attn.rope.freqs_cos', 'visual.blocks.12.attn.rope.freqs_sin', 'visual.blocks.13.attn.rope.freqs_cos', 'visual.blocks.13.attn.rope.freqs_sin', 'visual.blocks.14.attn.rope.freqs_cos', 'visual.blocks.14.attn.rope.freqs_sin', 'visual.blocks.15.attn.rope.freqs_cos', 'visual.blocks.15.attn.rope.freqs_sin', 'visual.blocks.16.attn.rope.freqs_cos', 'visual.blocks.16.attn.rope.freqs_sin', 'visual.blocks.17.attn.rope.freqs_cos', 'visual.blocks.17.attn.rope.freqs_sin', 'visual.blocks.18.attn.rope.freqs_cos', 'visual.blocks.18.attn.rope.freqs_sin', 'visual.blocks.19.attn.rope.freqs_cos', 'visual.blocks.19.attn.rope.freqs_sin', 'visual.blocks.20.attn.rope.freqs_cos', 'visual.blocks.20.attn.rope.freqs_sin', 'visual.blocks.21.attn.rope.freqs_cos', 'visual.blocks.21.attn.rope.freqs_sin', 'visual.blocks.22.attn.rope.freqs_cos', 'visual.blocks.22.attn.rope.freqs_sin', 'visual.blocks.23.attn.rope.freqs_cos', 
'visual.blocks.23.attn.rope.freqs_sin']
2024-12-22T21:46:42.580028 - Loading PuLID-Flux model.
2024-12-22T21:46:50.987838 - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-12-22T21:46:50.988839 - model_type FLUX
2024-12-22T21:47:21.018462 - C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4

2024-12-22T21:47:41.249480 - Requested to load Flux
2024-12-22T21:47:41.252915 - Requested to load ControlNetFlux
2024-12-22T21:47:41.252915 - Loading 2 new models
2024-12-22T21:47:54.206672 - loaded partially 7245.314672851562 7242.456115722656 0
2024-12-22T21:47:54.225015 - loaded partially 64.0 62.365234375 0
2024-12-22T21:47:54.519918 - 
  0%|                                                                                                 | 0/25 [00:00<?, ?it/s]2024-12-22T21:47:54.538583 - Requested to load AutoencodingEngine
2024-12-22T21:47:54.539588 - Loading 1 new model
2024-12-22T21:47:57.061846 - loaded completely 0.0 159.87335777282715 True
2024-12-22T21:47:57.617528 - Requested to load ControlNetFlux
2024-12-22T21:47:57.617528 - Loading 1 new model
2024-12-22T21:47:59.316156 - loaded partially 5694.930374908447 5685.783203125 0
2024-12-22T22:15:02.910301 - 
100%|████████████████████████████████████████████████████████████████████████████████████████| 25/25 [27:08<00:00, 35.72s/it]2024-12-22T22:15:02.910301 - 
100%|████████████████████████████████████████████████████████████████████████████████████████| 25/25 [27:08<00:00, 65.14s/it]2024-12-22T22:15:02.911300 - 
2024-12-22T22:15:02.926330 - Requested to load AutoencodingEngine
2024-12-22T22:15:02.926330 - Loading 1 new model
2024-12-22T22:15:07.651686 - loaded completely 0.0 159.87335777282715 True
2024-12-22T22:15:09.005625 - Canva size: 2560x2560
2024-12-22T22:15:09.007626 - Image size: 1280x1280
2024-12-22T22:15:09.008625 - Scale factor: 2
2024-12-22T22:15:09.008625 - Upscaling iteration 1 with scale factor 2
2024-12-22T22:15:17.033093 - Tile size: 1024x1024
2024-12-22T22:15:17.034596 - Tiles amount: 9
2024-12-22T22:15:17.038600 - Grid: 3x3
2024-12-22T22:15:17.039600 - Redraw enabled: True
2024-12-22T22:15:17.044115 - Seams fix mode: NONE
2024-12-22T22:15:21.388271 - 
  0%|                                                                                                 | 0/25 [00:00<?, ?it/s]2024-12-22T22:15:21.880718 - 
  0%|                                                                                                 | 0/25 [00:00<?, ?it/s]2024-12-22T22:15:21.880718 - 
2024-12-22T22:15:21.924435 - !!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
2024-12-22T22:15:22.091670 - Traceback (most recent call last):
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\nodes.py", line 151, in upscale
    processed = script.run(p=self.sdprocessing, _=None, tile_width=self.tile_width, tile_height=self.tile_height,
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 565, in run
    upscaler.process()
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 138, in process
    self.image = self.redraw.start(self.p, self.image, self.rows, self.cols)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 245, in start
    return self.linear_process(p, image, rows, cols)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 180, in linear_process
    processed = processing.process_images(p)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\modules\processing.py", line 173, in process_images
    samples = sample(p.model, p.seed, p.steps, p.cfg, p.sampler_name, p.scheduler, positive_cropped,
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\modules\processing.py", line 109, in sample
    (samples,) = common_ksampler(model, seed, steps, cfg, sampler_name,
  File "D:\Automatic1111 2\ComfyUI\nodes.py", line 1424, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "D:\Automatic1111 2\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 117, in KSampler_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 855, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 135, in sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 753, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 100, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 175, in sampling_function
    out = orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\Automatic1111 2\ComfyUI\comfy\model_base.py", line 145, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\ldm\flux\model.py", line 184, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py", line 136, in forward_orig
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\encoders_flux.py", line 57, in forward
    q = self.to_q(latents)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

2024-12-22T22:15:22.111890 - Prompt executed in 1743.25 seconds
2024-12-22T22:17:39.681678 - got prompt
2024-12-22T22:17:42.029218 - C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4

2024-12-22T22:17:57.800081 - Requested to load ControlNetFlux
2024-12-22T22:17:57.800081 - Loading 1 new model
2024-12-22T22:17:57.832185 - loaded partially 64.0 62.365234375 0
2024-12-22T22:17:57.872177 - 
  0%|                                                                                                 | 0/25 [00:00<?, ?it/s]2024-12-22T22:17:57.877691 - Requested to load AutoencodingEngine
2024-12-22T22:17:57.877691 - Loading 1 new model
2024-12-22T22:18:03.239973 - loaded completely 0.0 159.87335777282715 True
2024-12-22T22:18:03.734595 - Requested to load ControlNetFlux
2024-12-22T22:18:03.734595 - Loading 1 new model
2024-12-22T22:18:04.965201 - loaded partially 5568.889359283447 5568.736328125 0
2024-12-22T22:22:32.102028 - 
100%|████████████████████████████████████████████████████████████████████████████████████████| 25/25 [04:34<00:00,  9.94s/it]2024-12-22T22:22:32.102028 - 
100%|████████████████████████████████████████████████████████████████████████████████████████| 25/25 [04:34<00:00, 10.97s/it]2024-12-22T22:22:32.102028 - 
2024-12-22T22:22:32.104533 - Requested to load AutoencodingEngine
2024-12-22T22:22:32.104533 - Loading 1 new model
2024-12-22T22:22:35.074669 - loaded completely 0.0 159.87335777282715 True
2024-12-22T22:22:36.071906 - Canva size: 2560x25602024-12-22T22:22:36.071906 - 
2024-12-22T22:22:36.071906 - Image size: 1280x12802024-12-22T22:22:36.071906 - 
2024-12-22T22:22:36.071906 - Scale factor: 22024-12-22T22:22:36.071906 - 
2024-12-22T22:22:36.071906 - Upscaling iteration 1 with scale factor 22024-12-22T22:22:36.071906 - 
2024-12-22T22:22:45.574790 - Tile size: 1024x10242024-12-22T22:22:45.574790 - 
2024-12-22T22:22:45.574790 - Tiles amount: 92024-12-22T22:22:45.574790 - 
2024-12-22T22:22:45.575793 - Grid: 3x32024-12-22T22:22:45.575793 - 
2024-12-22T22:22:45.575793 - Redraw enabled: True2024-12-22T22:22:45.575793 - 
2024-12-22T22:22:45.575793 - Seams fix mode: NONE2024-12-22T22:22:45.575793 - 
2024-12-22T22:22:49.310890 - 
  0%|                                                                                                 | 0/25 [00:00<?, ?it/s]2024-12-22T22:22:49.620799 - 
  0%|                                                                                                 | 0/25 [00:00<?, ?it/s]2024-12-22T22:22:49.620799 - 
2024-12-22T22:22:49.633810 - !!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
2024-12-22T22:22:49.635317 - Traceback (most recent call last):
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\nodes.py", line 151, in upscale
    processed = script.run(p=self.sdprocessing, _=None, tile_width=self.tile_width, tile_height=self.tile_height,
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 565, in run
    upscaler.process()
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 138, in process
    self.image = self.redraw.start(self.p, self.image, self.rows, self.cols)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 245, in start
    return self.linear_process(p, image, rows, cols)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 180, in linear_process
    processed = processing.process_images(p)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\modules\processing.py", line 173, in process_images
    samples = sample(p.model, p.seed, p.steps, p.cfg, p.sampler_name, p.scheduler, positive_cropped,
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\modules\processing.py", line 109, in sample
    (samples,) = common_ksampler(model, seed, steps, cfg, sampler_name,
  File "D:\Automatic1111 2\ComfyUI\nodes.py", line 1424, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "D:\Automatic1111 2\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 117, in KSampler_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 855, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 135, in sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 753, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 100, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 175, in sampling_function
    out = orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\Automatic1111 2\ComfyUI\comfy\model_base.py", line 145, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\ldm\flux\model.py", line 184, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py", line 136, in forward_orig
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\encoders_flux.py", line 57, in forward
    q = self.to_q(latents)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

2024-12-22T22:22:49.637316 - Prompt executed in 309.59 seconds


</details>

@broken-rotor
Contributor

That pulidflux.py looks different than just applying 95f1588

In particular, where your lines say:

    if condition:
        img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)

you should have:

    if condition:
        img = img + node_data['weight'] * self.pulid_ca[ca_idx].to(device)(node_data['embedding'], img)

I'm not sure where those other instructions point, but I'd try that and see if it works.
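For context, the failure mode behind this traceback can be shown outside ComfyUI: `F.linear(input, self.weight, ...)` raises exactly this "Expected all tensors to be on the same device" error whenever the layer's weight lives on a different device than its input (here, the `PerceiverAttentionCA` weights stayed on CPU while `img` was on `cuda:0`). A minimal, CPU-safe sketch of the check-and-move pattern the `.to(device)` patch applies (`call_on_device` is a hypothetical helper, not part of the node code):

```python
import torch
from torch import nn

def call_on_device(module: nn.Module, x: torch.Tensor) -> torch.Tensor:
    # If module parameters and x live on different devices, F.linear inside
    # the module raises "Expected all tensors to be on the same device".
    # Moving the module to the input's device first (as the patch does with
    # self.pulid_ca[ca_idx].to(device)) keeps weight and input co-located.
    module = module.to(x.device)
    return module(x)

layer = nn.Linear(4, 4)
x = torch.randn(2, 4)  # on CPU here; on a GPU box this could be a cuda:0 tensor
out = call_on_device(layer, x)
assert out.device == x.device
```

Note that `.to()` on an `nn.Module` moves its parameters in place, so after the first call the module stays on the target device and subsequent calls pay no extra copy.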

@CmoneBK
Author

CmoneBK commented Dec 22, 2024

I applied the whole patch to the original file now.

The code is now this:
import torch
from torch import nn, Tensor
from torchvision import transforms
from torchvision.transforms import functional
import os
import logging
import folder_paths
import comfy.utils
from comfy.ldm.flux.layers import timestep_embedding
import comfy.model_management
from insightface.app import FaceAnalysis
from facexlib.parsing import init_parsing_model
from facexlib.utils.face_restoration_helper import FaceRestoreHelper

import torch.nn.functional as F

from .eva_clip.constants import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD
from .encoders_flux import IDFormer, PerceiverAttentionCA

INSIGHTFACE_DIR = os.path.join(folder_paths.models_dir, "insightface")

MODELS_DIR = os.path.join(folder_paths.models_dir, "pulid")
if "pulid" not in folder_paths.folder_names_and_paths:
    current_paths = [MODELS_DIR]
else:
    current_paths, _ = folder_paths.folder_names_and_paths["pulid"]
folder_paths.folder_names_and_paths["pulid"] = (current_paths, folder_paths.supported_pt_extensions)

from .online_train2 import online_train

class PulidFluxModel(nn.Module):
    def __init__(self):
        super().__init__()

        self.double_interval = 2
        self.single_interval = 4

        # Init encoder
        self.pulid_encoder = IDFormer()

        # Init attention
        num_ca = 19 // self.double_interval + 38 // self.single_interval
        if 19 % self.double_interval != 0:
            num_ca += 1
        if 38 % self.single_interval != 0:
            num_ca += 1
        self.pulid_ca = nn.ModuleList([
            PerceiverAttentionCA() for _ in range(num_ca)
        ])

    def from_pretrained(self, path: str):
        state_dict = comfy.utils.load_torch_file(path, safe_load=True)
        state_dict_dict = {}
        for k, v in state_dict.items():
            module = k.split('.')[0]
            state_dict_dict.setdefault(module, {})
            new_k = k[len(module) + 1:]
            state_dict_dict[module][new_k] = v

        for module in state_dict_dict:
            getattr(self, module).load_state_dict(state_dict_dict[module], strict=True)

        del state_dict
        del state_dict_dict

    def get_embeds(self, face_embed, clip_embeds):
        return self.pulid_encoder(face_embed, clip_embeds)

def forward_orig(
    self,
    img: Tensor,
    img_ids: Tensor,
    txt: Tensor,
    txt_ids: Tensor,
    timesteps: Tensor,
    y: Tensor,
    guidance: Tensor = None,
    control=None,
    transformer_options={}
) -> Tensor:
    device = comfy.model_management.get_torch_device()
    patches_replace = transformer_options.get("patches_replace", {})

    if img.ndim != 3 or txt.ndim != 3:
        raise ValueError("Input img and txt tensors must have 3 dimensions.")

    # running on sequences img
    img = self.img_in(img)
    vec = self.time_in(timestep_embedding(timesteps, 256).to(img.dtype))
    if self.params.guidance_embed:
        if guidance is None:
            raise ValueError("Didn't get guidance strength for guidance distilled model.")
        vec = vec + self.guidance_in(timestep_embedding(guidance, 256).to(img.dtype))

    vec = vec + self.vector_in(y)
    txt = self.txt_in(txt)

    ids = torch.cat((txt_ids, img_ids), dim=1)
    pe = self.pe_embedder(ids)

    ca_idx = 0
    blocks_replace = patches_replace.get("dit", {})

    for i, block in enumerate(self.double_blocks):
        if ("double_block", i) in blocks_replace:
            def block_wrap(args):
                out = {}
                out["img"], out["txt"] = block(
                    img=args["img"], txt=args["txt"], vec=args["vec"], pe=args["pe"])
                return out

            out = blocks_replace[("double_block", i)](
                {"img": img, "txt": txt, "vec": vec, "pe": pe}, {"original_block": block_wrap})
            txt = out["txt"]
            img = out["img"]
        else:
            img, txt = block(img=img, txt=txt, vec=vec, pe=pe)

        if control is not None: # Controlnet
            control_i = control.get("input")
            if i < len(control_i):
                add = control_i[i]
                if add is not None:
                    img += add

        # PuLID attention
        if self.pulid_data:
            if i % self.pulid_double_interval == 0:
                # Will calculate influence of all pulid nodes at once
                for _, node_data in self.pulid_data.items():
                    condition_start = node_data['sigma_start'] >= timesteps
                    condition_end = timesteps >= node_data['sigma_end']
                    condition = torch.logical_and(
                        condition_start, condition_end).all()
                    
                    if condition:
                        img = img + node_data['weight'] * self.pulid_ca[ca_idx].to(device)(node_data['embedding'], img)
                ca_idx += 1

    img = torch.cat((txt, img), 1)

    for i, block in enumerate(self.single_blocks):
        img = block(img, vec=vec, pe=pe)

        if control is not None: # Controlnet
            control_o = control.get("output")
            if i < len(control_o):
                add = control_o[i]
                if add is not None:
                    img[:, txt.shape[1] :, ...] += add


        # PuLID attention
        if self.pulid_data:
            real_img, txt = img[:, txt.shape[1]:, ...], img[:, :txt.shape[1], ...]
            if i % self.pulid_single_interval == 0:
                # Will calculate influence of all nodes at once
                for _, node_data in self.pulid_data.items():
                    condition_start = node_data['sigma_start'] >= timesteps
                    condition_end = timesteps >= node_data['sigma_end']

                    # Combine conditions and reduce to a single boolean
                    condition = torch.logical_and(condition_start, condition_end).all()

                    if condition:
                        real_img = real_img + node_data['weight'] * self.pulid_ca[ca_idx].to(device)(node_data['embedding'], real_img)
                ca_idx += 1
            img = torch.cat((txt, real_img), 1)

    img = img[:, txt.shape[1] :, ...]

    img = self.final_layer(img, vec)  # (N, T, patch_size ** 2 * out_channels)
    return img

def tensor_to_image(tensor):
    image = tensor.mul(255).clamp(0, 255).byte().cpu()
    image = image[..., [2, 1, 0]].numpy()
    return image

def image_to_tensor(image):
    tensor = torch.clamp(torch.from_numpy(image).float() / 255., 0, 1)
    tensor = tensor[..., [2, 1, 0]]
    return tensor

def resize_with_pad(img, target_size): # image: 1, h, w, 3
    img = img.permute(0, 3, 1, 2)
    H, W = target_size
    
    h, w = img.shape[2], img.shape[3]
    scale_h = H / h
    scale_w = W / w
    scale = min(scale_h, scale_w)

    new_h = int(min(h * scale,H))
    new_w = int(min(w * scale,W))
    new_size = (new_h, new_w)
    
    img = F.interpolate(img, size=new_size, mode='bicubic', align_corners=False)
    
    pad_top = (H - new_h) // 2
    pad_bottom = (H - new_h) - pad_top
    pad_left = (W - new_w) // 2
    pad_right = (W - new_w) - pad_left
    img = F.pad(img, pad=(pad_left, pad_right, pad_top, pad_bottom), mode='constant', value=0)
    
    return img.permute(0, 2, 3, 1)

def to_gray(img):
    x = 0.299 * img[:, 0:1] + 0.587 * img[:, 1:2] + 0.114 * img[:, 2:3]
    x = x.repeat(1, 3, 1, 1)
    return x

"""
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""

class PulidFluxModelLoader:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {"pulid_file": (folder_paths.get_filename_list("pulid"), )}}

    RETURN_TYPES = ("PULIDFLUX",)
    FUNCTION = "load_model"
    CATEGORY = "pulid"

    def load_model(self, pulid_file):
        model_path = folder_paths.get_full_path("pulid", pulid_file)

        # Also initialize the model, takes longer to load but then it doesn't have to be done every time you change parameters in the apply node
        model = PulidFluxModel()

        logging.info("Loading PuLID-Flux model.")
        model.from_pretrained(path=model_path)

        return (model,)

class PulidFluxInsightFaceLoader:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "provider": (["CPU", "CUDA", "ROCM"], ),
            },
        }

    RETURN_TYPES = ("FACEANALYSIS",)
    FUNCTION = "load_insightface"
    CATEGORY = "pulid"

    def load_insightface(self, provider):
        model = FaceAnalysis(name="antelopev2", root=INSIGHTFACE_DIR, providers=[provider + 'ExecutionProvider',]) # alternative to buffalo_l
        model.prepare(ctx_id=0, det_size=(640, 640))

        return (model,)

class PulidFluxEvaClipLoader:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {},
        }

    RETURN_TYPES = ("EVA_CLIP",)
    FUNCTION = "load_eva_clip"
    CATEGORY = "pulid"

    def load_eva_clip(self):
        from .eva_clip.factory import create_model_and_transforms

        model, _, _ = create_model_and_transforms('EVA02-CLIP-L-14-336', 'eva_clip', force_custom_clip=True)

        model = model.visual

        eva_transform_mean = getattr(model, 'image_mean', OPENAI_DATASET_MEAN)
        eva_transform_std = getattr(model, 'image_std', OPENAI_DATASET_STD)
        if not isinstance(eva_transform_mean, (list, tuple)):
            model["image_mean"] = (eva_transform_mean,) * 3
        if not isinstance(eva_transform_std, (list, tuple)):
            model["image_std"] = (eva_transform_std,) * 3

        return (model,)

class ApplyPulidFlux:
    @classmethod
    def INPUT_TYPES(s):  
        return {
            "required": {
                "model": ("MODEL", ),
                "pulid_flux": ("PULIDFLUX", ),
                "eva_clip": ("EVA_CLIP", ),
                "face_analysis": ("FACEANALYSIS", ),
                "image": ("IMAGE", ),
                "weight": ("FLOAT", {"default": 1.0, "min": -1.0, "max": 5.0, "step": 0.05 }),
                "start_at": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.001 }),
                "end_at": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.001 }),
                "fusion": (["mean","concat","max","norm_id","max_token","auto_weight","train_weight"],),
                "fusion_weight_max": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 20.0, "step": 0.1 }),
                "fusion_weight_min": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 20.0, "step": 0.1 }),
                "train_step": ("INT", {"default": 1000, "min": 0, "max": 20000, "step": 1 }),
                "use_gray": ("BOOLEAN", {"default": True, "label_on": "enabled", "label_off": "disabled"}),
            },
            "optional": {
                "attn_mask": ("MASK", ),
                "prior_image": ("IMAGE",), # for train weight, as the target
            },
            "hidden": {
                "unique_id": "UNIQUE_ID"
            },
        }

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "apply_pulid_flux"
    CATEGORY = "pulid"

    def __init__(self):
        self.pulid_data_dict = None

    def apply_pulid_flux(self, model, pulid_flux, eva_clip, face_analysis, image, weight, start_at, end_at, prior_image=None,fusion="mean", fusion_weight_max=1.0, fusion_weight_min=0.0, train_step=1000, use_gray=True, attn_mask=None, unique_id=None):
        device = comfy.model_management.get_torch_device()
        # Why should I care what args say, when the unet model has a different dtype?!
        # Am I missing something?!
        #dtype = comfy.model_management.unet_dtype()
        dtype = model.model.diffusion_model.dtype
        # For 8bit use bfloat16 (because ufunc_add_CUDA is not implemented)
        if dtype in [torch.float8_e4m3fn, torch.float8_e5m2]:
            dtype = torch.bfloat16

        eva_clip.to(device, dtype=dtype)
        pulid_flux.to(device, dtype=dtype)

        # TODO: Add masking support!
        if attn_mask is not None:
            if attn_mask.dim() > 3:
                attn_mask = attn_mask.squeeze(-1)
            elif attn_mask.dim() < 3:
                attn_mask = attn_mask.unsqueeze(0)
            attn_mask = attn_mask.to(device, dtype=dtype)

        if prior_image is not None:
            prior_image = resize_with_pad(prior_image.to(image.device, dtype=image.dtype), target_size=(image.shape[1], image.shape[2]))
            image=torch.cat((prior_image,image),dim=0)
        image = tensor_to_image(image)

        face_helper = FaceRestoreHelper(
            upscale_factor=1,
            face_size=512,
            crop_ratio=(1, 1),
            det_model='retinaface_resnet50',
            save_ext='png',
            device=device,
        )

        face_helper.face_parse = None
        face_helper.face_parse = init_parsing_model(model_name='bisenet', device=device)

        bg_label = [0, 16, 18, 7, 8, 9, 14, 15]
        cond = []

        # Analyse multiple images at multiple sizes and combine largest area embeddings
        for i in range(image.shape[0]):
            # get insightface embeddings
            iface_embeds = None
            for size in [(size, size) for size in range(640, 256, -64)]:
                face_analysis.det_model.input_size = size
                face_info = face_analysis.get(image[i])
                if face_info:
                    # Only use the maximum face
                    # Removed the reverse=True from original code because we need the largest area not the smallest one!
                    # Sorts the list in ascending order (smallest to largest),
                    # then selects the last element, which is the largest face
                    face_info = sorted(face_info, key=lambda x: (x.bbox[2] - x.bbox[0]) * (x.bbox[3] - x.bbox[1]))[-1]
                    iface_embeds = torch.from_numpy(face_info.embedding).unsqueeze(0).to(device, dtype=dtype)
                    break
            else:
                # No face detected, skip this image
                logging.warning(f'Warning: No face detected in image {str(i)}')
                continue

            # get eva_clip embeddings
            face_helper.clean_all()
            face_helper.read_image(image[i])
            face_helper.get_face_landmarks_5(only_center_face=True)
            face_helper.align_warp_face()

            if len(face_helper.cropped_faces) == 0:
                # No face detected, skip this image
                continue

            # Get aligned face image
            align_face = face_helper.cropped_faces[0]
            # Convert bgr face image to tensor
            align_face = image_to_tensor(align_face).unsqueeze(0).permute(0, 3, 1, 2).to(device)
            parsing_out = face_helper.face_parse(functional.normalize(align_face, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]))[0]
            parsing_out = parsing_out.argmax(dim=1, keepdim=True)
            bg = sum(parsing_out == i for i in bg_label).bool()
            white_image = torch.ones_like(align_face)
            # Only keep the face features
            if use_gray:
                _align_face = to_gray(align_face)
            else:
                _align_face = align_face
            face_features_image = torch.where(bg, white_image, _align_face)

            # Transform img before sending to eva_clip
            # Apparently MPS only supports NEAREST interpolation?
            face_features_image = functional.resize(face_features_image, eva_clip.image_size, transforms.InterpolationMode.BICUBIC if 'cuda' in device.type else transforms.InterpolationMode.NEAREST).to(device, dtype=dtype)
            face_features_image = functional.normalize(face_features_image, eva_clip.image_mean, eva_clip.image_std)

            # eva_clip
            id_cond_vit, id_vit_hidden = eva_clip(face_features_image, return_all_features=False, return_hidden=True, shuffle=False)
            id_cond_vit = id_cond_vit.to(device, dtype=dtype)
            for idx in range(len(id_vit_hidden)):
                id_vit_hidden[idx] = id_vit_hidden[idx].to(device, dtype=dtype)

            id_cond_vit = torch.div(id_cond_vit, torch.norm(id_cond_vit, 2, 1, True))

            # Combine embeddings
            id_cond = torch.cat([iface_embeds, id_cond_vit], dim=-1)

            # Pulid_encoder
            cond.append(pulid_flux.get_embeds(id_cond, id_vit_hidden))

        if not cond:
            # No faces detected, return the original model
            logging.warning("PuLID warning: No faces detected in any of the given images, returning unmodified model.")
            return (model,)

        # fusion embeddings
        if fusion == "mean":
            cond = torch.cat(cond).to(device, dtype=dtype) # N,32,2048
            if cond.shape[0] > 1:
                cond = torch.mean(cond, dim=0, keepdim=True)
        elif fusion == "concat":
            cond = torch.cat(cond, dim=1).to(device, dtype=dtype)
        elif fusion == "max":
            cond = torch.cat(cond).to(device, dtype=dtype)
            if cond.shape[0] > 1:
                cond = torch.max(cond, dim=0, keepdim=True)[0]
        elif fusion == "norm_id":
            cond = torch.cat(cond).to(device, dtype=dtype)
            if cond.shape[0] > 1:
                norm=torch.norm(cond,dim=(1,2))
                norm=norm/torch.sum(norm)
                cond=torch.einsum("wij,w->ij",cond,norm).unsqueeze(0)
        elif fusion == "max_token":
            cond = torch.cat(cond).to(device, dtype=dtype)
            if cond.shape[0] > 1:
                norm=torch.norm(cond,dim=2)
                _,idx=torch.max(norm,dim=0)
                cond=torch.stack([cond[j,i] for i,j in enumerate(idx)]).unsqueeze(0)
        elif fusion == "auto_weight": # 🤔
            cond = torch.cat(cond).to(device, dtype=dtype)
            if cond.shape[0] > 1:
                norm=torch.norm(cond,dim=2)
                order=torch.argsort(norm,descending=False,dim=0)
                regular_weight=torch.linspace(fusion_weight_min,fusion_weight_max,norm.shape[0]).to(device, dtype=dtype)

                _cond=[]
                for i in range(cond.shape[1]):
                    o=order[:,i]
                    _cond.append(torch.einsum('ij,i->j',cond[:,i,:],regular_weight[o]))
                cond=torch.stack(_cond,dim=0).unsqueeze(0)
        elif fusion == "train_weight":
            cond = torch.cat(cond).to(device, dtype=dtype)
            if cond.shape[0] > 1:
                if train_step > 0:
                    with torch.inference_mode(False):
                        cond = online_train(cond, device=cond.device, step=train_step)
                else:
                    cond = torch.mean(cond, dim=0, keepdim=True)

        sigma_start = model.get_model_object("model_sampling").percent_to_sigma(start_at)
        sigma_end = model.get_model_object("model_sampling").percent_to_sigma(end_at)

        # Patch the Flux model (original diffusion_model)
        # Nah, I don't care for the official ModelPatcher because it's undocumented!
        # I want the end result now, and I don’t mind if I break other custom nodes in the process. 😄
        flux_model = model.model.diffusion_model
        # Let's see if we already patched the underlying flux model, if not apply patch
        if not hasattr(flux_model, "pulid_ca"):
            # Add perceiver attention, variables and current node data (weight, embedding, sigma_start, sigma_end)
            # The pulid_data is stored in Dict by unique node index,
            # so we can chain multiple ApplyPulidFlux nodes!
            flux_model.pulid_ca = pulid_flux.pulid_ca
            flux_model.pulid_double_interval = pulid_flux.double_interval
            flux_model.pulid_single_interval = pulid_flux.single_interval
            flux_model.pulid_data = {}
            # Replace model forward_orig with our own
            new_method = forward_orig.__get__(flux_model, flux_model.__class__)
            setattr(flux_model, 'forward_orig', new_method)

        # Patch is already in place, add data (weight, embedding, sigma_start, sigma_end) under unique node index
        flux_model.pulid_data[unique_id] = {
            'weight': weight,
            'embedding': cond,
            'sigma_start': sigma_start,
            'sigma_end': sigma_end,
        }

        # Keep a reference for destructor (if node is deleted the data will be deleted as well)
        self.pulid_data_dict = {'data': flux_model.pulid_data, 'unique_id': unique_id}

        return (model,)

    def __del__(self):
        # Destroy the data for this node
        if self.pulid_data_dict:
            del self.pulid_data_dict['data'][self.pulid_data_dict['unique_id']]
            del self.pulid_data_dict


NODE_CLASS_MAPPINGS = {
    "PulidFluxModelLoader": PulidFluxModelLoader,
    "PulidFluxInsightFaceLoader": PulidFluxInsightFaceLoader,
    "PulidFluxEvaClipLoader": PulidFluxEvaClipLoader,
    "ApplyPulidFlux": ApplyPulidFlux,
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "PulidFluxModelLoader": "Load PuLID Flux Model",
    "PulidFluxInsightFaceLoader": "Load InsightFace (PuLID Flux)",
    "PulidFluxEvaClipLoader": "Load Eva Clip (PuLID Flux)",
    "ApplyPulidFlux": "Apply PuLID Flux",
}

It seems to work now.
So the solution was to replace the code of
ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py
with the one above.
For non-coders: use Notepad++ or Visual Studio to open the .py file and replace the code completely (copy-paste).
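For anyone curious what the patched code is actually guarding against, here is a minimal, self-contained sketch (illustrative only, not PuLID code) of the device mismatch behind the RuntimeError, and the `.to()` call that fixes it:

```python
import torch

# The error in this issue comes from a matmul (wrapper_CUDA_mm) receiving
# one tensor on cuda:0 and one on cpu. Reproducing the shape of the problem:
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(2, 3, device=device)
b = torch.randn(3, 4)  # created on cpu -> mismatch whenever device == "cuda"

# The pattern used throughout the patched pulidflux.py: move the operand
# to the other tensor's device (and dtype) before the operation.
b = b.to(a.device, dtype=a.dtype)
out = a @ b  # both operands now live on the same device, so no RuntimeError
```

On a CPU-only machine this never fails, which is why the bug only shows up for CUDA users.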

@Lalimec
Copy link

Lalimec commented Dec 25, 2024

I applied the whole patch to the original file now.

Code is now this
It seems to work now. So the solution was to replace the code of ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py with the one above. For non-coders: use Notepad++ or Visual Studio to open the .py file and replace the code completely (copy-paste).

Nope, didn't solve the issue for me.

@broken-rotor
Copy link
Contributor

I applied the whole patch to the original file now.
Code is now this
It seems to work now. So the solution was to replace the code of ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py with the one above. For non-coders: use Notepad++ or Visual Studio to open the .py file and replace the code completely (copy-paste).

Nope, didn't solve the issue for me.

@Lalimec do you have a stack trace?

Just to confirm, you applied 95f1588 and not just the change for #35, correct?

@soundscrap
Copy link

soundscrap commented Dec 28, 2024

I applied the whole patch to the original file now.

Code is now this
It seems to work now. So the solution was to replace the code of ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py with the one above. For non-coders: use Notepad++ or Visual Studio to open the .py file and replace the code completely (copy-paste).

this worked for me, thank you!! 95f1588

@CmoneBK
Copy link
Author

CmoneBK commented Dec 28, 2024

I applied the whole patch to the original file now.
Code is now this
It seems to work now. So the solution was to replace the code of ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py with the one above. For non-coders: use Notepad++ or Visual Studio to open the .py file and replace the code completely (copy-paste).

Nope, didn't solve the issue for me.

@Lalimec You have to really make sure the new code is in the file when you run. So no node updating or anything.
Also make sure to restart your ComfyUI before doing anything or (even better) update the file while ComfyUI is not running.
Also make sure you edit the existing file and don't create a new one, as this can cause encoding issues.

@Lalimec
Copy link

Lalimec commented Dec 28, 2024

Yep, it didn't work. I mean the CUDA device problem went away, but I got another error about attn_mask or something. So I found another solution that made some changes in the core ComfyUI files, and that fixed it all, I guess.

@NeoAnthropocene
Copy link

Yep, it didn't work. I mean the CUDA device problem went away, but I got another error about attn_mask or something. So I found another solution that made some changes in the core ComfyUI files, and that fixed it all, I guess.

Hi @Lalimec, can you please point to the solution in the ComfyUI core files? I read it somewhere but didn't pay attention, and I couldn't find it just now. This error has also been haunting me for days.

@Lalimec
Copy link

Lalimec commented Dec 28, 2024 via email

@NeoAnthropocene
Copy link

NeoAnthropocene commented Dec 28, 2024

OK, I found how I solved the problem previously.

I'm trying to break it down for easy tracking:

  1. I got the "forward_orig() got an unexpected keyword argument 'attn_mask'" error first and solved it by replacing the pulidflux.py file with Fix "forward_orig() got an unexpected keyword argument 'attn_mask'" #51
  2. Then I fell into the Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm) issue; I applied the missing code manually in the pulidflux.py file with Fix tensors in different device error. #41

Now this solution works on my setup.

Once again I spent hours fixing broken dependencies and un-merged commits 🤦‍♂️

⭐ TLDR:
You can download my latest pulidflux.py file and place it into ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced
pulidflux.zip

Edit:
I'm not complaining about the authors; it's just exhausting to fix broken dependencies in the ComfyUI venv. I think it's time to separate ComfyUI builds for different purposes to maintain dependency stability.
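For context, the device fix from #41 boils down to the following pattern (a hedged sketch using a stand-in `nn.LayerNorm`, not the actual PuLID perceiver module):

```python
import torch
import torch.nn as nn

# native_layer_norm raises the "two devices" RuntimeError when a module's
# weights sit on cpu while its input is on cuda:0. A stand-in module:
device = "cuda" if torch.cuda.is_available() else "cpu"

ln = nn.LayerNorm(64)                  # parameters are created on cpu
x = torch.randn(1, 10, 64, device=device)

ln.to(x.device)                        # move the weights to the input's device
y = ln(x)                              # runs without a device-mismatch error
```

The patched pulidflux.py applies the same idea to the PuLID modules before sampling starts.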

@kubilaykilinc
Copy link

@NeoAnthropocene It's working for me. Thank you so much.

@Alasundru
Copy link

OK, I found how I solved the problem previously.

I'm trying to break it down for easy tracking:

  1. I got the "forward_orig() got an unexpected keyword argument 'attn_mask'" error first and solved it by replacing the pulidflux.py file with Fix "forward_orig() got an unexpected keyword argument 'attn_mask'" #51
  2. Then I fell into the Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm) issue; I applied the missing code manually in the pulidflux.py file with Fix tensors in different device error. #41

Now this solution works on my setup.

Once again I spent hours fixing broken dependencies and un-merged commits 🤦‍♂️

⭐ TLDR: You can download my latest pulidflux.py file and place it into ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced pulidflux.zip

Edit: I'm not complaining about the authors; it's just exhausting to fix broken dependencies in the ComfyUI venv. I think it's time to separate ComfyUI builds for different purposes to maintain dependency stability.

THANK YOU! It is finally working. You are a godsend. Now the creator just needs to hurry up and implement this fix!

@NeoAnthropocene
Copy link

@Alasundru I'm happy that saved you time :)

@leihaolei
Copy link

Same problem here.

@lione0
Copy link

lione0 commented Jan 6, 2025

I applied the whole patch to the original file now.

Code is now this
It seems to work now. So the solution was to replace the code of ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py with the one above. For non-coders: use Notepad++ or Visual Studio to open the .py file and replace the code completely (copy-paste).

thanks, my bro!

@Woukim
Copy link

Woukim commented Jan 7, 2025

OK, I found how I solved the problem previously.

I'm trying to break it down for easy tracking:

  1. I got the "forward_orig() got an unexpected keyword argument 'attn_mask'" error first and solved it by replacing the pulidflux.py file with Fix "forward_orig() got an unexpected keyword argument 'attn_mask'" #51
  2. Then I fell into the Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm) issue; I applied the missing code manually in the pulidflux.py file with Fix tensors in different device error. #41

Now this solution works on my setup.

Once again I spent hours fixing broken dependencies and un-merged commits 🤦‍♂️

⭐ TLDR: You can download my latest pulidflux.py file and place it into ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced pulidflux.zip

Edit: I'm not complaining about the authors; it's just exhausting to fix broken dependencies in the ComfyUI venv. I think it's time to separate ComfyUI builds for different purposes to maintain dependency stability.

Sorry, but I still have the problem after replacing the file with yours:
image

UPD: I just updated ComfyUI and everything worked. Thanks a lot!

@Zeusaus
Copy link

Zeusaus commented Jan 9, 2025

OK, I found how I solved the problem previously.

I'm trying to break it down for easy tracking:

  1. I got the "forward_orig() got an unexpected keyword argument 'attn_mask'" error first and solved it by replacing the pulidflux.py file with Fix "forward_orig() got an unexpected keyword argument 'attn_mask'" #51
  2. Then I fell into the Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm) issue; I applied the missing code manually in the pulidflux.py file with Fix tensors in different device error. #41

Now this solution works on my setup.

Once again I spent hours fixing broken dependencies and un-merged commits 🤦‍♂️

⭐ TLDR: You can download my latest pulidflux.py file and place it into ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced pulidflux.zip

Edit: I'm not complaining about the authors; it's just exhausting to fix broken dependencies in the ComfyUI venv. I think it's time to separate ComfyUI builds for different purposes to maintain dependency stability.

It works, thank you.

After resolving the above-mentioned problems:
If some of you are getting the SamplerCustomAdvanced - expected scalar type Half but found BFloat16 error (because I did get it),
use this to fix it - balazik/ComfyUI-PuLID-Flux@5867d66
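That dtype error is the same class of mismatch as the device one, just for scalar types instead of devices. Here is a hedged sketch of the kind of cast such a fix applies (the exact dtypes and shapes here are assumptions for illustration, not the commit's actual code):

```python
import torch

# "expected scalar type Half but found BFloat16" means two operands of the
# same op disagree on dtype. Cast the PuLID-style embedding to the dtype
# the model runs in before handing it over (model_dtype is an assumption).
model_dtype = torch.float16
embedding = torch.randn(1, 32, 2048, dtype=torch.bfloat16)

embedding = embedding.to(dtype=model_dtype)  # align dtypes before the matmul
```

Without the cast, the first matmul mixing the bfloat16 embedding with half-precision weights raises the RuntimeError.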
