
Error running postprocess_image: D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_faceswap.py #20

Open
xiongdaxsl opened this issue Feb 10, 2025 · 6 comments
Labels
bug (Something isn't working), new

Comments

@xiongdaxsl

First, confirm

  • I have read the instruction carefully
  • I have searched the existing issues
  • I have updated the extension to the latest version

What happened?

Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

Sysinfo

(Same traceback as shown under "Relevant console log" below.)

Relevant console log

Error running postprocess_image: D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_faceswap.py
    Traceback (most recent call last):
      File "D:\sd\stable-diffusion-webui\modules\scripts.py", line 912, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_faceswap.py", line 465, in postprocess_image
        result, output, swapped = swap_face(
      File "D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_swapper.py", line 391, in swap_face
        if check_sfw_image(result_image) is None:
      File "D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_swapper.py", line 359, in check_sfw_image
        if not sfw.nsfw_image(tmp_img, NSFWDET_MODEL_PATH):
      File "D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_sfw.py", line 15, in nsfw_image
        result = predict(img)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines\image_classification.py", line 100, in __call__
        return super().__call__(images, **kwargs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines\base.py", line 1120, in __call__
        return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines\base.py", line 1127, in run_single
        model_outputs = self.forward(model_inputs, **forward_params)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines\base.py", line 1026, in forward
        model_outputs = self._forward(model_inputs, **forward_params)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines\image_classification.py", line 108, in _forward
        model_outputs = self.model(**model_inputs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 804, in forward
        outputs = self.vit(
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 583, in forward
        embedding_output = self.embeddings(
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 122, in forward
        embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 181, in forward
        embeddings = self.projection(pixel_values).flatten(2).transpose(1, 2)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\sd\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
    RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
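The failure mode in this traceback is a device mismatch: the NSFW classifier's ViT weights sit on the GPU while the pipeline feeds it a tensor built on the CPU. A toy stand-in (not real PyTorch; `Tensor` and `conv2d` here are illustrative stubs) mirrors the check torch performs inside `F.conv2d`:

```python
class Tensor:
    """Toy stand-in for a tensor that only tracks which device it lives on."""
    def __init__(self, device: str):
        self.device = device

    def to(self, device: str) -> "Tensor":
        # Mirrors torch's Tensor.to(): returns a copy on the target device.
        return Tensor(device)


def conv2d(input: Tensor, weight: Tensor) -> Tensor:
    # PyTorch raises when input and weight live on different devices;
    # this reproduces the RuntimeError wording from the traceback.
    if input.device != weight.device:
        raise RuntimeError(
            "Input type (torch.FloatTensor) and weight type "
            "(torch.cuda.FloatTensor) should be the same"
        )
    return Tensor(input.device)


weight = Tensor("cuda")   # classifier weights, loaded on the GPU
image = Tensor("cpu")     # image tensor, created on the CPU
out = conv2d(image.to("cuda"), weight)  # fix: move the input to the weights' device
```

The fix posted later in this thread achieves the same thing indirectly, by telling the transformers pipeline which device to run on so it moves its inputs itself.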

Additional information

No response

xiongdaxsl added the bug and new labels on Feb 10, 2025
@PatrickAgeev

Same story.

@dermesut

Here too.

@mykeehu

mykeehu commented Feb 14, 2025

Same here. This could be due to the NSFW filtering; if so, it would be useful to expose a switch for it in the options.

@AlexSavex

Failed to build insightface
CUDA 12.1
Error: The 'insightface==0.7.3' distribution was not found and is required by the application

Same trouble here.

@youhua1

youhua1 commented Feb 18, 2025

Find the reactor_sfw.py file (in the extension's scripts folder, e.g. extensions\sd-webui-reactor-sfw\scripts\) and replace its code with the following:

from transformers import pipeline
from PIL import Image
import logging
import torch

SCORE = 0.965

logging.getLogger('transformers').setLevel(logging.ERROR)

def nsfw_image(img_path: str, model_path: str):
    # Run the classifier on the GPU when one is available, so the input
    # tensors end up on the same device as the model weights
    # (the device mismatch is what caused the RuntimeError above)
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    with Image.open(img_path) as img:
        predict = pipeline("image-classification", model=model_path, device=0 if device == 'cuda' else -1)
        result = predict(img)
        score = result[0]["score"]
        return score > SCORE
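One thing worth noting about the snippet above: its return value depends only on the top entry of the classifier output, which transformers returns as a list of {"label": ..., "score": ...} dicts with the highest score first. The thresholding can be exercised in isolation with stubbed classifier outputs (the helper name here is mine, not from the extension):

```python
SCORE = 0.965  # same threshold as in the snippet above

def is_over_threshold(result, threshold=SCORE):
    # result mimics a transformers image-classification output:
    # a list of {"label": ..., "score": ...} dicts, best guess first.
    # Like the snippet above, this looks only at the top score,
    # not at which label produced it.
    return result[0]["score"] > threshold

print(is_over_threshold([{"label": "nsfw", "score": 0.99}]))    # True
print(is_over_threshold([{"label": "normal", "score": 0.97}]))  # also True: the label is ignored
```

Because the label is ignored, a confident "normal" prediction trips the threshold just like a confident "nsfw" one; whether that matters depends on how the caller interprets the boolean.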

@SpiritOfDeath17

Find the reactor_sfw.py file in the root directory and modify the code to the following:

from transformers import pipeline
from PIL import Image
import logging
import torch

SCORE = 0.965

logging.getLogger('transformers').setLevel(logging.ERROR)

def nsfw_image(img_path: str, model_path: str):
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    with Image.open(img_path) as img:
        predict = pipeline("image-classification", model=model_path, device=0 if device == 'cuda' else -1)
        result = predict(img)
        score = result[0]["score"]
        return True if score > SCORE else False

Okay, it worked for me and I don't see any errors in the console, but after trying an img2img faceswap I get no result. It just runs for about 2 seconds and gives me back the image I put into img2img, with no face swapped from the ReActor source image.

7 participants