Error running postprocess_image: D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_faceswap.py #20
Comments
same story
here too
Same here. This could be caused by the NSFW filtering, but in that case it would be useful to expose a switch for it in the options.
Failed to build insightface. Same trouble here.
Find the `nsfw_image` function in `scripts/reactor_sfw.py` and replace it with:

    from transformers import pipeline
    from PIL import Image
    import logging
    import torch

    SCORE = 0.965
    logging.getLogger('transformers').setLevel(logging.ERROR)

    def nsfw_image(img_path: str, model_path: str):
        # Run the classifier on the GPU when available, otherwise on the CPU.
        device = 0 if torch.cuda.is_available() else -1
        with Image.open(img_path) as img:
            predict = pipeline("image-classification", model=model_path, device=device)
            result = predict(img)
            score = result[0]["score"]
            return score > SCORE
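A side note on the snippet above: it rebuilds the `pipeline` (and so reloads the model from disk) on every call, once per swapped image. A minimal sketch of caching the classifier instead; the `get_classifier` helper and its stub body are illustrative, not part of the extension:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_classifier(model_path: str):
    # In the real extension this would be something like:
    #   return pipeline("image-classification", model=model_path, device=...)
    # Stubbed with a plain object here so the sketch is self-contained.
    print(f"loading model from {model_path}")
    return object()

a = get_classifier("model_dir")
b = get_classifier("model_dir")
assert a is b  # the model is loaded once and the same pipeline is reused
```

With this pattern the expensive load happens on the first swap only; subsequent calls reuse the cached pipeline.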
Okay, it worked for me and I don't see any errors in the console, but when I try an img2img faceswap I get no result. It just runs for about two seconds and gives me back the image I put into img2img (not the swapped ReActor result).
First, confirm
What happened?
Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
Steps to reproduce the problem
Sysinfo
Error running postprocess_image: D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_faceswap.py
Traceback (most recent call last):
File "D:\sd\stable-diffusion-webui\modules\scripts.py", line 912, in postprocess_image
script.postprocess_image(p, pp, *script_args)
File "D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_faceswap.py", line 465, in postprocess_image
result, output, swapped = swap_face(
File "D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_swapper.py", line 391, in swap_face
if check_sfw_image(result_image) is None:
File "D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_swapper.py", line 359, in check_sfw_image
if not sfw.nsfw_image(tmp_img, NSFWDET_MODEL_PATH):
File "D:\sd\stable-diffusion-webui\extensions\sd-webui-reactor-sfw\scripts\reactor_sfw.py", line 15, in nsfw_image
result = predict(img)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines\image_classification.py", line 100, in __call__
return super().__call__(images, **kwargs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines\base.py", line 1120, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines\base.py", line 1127, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines\base.py", line 1026, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines\image_classification.py", line 108, in _forward
model_outputs = self.model(**model_inputs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 804, in forward
outputs = self.vit(
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 583, in forward
embedding_output = self.embeddings(
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 122, in forward
embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\transformers\models\vit\modeling_vit.py", line 181, in forward
embeddings = self.projection(pixel_values).flatten(2).transpose(1, 2)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\sd\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
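For reference, the `RuntimeError` at the bottom of the traceback means the classifier's weights were moved to CUDA while the input tensor stayed on the CPU. A minimal, self-contained sketch of the general pattern, with a toy `Conv2d` standing in for the ViT classifier (not the extension's actual code):

```python
import torch
import torch.nn as nn

# A tiny model standing in for the NSFW classifier's first conv layer.
model = nn.Conv2d(3, 8, kernel_size=3)

# Move the model to the GPU when available, as the webui does for its models.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

x = torch.randn(1, 3, 32, 32)  # the input starts out on the CPU
y = model(x.to(device))        # moving the input to the same device avoids the mismatch
print(y.shape)                 # torch.Size([1, 8, 30, 30])
```

Passing `x` directly while `model` sits on CUDA raises exactly this "Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor)" error; the fix is always to put the input and the weights on the same device.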
Relevant console log
Additional information
No response