[BUG]: lama anime output #601
Comments
@Sanster:

```python
imgs: torch.Tensor = ...
masks: torch.Tensor = ...
inpainted_images: torch.Tensor = ...

mask_clone = masks.clone()
mask_clone[:, :, 0, 0] = 0  # prevent an all-white (fully masked) mask, which would cause division by zero below
img_means = (imgs * (1 - mask_clone)).mean(dim=(2, 3), keepdim=True) / (1 - mask_clone).mean(dim=(2, 3), keepdim=True)
img_stds = (((imgs - img_means).pow(2) * (1 - mask_clone)).mean(dim=(2, 3), keepdim=True) / (1 - mask_clone).mean(dim=(2, 3), keepdim=True)).sqrt()
inpainted_means = (inpainted_images * (1 - mask_clone)).mean(dim=(2, 3), keepdim=True) / (1 - mask_clone).mean(dim=(2, 3), keepdim=True)
inpainted_stds = (((inpainted_images - inpainted_means).pow(2) * (1 - mask_clone)).mean(dim=(2, 3), keepdim=True) / (1 - mask_clone).mean(dim=(2, 3), keepdim=True)).sqrt()
inpainted_images = (inpainted_images - inpainted_means) / inpainted_stds * img_stds + img_means
inpainted_images = inpainted_images * masks + imgs * (1 - masks)
```

This is a simple statistical equalization process that you can apply to minimize the color discrepancy. You can further blur the mask to get a smoother transition in the last step.
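As a self-contained illustration of the equalization above, here is a runnable sketch that uses random tensors in place of real images. The shapes, the seed, and the 0/1 mask convention (1 = inpainted region) are assumptions made for the demo, not values from the repository:

```python
import torch

torch.manual_seed(0)
imgs = torch.rand(2, 3, 64, 64)              # original images, NCHW, values in [0, 1]
inpainted_images = torch.rand(2, 3, 64, 64)  # stand-in for the model output
masks = torch.zeros(2, 1, 64, 64)
masks[:, :, 16:48, 16:48] = 1.0              # 1 = inpainted region, 0 = kept region

mask_clone = masks.clone()
mask_clone[:, :, 0, 0] = 0  # guarantee at least one "kept" pixel so the means below are defined

keep = 1 - mask_clone  # weights selecting the region outside the mask
img_means = (imgs * keep).mean(dim=(2, 3), keepdim=True) / keep.mean(dim=(2, 3), keepdim=True)
img_stds = (((imgs - img_means).pow(2) * keep).mean(dim=(2, 3), keepdim=True)
            / keep.mean(dim=(2, 3), keepdim=True)).sqrt()
inp_means = (inpainted_images * keep).mean(dim=(2, 3), keepdim=True) / keep.mean(dim=(2, 3), keepdim=True)
inp_stds = (((inpainted_images - inp_means).pow(2) * keep).mean(dim=(2, 3), keepdim=True)
            / keep.mean(dim=(2, 3), keepdim=True)).sqrt()

# Shift/scale the inpainted result so its per-channel statistics match the
# original image's, then composite: inpainted pixels inside the mask,
# original pixels outside it.
equalized = (inpainted_images - inp_means) / inp_stds * img_stds + img_means
out = equalized * masks + imgs * (1 - masks)
```

Since the mask is binary, the composite leaves every pixel outside the mask exactly equal to the original image; only the masked region is replaced by the re-normalized inpainting.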
How can I add this?

@wolfkingal2000 For me, I modified the code in the model file directly. Below is an example from the ZITS model. The original code:

```python
...
mask = mask[:, :, 0]
items = load_image(image, mask, device=self.device)
self.wireframe_edge_and_line(items, config.zits_wireframe)
inpainted_image = self.inpaint(
    items["images"],
    items["masks"],
    items["edge"],
    items["line"],
    items["rel_pos"],
    items["direct"],
)
inpainted_image = inpainted_image * 255.0
...
```
And after adding the equalization step:

```python
...
mask = mask[:, :, 0]
items = load_image(image, mask, device=self.device)
self.wireframe_edge_and_line(items, config.zits_wireframe)
inpainted_image = self.inpaint(
    items["images"],
    items["masks"],
    items["edge"],
    items["line"],
    items["rel_pos"],
    items["direct"],
)
imgs: torch.Tensor = items["images"]
masks: torch.Tensor = items["masks"]
inpainted_images: torch.Tensor = inpainted_image
mask_clone = masks.clone()
mask_clone[:, :, 0, 0] = 0  # prevent an all-white mask (avoids division by zero below)
img_means = (imgs * (1 - mask_clone)).mean(dim=(2, 3), keepdim=True) / (1 - mask_clone).mean(dim=(2, 3), keepdim=True)
img_stds = (((imgs - img_means).pow(2) * (1 - mask_clone)).mean(dim=(2, 3), keepdim=True) / (1 - mask_clone).mean(dim=(2, 3), keepdim=True)).sqrt()
inpainted_means = (inpainted_images * (1 - mask_clone)).mean(dim=(2, 3), keepdim=True) / (1 - mask_clone).mean(dim=(2, 3), keepdim=True)
inpainted_stds = (((inpainted_images - inpainted_means).pow(2) * (1 - mask_clone)).mean(dim=(2, 3), keepdim=True) / (1 - mask_clone).mean(dim=(2, 3), keepdim=True)).sqrt()
inpainted_images = (inpainted_images - inpainted_means) / inpainted_stds * img_stds + img_means
inpainted_images = inpainted_images * masks + imgs * (1 - masks)
inpainted_image = inpainted_images * 255.0
...
```
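To act on the tip about blurring the mask for a smoother transition, one possible approach (an assumption for illustration, not code from this repository) is to soften the binary mask before the final composite, for example with a simple box blur via `avg_pool2d` so no extra dependencies are needed:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
imgs = torch.rand(1, 3, 64, 64)       # original image (illustrative random data)
inpainted = torch.rand(1, 3, 64, 64)  # stand-in for the equalized inpainting result
masks = torch.zeros(1, 1, 64, 64)
masks[:, :, 16:48, 16:48] = 1.0       # hard 0/1 mask

k = 7  # odd kernel size; larger values give a wider, softer transition (tune to taste)
soft = F.avg_pool2d(masks, kernel_size=k, stride=1, padding=k // 2)  # box blur, same spatial size

# Composite with the soft mask: pixels near the mask edge become a weighted
# mix of the inpainted result and the original image instead of a hard cut.
blended = inpainted * soft + imgs * (1 - soft)
```

A Gaussian blur (e.g. via `torchvision.transforms.functional.gaussian_blur`) would give an even smoother falloff; the box blur is just the dependency-free option.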
Model
Which model are you using? lama anime
Describe the bug
The lama anime model produces this output; the same issue has been reported in other software that uses IOPaint, see VoxelCubes/PanelCleaner#121.
Screenshots
This is my output:
System Info
Software version used