I watched a presentation from Artomatix where they made some arguments for using a covariance loss instead of a Gram loss. You can flip between the two (I think this is correct) by doing:
```python
import torch
import torch.nn as nn

class GramMatrix(nn.Module):
    def forward(self, input):
        B, C, H, W = input.size()
        x_flat = input.view(C, H * W)  # assumes batch size 1, as in jcjohnson's implementation
        # Add this line for covariance loss:
        x_flat = x_flat - x_flat.mean(1).unsqueeze(1)
        return torch.mm(x_flat, x_flat.t())
```
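For intuition: the covariance version is just the Gram matrix of mean-centered features, so the two differ exactly by the outer product of the per-channel means. A quick sketch verifying that identity (the tensor sizes here are hypothetical, not from the repo):

```python
import torch

torch.manual_seed(0)
feat = torch.randn(1, 4, 8, 8)      # B, C, H, W (arbitrary example sizes)
x = feat.view(4, -1)                # C x (H*W), assuming batch size 1

gram = torch.mm(x, x.t())           # plain Gram matrix

xc = x - x.mean(1, keepdim=True)    # mean-center each channel
cov = torch.mm(xc, xc.t())          # covariance-style Gram matrix

# Identity: cov = gram - n * mu @ mu.T, where mu holds per-channel means
n = x.size(1)
mu = x.mean(1, keepdim=True)
assert torch.allclose(cov, gram - n * torch.mm(mu, mu.t()), atol=1e-3)
```

So the covariance loss discards the first-order (mean activation) statistics and matches only the second-order ones, which may explain the different texture behavior below.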
I didn't experiment with it much, but using the default content/style at 1024, you get these:
Gram Loss:
Covariance Loss:
I wouldn't say it's better, but it's interesting that it adds more texture to the sky. It might have some utility for more textured styles?
Thoughts?
P.S. Nice implementation! Happy there's a true-to-jcjohnson, CUDA 10, PyTorch implementation.