
Solid gray image as a result #149

Open
guillefar opened this issue Mar 3, 2016 · 14 comments

@guillefar

So, after many hours of running the software, I just got 10 images like this:

[attached image: neuralstyle_test]

The command I run was:

./Software/Git/neural-style/neural_style.lua -style_image ./Imágenes/style.jpg -content_image ./Imágenes/foto.png -output_image ./Imágenes/NeuralStyle_test -model_file ./Software/Git/neural-style/models/vgg_normalised.caffemodel -proto_file ./Software/Git/neural-style/models/VGG_ILSVRC_19_layers_deploy.prototxt -gpu -1 -num_iterations 1000 -seed 123 -content_layers relu0,relu3,relu7,relu12 -style_layers relu0,relu3,relu7,relu12 -content_weight 10 -style_weight 1000 -image_size 512 -optimizer adam

Any ideas?

@jcjohnson
Owner

Try turning off TV regularization: -tv_weight 0

@guillefar
Author

Ok, will try, thanks.

@Gordakan

Gordakan commented Mar 5, 2016

Hi.

I got the same result. The input image is the same size as the style image (468x600). I tried with only 3 iterations (running on CPU) just to give it a try.

Here is the command with the arguments used:
th neural_style.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image photo.jpg -output_image photo_styled.jpg -model_file models/vgg_normalised.caffemodel -proto_file models/train_val.prototxt -gpu -1 -backend clnn -num_iterations 3 -seed 123 -content_layers relu0,relu3,relu7,relu12 -style_layers relu0,relu3,relu7,relu12 -content_weight 10 -style_weight 100 -image_size 256 -optimizer adam -print_iter 1 -tv_weight 0

@Flyingvette

Three iterations is not even close to enough to do anything. You can also flag it to use the source image as the starting point rather than a blank field.


@shessenauer

shessenauer commented Apr 26, 2016

@Flyingvette How do I flag it to use the source image? Would I do -init /directoryToImage/image.png?

@htoyryla

Sam Hessenauer [email protected] wrote on 26 Apr 2016 at 10:34:

@Flyingvette How do I tag it as a source image?

-init image

As described in the Readme.md:

-init: Method for generating the generated image; one of random or image. Default is random which uses a noise initialization as in the paper; image initializes with the content image.
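For example, reusing the file names already in this thread (adjust paths to your own), the flag is simply appended to the normal invocation:

```
# Start optimization from the content image instead of random noise.
th neural_style.lua \
  -content_image photo.jpg \
  -style_image examples/inputs/picasso_selfport1907.jpg \
  -output_image photo_styled.jpg \
  -init image
```

With -init image the very first iterations already resemble the content photo, which also makes a short low-iteration test run easier to judge.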

@StryxZilla

Has anyone ever gotten this working? Even with tv_weight at 0, I get images that are largely grey versions of the content image at 1k iterations.

@shaharz

shaharz commented Jan 20, 2017

For the record, I noticed you're using vgg_normalised.caffemodel, which produces a blank image for me as well. On the other hand, VGG_ILSVRC_19_layers.caffemodel works just fine!

@winnyec

winnyec commented Jan 22, 2017

It appears to me that the issue is that the number of layers should be taken from #cnn.modules, not #cnn, when using the nn backend (in particular, with -gpu -1).

@htoyryla

htoyryla commented Jan 23, 2017

I used neural-style extensively with only nn on CPU for almost a year and never experienced any problem with #cnn. I also experimented with many modifications to the code.

So while loadcaffe sometimes appears to return a model object where #cnn does not give the number of layers, the problem surely does not depend merely on nn and CPU.

The #cnn question has also been discussed in #338

PS. You may still have a point. When I try

```lua
require 'loadcaffe'
cnn = loadcaffe.load("models/VGG_ILSVRC_19_layers_deploy.prototxt", "models/VGG_ILSVRC_19_layers.caffemodel", "nn"):float()
print(#cnn)
c = cnn.modules
print(#c)
```

both give the number of layers correctly, but otherwise cnn and cnn.modules are not identical. The model structure can be printed from cnn

```
th> print(cnn)
nn.Sequential {
  [input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> (15) -> (16) -> (17) -> (18) -> (19) -> (20) -> (21) -> (22) -> (23) -> (24) -> (25) -> (26) -> (27) -> (28) -> (29) -> (30) -> (31) -> (32) -> (33) -> (34) -> (35) -> (36) -> (37) -> (38) -> (39) -> (40) -> (41) -> (42) -> (43) -> (44) -> (45) -> (46) -> output]
  (1): nn.SpatialConvolution(3 -> 64, 3x3, 1,1, 1,1)
  (2): nn.ReLU
  (3): nn.SpatialConvolution(64 -> 64, 3x3, 1,1, 1,1)
  (4): nn.ReLU
 ...
```

but the actual layer stack really is under cnn.modules:

```
th> print(cnn.modules)
{
  1 :
    {
      padW : 1
      nInputPlane : 3
      output : FloatTensor - empty
      name : "conv1_1"
      _type : "torch.FloatTensor"
      dH : 1
      dW : 1
      nOutputPlane : 64
      padH : 1
      kH : 3
      weight : FloatTensor - size: 64x3x3x3
      gradWeight : FloatTensor - size: 64x3x3x3
      gradInput : FloatTensor - empty
      kW : 3
      bias : FloatTensor - size: 64
      gradBias : FloatTensor - size: 64
    }
  2 :
    {
      inplace : true
      threshold : 0
      _type : "torch.FloatTensor"
      output : FloatTensor - empty
      gradInput : FloatTensor - empty
      name : "relu1_1"
      val : 0
    }
...
```

So while #cnn appears to work for many, using #cnn.modules is probably safer.
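A minimal sketch of the corresponding workaround in neural_style.lua (illustrative only; the loop body in the real script is more involved, but the change is just the length expression):

```lua
-- Wherever the script walks the loaded model with the length operator,
-- e.g. "for i = 1, #cnn do", count the explicit modules table instead:
for i = 1, #cnn.modules do
  local layer = cnn.modules[i]  -- the actual layer stack lives here
  -- ... existing per-layer logic unchanged ...
end
```

On Torch builds where the nn.Sequential length metamethod misbehaves, #cnn can come back as 0, so the layer loop never runs and the output image is never shaped by the network, which matches the solid grey result reported above.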

@winnyec

winnyec commented Jan 23, 2017

Sorry, I am not an expert at all: I was just looking for this solid grey error, and since nothing I found here worked for me, I started debugging. I found the key to this fix in the nn package, hence my assumption that the difference might lie there, given that things apparently work for everybody else in most cases.

Anyway, it is probably an Arch thing, since I also have Torch and the packages from the AUR, as in #338.

@syz825211943

@winnyec How to fix the nn package?

@winnyec

winnyec commented Sep 4, 2018

@syz825211943: The computer on which I did this is packed up for the time being, so I cannot check exactly what I did. But based on my comment above, try editing #cnn into #cnn.modules. Apparently #cnn should also work the same way, but with the versions Arch has (had?), only the explicit #cnn.modules works.

@ballerburg9005

I am on Arch Linux and just used the AUR packages.

It works if you just change #cnn to #cnn.modules in neural_style.lua.
