Solid gray image as a result #149
Try turning off TV regularization.
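In neural-style, total-variation regularization is controlled by the -tv_weight flag (default 1e-3), and setting it to 0 disables it. A minimal sketch of such an invocation (the image paths are illustrative):

```shell
# Disable total-variation regularization entirely; paths are illustrative.
th neural_style.lua \
  -content_image content.jpg \
  -style_image style.jpg \
  -tv_weight 0
```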
Ok, will try, thanks.
Hi. I got the same result. The input image is the same size as the style image (468x600). I tried with only 3 iterations (running on CPU) just to give it a try. Here is the command with the arguments used:
Not even close to enough iterations to do anything. You can flag it to use the source image as the starting point rather than a blank field.
@Flyingvette How do I flag it to use the source image? Would I do -init /directoryToImage/image.png?
Sam Hesssenauer [email protected] wrote on 26 April 2016 at 10:34:
-init image, as described in the README.md: -init: Method for generating the generated image; one of random or image. Default is random, which uses a noise initialization as in the paper; image initializes with the content image.
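For instance, an invocation that starts the optimization from the content image instead of noise might look like this (image paths are illustrative):

```shell
# Initialize the optimization from the content image rather than random noise.
# Paths are illustrative.
th neural_style.lua \
  -content_image content.jpg \
  -style_image style.jpg \
  -init image
```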
Has anyone ever gotten this working? Even with tv_weight at 0, I get images that are largely grey versions of the content image at 1,000 iterations.
For the record, I noticed you're using …
It appears to me that the issue is that the size of the layer stack is not #cnn but #cnn.modules when using the nn backend (in particular, when -gpu -1).
I used neural-style extensively with only nn on CPU for almost a year, and never experienced any problem with #cnn. I also experimented with many modifications to the code. So while loadcaffe sometimes appears to return a model object where #cnn does not give the number of layers, it surely does not depend merely on nn and CPU. The #cnn question has also been discussed in #338. PS. You may still have a point. When I try
both give the number of layers correctly, but otherwise cnn and cnn.modules are not identical. The model structure can be printed from cnn,
but the actual layer stack really is under cnn.modules:
So while #cnn appears to work for many, using #cnn.modules is probably safer.
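The distinction can be sketched in Torch Lua. This is an illustrative toy model, not the VGG network loadcaffe returns; it assumes the usual nn.Sequential container:

```lua
-- Illustrative sketch (Torch/nn assumed): a Sequential container keeps its
-- layers in the .modules table, so #cnn.modules is always the layer count.
local nn = require 'nn'

local cnn = nn.Sequential()
cnn:add(nn.SpatialConvolution(3, 64, 3, 3))
cnn:add(nn.ReLU())

print(#cnn.modules)   -- 2: the length of the underlying layer table
-- With recent nn versions #cnn also returns 2 via a metamethod, but some
-- package versions lack it, in which case #cnn silently evaluates to 0
-- and a "for i = 1, #cnn do" loop never runs.
for i = 1, #cnn.modules do
  print(cnn.modules[i])
end
```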
Sorry, I am not an expert at all: I was just looking for this solid grey error, and since I found no help here that worked for me, I started debugging. I found the key to this fix in the nn package, hence I assumed that might be the difference, given that things apparently work in most cases for everybody else. Anyway, it is probably an Arch thing, since I have Torch and the packages from the AUR too, as in #338.
@winnyec How do I fix the nn package?
@syz825211943: The computer on which I did this is packed up for the time being, so I cannot check what exactly I did. But based on my comment above, try editing #cnn into #cnn.modules. Apparently #cnn should also work the same way, but with the versions Arch has (had?), only the explicit #cnn.modules works.
I am on Arch Linux and just used the AUR packages. It works if you change #cnn into #cnn.modules in neural_style.lua.
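For illustration, that edit can also be applied mechanically with sed. This is a minimal sketch demonstrated on a single sample line rather than the real file; the bracket expression skips matches already followed by a dot, so any existing #cnn.modules references would be left untouched:

```shell
# Rewrite "#cnn" to "#cnn.modules"; the [^.] guard skips occurrences
# that already read "#cnn.modules". Demonstrated on a sample line.
echo 'for i = 1, #cnn do' | sed 's/#cnn\([^.]\)/#cnn.modules\1/g'
# prints: for i = 1, #cnn.modules do
```

If editing neural_style.lua in place with sed -i, back the file up first.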
So, after many hours of running the software, I just got 10 images like this:
The command I ran was:
./Software/Git/neural-style/neural_style.lua -style_image ./Imágenes/style.jpg -content_image ./Imágenes/foto.png -output_image ./Imágenes/NeuralStyle_test -model_file ./Software/Git/neural-style/models/vgg_normalised.caffemodel -proto_file ./Software/Git/neural-style/models/VGG_ILSVRC_19_layers_deploy.prototxt -gpu -1 -num_iterations 1000 -seed 123 -content_layers relu0,relu3,relu7,relu12 -style_layers relu0,relu3,relu7,relu12 -content_weight 10 -style_weight 1000 -image_size 512 -optimizer adam
Any ideas?