
undefined function or variable 'dagnn.Square' #10

Open
sde123 opened this issue Sep 21, 2017 · 6 comments


sde123 commented Sep 21, 2017

@layumi
Hello, I am running demo_heatmap.m, but I got an error:

undefined function or variable 'dagnn.Square'

I have installed matconvnet beta23 with MATLAB R2014a.
Could you please tell me what is wrong?
Thank you.

layumi (Owner) commented Sep 22, 2017

Hi @sde123,
I added some layers to MatConvNet and included them in this repo.
In fact, you do not need to install the original MatConvNet: all necessary files are included in this repo, so you can just download it and run the code.
More information can be found in the README.
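
A minimal setup sketch (the clone-directory name is taken from the error paths below and the layout is assumed to follow upstream MatConvNet; adapt the paths to your machine):

cd 2016_person_re-ID            % assumed clone directory of this repo
run matlab/vl_setupnn ;         % put the bundled MatConvNet (which includes dagnn.Square) on the MATLAB path
demo_heatmap ;                  % the custom layers should now resolve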

sde123 (Author) commented Sep 22, 2017

@layumi
Hello, thank you.
But when I run gpu_compile.m, I got an error:

/home/dai/code/person_reidentification/5/Untitled Folder/2016_person_re-ID-master/matlab/src/bits/impl/bilinearsampler_gpu.cu(247): warning: variable "backward" was declared but never referenced
          detected during instantiation of "vl::ErrorCode vl::impl::bilinearsampler<vl::VLDT_GPU, type>::forward(vl::Context &, type *, const type *, const type *, size_t, size_t, size_t, size_t, size_t, size_t, size_t) [with type=float]" 
(364): here

/home/dai/code/person_reidentification/5/Untitled Folder/2016_person_re-ID-master/matlab/src/bits/impl/bilinearsampler_gpu.cu(247): warning: variable "backward" was declared but never referenced

Could you please tell me what is wrong?
I am on Ubuntu 14.04 with MATLAB R2014a.
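
(For reference, the quoted lines are nvcc warnings rather than hard errors. A typical MatConvNet GPU build call looks roughly like the sketch below; the CUDA path is an assumption, and gpu_compile.m in this repo may already wrap something similar.)

run matlab/vl_setupnn ;
vl_compilenn('enableGpu', true, ...            % build the GPU kernels
             'cudaRoot', '/usr/local/cuda') ;  % assumed CUDA install path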

layumi (Owner) commented Sep 22, 2017

I haven't met such an error.
Would you like to provide the whole log?

sde123 (Author) commented Sep 23, 2017

@layumi
Thank you.
When I run train_id_net_res_2stream.m, because I only have one GPU, I added opts.gpus = 1 in cnn_train_dag.m, but I got this error:

train: epoch 01:   1/127:Error using  + 
Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If
the problem persists, reset the GPU by calling 'gpuDevice(1)'.

Error in dagnn.Sum/forward (line 15)
        outputs{1} = outputs{1} + inputs{k} ;

Error in dagnn.Layer/forwardAdvanced (line 85)
      outputs = obj.forward(inputs, {net.params(par).value}) ;

Error in dagnn.DagNN/eval (line 91)
  obj.layers(l).block.forwardAdvanced(obj.layers(l)) ;

Error in cnn_train_dag>processEpoch (line 223)
      net.eval(inputs, params.derOutputs, 'holdOn', s < params.numSubBatches) ;

Error in cnn_train_dag (line 91)
    [net, state] = processEpoch(net, state, params, 'train',opts) ;

Error in train_id_net_res_2stream (line 34)
[net,info] = cnn_train_dag(net, imdb, @getBatch,opts) ;

Could you please tell me how to solve it?
Thank you.
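
(As the error message itself suggests, the GPU state can be inspected and reset from MATLAB with the Parallel Computing Toolbox; a minimal sketch:)

disp(gpuDevice()) ;      % print the current GPU's properties, including memory
reset(gpuDevice(1)) ;    % release GPU memory held by MATLAB before retrying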

dinggd commented Sep 23, 2017 via email

layumi (Owner) commented Sep 23, 2017

Thank you @gddingcs.
net.conserveMemory = true; also helps. (I have turned it on in the code.)
So @sde123, you can try using a smaller batch size first.
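
A minimal sketch of that suggestion, assuming the training options follow the standard cnn_train_dag fields (the exact lines and default values in train_id_net_res_2stream.m may differ):

opts.gpus = 1 ;               % single GPU, as above
opts.batchSize = 8 ;          % assumed value: smaller than the default batch size
net.conserveMemory = true ;   % already enabled in this repo's code
[net, info] = cnn_train_dag(net, imdb, @getBatch, opts) ;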
