Dropout demo #843
Conversation
Ideally, you should add some tests to make sure it doesn't get broken in the future.

Fred

On Thu, Apr 24, 2014 at 3:10 PM, Mehdi Mirza [email protected] wrote:

It's weird, I ran it on eos5 and I got different results. In the first phase, the best result I get is 1.15% on the validation set after 122 epochs, and the training stops after 223 epochs with no improvement. In the second phase, I get 1.28% validation error on the test set, after training for 122 epochs.
I'm reporting test error here, not valid. But the fact that it stopped after 223 epochs is weird; it's supposed to stop at epoch 440. I just ran it yesterday with a freshly installed pylearn. And the first script was written by @dwf and already tested by him, so I doubt there is anything wrong with it.
Could the results really be so different from one computer to another?
Maybe it's because of the pylearn and theano versions? But a different computer should not make any difference in the results. I also ran it using the GPU. If the GPU version gives different results than the CPU version, then we have an inconsistency in our code for GPU vs CPU.
Hmm, my pylearn version is up to date, but maybe not theano. I'll update it and try again. I ran it on GPU too. |
The GPU model can change the results, because different GPUs have different numbers of cores and therefore use different orders of operations for things like hierarchical summation.
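To illustrate the point (this example is not from the thread): floating-point addition is not associative, so a reduction that accumulates the same terms in a different order can round to a different result. A minimal Python sketch in float32:

```python
import numpy as np

# Floating-point addition is not associative: the same three terms
# summed in a different order give different float32 results.
a = np.float32(1e8)
b = np.float32(-1e8)
c = np.float32(1.0)

print((a + b) + c)  # 1.0: a and b cancel exactly, then c is added
print(a + (b + c))  # 0.0: c is lost when rounded into b's magnitude
```

A GPU's parallel tree reduction groups terms differently than a sequential CPU loop, so small discrepancies like this are expected even with identical code.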
@memimo Which computer and which GPU did you use? I will run it on the exact same computer with up-to-date pylearn and theano and see if I can get similar results.
banrey0 GPU 3 |
Running it again on eos5 with theano up to date and MonitorBased.N = 300 rather than 100 gives me the exact same results as you reported. I'll try it again with N=100, but anyway, I'm happy now. :)
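For reference, the setting being tuned here is the N parameter of pylearn2's MonitorBased termination criterion, which stops training once the monitored channel has gone N epochs without sufficient improvement. A sketch of the two settings compared above (the channel name is an assumption):

```python
from pylearn2.termination_criteria import MonitorBased

# Stop once the monitored channel has gone N epochs without improving
# by at least prop_decrease (relative). The thread compares N=100
# against N=300; 'valid_y_misclass' is an assumed channel name.
criterion = MonitorBased(channel_name='valid_y_misclass',
                         prop_decrease=0.,
                         N=100)
```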
SkipTest is raised if PYLEARN2_DATA_PATH is not set, which is the case for Travis. But it runs fine on the lab computers, and it should be fine in the nightly buildbot.
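For context, the skip pattern being discussed looks roughly like this (a sketch only; the test name is hypothetical, and pylearn2 ships a helper along these lines in pylearn2.testing.skip):

```python
import os
from nose.plugins.skip import SkipTest

def test_dropout_demo():
    # Skip rather than fail when the dataset path is not configured,
    # which is the case on Travis.
    if 'PYLEARN2_DATA_PATH' not in os.environ:
        raise SkipTest("PYLEARN2_DATA_PATH is not set")
    # ... load the demo's YAML and run the train loop here ...
```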
But then commits might be merged before we know some unit tests fail? |
Yeah, that's the general issue with all the skip functions. The only solution is for PR authors to test them locally and confirm that they pass. And we should check the nightly buildbot log to fix anything that breaks after the merge.
Where does it check whether PYLEARN2_DATA_PATH is defined? I see no reason for raising SkipTest when we could feed the model dummy data instead.
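The alternative suggested here would be to build a small synthetic dataset inside the test itself, so no data path is needed at all. A sketch with assumed shapes (random MNIST-sized inputs and one-hot targets):

```python
import numpy as np
from pylearn2.datasets.dense_design_matrix import DenseDesignMatrix

# Synthetic stand-in for the real dataset: random inputs and one-hot
# targets with MNIST-like shapes (20 examples; all sizes are assumptions).
rng = np.random.RandomState(0)
X = rng.rand(20, 784).astype('float32')
y = np.zeros((20, 10), dtype='float32')
y[np.arange(20), rng.randint(0, 10, 20)] = 1.

dummy = DenseDesignMatrix(X=X, y=y)
```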
fixes #193