
Dropout demo #843

Merged: 4 commits merged into lisa-lab:master on May 1, 2014
Conversation

@memimo (Member) commented Apr 24, 2014

fixes #193

@nouiz (Member) commented Apr 25, 2014

Ideally, you should add some tests to make sure it doesn't get broken in the future.

Fred

On Thu, Apr 24, 2014 at 3:10 PM, Mehdi Mirza wrote:

fixes #193

You can merge this Pull Request by running

    git pull https://github.com/memimo/pylearn dropout

Or view, comment on, or merge it at #843.

Commit Summary

  • add all files
  • text fix

File Changes

  • A pylearn2/scripts/papers/dropout/README (7)
  • A pylearn2/scripts/papers/dropout/mnist.yaml (70)
  • A pylearn2/scripts/papers/dropout/mnist_valid.yaml (88)

@bouthilx (Member) commented:

It's weird, I ran it on eos5 and I got different results. In the first phase, the best result I get is 1.15% on the validation set after 122 epochs, and training stops after 223 epochs with no improvement. In the second phase, I get 1.28% validation error on the test set, after training for 122 epochs.

@memimo (Member, Author) commented Apr 25, 2014

I'm reporting test error here, not validation error. But the fact that it stopped after 223 epochs is weird; it's supposed to stop at epoch 440. I just ran it yesterday with a freshly installed pylearn. And the first script was written by @dwf and already tested by him, so I doubt there is anything wrong with it.

@bouthilx (Member) commented:

Could it really differ that much from one computer to another?

@memimo (Member, Author) commented Apr 25, 2014

Maybe it's because of the pylearn and theano versions? But a different computer should not make any difference in the result. I also ran it using the GPU. If the GPU version behaves differently than the CPU version, then we have an inconsistency in our code for GPU vs CPU.

@bouthilx (Member) commented:

Hmm, my pylearn version is up to date, but maybe not theano. I'll update it and try again. I ran it on GPU too.

@goodfeli (Contributor) commented:

The model of GPU can change the results because different GPUs have different numbers of cores, so they use different orders of operations for things like hierarchical summation.
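To make the point above concrete, here is a small illustration (not from the PR, and in plain Python rather than GPU code) of how floating-point summation depends on the order of operations:

```python
# Floating-point addition is not associative: summing the same values in a
# different order, as GPUs with different core counts do in their parallel
# reductions, can change the result.
big = 2.0 ** 53  # at this magnitude, the spacing between doubles is 2.0

vals = [big, 1.0, -big] * 1000

order_a = 0.0
for v in vals:
    order_a += v  # each 1.0 is absorbed by the huge running sum and lost

order_b = 0.0
for v in sorted(vals, key=abs):  # accumulate small magnitudes first
    order_b += v  # the 1.0s are summed before the big terms cancel

print(order_a)  # 0.0
print(order_b)  # 1000.0
```

The true sum is 1000.0, but the left-to-right order loses every small term, which is why bit-identical results across different GPUs (or GPU vs CPU) should not be expected.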

@bouthilx (Member) commented:

@memimo Which computer and which GPU did you use? I will run it on the exact same computer with up-to-date pylearn and theano and see if I can get similar results.

@memimo (Member, Author) commented Apr 25, 2014

banrey0 GPU 3

@bouthilx (Member) commented:

Running it again on eos5 with theano up to date and MonitorBased.N = 300 rather than 100 gives me the exact same results as you report. I'll try it again with N = 100, but anyway, I'm happy now. :)
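For readers unfamiliar with the termination criterion being discussed: MonitorBased.N is, roughly, a patience parameter. A hedged sketch of the rule (the function name and details here are illustrative, not pylearn2's actual implementation, which also supports a proportional-decrease threshold):

```python
def stopping_epoch(errors, n=100):
    """Epoch at which training halts: the monitored error has not
    improved on its best value for n consecutive epochs.

    Illustrative sketch only, not pylearn2's MonitorBased code.
    """
    best = float("inf")
    best_epoch = 0
    for epoch, err in enumerate(errors):
        if err < best:
            best, best_epoch = err, epoch  # new best: reset patience
        elif epoch - best_epoch >= n:
            return epoch  # patience exhausted: stop here
    return len(errors) - 1  # ran out of epochs without triggering

# A run whose error bottoms out at epoch 3 stops 3 epochs later with n=3.
print(stopping_epoch([1.0, 0.9, 0.8, 0.5, 0.6, 0.6, 0.6, 0.6], n=3))  # 6
```

A larger N lets the run continue longer after the last improvement, which is consistent with the different stopping epochs (223 vs 440) reported in this thread.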

@bouthilx (Member) commented:

@memimo There seems to be a problem with limited_epoch_train (yaml_file_execution) (#849). SkipTest is always raised, even when we use dummy data. I'm afraid those unit tests will always be skipped. I propose we merge this PR but add this problem to ticket #569.

@memimo (Member, Author) commented Apr 29, 2014

SkipTest is raised if PYLEARN2_DATA_PATH is not set, which is the case on Travis. But it runs fine on the lab computers, and it should be fine in the nightly buildbot.
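For context, this is the usual pattern for such dataset-dependent tests (a sketch with assumed names, not the actual pylearn2 test code): the test bails out with SkipTest when the data path is not configured, so CI machines without the datasets skip it silently.

```python
import os
import unittest


class TestDropoutDemo(unittest.TestCase):
    """Sketch of a dataset-dependent test; class and method names are
    illustrative, not taken from the pylearn2 test suite."""

    def test_mnist_yaml_runs(self):
        if "PYLEARN2_DATA_PATH" not in os.environ:
            # On Travis this variable is unset, so the test is skipped;
            # lab machines and the nightly buildbot define it and run it.
            raise unittest.SkipTest("PYLEARN2_DATA_PATH is not set")
        # ... here one would load mnist.yaml and train for a few epochs ...
```

Runners report such tests as "skipped" rather than "failed", which is exactly the blind spot discussed below: a green build does not prove the skipped tests pass.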

@bouthilx (Member) commented:

But then commits might be merged before we know some unit tests fail?

@memimo (Member, Author) commented Apr 29, 2014

Yeah, that's the general issue with all the skip functions. The only solution is for PR authors to test them locally and confirm that they pass. And we should check the nightly buildbot log and fix anything that is broken after the merge.

@bouthilx (Member) commented:

Where does it test whether PYLEARN2_DATA_PATH is defined? I see no reason to raise SkipTest when we feed the model with dummy data.

memimo added a commit that referenced this pull request May 1, 2014
@memimo memimo merged commit 4016b7a into lisa-lab:master May 1, 2014
Development

Successfully merging this pull request may close these issues.

Reproduce Geoff Hinton's MNIST dropout results
4 participants