
💡 [REQUEST] - What is the purpose of out.backward(torch.randn(1, 10)) in neural_networks_tutorial #3017

Open
Lovkush-A opened this issue Aug 28, 2024 · 5 comments
Labels: core, intro, question

Comments

Lovkush-A commented Aug 28, 2024

🚀 Describe the improvement or the new tutorial

In the neural networks tutorial for beginners, we have the following:

Zero the gradient buffers of all parameters and backprops with random gradients:

net.zero_grad()
out.backward(torch.randn(1, 10))

What is the purpose of this? It is not part of standard ML workflows and can be confusing to beginners. (As evidence, I am helping some people learn the basics of ML and I got questions about this line. This is how I found out about it!)
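For context, here is a minimal sketch of what that line is doing, using a plain nn.Linear with a made-up input size as a stand-in for the tutorial's Net: the output is not a scalar, so backward() needs an explicit gradient of the same shape, and the tutorial supplies a random one.

import torch

# Minimal sketch, assuming an nn.Linear with a made-up input size (32) as a
# stand-in for the tutorial's Net; the point is only that the output is a
# non-scalar tensor of shape (1, 10).
net = torch.nn.Linear(32, 10)
out = net(torch.randn(1, 32))

# out.backward() alone would raise "grad can be implicitly created only
# for scalar outputs", so the tutorial passes a random tensor of matching
# shape as the upstream gradient d(loss)/d(out).
net.zero_grad()
out.backward(torch.randn(1, 10))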

If there is no good reason for it, then I suggest:

  • dropping these few lines
  • changing the wording of other parts of the page if needed, e.g. 'at this point we covered... calling backward'

Existing tutorials on this topic

No response

Additional context

No response

cc @subramen @albanD

@svekars added the intro, core, and question labels Aug 28, 2024
albanD (Contributor) commented Aug 29, 2024

I would agree the random gradient can be confusing if you're not already familiar with how backprop works.
out.sum().backward() might be less confusing here?
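For comparison, a minimal sketch of the suggested alternative, reusing the same hypothetical nn.Linear stand-in as above: reducing the output to a scalar first means backward() needs no explicit gradient argument.

import torch

# Same made-up nn.Linear stand-in for the tutorial's Net as in the sketch
# above; summing the non-scalar output gives a scalar, so backward() can
# create the initial gradient implicitly.
net = torch.nn.Linear(32, 10)
net.zero_grad()
out = net(torch.randn(1, 32))
out.sum().backward()  # same effect as out.backward(torch.ones(1, 10))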

Lovkush-A (Author) commented

@albanD What is the downside of just dropping the backward call in this cell? I think out.sum().backward() is also confusing because it is not part of the standard ML workflow.

albanD (Contributor) commented Sep 16, 2024

Not sure; given some of the other wording in that tutorial, it might come from an earlier iteration where backward was discussed at the beginning.

Lovkush-A (Author) commented

Makes sense that it is from a previous version of the tutorial.

Can I make a PR to drop this reference to backward then?

albanD (Contributor) commented Sep 23, 2024

I'll let @svekars help on what we want to do to update the general outline here.
