Why isn't the error scaled down during Mul's backprop? #1285

Open
singam-sanjay opened this issue Oct 6, 2017 · 0 comments

nn/Mul.lua

Lines 29 to 33 in 8726825

function Mul:updateGradInput(input, gradOutput)
self.gradInput:resizeAs(input):zero()
self.gradInput:add(self.weight[1], gradOutput)
return self.gradInput
end

Why is the error propagated back to the input of the Mul layer multiplied by the scaling factor rather than divided by it?

If the input to a layer with a scaling factor of 2 is [1, 2] and the expected output is [4, 4], the error at the output would be [-2, 0]. Subtracting the scaled-down error ([-1, 0]) from the input would correct the input.
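
For concreteness, here is a minimal sketch of that scenario in Lua/Torch (the weight value 2 and the target [4, 4] follow the example above; setting m.weight[1] directly is only for illustration):

require 'nn'

-- Mul layer whose single weight (the scaling factor) is set to 2.
local m = nn.Mul()
m.weight[1] = 2

local input      = torch.Tensor{1, 2}
local output     = m:forward(input)        -- [2, 4]
local target     = torch.Tensor{4, 4}
local gradOutput = output - target         -- error at the output: [-2, 0]

-- updateGradInput returns weight * gradOutput = [-4, 0],
-- not the scaled-down error [-1, 0] described above.
local gradInput = m:updateGradInput(input, gradOutput)
print(gradInput)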

Please clarify.

@singam-sanjay singam-sanjay changed the title Why isn't the error scaled down during Mul:backprop ? Why isn't the error scaled down during Mul's backprop ? Oct 6, 2017