What's our type promise? #135
Comments
I think the type promise should follow …
Hi @ragulpr, after reading your comment on the design doc more carefully, I believe there is merely a misunderstanding of the term 'scalar' in PyTorch. In PyTorch, 'scalar' is a concept orthogonal to 'Variable': a Tensor or Variable is a scalar iff it has empty shape, i.e. …
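For concreteness, a minimal sketch of the "scalar means empty shape" point, written against present-day PyTorch (where Tensor and Variable have since been merged); the values and variable names are just illustrative:

```python
import torch

# A "scalar" in this sense is a tensor with empty shape (0 dimensions),
# regardless of whether it participates in autograd.
x = torch.tensor(3.0)
print(x.shape)   # torch.Size([]) -- empty shape, i.e. a scalar
print(x.dim())   # 0

# A 1-element 1-d tensor is *not* a scalar: its shape is (1,), not ().
y = torch.tensor([3.0])
print(y.shape)   # torch.Size([1])
```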
Hi and thanks @fritzo for the answer. I don't see how …
I think I went into too much detail; in short, I'm having a hard time understanding why there's so much effort keeping alive the possibility of initializing a distribution using … It feels hard to keep track of and test for all the errors that can occur, and I'm wondering if it's possible to simplify this. My use for … [1] I noticed now that we …
Also, I think I see more use for being able to infer the current precision/storage of parameters than for casting them to another precision in order to abstract …
As we started the discussion on Slack, and since the design doc isn't up to date on this, I thought it would be good to discuss it here for future reference.
Currently in the design doc: …
So it feels like it's not settled, or I'm just not keeping up.
Problems
- `validate_log_prob_arg` / `broadcast_call` is potentially expensive (Implement broadcast_all() in C/C++, #80). See code like `math.log(self.scale) if isinstance(self.scale, Number) else self.scale.log()`
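For context, a rough sketch of the kind of branching the current no-promise policy forces inside a distribution. The `scale` parameter and the entropy formula follow the Normal example; the function below is illustrative, not the actual implementation:

```python
import math
from numbers import Number

import torch


def normal_entropy(scale):
    # Every access to a parameter has to branch on whether it is a plain
    # Python Number or a Tensor/Variable -- the pattern quoted above.
    log_scale = math.log(scale) if isinstance(scale, Number) else scale.log()
    return 0.5 + 0.5 * math.log(2 * math.pi) + log_scale


# Both call styles must work under the current mixed-type policy:
print(normal_entropy(2.0))                 # Python float in, float out
print(normal_entropy(torch.tensor(2.0)))   # tensor in, tensor out
```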
Possible reasons
- `log_prob` only accepts tensor xor `Variable` (…)

Possible solutions
I'm thinking from the assumption that we'll soon have deprecated tensors in favor of Variables and 0d Variables. Since mixing is a hurdle, can't we just decide something like:
- parameters are `Variable`?
- return values are `Variable`?

This is mostly the case currently anyway.
We have a good entry point for casting, as parameters will be upcasted and reshaped using `broadcast_all`, and arguments to (at least) `log_prob` will have to pass `validate_log_prob_arg`. With a strong promise on input and return type this could be made fast (as proposed in #80). If this is made fast before we've figured this out, we might be stuck with it.
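To make the proposal concrete, here is a hedged sketch of what a hard "everything is a tensor/Variable" promise could look like. `as_tensor_promise` and `SimpleNormal` are hypothetical names invented for illustration; the real entry points discussed above are `broadcast_all` and `validate_log_prob_arg`:

```python
import math
from numbers import Number

import torch


def as_tensor_promise(value):
    # Hypothetical entry-point cast: Numbers become 0-d tensors, tensors
    # pass through, so everything downstream sees a single type.
    return torch.tensor(float(value)) if isinstance(value, Number) else value


class SimpleNormal:
    """Illustrative only, not the real torch.distributions.Normal."""

    def __init__(self, loc, scale):
        # Cast once at construction; after this the parameters are always
        # tensors, so methods never branch on type.
        self.loc = as_tensor_promise(loc)
        self.scale = as_tensor_promise(scale)

    def log_prob(self, value):
        value = as_tensor_promise(value)
        var = self.scale ** 2
        return (-((value - self.loc) ** 2) / (2 * var)
                - self.scale.log()
                - 0.5 * math.log(2 * math.pi))


d = SimpleNormal(0.0, 1.0)
print(d.log_prob(torch.tensor([0.0, 1.0])))  # tensor out, no isinstance checks inside
```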
I also like how the `nn` module works, and I expect to be able to do things like calling `.double()` or `.float()` on a distribution. This is easily implemented in `Distribution` if we assume programmatic access to all the parameters of a distribution as a tuple or dict, as discussed in #130. By clearing up types now, I imagine that the transition will require fewer ifs & elses.
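Finally, a hedged sketch of the nn-style casting idea: if a distribution exposes its parameters programmatically (here a `params` dict, in the spirit of #130), `.float()`/`.double()` become one small shared method. All class and attribute names below are hypothetical, not the actual `Distribution` API:

```python
import torch


class CastableDistribution:
    # Hypothetical base class: cast every tensor-valued parameter in place.
    def _apply(self, fn):
        for name, value in self.params.items():
            if torch.is_tensor(value):
                self.params[name] = fn(value)
        return self

    def float(self):
        return self._apply(lambda t: t.float())

    def double(self):
        return self._apply(lambda t: t.double())


class ToyNormal(CastableDistribution):
    def __init__(self, loc, scale):
        self.params = {"loc": torch.as_tensor(loc), "scale": torch.as_tensor(scale)}


d = ToyNormal(0.0, 1.0).double()
print(d.params["loc"].dtype)  # torch.float64
```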