Also allow Y to be continuous in [0, 1]; in that case, optimization would go through loss minimization rather than variational inference. Note that we still want the sigmoid applied as the final head.
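For concreteness, here is a minimal sketch of what this could look like in PyTorch (hypothetical names, not the repo's actual API): a sigmoid head on top of an arbitrary backbone, trained by minimizing binary cross-entropy, which is well defined for continuous targets in [0, 1]:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SigmoidHeadModel(nn.Module):
    """Hypothetical wrapper: any backbone producing logits, with a sigmoid head."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):
        # The sigmoid stays as the final head, so predictions live in (0, 1).
        return torch.sigmoid(self.backbone(x))


def continuous_y_loss(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # BCE is well defined for continuous targets y in [0, 1];
    # it reduces to the usual binary cross-entropy when y is 0/1.
    return F.binary_cross_entropy(model(x), y)
```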
Can you have a look at this change: e6bef13?
Does it work as you expected?
P.S. This is a preliminary solution to illustrate the idea of potential modifications; I haven't tested it yet.
Oh. This looks great. This also means our forward function is fully modular, and we can use this infrastructure to compare across models. I will test this out. I will also add L2 regularization to the objective. That should be fine, right?
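For reference, a sketch of two standard ways to add the L2 term in PyTorch (assuming the `model` and `continuous_y_loss` from the sketch above; the weights are placeholders):

```python
import torch

# Option 1: weight_decay on the optimizer adds an L2 penalty to the gradients
# (coupled L2 regularization; use AdamW for the decoupled variant).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)


# Option 2: add the penalty to the objective explicitly, so the
# regularization term shows up in the reported loss.
def regularized_objective(model, x, y, l2_weight: float = 1e-4):
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    return continuous_y_loss(model, x, y) + l2_weight * l2
```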
@TianyuDu