This repository has been archived by the owner on Feb 23, 2023. It is now read-only.
If the objective function is binary, i.e. each evaluation returns either relevance (1) or non-relevance (0), how does Bayesian optimization work in this case?
I have the GP's predictive mean (mu) and variance (sigma), but mu ranges over (-infinity, +infinity), not {0, 1}. Is it reasonable to use mu and sigma to construct a traditional acquisition function such as EI or PI?
If not, suppose I turn the GP output into a class probability using a link function (e.g. 1/(1+exp(-mu))). Then the transformed mean and the original sigma are no longer on the same scale. How should the acquisition function be constructed in this case?
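For context, one common approach (a sketch of standard GP-classification practice, not necessarily what any particular library implements) is to build the acquisition function on the *latent* function, where mu and sigma do share a scale, and apply the link only when a class probability is needed for reporting. With a probit link, the latent Gaussian can be marginalised in closed form. The mu/sigma values below are made up for illustration:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical latent GP predictions at three candidate points.
# mu and sigma live on the latent scale, i.e. mu ranges over all reals.
mu = np.array([-1.2, 0.3, 2.0])
sigma = np.array([0.8, 1.5, 0.4])

# Class probability via a probit link, marginalising the latent Gaussian:
# p(y=1) = Phi(mu / sqrt(1 + sigma^2)).
p = norm.cdf(mu / np.sqrt(1.0 + sigma**2))

# Acquisition constructed directly on the latent mu/sigma, where the two
# quantities are on the same scale. Example: expected improvement over
# the best latent mean among the candidates.
f_best = mu.max()
z = (mu - f_best) / sigma
ei_latent = (mu - f_best) * norm.cdf(z) + sigma * norm.pdf(z)
```

Because monotone links preserve ordering, maximising EI on the latent function targets the same optimum as maximising the class probability, while sidestepping the scale mismatch between the squashed mean and the original sigma.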
I think we have a model to support binary objectives implemented somewhere. Maybe it is worth publishing it as a notebook example. @javiergonzalezh thoughts?