This repository has been archived by the owner on Feb 23, 2023. It is now read-only.

How does Bayesian Optimization work when the objective function is binary? #169
dli1 opened this issue Mar 13, 2018 · 2 comments


dli1 commented Mar 13, 2018

If the objective function is binary, for example, the objective function is either relevance (1) or non-relevance (0), how does Bayesian Optimization work in this case?

Say I have the predictive mean (mu) and variance (sigma). Since the range of mu is (-infinity, +infinity), not {0, 1}, is it reasonable to use mu and sigma to construct a traditional acquisition function like EI or PI?

If not, suppose I turn the GP output (mu and sigma) into a class probability using a link function (e.g. 1/(1+exp(-mu))); then the new output and the original sigma are no longer on the same scale. In that case, how should the acquisition function be constructed?
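One common way around the scale mismatch described above is to keep the acquisition function on the latent GP scale and only use the link function when a class probability is needed. The sketch below (my own illustration, not an API from this library) uses a probit link, for which the expected class probability under a Gaussian latent posterior has a closed form, E[Phi(f)] = Phi(mu / sqrt(1 + sigma^2)), and computes a PI-style acquisition directly on the latent mean and variance:

```python
import numpy as np
from scipy.stats import norm

def latent_to_probability(mu, sigma2):
    """Expected class probability E[Phi(f)] for latent f ~ N(mu, sigma2).

    This is the exact Gaussian integral of the probit link, so the
    predictive uncertainty sigma2 is folded into the probability
    rather than being compared against it on a different scale.
    """
    return norm.cdf(mu / np.sqrt(1.0 + sigma2))

def probability_of_improvement(mu, sigma2, best_p):
    """PI-style acquisition computed on the latent scale.

    The current best class probability is mapped back to the latent
    scale with the inverse link (probit), so mu, sigma, and the
    threshold are all commensurable.
    """
    sigma = np.sqrt(sigma2)
    threshold = norm.ppf(best_p)  # inverse probit of the incumbent
    return norm.cdf((mu - threshold) / sigma)

# Hypothetical latent GP predictions at three candidate points
mu = np.array([0.0, 1.0, -0.5])
sigma2 = np.array([1.0, 0.25, 2.0])

p = latent_to_probability(mu, sigma2)              # probabilities in (0, 1)
acq = probability_of_improvement(mu, sigma2, 0.6)  # rank candidates by acq
```

With a logistic link instead of probit there is no exact closed form, but the same probit expression is a standard approximation after rescaling mu and sigma2. Either way the key point is that the acquisition is built from quantities on a single (latent) scale.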

@apaleyes (Collaborator)

I think we have a model to support binary objectives implemented somewhere. Maybe it is worth publishing it as a notebook example. @javiergonzalezh thoughts?

@zjensen262

Was this issue ever addressed? I am also interested in using this package with a binary objective function.


4 participants