This repository has been archived by the owner on Feb 23, 2023. It is now read-only.

Optimizing model with random error #107

Open
sverzijl opened this issue Aug 29, 2017 · 7 comments

Comments

@sverzijl

I'm not sure if this is the right place to ask.

I am trying to optimize a model using GPyOpt. My issue is that the model has a small but significant amount of random error, i.e. when I repeat two experiments with the same hyperparameters I get different results. The error is small enough that, over a small region of the hyperparameter space, it can be difficult to identify which point is best. The easy but costly solution is to repeat each experiment multiple times and take the average. That is particularly inefficient when I am repeating experiments whose output is so far from the optimum value that they cannot possibly be the optimum.

My question is: is there a way for GPyOpt to handle this more cleverly? From the reference material (and all the websites that describe the process), the models seem to assume that each point where an experiment has occurred is an exact value with no error, to the point where, if GPyOpt evaluates the same point twice, it concludes the optimum has been found and the code stops.

So is there a way I can get GPyOpt to assume there is always some error in a result, with a view to finding hyperparameters that are optimal on average? Ideally it could revisit a point (or one close to it) whenever it thinks that reducing the error there (by averaging more sample data) will yield a greater improvement than exploring another part of the problem domain.

I imagine this is an issue for any physical experiment where variation can come from sources other than the parameters being refined.
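[Editor's note] A minimal stdlib sketch of the brute-force averaging approach described above, using a made-up quadratic objective with Gaussian noise standing in for experimental error (all names and values here are hypothetical):

```python
import random

def noisy_objective(x, noise_sd=0.1):
    # Hypothetical noisy experiment: the true value is (x - 2)**2,
    # plus Gaussian noise standing in for random experimental error.
    return (x - 2) ** 2 + random.gauss(0.0, noise_sd)

def averaged_objective(x, repeats=200, noise_sd=0.1):
    # The costly fix: repeat the experiment and average, which shrinks
    # the standard error of the estimate by a factor of sqrt(repeats).
    return sum(noisy_objective(x, noise_sd) for _ in range(repeats)) / repeats

random.seed(0)
single = noisy_objective(3.0)            # one noisy draw around the true value 1.0
averaged = averaged_objective(3.0)       # much closer to the true value 1.0
```

The point of the question is exactly that this averaging is wasteful at points that are clearly far from the optimum.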

Thanks in advance,
Simeon

@sverzijl
Author

Since the code uses eps like this:
if not ((self.num_acquisitions < self.max_iter) and (self._distance_last_evaluations() > self.eps)): break

Can I just get away with setting eps = -1 (since self._distance_last_evaluations() can't be less than zero) and gain the desired effect?
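[Editor's note] A toy reproduction of the quoted stopping test (not GPyOpt's actual code) illustrating why eps = -1 disables the distance-based stop: the loop breaks when the distance between the last two evaluations is not greater than eps, and a non-negative distance is always greater than -1.

```python
def run(max_iter, eps, distances):
    # distances[i] stands in for self._distance_last_evaluations() at step i.
    num_acquisitions = 0
    for d in distances:
        if not ((num_acquisitions < max_iter) and (d > eps)):
            break
        num_acquisitions += 1
    return num_acquisitions

# With a small non-negative eps, a repeated point (distance 0) stops the loop early.
iters_default = run(max_iter=10, eps=0.0, distances=[1.0, 0.5, 0.0, 0.3])
# With eps = -1, every distance exceeds eps, so only max_iter can stop the loop.
iters_patched = run(max_iter=4, eps=-1.0, distances=[1.0, 0.5, 0.0, 0.3])
```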

@javiergonzalezh
Member

javiergonzalezh commented Aug 29, 2017 via email

@sverzijl
Author

Thank you for the response. It certainly does!

@sverzijl
Author

Hi,

I just have one further question in regards to this.

My thought is that I should not rely on the 'best' result found during optimization, since I'm less interested in the iteration with the best observed value and more in the point with the lowest predicted value (which should be an estimate of the average result).

Is predict(x) in http://pythonhosted.org/GPyOpt/_modules/GPyOpt/models/gpmodel.html#GPModel_MCMC.predict
the best approach?

I've noticed that for each set of hyperparameters I try, I get multiple results (one per hmc_sample). Am I meant to take the average of these?

@javiergonzalezh
Member

Sorry for the slow reply on this one. Yes, an average should work in that case.
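[Editor's note] For reference, a toy sketch of that averaging with made-up numbers; in the MCMC model each entry would come from one hmc_sample:

```python
# Made-up posterior means for a single x, one per HMC hyperparameter sample.
per_sample_means = [0.92, 1.05, 0.98, 1.01]

# Averaging marginalises the prediction over the sampled hyperparameters.
marginal_mean = sum(per_sample_means) / len(per_sample_means)  # approx. 0.99
```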

@sverzijl
Author

Hi,

I have one further question on this post. Is there any simple way to get GPyOpt to find the minimum of its estimated model and output the x-coordinate? From reading the material I assume f_min outputs the minimum predicted result, but I'm interested in its x-coordinate.
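[Editor's note] One hedged workaround, since this does not appear to be a single built-in GPyOpt call: treat the model's posterior mean as an ordinary function and search for its minimiser directly. A stdlib sketch with a stand-in mean function (in practice mean_fn would wrap the model's predict):

```python
def argmin_of_mean(mean_fn, lo, hi, n=1001):
    # Dense grid search over [lo, hi]; crude, but adequate in low dimensions.
    best_x, best_m = lo, mean_fn(lo)
    for i in range(1, n):
        x = lo + (hi - lo) * i / (n - 1)
        m = mean_fn(x)
        if m < best_m:
            best_x, best_m = x, m
    return best_x

# Stand-in posterior mean with a known minimiser at x = 2.
x_star = argmin_of_mean(lambda x: (x - 2) ** 2, 0.0, 5.0)
```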

Thanks in advance!

@LarsHH

LarsHH commented Nov 25, 2019

Hi @sverzijl ,

I have just read this thread because I've been looking for exactly the same functionality you mentioned in your last question. Did you come up with a good solution? I am using GPyOpt with f=None, i.e. doing the function evaluation outside of GPyOpt. A hacky way I thought of to achieve what you mentioned is to run a second Bayesian optimization with an LCB acquisition where I set the exploration parameter to 0. That will then just optimize the mean of the GP while reusing all of the implemented acquisition function optimization code.
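[Editor's note] To illustrate the idea with toy mean/std functions (not a real GP): with an LCB acquisition a(x) = m(x) - w * s(x), setting the exploration weight w to 0 makes its minimiser coincide with the minimiser of the posterior mean alone.

```python
def lcb(x, w):
    mean = (x - 2) ** 2          # made-up posterior mean, minimum at x = 2
    std = 0.5 + abs(x - 4)       # made-up posterior std, larger far from x = 4
    return mean - w * std

def grid_argmin(f, lo, hi, n=1001):
    # Dense grid search for the acquisition minimiser on [lo, hi].
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(xs, key=f)

# w = 0: pure exploitation, recovers the posterior mean's minimiser (x = 2).
x_exploit = grid_argmin(lambda x: lcb(x, 0.0), 0.0, 5.0)
# Large w: exploration pulls the minimiser into the high-uncertainty region.
x_explore = grid_argmin(lambda x: lcb(x, 5.0), 0.0, 5.0)
```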
