Optimizing model with random error #107
Comments
Since the code uses eps like this:

if not ((self.num_acquisitions < self.max_iter) and (self._distance_last_evaluations() > self.eps)):
    break

Can I just get away with setting eps = -1 (since self._distance_last_evaluations() can't be less than zero) and gain the desired effect?
Hi Simeon,
This is a very interesting question. As you mention, there are several scenarios in which considering noisy observations is needed. For this, you need to do two things. First, set eps to zero (or something negative) so the optimization doesn't stop if you collect two points in the same location. Second, you need to tell the model that observations can be noisy, so the Gaussian noise in the model is not automatically set to zero. To do this, set 'exact_feval=False' when creating the BO object. You will see that the optimization is less efficient, as you have one extra parameter to learn, but all BO theory still applies.
Hope this helps.
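For anyone landing here later, a minimal sketch of that setup (the objective function, domain, and iteration count below are placeholders, not from this thread): exact_feval=False keeps the Gaussian noise term as a GP hyperparameter, and eps=0 is passed to run_optimization so the loop does not stop when two consecutive evaluations coincide.

```python
import GPyOpt

# Placeholder search space; substitute your own hyperparameter domain.
bounds = [{'name': 'x', 'type': 'continuous', 'domain': (0, 1)}]

myBopt = GPyOpt.methods.BayesianOptimization(
    f=objective,            # your noisy objective to minimize (assumed defined elsewhere)
    domain=bounds,
    acquisition_type='EI',
    exact_feval=False)      # observations are noisy: keep the Gaussian noise term

# eps=0 (or a negative value) stops the early-exit check from firing when two
# consecutive evaluations land on the same point.
myBopt.run_optimization(max_iter=30, eps=0)
```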
Thank you for the response. It certainly does!
Hi, I just have one further question in regard to this. My thought is that I should not rely on the 'best' observed result, since I'm less interested in the iteration with the single best outcome and more in the point with the lowest predicted value (as that should be an estimate of the average result). Is predict(x) in http://pythonhosted.org/GPyOpt/_modules/GPyOpt/models/gpmodel.html#GPModel_MCMC.predict the right function to use for this? I've noticed that for each set of hyperparameters I try I get multiple results (one for each hmc_sample). Am I meant to take the average of these?
Sorry for the slow reply on this one. Yes, taking the average over the HMC samples should work in that case.
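A rough illustration of that averaging, assuming a BayesianOptimization object myBopt built with model_type='GP_MCMC' (so the wrapped model is a GPModel_MCMC), and assuming predict(x) returns one mean and one standard deviation per HMC sample, as observed above:

```python
import numpy as np

x = np.atleast_2d([0.3])                # point(s) to evaluate, shape (n_points, input_dim)

# Assumption: predict() returns one prediction per HMC sample.
means, stds = myBopt.model.predict(x)

mean_estimate = np.mean(means, axis=0)  # averaged predicted objective at x
std_estimate = np.mean(stds, axis=0)    # crude average of the per-sample stds
```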
Hi, I have one further question on this post. Is there any simple way to get GPyOpt to find the minimum of its estimated model and output the x-coordinate? I assume from reading the material that f_min outputs the minimum predicted result, but I'm interested in its x-coordinate. Thanks in advance!
Hi @sverzijl, I have just read this thread because I've been looking for exactly the same functionality you mentioned in your last question. Did you come up with a good solution? I am using GPyOpt with f=None, i.e. doing the function evaluation outside of GPyOpt. A hacky way I thought of achieving what you mentioned is to make a second Bayesian optimization with an LCB acquisition where I set the exploration parameter to 0. This will then just optimize the mean of the GP and reuse all of the implemented acquisition-function optimization code.
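A rough sketch of that workaround, assuming the observations X_observed / Y_observed were collected outside GPyOpt and the domain below is a placeholder: acquisition_weight is the LCB exploration parameter, so setting it to 0 makes the acquisition essentially the GP posterior mean, and suggest_next_locations then returns (approximately) its minimizer.

```python
import GPyOpt

bounds = [{'name': 'x', 'type': 'continuous', 'domain': (0, 1)}]  # placeholder domain

# Second BO object fitted on the data already collected; no function attached.
bo_mean = GPyOpt.methods.BayesianOptimization(
    f=None,                  # evaluations happen outside GPyOpt
    domain=bounds,
    X=X_observed,            # previously evaluated points (assumed available)
    Y=Y_observed,            # corresponding noisy objective values
    exact_feval=False,       # keep the noise term in the GP
    acquisition_type='LCB',
    acquisition_weight=0)    # zero exploration: acquisition ~ posterior mean

# With zero exploration weight, the suggested location is (approximately) the
# minimizer of the GP mean, i.e. the x-coordinate asked about above.
x_min_estimate = bo_mean.suggest_next_locations()
print(x_min_estimate)
```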
I'm not sure if this is the right place to ask.
I am trying to optimize a model using GPyOpt. My issue is that the model has a small, but significant, amount of random error, i.e. when I repeat two experiments with the same hyperparameters I get different results. The error is small enough that, for a small domain of hyperparameters, it can be difficult to identify which is best. The easy, but costly, solution is for me to repeat the experiments multiple times and take the average, which is particularly inefficient if I am repeating experiments whose output values are so far from the optimum that they can't possibly be the optimum.
My question is: is there a way for GPyOpt to do this in a cleverer fashion? When I look at the reference material (and all the websites that describe the process), it seems that the models assume the points where an experiment has occurred are exact values with no error, taken right to the point where, if GPyOpt looks at a point twice, it must mean the optimum has been found and the code stops.
So is there a way I can get GPyOpt to assume that there is always some error in a result, with a view to finding hyperparameters that are, on average, the optimum? One where it can revisit a point (or somewhere close to it) if it thinks that reducing the random error there (by averaging more sample data) will yield a greater improvement than exploring another part of the problem domain?
I imagine this is an issue for any physical experiment where variation can come from sources other than the parameters being refined.
Thanks in advance,
Simeon
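For context, a toy sketch of the situation described above (the objective and noise level are made up): the same hyperparameters give different results on repeated runs, and the costly baseline is to repeat each experiment and average.

```python
import numpy as np

# Made-up noisy experiment: a true signal plus a small random error.
def run_experiment(x):
    x = np.atleast_2d(x)
    true_value = np.sum((x - 0.3) ** 2, axis=1, keepdims=True)
    noise = 0.05 * np.random.randn(x.shape[0], 1)
    return true_value + noise

# The easy-but-costly approach from the issue: repeat and average every point,
# even points that are clearly far from the optimum.
def averaged_experiment(x, n_repeats=5):
    return np.mean([run_experiment(x) for _ in range(n_repeats)], axis=0)
```

The alternative discussed in the replies above is to pass the single-run run_experiment directly to GPyOpt with exact_feval=False and let the GP's noise term absorb the random error.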