notes on tune tokens #281
Conversation
```r
# We might want to have the following in a function bc it's used in MultiCrit as well
if (is.null(search_space)) {
  # TODO: check if we can construct search space from learner$param_set using tune_tokens
  tmp = learner$param_set$get_tune_pair() # returns fixed_values (all but tune tokens) and search_space (from tune tokens)
}
```
I don't think we should have a function that returns two things that could easily be obtained by two calls, since here the two things that would happen are independent of each other.
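A sketch of that two-call alternative, assuming paradox's `ParamSet$search_space()` method (which builds a search space from the `TuneToken` values); the `fixed_values` variable and the `Filter`-based extraction here are illustrative, not an existing helper:

```r
# Sketch: obtain the two results independently instead of via one
# get_tune_pair() call. Assumes paradox's $search_space() method;
# fixed_values is a hypothetical local variable, not an API.
if (is.null(search_space)) {
  # 1) build the search space from the tune tokens in the learner's param set
  search_space = learner$param_set$search_space()

  # 2) separately, keep only the non-TuneToken entries as fixed values
  fixed_values = Filter(
    function(v) !inherits(v, "TuneToken"),
    learner$param_set$values
  )
}
```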
Just to emphasize that you always want to do both: if you want the search_space, you should also always take the param_set with the remaining fixed param values.
I would prefer doing 2., because it is the minimally invasive change, but I can see an argument for 3.
So yeah, I guess this solution is fine?
I implemented something that doesn't have to do the same thing in every
I don't like the above approach of having to delete TuneTokens in mlr3tuning, because what if tune tokens are in the learner but the search_space is actually created from different params? We would have to catch that in a weird place. I think a clean API between paradox and mlr3tuning would hide the existence of TuneTokens from mlr3tuning. Also, do we risk having to write this delete-tokens line in other packages?
It is still good to delete the tune_tokens: the learner never uses tune_tokens, so nothing is lost. We just ignore them.
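The delete step under discussion could look roughly like this on the mlr3tuning side (a sketch, not the exact line in mlr3tuning):

```r
# Sketch: strip TuneToken entries from the learner's values before
# training, so the learner only ever sees fixed (non-tuned) parameters.
# The actual code in mlr3tuning may differ.
learner$param_set$values = Filter(
  function(v) !inherits(v, "TuneToken"),
  learner$param_set$values
)
```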
That is a legitimate concern / question. We could address this by having
mlr-org/paradox#320 like this
So far we have mostly implicitly relied on "the user uses |
Coming back to
The question comes down to: If the user sets something to
Fixed.
Talked with @be-marc. What do you think @mb706? Notes are in the source.