[c++] enhance error handling for forced splits file loading #6832
base: master
Conversation
(force-pushed from 0cd73e5 to 7e35462)
Thanks for working on this! Can you please add some tests that cover these exceptions?
(force-pushed from 7e35462 to 05430e5)
@microsoft-github-policy-service agree
(force-pushed from 05430e5 to 133cc75)
Thanks for working on this! The general approach looks good and the error messages are informative. Nice idea thinking about "file exists but cannot be parsed" as a separate case too!
But I think this deserves some more careful consideration, to be sure that we don't end up introducing a requirement that the file indicated by `forcedsplits_filename` also exist at scoring (prediction) time.
```cpp
if (!forced_splits_file.good()) {
  Log::Warning("Forced splits file '%s' does not exist. Forced splits will be ignored.",
               config->forcedsplits_filename.c_str());
```
I think this should be a fatal error at training time... if I'm training a model and expecting specific splits to be used, I'd prefer a big loud error to a training run wasting time and compute resources only to produce a model that accidentally does not look like what I'd wanted.
HOWEVER... I think `GBDT::Init()` and/or `GBDT::ResetConfig()` will also be called when you load a model at scoring time, and at scoring time we wouldn't want to get a fatal error because of a missing or malformed file which is only supposed to affect training.
I'm not certain how to resolve that. Can you please investigate that and propose something?
It would probably be helpful to add tests for these different conditions; Python tests would work for this purpose. Or if you don't have the time or interest, I can push some tests here and then you could work on making them pass?
So to be clear, the behavior I want to see is:
- training time:
  - `forcedsplits_filename` file does not exist or is not readable --> ERROR
  - `forcedsplits_filename` is not valid JSON --> ERROR
- prediction / scoring time:
  - `forcedsplits_filename` file does not exist or is not readable --> no log output, no errors
  - `forcedsplits_filename` is not valid JSON --> no log output, no errors
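The behavior matrix above can be sketched as a small Python function. This is purely illustrative, not LightGBM's actual API: the function name `load_forced_splits` and the `is_training` parameter are hypothetical, standing in for however the C++ code would distinguish the two modes.

```python
import json
import os


def load_forced_splits(path, is_training):
    """Illustrative sketch of the requested behavior:
    strict at training time, silent at prediction time."""
    if not path:
        return None  # forced splits not configured at all
    if not os.path.isfile(path):
        if is_training:
            raise RuntimeError(
                "Forced splits file '%s' does not exist" % path)
        return None  # prediction: no log output, no errors
    try:
        with open(path) as f:
            return json.load(f)
    except ValueError:  # json.JSONDecodeError is a subclass of ValueError
        if is_training:
            raise RuntimeError(
                "Forced splits file '%s' is not valid JSON" % path)
        return None  # prediction: no log output, no errors
```

Tests mirroring the four cells of the matrix would then just call this in both modes with a missing file and with a file containing malformed JSON.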
We could add a flag to the GBDT class to indicate the current mode.
This is what I was thinking:
```cpp
bool is_training_ = false;

// Turn the flag on at the start of training, and off at the end.
void GBDT::Train() {
  is_training_ = true;
  // ... regular training code ...
  is_training_ = false;
}

// In Init() and ResetConfig(), handle the file as follows:
if (is_training_) {
  // Stop with an error if anything is wrong.
} else {
  // Simply continue if there are issues.
}
```
Regarding the tests, I'd be happy to write them!
Thanks very much. It is not that simple.
For example, there are many workflows where training and prediction are done in the same process, using the same Booster, so a single `is_training_` property is not going to work.
There are also multiple APIs for training.
LightGBM/src/boosting/gbdt.cpp, line 237 (at 3fad53b):

```cpp
void GBDT::Train(int snapshot_freq, const std::string& model_output_path) {
```
LightGBM/src/boosting/gbdt.cpp, line 344 (at 3fad53b):

```cpp
bool GBDT::TrainOneIter(const score_t* gradients, const score_t* hessians) {
```
And we'd also want to be careful to not introduce this type of checking on every boosting round, as that would hurt performance.
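One possible way to avoid a per-iteration cost (just a sketch of the general idea, not a proposal for where the check should live in LightGBM; the class name and methods here are hypothetical): validate and parse the file once when the configuration is set, cache the result, and have each boosting round reuse the cached value instead of touching the filesystem again.

```python
import json


class ForcedSplitsCache:
    """Hypothetical sketch: parse the forced-splits file at most once,
    then serve the cached parse on every subsequent boosting round."""

    def __init__(self, path):
        self.path = path
        self._parsed = None
        self._validated = False

    def get(self):
        # Any missing-file or bad-JSON error surfaces only on the first
        # call (validation time); later calls never re-read the file.
        if not self._validated:
            with open(self.path) as f:
                self._parsed = json.load(f)
            self._validated = True
        return self._parsed
```

With this shape, the expensive check happens once up front, and the per-round code path is a plain attribute read.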
Maybe @shiyu1994 could help us figure out where to put a check like this.
Also referencing this related PR to help: #5653
Fixes #6830