Feat(EstimatorReport): Display the feature permutation importance #1319
Comments
What is the difference with #1323?
I forgot to change the title of #1323, thanks!
Which data should be used to compute the permutation importance? Should we accept arguments like data_source?
By default the test set, but yes, adding this data_source parameter would be perfect!
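As a rough illustration of that suggestion, here is a minimal usage sketch. The feature_permutation accessor name, its location under report.feature_importance, and the accepted data_source values are assumptions for discussion, not a settled API; only the EstimatorReport construction follows skore's existing pattern.

```python
# Hypothetical usage sketch: the accessor path and the data_source
# parameter are assumptions, not an existing skore API.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from skore import EstimatorReport

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

report = EstimatorReport(
    LogisticRegression(max_iter=5000),
    X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test,
)

# "test" would be the default; "train" the other accepted value.
report.feature_importance.feature_permutation(data_source="test")
```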
Can you give an example of what you'd expect the dataframe to look like?
We can output something similar to what we have in the ComparisonReport or in the CrossValidationReport: one row per score, with the features as columns. It's not very pretty: I would expect at most 5 scorings but at least 10 features, so it would be more logical to put the longer list on the row index, but this layout keeps things consistent.
Here is what I currently have:
[image attachment omitted]
Good point!
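To illustrate the layout discussed above (scorings as rows, features as columns), here is a minimal sketch that builds such a dataframe directly with scikit-learn's permutation_importance and pandas. It is not the skore implementation, only an assumption of what the output could contain.

```python
# Minimal sketch: permutation importances with scorings as rows and
# features as columns, computed on the test set.
import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge().fit(X_train, y_train)

scorings = ["r2", "neg_mean_absolute_error"]
# With a list of scorers, permutation_importance returns one result per scorer.
result = permutation_importance(
    model, X_test, y_test, scoring=scorings, n_repeats=5, random_state=0
)

# Average over the permutation repeats, then transpose so that each row is a
# scoring and each column is a feature.
importances = pd.DataFrame(
    {scoring: result[scoring].importances_mean for scoring in scorings},
    index=X.columns,
).T
print(importances)
```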
Is your feature request related to a problem? Please describe.
As a Data Scientist, to explain my model and understand the problem I'm trying to solve, I need to check feature importance computed by a permutation method. This should be available for any kind of model.
Describe the solution you'd like
Describe alternatives you've considered, if relevant
Later, if the report object ends up with too many accessors, we will group the feature importance methods together and add a parameter to choose which type of feature importance to display.
Additional context
Part of the epic #1314