
Remove "feature importance" requirement (i.e., explanations share shape with input) #343

Open · annahedstroem opened this issue Mar 19, 2024

Description of the problem

  • Many Quantus metrics work in practice with non-"feature importance" explanations, i.e., they can accept free-form explanations whose shape differs from the input.

Description of a solution

  1. Remove the unnecessary asserts in assert_attributions, which runs for every base.Metric; the candidates for removal are commented out below, and a sketch of a relaxed version follows after the snippet:

import numpy as np


def assert_attributions(x_batch: np.ndarray, a_batch: np.ndarray) -> None:
    """
    Asserts on attributions, assumes channel first layout.

    Parameters
    ----------
    x_batch: np.ndarray
         The batch of input to compare the shape of the attributions with.
    a_batch: np.ndarray
         The batch of attributions.

    Returns
    -------
    None
    """
    assert isinstance(
        a_batch, np.ndarray
    ), "Attributions 'a_batch' should be of type np.ndarray."
    assert np.shape(x_batch)[0] == np.shape(a_batch)[0], (
        "The inputs 'x_batch' and attributions 'a_batch' should "
        "include the same number of samples. "
        "{} != {}".format(np.shape(x_batch)[0], np.shape(a_batch)[0])
    )
    assert np.ndim(x_batch) == np.ndim(a_batch), (
        "The inputs 'x_batch' and attributions 'a_batch' should "
        "have the same number of dimensions. "
        "{} != {}".format(np.ndim(x_batch), np.ndim(a_batch))
    )
    a_shape = [s for s in np.shape(a_batch)[1:] if s != 1]
    x_shape = [s for s in np.shape(x_batch)[1:]]
    # assert a_shape[0] == x_shape[0] or a_shape[-1] == x_shape[-1], (
    #     "The dimensions of attribution and input per sample should correspond in either "
    #     "the first or last dimensions, but got shapes "
    #     "{} and {}".format(a_shape, x_shape)
    # )  # remove this!
    assert all([a in x_shape for a in a_shape]), (
        "All attribution dimensions should be included in the input dimensions, "
        "but got shapes {} and {}".format(a_shape, x_shape)
    )
    assert all(
        [
            x_shape.index(a) > x_shape.index(a_shape[i])
            for a in a_shape
            for i in range(a_shape.index(a))
        ]
    ), (
        "The dimensions of the attribution must correspond to dimensions of the input in the same order, "
        "but got shapes {} and {}".format(a_shape, x_shape)
    )
    # assert not np.all((a_batch == 0)), (
    #     "The elements in the attribution vector are all equal to zero, "
    #     "which may cause inconsistent results since many metrics rely on ordering. "
    #     "Recompute the explanations."
    # )  # raise warning instead
    # assert not np.all((a_batch == 1.0)), (
    #     "The elements in the attribution vector are all equal to one, "
    #     "which may cause inconsistent results since many metrics rely on ordering. "
    #     "Recompute the explanations."
    # )  # raise warning instead
    assert len(set(a_batch.flatten().tolist())) > 1, (
        "The attribution values are all identical, "
        "which may cause inconsistent results since many "
        "metrics rely on ordering. "
        "Recompute the explanations."
    )
    # assert not np.all((a_batch < 0.0)), "Attributions should not all be less than zero." # raise warning instead!
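
For concreteness, here is a minimal sketch of what the relaxed validator could look like once the shape asserts are dropped and the degenerate-value asserts are downgraded to warnings, as the inline comments above suggest. The function name and exact messages are placeholders, not the final implementation:

import warnings

import numpy as np


def assert_attributions_relaxed(x_batch: np.ndarray, a_batch: np.ndarray) -> None:
    """Validate attributions without requiring their shape to match the input."""
    assert isinstance(
        a_batch, np.ndarray
    ), "Attributions 'a_batch' should be of type np.ndarray."
    # Only the batch dimension still has to agree between input and explanation.
    assert np.shape(x_batch)[0] == np.shape(a_batch)[0], (
        "The inputs 'x_batch' and attributions 'a_batch' should "
        "include the same number of samples. "
        "{} != {}".format(np.shape(x_batch)[0], np.shape(a_batch)[0])
    )
    # Degenerate attributions no longer abort the run; they only warn.
    if len(set(a_batch.flatten().tolist())) <= 1:
        warnings.warn(
            "The attribution values are all identical, which may cause "
            "inconsistent results since many metrics rely on ordering. "
            "Consider recomputing the explanations."
        )
    if np.all(a_batch < 0.0):
        warnings.warn("All attribution values are less than zero.")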
  2. Add a new category to each metric, to clarify whether the explanation must share the input's shape (an illustrative sketch follows the snippet below):
data_applicability = {DataType.IMAGE, DataType.TIMESERIES, DataType.TABULAR}
model_applicability = {ModelType.TORCH, ModelType.TF}
explanation_applicability = {ExplanationType.FeatureImportance, ExplanationType.FreeForm.....}
.....
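
Below is a hypothetical sketch of how such applicability flags could gate the shape check. The ExplanationType members follow the snippet above; the enum definition, the example metric, and the helper function are illustrative assumptions, not existing Quantus API:

from enum import Enum, auto


class ExplanationType(Enum):
    """Kinds of explanations a metric can evaluate (members per the snippet above)."""

    FeatureImportance = auto()  # explanation shares its shape with the input
    FreeForm = auto()  # explanation may take any shape


class ExampleMetric:
    # Illustrative metric declaring what it accepts.
    explanation_applicability = {
        ExplanationType.FeatureImportance,
        ExplanationType.FreeForm,
    }


def requires_shape_check(metric) -> bool:
    """Run the strict input-shape assert only for metrics restricted to
    feature-importance explanations."""
    return metric.explanation_applicability == {ExplanationType.FeatureImportance}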

Minimum acceptance criteria

  • Specify what is necessary for the issue to be closed.
  • @mention the person apt to review these changes, e.g., @annahedstroem