Description of the problem

Many Quantus metrics practically work with non-"feature importance" explanations (i.e., they can accept free-form explanations whose shape does not match the input).
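To make this concrete, here is a minimal sketch (the shapes are invented purely for illustration) of a free-form explanation that the current asserts reject:

```python
import numpy as np

# Hypothetical shapes, chosen only to illustrate the problem.
x_batch = np.random.rand(8, 3, 224, 224)  # batch of 8 channel-first RGB images
a_batch = np.random.rand(8, 14, 14)       # one coarse 14x14 relevance map per sample

# assert_attributions (below) rejects this pairing because
# np.ndim(x_batch) == 4 while np.ndim(a_batch) == 3, even though a metric
# that only needs a ranking over explanation values could score it directly.
```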
Description of a solution
- Remove the unnecessary asserts in `assert_attributions`, which runs for every `base.Metric`; the asserts to drop or soften are marked in the commented-out code below:
```python
import numpy as np


def assert_attributions(x_batch: np.array, a_batch: np.array) -> None:
    """Asserts on attributions, assumes channel first layout.

    Parameters
    ----------
    x_batch: np.ndarray
        The batch of input to compare the shape of the attributions with.
    a_batch: np.ndarray
        The batch of attributions.

    Returns
    -------
    None
    """
    assert type(a_batch) == np.ndarray, (
        "Attributions 'a_batch' should be of type np.ndarray."
    )
    assert np.shape(x_batch)[0] == np.shape(a_batch)[0], (
        "The inputs 'x_batch' and attributions 'a_batch' should include the "
        "same number of samples. {} != {}".format(
            np.shape(x_batch)[0], np.shape(a_batch)[0]
        )
    )
    assert np.ndim(x_batch) == np.ndim(a_batch), (
        "The inputs 'x_batch' and attributions 'a_batch' should have the same "
        "number of dimensions. {} != {}".format(np.ndim(x_batch), np.ndim(a_batch))
    )
    # Per-sample dimensions; singleton axes are dropped from the attributions.
    a_shape = [s for s in np.shape(a_batch)[1:] if s != 1]
    x_shape = [s for s in np.shape(x_batch)[1:]]
    # assert a_shape[0] == x_shape[0] or a_shape[-1] == x_shape[-1], (
    #     "The dimensions of attribution and input per sample should correspond "
    #     "in either the first or last dimensions, but got shapes "
    #     "{} and {}".format(a_shape, x_shape)
    # )  # remove this!
    assert all([a in x_shape for a in a_shape]), (
        "All attribution dimensions should be included in the input dimensions, "
        "but got shapes {} and {}".format(a_shape, x_shape)
    )
    assert all(
        [
            x_shape.index(a) > x_shape.index(a_shape[i])
            for a in a_shape
            for i in range(a_shape.index(a))
        ]
    ), (
        "The dimensions of the attribution must correspond to dimensions of "
        "the input in the same order, but got shapes "
        "{} and {}".format(a_shape, x_shape)
    )
    # assert not np.all((a_batch == 0)), (
    #     "The elements in the attribution vector are all equal to zero, "
    #     "which may cause inconsistent results since many metrics rely on "
    #     "ordering. Recompute the explanations."
    # )  # raise warning instead
    # assert not np.all((a_batch == 1.0)), (
    #     "The elements in the attribution vector are all equal to one, "
    #     "which may cause inconsistent results since many metrics rely on "
    #     "ordering. Recompute the explanations."
    # )  # raise warning instead
    assert len(set(a_batch.flatten().tolist())) > 1, (
        "The attributions are uniformly distributed, which may cause "
        "inconsistent results since many metrics rely on ordering. "
        "Recompute the explanations."
    )
    # assert not np.all((a_batch < 0.0)), (
    #     "Attributions should not all be less than zero."
    # )  # raise warning instead!
```
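For the checks annotated "raise warning instead", a minimal sketch of what that could look like, assuming Python's standard `warnings` module is the intended mechanism (the helper name `warn_on_degenerate_attributions` is made up for illustration):

```python
import warnings

import numpy as np


def warn_on_degenerate_attributions(a_batch: np.ndarray) -> None:
    """Warn (rather than hard-fail) on suspicious attribution values."""
    if np.all(a_batch == 0):
        warnings.warn(
            "The elements in the attribution vector are all equal to zero, "
            "which may cause inconsistent results since many metrics rely on "
            "ordering. Recompute the explanations.",
            UserWarning,
        )
    if np.all(a_batch == 1.0):
        warnings.warn(
            "The elements in the attribution vector are all equal to one, "
            "which may cause inconsistent results since many metrics rely on "
            "ordering. Recompute the explanations.",
            UserWarning,
        )
    if np.all(a_batch < 0.0):
        warnings.warn(
            "Attributions should not all be less than zero.",
            UserWarning,
        )
```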
- Add a new category to each metric, to clarify whether the attributions must have the same shape as the input (see the sketch after this list).
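One way that per-metric category could look, purely as a sketch: the flag name `a_batch_requires_input_shape`, the simplified `Metric` stub, and which concrete metrics land in which bucket are all assumptions for illustration, not existing Quantus API:

```python
import numpy as np


class Metric:
    """Stub standing in for quantus' base.Metric (hypothetical simplification)."""

    # Hypothetical class-level flag: does this metric require attributions
    # to share the (non-batch) shape of the input?
    a_batch_requires_input_shape: bool = True


class PixelFlipping(Metric):
    # Perturbs individual input features, so it plausibly needs
    # input-shaped attributions (my guess, for illustration).
    a_batch_requires_input_shape = True


class RandomLogit(Metric):
    # Compares explanations against each other, so a consistent
    # free-form shape could suffice (again, my guess).
    a_batch_requires_input_shape = False


def check_shapes(metric: Metric, x_batch: np.ndarray, a_batch: np.ndarray) -> None:
    # Only enforce the strict shape assert when the metric declares it needs it.
    if metric.a_batch_requires_input_shape:
        assert np.shape(x_batch)[1:] == np.shape(a_batch)[1:], (
            "This metric requires attributions shaped like the input, "
            "but got {} and {}".format(np.shape(x_batch), np.shape(a_batch))
        )
```

A class-level flag keeps the decision next to each metric's definition, so `assert_attributions` could consult it instead of enforcing one shape policy globally.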
Minimum acceptance criteria

- Specify what is necessary for the issue to be closed.
- @mentions of the person apt to review these changes, e.g., @annahedstroem.