Merge pull request #24 from ruivieira/main
docs: Update saliency explainer's tutorial with id listing endpoint
ruivieira authored Jun 25, 2024
2 parents c59d350 + 2b8d608, commit 30f9aa8
Changed file: docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc (31 additions, 11 deletions)
= Saliency explanations on ODH

This tutorial will walk you through setting up and using TrustyAI to provide saliency explanations for model inferences within an OpenShift environment using OpenDataHub. We will deploy a model, configure the environment, and demonstrate how to obtain inferences and their explanations.

[NOTE]
====

== Requesting Explanations

=== Request an Inference

In order to obtain an explanation, we first need to make an inference.
The explanation request will be based on this inference's ID.

Start by sending an inference request to the model. Replace `${TOKEN}` with your actual authorization token.

[source,shell]
----
curl -skv -H "Authorization: Bearer ${TOKEN}" \
-d '{"inputs": [{"name": "predict","shape": [1,5], "datatype": "FP64", "data": [1.0, 2.0, 1.0, 0.0, 1.0]}]}'
----
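The request body above follows the KServe v2 (Open Inference Protocol) format. As a convenience on top of the tutorial, which writes the payload inline, the body can be generated with `jq` so the declared tensor shape always matches the data array — a minimal sketch using the example feature values from above:

```shell
# Sketch: build the v2 inference payload with jq so that the
# declared shape stays in sync with the number of feature values.
payload=$(jq -n --argjson data '[1.0, 2.0, 1.0, 0.0, 1.0]' \
  '{inputs: [{name: "predict", shape: [1, ($data | length)], datatype: "FP64", data: $data}]}')
echo "$payload"
```

The resulting JSON can then be passed to `curl` via `-d "$payload"` in place of the inline body.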

=== Getting an Inference ID

The TrustyAI service provides an endpoint to list stored inference IDs.
You can list all non-synthetic (_organic_) IDs by running:

```shell
curl -skv -H "Authorization: Bearer ${TOKEN}" \
   "https://${TRUSTYAI_ROUTE}/info/inference/ids/explainer-test?type=organic"
```

The response will be similar to:

```json
[
  {
    "id": "a3d3d4a2-93f6-4a23-aedb-051416ecf84f",
    "timestamp": "2024-06-25T09:06:28.75701201"
  }
]
```
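If you prefer not to rely on the order of the returned list, the newest entry can be selected by sorting on the timestamp field — a small `jq` sketch, run here against the sample response above:

```shell
# Sketch: select the most recent inference ID by sorting on the ISO
# timestamp (lexicographic order matches chronological order here).
response='[{"id":"a3d3d4a2-93f6-4a23-aedb-051416ecf84f","timestamp":"2024-06-25T09:06:28.75701201"}]'
echo "$response" | jq -r 'sort_by(.timestamp) | last | .id'
# → a3d3d4a2-93f6-4a23-aedb-051416ecf84f
```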

Extract the latest inference ID for use in obtaining an explanation.

```shell
export INFERENCE_ID=$(curl -sk -H "Authorization: Bearer ${TOKEN}" \
   "https://${TRUSTYAI_ROUTE}/info/inference/ids/explainer-test?type=organic" | jq -r '.[-1].id')
```

=== Request a LIME Explanation

We will use LIME as our explainer for this tutorial. More information on LIME can be found xref:local-explainers.adoc#LIME[here].

Request a LIME explanation for the selected inference ID.

[source,shell]
----
curl -sk -X POST -H "Authorization: Bearer ${TOKEN}" \
-H "Content-Type: application/json" \
-d "{
\"predictionId\": \"$INFERENCE_ID\",
\"modelConfig\": {
\"target\": \"modelmesh-serving.${NAMESPACE}.svc.cluster.local:8033\",
\"name\": \"explainer-test\",

=== Results

The output will show the saliency scores and confidence for each input feature used in the inference.

