
Commit: Change prediction to inference
ruivieira committed Jun 25, 2024
1 parent 16d4092 commit 2b8d608
Showing 1 changed file with 10 additions and 10 deletions.
docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc (10 additions, 10 deletions)
@@ -1,6 +1,6 @@
 = Saliency explanations on ODH
 
-This tutorial will walk you through setting up and using TrustyAI to provide saliency explanations for model predictions within a OpenShift environment using OpenDataHub. We will deploy a model, configure the environment, and demonstrate how to obtain predictions and their explanations.
+This tutorial will walk you through setting up and using TrustyAI to provide saliency explanations for model inferences within an OpenShift environment using OpenDataHub. We will deploy a model, configure the environment, and demonstrate how to obtain inferences and their explanations.
 
 [NOTE]
 ====
@@ -107,12 +107,12 @@ export TOKEN=$(oc whoami -t)
 
 == Requesting Explanations
 
-=== Issue a Prediction
+=== Request an Inference
 
-In order to obtain an explanation, we first need to make a prediction.
-The explanation request will be based on this prediction ID.
+In order to obtain an explanation, we first need to make an inference.
+The explanation request will be based on this inference ID.
 
-Start by sending an inference request to the model to get a prediction. Replace `${TOKEN}` with your actual authorization token.
+Start by sending an inference request to the model. Replace `${TOKEN}` with your actual authorization token.
 
 [source,shell]
 ----
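The body of this inference request is collapsed in the hunk above. For orientation only, a minimal KServe v2-style call to a ModelMesh-served model might look like the sketch below; `${MODEL_ROUTE}`, the input name, shape, datatype, and data values are illustrative assumptions rather than the tutorial's actual payload.

```shell
# Hypothetical sketch of an inference request (not the tutorial's exact payload).
# Assumes the "explainer-test" model is reachable over the KServe v2 REST protocol;
# ${MODEL_ROUTE}, the input name, shape, datatype, and data are placeholders.
curl -sk -X POST -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{
          "inputs": [
            {
              "name": "input-0",
              "shape": [1, 4],
              "datatype": "FP64",
              "data": [1.0, 2.0, 3.0, 4.0]
            }
          ]
        }' \
    "https://${MODEL_ROUTE}/v2/models/explainer-test/infer"
```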
@@ -142,25 +142,25 @@ The response will be similar to
 ]
 ```
 
-Extract the latest prediction ID for use in obtaining an explanation.
+Extract the latest inference ID for use in obtaining an explanation.
 
 ```shell
-export PREDICTION_ID=$(curl -skv -H "Authorization: Bearer ${TOKEN}" \
+export INFERENCE_ID=$(curl -skv -H "Authorization: Bearer ${TOKEN}" \
   https://${TRUSTYAI_ROUTE}/info/inference/ids/explainer-test?type=organic | jq -r '.[-1].id')
 ```
 
 === Request a LIME Explanation
 
 We will use LIME as our explainer for this tutorial. More information on LIME can be found xref:local-explainers.adoc#LIME[here].
 
-Request a LIME explanation for the selected prediction ID.
+Request a LIME explanation for the selected inference ID.
 
 [source,shell]
 ----
 curl -sk -X POST -H "Authorization: Bearer ${TOKEN}" \
   -H "Content-Type: application/json" \
   -d "{
-  \"predictionId\": \"$PREDICTION_ID\",
+  \"predictionId\": \"$INFERENCE_ID\",
   \"modelConfig\": {
     \"target\": \"modelmesh-serving.${NAMESPACE}.svc.cluster.local:8033\",
     \"name\": \"explainer-test\",
@@ -173,7 +173,7 @@ curl -sk -X POST -H "Authorization: Bearer ${TOKEN}" \
 
 === Results
 
-The output will show the saliency scores and confidence for each input feature used in the prediction.
+The output will show the saliency scores and confidence for each input feature used in the inference.
 
 [source,json]
 ----
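The example response itself is truncated in this view. If the explanation is saved to a file, a small jq filter can rank the features by score; the field names below (`saliencies`, `score`, `confidence`) are assumptions about the response schema and may need adjusting to the actual TrustyAI output:

```shell
# Hypothetical post-processing sketch (response schema assumed, adjust as needed):
# list each feature's name, saliency score, and confidence, highest score first.
# Assumes the explanation was saved with `curl ... > explanation.json`.
jq '.saliencies | to_entries[] | .value
    | sort_by(.score) | reverse
    | .[] | {name, score, confidence}' explanation.json
```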