From 16d40922373572ebe292ccda5638dae76327cb30 Mon Sep 17 00:00:00 2001
From: Rui Vieira
Date: Tue, 25 Jun 2024 15:29:11 +0100
Subject: [PATCH 1/2] docs: Update saliency explainer's tutorial with id
 listing endpoint

---
 .../pages/saliency-explanations-on-odh.adoc | 24 +++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc b/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
index 3342533..bf6a976 100644
--- a/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
+++ b/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
@@ -121,12 +121,32 @@ curl -skv -H "Authorization: Bearer ${TOKEN}" \
   -d '{"inputs": [{"name": "predict","shape": [1,5], "datatype": "FP64", "data": [1.0, 2.0, 1.0, 0.0, 1.0]}]}'
 ----
 
-=== Get a Random Prediction ID
+=== Get an Inference ID
+
+The TrustyAI service provides an endpoint to list stored inference IDs.
+You can list all (non-synthetic, or _organic_) IDs by running:
+
+```shell
+curl -skv -H "Authorization: Bearer ${TOKEN}" \
+  https://${TRUSTYAI_ROUTE}/info/inference/ids/explainer-test?type=organic
+```
+
+The response will be similar to
+
+```json
+[
+  {
+    "id":"a3d3d4a2-93f6-4a23-aedb-051416ecf84f",
+    "timestamp":"2024-06-25T09:06:28.75701201"
+  }
+]
+```
 
 Extract the latest prediction ID for use in obtaining an explanation.
 
 ```shell
-export PREDICTION_ID=$(oc exec $TRUSTYAI_POD -n explainer-tests -c trustyai-service -- sh -c "awk -F',' '{print \$2}' /inputs/explainer-test-internal_data.csv | tail -n 1")
+export PREDICTION_ID=$(curl -skv -H "Authorization: Bearer ${TOKEN}" \
+    https://${TRUSTYAI_ROUTE}/info/inference/ids/explainer-test?type=organic | jq -r '.[-1].id')
 ```
 
 === Request a LIME Explanation

From 2b8d608c0eb5acc738f5bb15aea6603585d0f74c Mon Sep 17 00:00:00 2001
From: Rui Vieira
Date: Tue, 25 Jun 2024 16:31:20 +0100
Subject: [PATCH 2/2] Change prediction to inference

---
 .../pages/saliency-explanations-on-odh.adoc | 20 +++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc b/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
index bf6a976..baa06d9 100644
--- a/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
+++ b/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
@@ -1,6 +1,6 @@
 = Saliency explanations on ODH
 
-This tutorial will walk you through setting up and using TrustyAI to provide saliency explanations for model predictions within a OpenShift environment using OpenDataHub. We will deploy a model, configure the environment, and demonstrate how to obtain predictions and their explanations.
+This tutorial will walk you through setting up and using TrustyAI to provide saliency explanations for model inferences within an OpenShift environment using OpenDataHub. We will deploy a model, configure the environment, and demonstrate how to obtain inferences and their explanations.
 
 [NOTE]
 ====
@@ -107,12 +107,12 @@ export TOKEN=$(oc whoami -t)
 
 == Requesting Explanations
 
-=== Issue a Prediction
+=== Request an Inference
 
-In order to obtain an explanation, we first need to make a prediction.
-The explanation request will be based on this prediction ID.
+In order to obtain an explanation, we first need to make an inference.
+The explanation request will be based on this inference ID.
 
-Start by sending an inference request to the model to get a prediction. Replace `${TOKEN}` with your actual authorization token.
+Start by sending an inference request to the model. Replace `${TOKEN}` with your actual authorization token.
 
 [source,shell]
 ----
@@ -142,10 +142,10 @@ The response will be similar to
 ]
 ```
 
-Extract the latest prediction ID for use in obtaining an explanation.
+Extract the latest inference ID for use in obtaining an explanation.
 
 ```shell
-export PREDICTION_ID=$(curl -skv -H "Authorization: Bearer ${TOKEN}" \
+export INFERENCE_ID=$(curl -skv -H "Authorization: Bearer ${TOKEN}" \
     https://${TRUSTYAI_ROUTE}/info/inference/ids/explainer-test?type=organic | jq -r '.[-1].id')
 ```
 
@@ -153,14 +153,14 @@ export PREDICTION_ID=$(curl -skv -H "Authorization: Bearer ${TOKEN}" \
 
 === Request a LIME Explanation
 
 We will use LIME as our explainer for this tutorial. More information on LIME can be found xref:local-explainers.adoc#LIME[here].
 
-Request a LIME explanation for the selected prediction ID.
+Request a LIME explanation for the selected inference ID.
 
 [source,shell]
 ----
 curl -sk -X POST -H "Authorization: Bearer ${TOKEN}" \
   -H "Content-Type: application/json" \
   -d "{
-    \"predictionId\": \"$PREDICTION_ID\",
+    \"predictionId\": \"$INFERENCE_ID\",
     \"modelConfig\": {
       \"target\": \"modelmesh-serving.${NAMESPACE}.svc.cluster.local:8033\",
       \"name\": \"explainer-test\",
@@ -173,7 +173,7 @@
 
 === Results
 
-The output will show the saliency scores and confidence for each input feature used in the prediction.
+The output will show the saliency scores and confidence for each input feature used in the inference.
 
 [source,json]
 ----
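
A note on the `jq` extraction these patches introduce: `jq -r '.[-1].id'` selects the last element of the returned array and prints its `id` field as a raw (unquoted) string. A minimal offline sketch of just that step, assuming `jq` is installed locally and reusing the sample response from the tutorial rather than a live TrustyAI route:

```shell
# Sample body from /info/inference/ids/<model>?type=organic, copied from the
# tutorial; a real deployment returns one entry per stored organic inference.
RESPONSE='[{"id":"a3d3d4a2-93f6-4a23-aedb-051416ecf84f","timestamp":"2024-06-25T09:06:28.75701201"}]'

# '.[-1]' indexes the last array element; '-r' emits the raw string value
# of .id instead of a JSON-quoted one, so it is safe to use in a shell var.
INFERENCE_ID=$(printf '%s' "$RESPONSE" | jq -r '.[-1].id')
echo "$INFERENCE_ID"   # a3d3d4a2-93f6-4a23-aedb-051416ecf84f
```

Note that treating `.[-1]` as "the latest inference" relies on the service returning IDs in chronological order, which is what the tutorial's use of the endpoint assumes.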