Update demo #33

Draft · wants to merge 1 commit into main
@@ -1,7 +1,7 @@
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
-  name: "explainer-test-lime"
+  name: "explainer-test"
spec:
  predictor: <1>
    model:
20 changes: 20 additions & 0 deletions docs/modules/ROOT/examples/inference-service-explainer.yaml
@@ -0,0 +1,20 @@
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "explainer-test"
  annotations:
    sidecar.istio.io/inject: "true"
    sidecar.istio.io/rewriteAppHTTPProbers: "true"
    serving.knative.openshift.io/enablePassthrough: "true"
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      protocolVersion: v2
      runtime: kserve-sklearnserver
      storageUri: https://github.com/trustyai-explainability/model-collection/raw/bank-churn/model.joblib
  explainer:
    containers:
      - name: explainer
        image: quay.io/trustyai/trustyai-kserve-explainer:latest
38 changes: 0 additions & 38 deletions docs/modules/ROOT/examples/kserve-explainer-lime-saliencies.json

This file was deleted.

74 changes: 74 additions & 0 deletions docs/modules/ROOT/examples/kserve-explainer-saliencies.json
@@ -0,0 +1,74 @@
{
  "timestamp": "2024-05-06T21:42:45.307+00:00",
  "type": "explanation",
  "saliencies": {
    "LIME": {
      "outputs-0": [
        {
          "name": "inputs-12",
          "score": 0.8496797810357467,
          "confidence": 0
        },
        {
          "name": "inputs-5",
          "score": 0.6830766647546147,
          "confidence": 0
        },
        {
          "name": "inputs-7",
          "score": 0.6768475400887952,
          "confidence": 0
        },
        {
          "name": "inputs-9",
          "score": 0.018349706373627164,
          "confidence": 0
        },
        {
          "name": "inputs-3",
          "score": 0.10709513039521452,
          "confidence": 0
        },
        {
          "name": "inputs-11",
          "score": 0,
          "confidence": 0
        }
      ]
    },
    "SHAP": {
      "outputs-0": [
        {
          "name": "inputs-12",
          "score": 0.8496797810357467,
          "confidence": 0
        },
        {
          "name": "inputs-5",
          "score": 0.6830766647546147,
          "confidence": 0
        },
        {
          "name": "inputs-7",
          "score": 0.6768475400887952,
          "confidence": 0
        },
        {
          "name": "inputs-9",
          "score": 0.018349706373627164,
          "confidence": 0
        },
        {
          "name": "inputs-3",
          "score": 0.10709513039521452,
          "confidence": 0
        },
        {
          "name": "inputs-11",
          "score": 0,
          "confidence": 0
        }
      ]
    }
  }
}
29 changes: 19 additions & 10 deletions docs/modules/ROOT/pages/saliency-explanations-with-kserve.adoc
@@ -39,18 +39,18 @@ The `InferenceService` to deploy will have a new key (`explainer`), and will be

[,yaml,highlight=2..5]
----
-include::example$inference-service-explainer-lime-k8s.yaml[]
+include::example$inference-service-explainer-k8s.yaml[]
----
<1> The `predictor` field is the same as you would use for a regular `InferenceService`.
<2> In this case we are using an `sklearn` model by specifying the URI location, but you can use any other model supported by KServe.
-<3> The `explainer` field is a new key that specifies the explainer to use. In this case, the `lime` explainer is used by default.
+<3> The `explainer` field is a new key that specifies the explainer to use. In this case, the `lime` and `shap` explainers are used by default.
<4> The image of the KServe TrustyAI explainer must be specified in the `explainer.image` field.

We can deploy it with

[source,shell]
----
-kubectl apply -f inference-service-explainer-lime.yaml -n $NAMESPACE
+kubectl apply -f inference-service-explainer.yaml -n $NAMESPACE
----

And wait for the `InferenceService` to be ready.
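
One way to block until the service is ready is `kubectl wait`; a minimal sketch, assuming the `InferenceService` exposes the standard `Ready` condition:

[source,shell]
----
# Block until the InferenceService reports Ready (or time out after 5 minutes).
kubectl wait --for=condition=Ready \
  inferenceservice/explainer-test -n $NAMESPACE --timeout=300s
----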
@@ -63,8 +63,8 @@ kubectl get pods -n $NAMESPACE
[source,text]
----
NAME                                                              READY   STATUS    RESTARTS   AGE
-explainer-test-lime-explainer-00001-deployment-c6fff8b4-5x4qg    2/2     Running   0          41m
-explainer-test-lime-predictor-00001-deployment-dfd47bb47-lwl5b   2/2     Running   0          41m
+explainer-test-explainer-00001-deployment-c6fff8b4-5x4qg         2/2     Running   0          41m
+explainer-test-predictor-00001-deployment-dfd47bb47-lwl5b        2/2     Running   0          41m
----

You will see that in addition to the `predictor` pod, there is also an `explainer` pod running. These two pods are responsible for the prediction and explanation, respectively.
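
Note that the `curl` commands below target `localhost:8080` with an explicit `Host` header, which assumes some local route into the cluster's ingress. A sketch of one common setup, port-forwarding the Istio ingress gateway (the service name and namespace vary by installation):

[source,shell]
----
# Hypothetical port-forward; adjust the gateway service and namespace to your cluster.
kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
----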
@@ -85,9 +85,9 @@ To request a prediction, we can use the following command:

[source,shell]
----
curl -s -H "Host: explainer-test-lime.${NAMESPACE}.example.com" \
curl -s -H "Host: explainer-test.${NAMESPACE}.example.com" \
-H "Content-Type: application/json" \
"http://localhost:8080/v1/models/explainer-test-lime:predict" \
"http://localhost:8080/v1/models/explainer-test:predict" \
-d @payload.json
----
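
The `payload.json` file itself is not part of this change. As a purely illustrative sketch, a KServe v1-protocol request body for a tabular model could look like the following; the thirteen zero values are placeholders (the saliency example below references `inputs-0` through `inputs-12`), not the bank-churn model's real schema:

[source,shell]
----
# Hypothetical payload: 13 placeholder feature values in the v1 "instances" format.
cat <<'EOF' > payload.json
{"instances": [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]}
EOF
----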

@@ -104,9 +104,9 @@ To request an explanation, we can use a very similar command and payload, si

[source,shell]
----
curl -s -H "Host: explainer-test-lime.${NAMESPACE}.example.com" \
curl -s -H "Host: explainer-test.${NAMESPACE}.example.com" \
-H "Content-Type: application/json" \
"http://localhost:8080/v1/models/explainer-test-lime:explain" \ <1>
"http://localhost:8080/v1/models/explainer-test:explain" \ <1>
-d @payload.json
----
<1> The verb `predict` is replaced with `explain`.
@@ -115,7 +115,7 @@ This produces a saliency map, similar to:

[source,json]
----
-include::example$kserve-explainer-lime-saliencies.json[]
+include::example$kserve-explainer-saliencies.json[]
----

From the above saliency map, we can see that the most important feature is `inputs-12`.
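
That reading can be checked mechanically. A sketch using `jq`, assuming the response shape shown in the example above; it prints the LIME feature names in descending order of score, with `inputs-12` first:

[source,shell]
----
# Rank LIME saliency entries by score, highest first, and print the feature names.
curl -s -H "Host: explainer-test.${NAMESPACE}.example.com" \
  -H "Content-Type: application/json" \
  "http://localhost:8080/v1/models/explainer-test:explain" \
  -d @payload.json |
  jq -r '.saliencies.LIME["outputs-0"] | sort_by(.score) | reverse | .[].name'
----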

== Additional configuration

To get only `LIME` or `SHAP` explanations, simply add the following lines to the `InferenceService` spec:

[source,yaml]
----
include::example$inference-service-explainer-lime.yaml[]
----
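
The referenced `inference-service-explainer-lime.yaml` example is not included in this diff. As a purely hypothetical sketch of what a method-specific spec could add, assuming the explainer image selects its method through a container environment variable (`EXPLAINER_TYPE` is a placeholder name, not a documented setting):

[source,shell]
----
# Hypothetical snippet; EXPLAINER_TYPE is a placeholder -- consult the
# trustyai-kserve-explainer image documentation for the actual setting.
cat <<'EOF' > explainer-lime-snippet.yaml
explainer:
  containers:
    - name: explainer
      image: quay.io/trustyai/trustyai-kserve-explainer:latest
      env:
        - name: EXPLAINER_TYPE
          value: "LIME"
EOF
----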