This topic provides guidance on how to troubleshoot issues related to working with services on Tanzu Application Platform. For workarounds for known limitations, see Known limitations.
To follow the steps in this topic, you must have kubectl access to the cluster.
This section provides guidance on how to debug issues related to using `ClassClaim` and provisioner-based `ClusterInstanceClass` resources. The approach starts by inspecting a `ClassClaim` and tracing back through the chain of resources that are created when fulfilling the `ClassClaim`.
1. Inspect the status of the `ClassClaim` by running:

    ```console
    kubectl describe classclaim claim-name -n NAMESPACE
    ```

    Where `NAMESPACE` is your namespace.

    From the output, check the following:

    - Check the status conditions for information that can lead you to the cause of the issue.
    - Check `.spec.classRef.name` and record the value.
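    For example, to read only `.spec.classRef.name` without scanning the full `describe` output, you can use a JSONPath query. The claim name `my-claim` and namespace `my-apps` below are placeholders for illustration:

    ```console
    # Print the name of the class that the claim refers to
    kubectl get classclaim my-claim -n my-apps -o jsonpath='{.spec.classRef.name}'
    ```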
2. Inspect the status of the `ClusterInstanceClass` by running:

    ```console
    kubectl describe clusterinstanceclass CLASS-NAME
    ```

    Where `CLASS-NAME` is the value of `.spec.classRef.name` that you retrieved in the previous step.

    From the output, check the following:

    - Check the status conditions for information that can lead you to the cause of the issue.
    - Check that the `Ready` condition has status `"True"`.
    - Check `.spec.provisioner.crossplane` and record the value.
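    For example, assuming the class is named `bigcorp-rabbitmq` (a placeholder), you can read the `Ready` condition and the provisioner configuration directly:

    ```console
    # Prints "True" when the class is ready
    kubectl get clusterinstanceclass bigcorp-rabbitmq -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Prints the Crossplane provisioner configuration to record
    kubectl get clusterinstanceclass bigcorp-rabbitmq -o jsonpath='{.spec.provisioner.crossplane}'
    ```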
3. Inspect the status of the `CompositeResourceDefinition` by running:

    ```console
    kubectl describe xrd XRD-NAME
    ```

    Where `XRD-NAME` is the value of `.spec.provisioner.crossplane` that you retrieved in the previous step.

    From the output, check the following:

    - Check the status conditions for information that can lead you to the cause of the issue.
    - Check that the `Established` condition has status `"True"`.
    - Check events for any errors or warnings that can lead you to the cause of the issue.
    - If the `ClusterInstanceClass` reports `Ready="True"` and the `CompositeResourceDefinition` reports `Established="True"`, move on to the next step.
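    For example, with a hypothetical XRD named `xrabbitmqclusters.messaging.bigcorp.org`, you can confirm the `Established` condition in a single command:

    ```console
    # Prints "True" when the XRD is established
    kubectl get xrd xrabbitmqclusters.messaging.bigcorp.org -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'
    ```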
4. Check `.status.provisionedResourceRef` by running:

    ```console
    kubectl describe classclaim claim-name -n NAMESPACE
    ```

    Where `NAMESPACE` is your namespace.

    From the output, check the following:

    - Check `.status.provisionedResourceRef` and record the values of `kind`, `apiVersion`, and `name`.
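    For example, to print only the reference rather than the full `describe` output (again using the placeholder claim name `my-claim` and namespace `my-apps`):

    ```console
    # Prints the kind, apiVersion, and name of the provisioned Composite Resource
    kubectl get classclaim my-claim -n my-apps -o jsonpath='{.status.provisionedResourceRef}'
    ```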
5. Inspect the status of the Composite Resource by running:

    ```console
    kubectl describe KIND.API-GROUP NAME
    ```

    Where:

    - `KIND` is the value of `kind` that you retrieved in the previous step.
    - `API-GROUP` is the value of `apiVersion` that you retrieved in the previous step, without the `/<version>` part.
    - `NAME` is the value of `name` that you retrieved in the previous step.

    From the output, check the following:

    - Check the status conditions for information that can lead you to the cause of the issue.
    - Check that the `Synced` condition has status `"True"`. If it does not, there was an issue creating the Managed Resources from which this Composite Resource is composed. Refer to `.spec.resourceRefs` in the output and, for each entry:
        - Use the values of `kind`, `apiVersion`, and `name` to inspect the status of the Managed Resource.
        - Check the status conditions for information that can lead you to the cause of the issue.
    - Check events for any errors or warnings that can lead you to the cause of the issue.
    - If all Managed Resources appear healthy, move on to the next step.
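    For example, if the recorded reference were `kind: XRabbitmqCluster`, `apiVersion: messaging.bigcorp.org/v1alpha1`, and `name: my-claim-abc12` (all placeholder values), the commands would look like this:

    ```console
    kubectl describe xrabbitmqcluster.messaging.bigcorp.org my-claim-abc12

    # Prints "True" when the composed Managed Resources were created successfully
    kubectl get xrabbitmqcluster.messaging.bigcorp.org my-claim-abc12 -o jsonpath='{.status.conditions[?(@.type=="Synced")].status}'
    ```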
6. Inspect the events log by running:

    ```console
    kubectl get events -A
    ```

    From the output, check the following:

    - Check for any errors or warnings that can lead you to the cause of the issue.
    - If there are no errors or warnings, move on to the next step.
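    The cluster-wide event log can be noisy. To narrow the output to likely problems, you can filter for warnings only:

    ```console
    # Show only Warning events across all namespaces
    kubectl get events -A --field-selector type=Warning
    ```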
7. Check `.status.resourceRef` by running:

    ```console
    kubectl get classclaim claim-name -n NAMESPACE -o yaml
    ```

    Where `NAMESPACE` is your namespace.

    From the output, check the following:

    - Check `.status.resourceRef` and record the values of `kind`, `apiVersion`, `name`, and `namespace`.
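    For example, to print only the reference (placeholder claim name and namespace again):

    ```console
    # Prints the kind, apiVersion, name, and namespace of the claimed resource
    kubectl get classclaim my-claim -n my-apps -o jsonpath='{.status.resourceRef}'
    ```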
8. Inspect the claimed resource, which is likely a secret, by running:

    ```console
    kubectl get secret NAME -n NAMESPACE -o yaml
    ```

    Where:

    - `NAME` is the `name` that you retrieved in the previous step.
    - `NAMESPACE` is the `namespace` that you retrieved in the previous step.

    If the secret exists and has data, then something else must be causing the issue.
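    For example, to confirm that the secret holds binding data without reading the full YAML, you can print just its data map. The secret name `my-claim-secret` is a placeholder; the values are base64 encoded:

    ```console
    # Prints the keys and base64-encoded values held in the secret
    kubectl get secret my-claim-secret -n my-apps -o jsonpath='{.data}'
    ```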
If you have followed the steps in this topic and are still unable to discover the cause of the issue, contact VMware Support for further guidance.