diff --git a/.hugo_build.lock b/.hugo_build.lock
new file mode 100644
index 00000000..e69de29b
diff --git a/content/docs/demos/canary_rollout.md b/content/docs/demos/canary_rollout.md
index 39da8bc6..403872f9 100644
--- a/content/docs/demos/canary_rollout.md
+++ b/content/docs/demos/canary_rollout.md
@@ -27,53 +27,63 @@ The following steps demonstrate the canary rollout deployment strategy.
 1. Enable permissive mode

-    ```bash
+    ```console
     osm_namespace=osm-system # Replace osm-system with the namespace where OSM is installed
     kubectl patch meshconfig osm-mesh-config -n "$osm_namespace" -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
     ```

 1. Deploy the `curl` client into the `curl` namespace after enrolling its namespace to the mesh.

-    ```bash
-    # Create the curl namespace
+    - Create the curl namespace
+    ```console
     kubectl create namespace curl
+    ```

-    # Add the namespace to the mesh
+    - Add the namespace to the mesh
+    ```console
     osm namespace add curl
+    ```

-    # Deploy curl client in the curl namespace
+    - Deploy curl client in the curl namespace
+    ```console
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/curl/curl.yaml -n curl
     ```

-    Confirm the `curl` client pod is up and running.
+    Confirm the `curl` client pod is up and running.
+
+    ```console
+    kubectl get pods -n curl
+    ```
+    The output will be similar to:
     ```console
-    $ kubectl get pods -n curl
     NAME                    READY   STATUS    RESTARTS   AGE
     curl-54ccc6954c-9rlvp   2/2     Running   0          20s
     ```

 1. Create the root `httpbin` service that clients will direct traffic to. The service has the selector `app: httpbin`.

-    ```bash
-    # Create the httpbin namespace
+    - Create the httpbin namespace
+    ```console
     kubectl create namespace httpbin
+    ```

-    # Add the namespace to the mesh
+    - Add the namespace to the mesh
+    ```console
     osm namespace add httpbin
+    ```

-    # Create the httpbin root service and service account
+    - Create the httpbin root service and service account
+    ```console
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/canary/httpbin.yaml -n httpbin
     ```

 1. Deploy version `v1` of the `httpbin` service. The service `httpbin-v1` has the selector `app: httpbin, version: v1`, and the deployment `httpbin-v1` has the labels `app: httpbin, version: v1` matching the selector of both the `httpbin` root service and `httpbin-v1` service.

-    ```bash
+    ```console
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/canary/httpbin-v1.yaml -n httpbin
     ```

 1. Create an SMI TrafficSplit resource that directs all traffic to the `httpbin-v1` service.

-    ```bash
+    ```console
    kubectl apply -f - <}}/manifests/samples/canary/httpbin-v2.yaml -n httpbin
    ```
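For reference, a minimal SMI TrafficSplit of the kind applied in the step above — sending all traffic to `httpbin-v1` — might look like the following sketch (the resource name `httpbin-split` is an assumption, not taken from the demo manifests):

```bash
# Sketch only: directs 100% of traffic for the httpbin root service to httpbin-v1.
# The later canary step would adjust the weight fields (for example 50/50 across
# httpbin-v1 and httpbin-v2) to shift traffic gradually.
kubectl apply -f - <<EOF
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: httpbin-split
  namespace: httpbin
spec:
  service: httpbin.httpbin.svc.cluster.local
  backends:
  - service: httpbin-v1
    weight: 100
EOF
```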
 1. Perform the canary rollout by updating the SMI TrafficSplit resource to split traffic directed to the root service FQDN `httpbin.httpbin.svc.cluster.local` to both the `httpbin-v1` and `httpbin-v2` services, fronting the `v1` and `v2` versions of the `httpbin` service respectively. We will distribute the weight equally to demonstrate traffic splitting.

-    ```bash
+    ```console
    kubectl apply -f - <}}/manifests/samples/httpbin/httpbin.yaml -n httpbin
    ```

     Confirm the `httpbin` service and pods are up and running.

     ```console
-    $ kubectl get svc -n httpbin
+    kubectl get svc -n httpbin
+    ```
+    The output will be similar to:
+    ```console
     NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
     httpbin   ClusterIP   10.96.198.23   <none>        14001/TCP   20s
     ```

     ```console
-    $ kubectl get pods -n httpbin
+    kubectl get pods -n httpbin
+    ```
+    The output will be similar to:
+    ```console
     NAME                     READY   STATUS    RESTARTS   AGE
     httpbin-5b8b94b9-lt2vs   2/2     Running   0          20s
     ```

 1. Deploy the `curl` client into the `curl` namespace after enrolling its namespace to the mesh.

+    Create the curl namespace
     ```bash
-    # Create the curl namespace
     kubectl create namespace curl
-
-    # Add the namespace to the mesh
+    ```
+    Add the namespace to the mesh
+    ```bash
     osm namespace add curl
-
-    # Deploy curl client in the curl namespace
+    ```
+    Deploy curl client in the curl namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/curl/curl.yaml -n curl
     ```

     Confirm the `curl` client pod is up and running.

     ```console
-    $ kubectl get pods -n curl
+    kubectl get pods -n curl
+    ```
+    The output will be similar to:
+    ```console
     NAME                    READY   STATUS    RESTARTS   AGE
     curl-54ccc6954c-9rlvp   2/2     Running   0          20s
     ```
@@ -160,7 +184,10 @@ The following demo uses [cert-manager][1] as the certificate provider to issue c
 1. Confirm the `curl` client is able to access the `httpbin` service on port `14001`.

     ```console
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://httpbin.httpbin:14001
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://httpbin.httpbin:14001
+    ```
+    The output will be similar to:
+    ```console
     HTTP/1.1 200 OK
     server: envoy
     date: Mon, 15 Mar 2021 22:45:23 GMT
diff --git a/content/docs/demos/circuit_breaking_mesh_external.md b/content/docs/demos/circuit_breaking_mesh_external.md
index ed685bdf..3302e0a7 100644
--- a/content/docs/demos/circuit_breaking_mesh_external.md
+++ b/content/docs/demos/circuit_breaking_mesh_external.md
@@ -22,44 +22,58 @@ The following demo shows a load-testing client [fortio](https://github.com/forti
 1. Deploy the `httpbin` service into the `httpbin` namespace. The `httpbin` service runs on port `14001` and is not added to the mesh, so it is considered to be a destination external to the mesh.

+    Create the httpbin namespace
     ```bash
-    # Create the httpbin namespace
     kubectl create namespace httpbin
-
-    # Deploy httpbin service in the httpbin namespace
+    ```
+    Deploy httpbin service in the httpbin namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/httpbin/httpbin.yaml -n httpbin
     ```

     Confirm the `httpbin` service and pods are up and running.

     ```console
-    $ kubectl get svc -n httpbin
+    kubectl get svc -n httpbin
+    ```
+    The output will be similar to:
+    ```console
     NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
     httpbin   ClusterIP   10.96.198.23   <none>        14001/TCP   20s
     ```

     ```console
-    $ kubectl get pods -n httpbin
+    kubectl get pods -n httpbin
+    ```
+    The output will be similar to:
+    ```console
     NAME                     READY   STATUS    RESTARTS   AGE
     httpbin-5b8b94b9-lt2vs   1/1     Running   0          20s
     ```

 1. Deploy the `fortio` load-testing client in the `client` namespace after enrolling its namespace to the mesh.

+    Create the client namespace
     ```bash
-    # Create the client namespace
     kubectl create namespace client
+    ```

-    # Add the namespace to the mesh
+    Add the namespace to the mesh
+    ```bash
     osm namespace add client
+    ```

-    # Deploy fortio client in the client namespace
+    Deploy fortio client in the client namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/fortio/fortio.yaml -n client
     ```

     Confirm the `fortio` client pod is up and running.

     ```console
-    $ kubectl get pods -n client
+    kubectl get pods -n client
+    ```
+    The output will be similar to:
+    ```console
     NAME                      READY   STATUS    RESTARTS   AGE
     fortio-6477f8495f-bj4s9   2/2     Running   0          19s
     ```
@@ -87,9 +101,11 @@ The following demo shows a load-testing client [fortio](https://github.com/forti
 1. Confirm the `fortio` client is able to successfully make HTTP requests to the external host `httpbin.httpbin.svc.cluster.local` service on port `14001`. We call the external service with `5` concurrent connections (`-c 5`) and send `50` requests (`-n 50`).

     ```console
-    $ export fortio_pod="$(kubectl get pod -n client -l app=fortio -o jsonpath='{.items[0].metadata.name}')"
-
-    $ kubectl exec "$fortio_pod" -c fortio -n client -- /usr/bin/fortio load -c 5 -qps 0 -n 50 -loglevel Warning http://httpbin.httpbin.svc.cluster.local:14001/get
+    export fortio_pod="$(kubectl get pod -n client -l app=fortio -o jsonpath='{.items[0].metadata.name}')"
+
+    kubectl exec "$fortio_pod" -c fortio -n client -- /usr/bin/fortio load -c 5 -qps 0 -n 50 -loglevel Warning http://httpbin.httpbin.svc.cluster.local:14001/get
+    ```
+    The output will be similar to:
+    ```console
     19:56:34 I logger.go:127> Log level is now 3 Warning (was 2 Info)
     Fortio 1.17.1 running at 0 queries per second, 8->8 procs, for 50 calls: http://httpbin.httpbin.svc.cluster.local:14001/get
     Starting at max qps with 5 thread(s) [gomax 8] for exactly 50 calls (10 per thread + 0)
@@ -160,7 +176,10 @@ The following demo shows a load-testing client [fortio](https://github.com/forti
 1. Confirm the `fortio` client is unable to make the same amount of successful requests as before due to the connection and request level circuit breaking limits configured above.

     ```console
-    $ kubectl exec "$fortio_pod" -c fortio -n client -- /usr/bin/fortio load -c 5 -qps 0 -n 50 -loglevel Warning http://httpbin.httpbin.svc.cluster.local:14001/get
+    kubectl exec "$fortio_pod" -c fortio -n client -- /usr/bin/fortio load -c 5 -qps 0 -n 50 -loglevel Warning http://httpbin.httpbin.svc.cluster.local:14001/get
+    ```
+    The output will be similar to:
+    ```console
     19:58:48 I logger.go:127> Log level is now 3 Warning (was 2 Info)
     Fortio 1.17.1 running at 0 queries per second, 8->8 procs, for 50 calls: http://httpbin.httpbin.svc.cluster.local:14001/get
     Starting at max qps with 5 thread(s) [gomax 8] for exactly 50 calls (10 per thread + 0)
@@ -217,7 +236,10 @@ The following demo shows a load-testing client [fortio](https://github.com/forti
 1. Examine the `Envoy` sidecar stats to see statistics pertaining to the requests that tripped the circuit breaker.

     ```console
-    $ osm proxy get stats $fortio_pod -n client | grep 'httpbin.*pending'
+    osm proxy get stats $fortio_pod -n client | grep 'httpbin.*pending'
+    ```
+    The output will be similar to:
+    ```console
     cluster.httpbin_httpbin_svc_cluster_local_14001.circuit_breakers.default.remaining_pending: 1
     cluster.httpbin_httpbin_svc_cluster_local_14001.circuit_breakers.default.rq_pending_open: 0
     cluster.httpbin_httpbin_svc_cluster_local_14001.circuit_breakers.high.rq_pending_open: 0
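Both the mesh-external demo above and the mesh-internal demo that follows configure these connection and request level limits with OSM's `UpstreamTrafficSetting` resource. A minimal sketch for the mesh-internal case (the resource name and limit values here are illustrative, not the exact ones used in these demos) might look like:

```bash
# Sketch only: caps concurrent TCP connections and pending/active HTTP requests
# to the httpbin upstream so that excess load trips the circuit breaker.
kubectl apply -f - <<EOF
apiVersion: policy.openservicemesh.io/v1alpha1
kind: UpstreamTrafficSetting
metadata:
  name: httpbin-circuit-breaking
  namespace: httpbin
spec:
  host: httpbin.httpbin.svc.cluster.local
  connectionSettings:
    tcp:
      maxConnections: 2
    http:
      maxPendingRequests: 1
      maxRequests: 1
EOF
```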
diff --git a/content/docs/demos/circuit_breaking_mesh_internal.md b/content/docs/demos/circuit_breaking_mesh_internal.md
index 7694f9d6..1c3ad990 100644
--- a/content/docs/demos/circuit_breaking_mesh_internal.md
+++ b/content/docs/demos/circuit_breaking_mesh_internal.md
@@ -22,62 +22,83 @@ The following demo shows a load-testing client [fortio](https://github.com/forti
 1. For simplicity, enable [permissive traffic policy mode](/docs/guides/traffic_management/permissive_mode) so that explicit SMI traffic access policies are not required for application connectivity within the mesh.

     ```bash
-    export osm_namespace=osm-system # Replace osm-system with the namespace where OSM is installed
+    export osm_namespace=osm-system
+    ```
+    Replace osm-system with the namespace where OSM is installed
+    ```bash
     kubectl patch meshconfig osm-mesh-config -n "$osm_namespace" -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
     ```

 1. Deploy the `httpbin` service into the `httpbin` namespace after enrolling its namespace to the mesh. The `httpbin` service runs on port `14001`.

+    Create the httpbin namespace
     ```bash
-    # Create the httpbin namespace
     kubectl create namespace httpbin
+    ```

-    # Add the namespace to the mesh
+    Add the namespace to the mesh
+    ```bash
     osm namespace add httpbin
+    ```

-    # Deploy httpbin service in the httpbin namespace
+    Deploy httpbin service in the httpbin namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/httpbin/httpbin.yaml -n httpbin
     ```

     Confirm the `httpbin` service and pods are up and running.

+    ```bash
+    kubectl get svc -n httpbin
+    ```
+    The output will be similar to:
     ```console
-    $ kubectl get svc -n httpbin
     NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
     httpbin   ClusterIP   10.96.198.23   <none>        14001/TCP   20s
     ```

     ```console
-    $ kubectl get pods -n httpbin
+    kubectl get pods -n httpbin
+    ```
+    The output will be similar to:
+    ```console
     NAME                     READY   STATUS    RESTARTS   AGE
     httpbin-5b8b94b9-lt2vs   2/2     Running   0          20s
     ```

 1. Deploy the `fortio` load-testing client in the `client` namespace after enrolling its namespace to the mesh.

+    Create the client namespace
     ```bash
-    # Create the client namespace
     kubectl create namespace client
-
-    # Add the namespace to the mesh
+    ```
+    Add the namespace to the mesh
+    ```bash
     osm namespace add client
-
-    # Deploy fortio client in the client namespace
+    ```
+    Deploy fortio client in the client namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/fortio/fortio.yaml -n client
     ```

     Confirm the `fortio` client pod is up and running.

     ```console
-    $ kubectl get pods -n client
+    kubectl get pods -n client
+    ```
+    The output will be similar to:
+    ```console
     NAME                      READY   STATUS    RESTARTS   AGE
     fortio-6477f8495f-bj4s9   2/2     Running   0          19s
     ```

 1. Confirm the `fortio` client is able to successfully make HTTP requests to the `httpbin` service on port `14001`.
 We call the `httpbin` service with `3` concurrent connections (`-c 3`) and send `50` requests (`-n 50`).

     ```console
-    $ export fortio_pod="$(kubectl get pod -n client -l app=fortio -o jsonpath='{.items[0].metadata.name}')"
+    export fortio_pod="$(kubectl get pod -n client -l app=fortio -o jsonpath='{.items[0].metadata.name}')"

-    $ kubectl exec "$fortio_pod" -c fortio -n client -- /usr/bin/fortio load -c 3 -qps 0 -n 50 -loglevel Warning http://httpbin.httpbin.svc.cluster.local:14001/get
+    kubectl exec "$fortio_pod" -c fortio -n client -- /usr/bin/fortio load -c 3 -qps 0 -n 50 -loglevel Warning http://httpbin.httpbin.svc.cluster.local:14001/get
+    ```
+    The output will be similar to:
+    ```console
     17:48:46 I logger.go:127> Log level is now 3 Warning (was 2 Info)
     Fortio 1.17.1 running at 0 queries per second, 8->8 procs, for 50 calls: http://httpbin.httpbin.svc.cluster.local:14001/get
     Starting at max qps with 3 thread(s) [gomax 8] for exactly 50 calls (16 per thread + 2)
@@ -138,7 +159,10 @@ The following demo shows a load-testing client [fortio](https://github.com/forti
 1. Confirm the `fortio` client is unable to make the same amount of successful requests as before due to the connection and request level circuit breaking limits configured above.

     ```console
-    $ kubectl exec "$fortio_pod" -c fortio -n client -- /usr/bin/fortio load -c 3 -qps 0 -n 50 -loglevel Warning http://httpbin.httpbin.svc.cluster.local:14001/get
+    kubectl exec "$fortio_pod" -c fortio -n client -- /usr/bin/fortio load -c 3 -qps 0 -n 50 -loglevel Warning http://httpbin.httpbin.svc.cluster.local:14001/get
+    ```
+    The output will be similar to:
+    ```console
     17:59:19 I logger.go:127> Log level is now 3 Warning (was 2 Info)
     Fortio 1.17.1 running at 0 queries per second, 8->8 procs, for 50 calls: http://httpbin.httpbin.svc.cluster.local:14001/get
     Starting at max qps with 3 thread(s) [gomax 8] for exactly 50 calls (16 per thread + 2)
@@ -212,7 +236,10 @@ The following demo shows a load-testing client [fortio](https://github.com/forti
 1. Examine the `Envoy` sidecar stats to see statistics pertaining to the requests that tripped the circuit breaker.

     ```console
-    $ osm proxy get stats "$fortio_pod" -n client | grep 'httpbin.*pending'
+    osm proxy get stats "$fortio_pod" -n client | grep 'httpbin.*pending'
+    ```
+    The output will be similar to:
+    ```console
     cluster.httpbin/httpbin|14001.circuit_breakers.default.remaining_pending: 1
     cluster.httpbin/httpbin|14001.circuit_breakers.default.rq_pending_open: 0
     cluster.httpbin/httpbin|14001.circuit_breakers.high.rq_pending_open: 0
diff --git a/content/docs/demos/egress_passthrough.md b/content/docs/demos/egress_passthrough.md
index 5e10dd11..e7fd82a8 100644
--- a/content/docs/demos/egress_passthrough.md
+++ b/content/docs/demos/egress_passthrough.md
@@ -25,28 +25,36 @@ This guide demonstrates a client within the service mesh accessing destinations
     ```

 1. Deploy the `curl` client into the `curl` namespace after enrolling its namespace to the mesh.

+    Create the curl namespace
     ```bash
-    # Create the curl namespace
     kubectl create namespace curl
-
-    # Add the namespace to the mesh
+    ```
+    Add the namespace to the mesh
+    ```bash
     osm namespace add curl
-
-    # Deploy curl client in the curl namespace
+    ```
+    Deploy curl client in the curl namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/curl/curl.yaml -n curl
     ```

     Confirm the `curl` client pod is up and running.

     ```console
-    $ kubectl get pods -n curl
+    kubectl get pods -n curl
+    ```
+    The output will be similar to:
+    ```console
     NAME                    READY   STATUS    RESTARTS   AGE
     curl-54ccc6954c-9rlvp   2/2     Running   0          20s
     ```

 1. Confirm the `curl` client is able to make successful HTTPS requests to the `httpbin.org` website on port `443`.

     ```console
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I https://httpbin.org:443
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I https://httpbin.org:443
+    ```
+    The output will be similar to:
+    ```console
     HTTP/2 200
     date: Tue, 16 Mar 2021 22:19:00 GMT
     content-type: text/html; charset=utf-8
@@ -63,7 +71,10 @@ This guide demonstrates a client within the service mesh accessing destinations
     kubectl patch meshconfig osm-mesh-config -n "$osm_namespace" -p '{"spec":{"traffic":{"enableEgress":false}}}' --type=merge
     ```
     ```console
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I https://httpbin.org:443
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I https://httpbin.org:443
+    ```
+    The output will be similar to:
+    ```console
     curl: (7) Failed to connect to httpbin.org port 443 after 3 ms: Connection refused
     command terminated with exit code 7
     ```
diff --git a/content/docs/demos/egress_policy.md b/content/docs/demos/egress_policy.md
index dbed2e14..ae643d46 100644
--- a/content/docs/demos/egress_policy.md
+++ b/content/docs/demos/egress_policy.md
@@ -25,21 +25,26 @@ This guide demonstrates a client within the service mesh accessing destinations
     ```

 1. Deploy the `curl` client into the `curl` namespace after enrolling its namespace to the mesh.

+    Create the curl namespace
     ```bash
-    # Create the curl namespace
     kubectl create namespace curl
-
-    # Add the namespace to the mesh
+    ```
+    Add the namespace to the mesh
+    ```bash
     osm namespace add curl
-
-    # Deploy curl client in the curl namespace
+    ```
+    Deploy curl client in the curl namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/curl/curl.yaml -n curl
     ```

     Confirm the `curl` client pod is up and running.

     ```console
-    $ kubectl get pods -n curl
+    kubectl get pods -n curl
+    ```
+    The output will be similar to:
+    ```console
     NAME                    READY   STATUS    RESTARTS   AGE
     curl-54ccc6954c-9rlvp   2/2     Running   0          20s
     ```
@@ -48,7 +53,10 @@ This guide demonstrates a client within the service mesh accessing destinations
 1. Confirm the `curl` client is unable to make the HTTP request `http://httpbin.org:80/get` to the `httpbin.org` website on port `80`.

     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/get
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/get
+    ```
+    The output will be similar to:
+    ```console
     command terminated with exit code 7
     ```
@@ -75,7 +83,10 @@ This guide demonstrates a client within the service mesh accessing destinations
 1. Confirm the `curl` client is able to make successful HTTP requests to `http://httpbin.org:80/get`.
     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/get
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/get
+    ```
+    The output will be similar to:
+    ```console
     HTTP/1.1 200 OK
     date: Thu, 13 May 2021 21:49:35 GMT
     content-type: application/json
@@ -92,7 +103,10 @@ This guide demonstrates a client within the service mesh accessing destinations
     ```

     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/get
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/get
+    ```
+    The output will be similar to:
+    ```console
     command terminated with exit code 7
     ```
@@ -102,7 +116,10 @@ Since HTTPS traffic is encrypted with TLS, OSM routes HTTPS based traffic by pro
 1. Confirm the `curl` client is unable to make the HTTPS request `https://httpbin.org:443/get` to the `httpbin.org` website on port `443`.

     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://httpbin.org:443/get
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://httpbin.org:443/get
+    ```
+    The output will be similar to:
+    ```console
     command terminated with exit code 7
     ```
@@ -128,8 +145,11 @@ Since HTTPS traffic is encrypted with TLS, OSM routes HTTPS based traffic by pro
     ```

 1. Confirm the `curl` client is able to make successful HTTPS requests to `https://httpbin.org:443/get`.
+    ```bash
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://httpbin.org:443/get
+    ```
+    The output will be similar to:
     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://httpbin.org:443/get
     HTTP/2 200
     date: Thu, 13 May 2021 22:09:36 GMT
     content-type: application/json
@@ -144,7 +164,10 @@ Since HTTPS traffic is encrypted with TLS, OSM routes HTTPS based traffic by pro
     kubectl delete egress httpbin-443 -n curl
     ```
     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://httpbin.org:443/get
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://httpbin.org:443/get
+    ```
+    The output will be similar to:
+    ```console
     command terminated with exit code 7
     ```
@@ -154,7 +177,10 @@ TCP based Egress traffic is matched against the destination port and IP address
 1. Confirm the `curl` client is unable to make the HTTPS request `https://openservicemesh.io:443` to the `openservicemesh.io` website on port `443`. Since HTTPS uses TCP as the underlying transport protocol, TCP based routing should implicitly enable access to any HTTP(s) host on the specified port.
     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://openservicemesh.io:443
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://openservicemesh.io:443
+    ```
+    The output will be similar to:
+    ```console
     command terminated with exit code 7
     ```
@@ -180,7 +206,10 @@ TCP based Egress traffic is matched against the destination port and IP address
 1. Confirm the `curl` client is able to make successful HTTPS requests to `https://openservicemesh.io:443`.

     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://openservicemesh.io:443
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://openservicemesh.io:443
+    ```
+    The output will be similar to:
+    ```console
     HTTP/2 200
     cache-control: public, max-age=0, must-revalidate
     content-length: 0
@@ -198,7 +227,10 @@ TCP based Egress traffic is matched against the destination port and IP address
     kubectl delete egress tcp-443 -n curl
     ```
     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://openservicemesh.io:443
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI https://openservicemesh.io:443
+    ```
+    The output will be similar to:
+    ```console
     command terminated with exit code 7
     ```
@@ -208,9 +240,15 @@ HTTP Egress policies can specify SMI HTTPRouteGroup matches for fine grained tra
 1. Confirm the `curl` client is unable to make HTTP requests to `http://httpbin.org:80/get` and `http://httpbin.org:80/status/200` to the `httpbin.org` website on port `80`.

     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/get
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/get
+    ```
+    The output will be similar to:
+    ```console
     command terminated with exit code 7
+    ```
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/status/200
+    ```console
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/status/200
+    ```
+    The output will be similar to:
+    ```console
     command terminated with exit code 7
     ```
@@ -251,7 +289,10 @@ HTTP Egress policies can specify SMI HTTPRouteGroup matches for fine grained tra
 1. Confirm the `curl` client is able to make successful HTTP requests to `http://httpbin.org:80/get`.

     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/get
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/get
+    ```
+    The output will be similar to:
+    ```console
     HTTP/1.1 200 OK
     date: Thu, 13 May 2021 21:49:35 GMT
     content-type: application/json
@@ -264,7 +305,10 @@ HTTP Egress policies can specify SMI HTTPRouteGroup matches for fine grained tra
 1. Confirm the `curl` client is unable to make successful HTTP requests to `http://httpbin.org:80/status/200`.
     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/status/200
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/status/200
+    ```
+    The output will be similar to:
+    ```console
     HTTP/1.1 404 Not Found
     date: Fri, 14 May 2021 17:08:48 GMT
     server: envoy
@@ -290,7 +334,10 @@ HTTP Egress policies can specify SMI HTTPRouteGroup matches for fine grained tra
 1. Confirm the `curl` client can now make successful HTTP requests to `http://httpbin.org:80/status/200`.

     ```console
-    $ kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/status/200
+    kubectl exec $(kubectl get pod -n curl -l app=curl -o jsonpath='{.items..metadata.name}') -n curl -c curl -- curl -sI http://httpbin.org:80/status/200
+    ```
+    The output will be similar to:
+    ```console
     HTTP/1.1 200 OK
     date: Fri, 14 May 2021 17:10:48 GMT
     content-type: text/html; charset=utf-8
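The Egress policies applied and deleted throughout this demo use OSM's Egress API. As a point of reference, a minimal sketch of an HTTP policy allowing the `curl` client to reach `httpbin.org` on port `80` might look like the following (the resource name `httpbin-80` and the `curl` service account are assumptions based on the demo's naming):

```bash
# Sketch only: permits HTTP egress from the curl client's service account
# to httpbin.org:80; delete the resource to revoke access again.
kubectl apply -f - <<EOF
apiVersion: policy.openservicemesh.io/v1alpha1
kind: Egress
metadata:
  name: httpbin-80
  namespace: curl
spec:
  sources:
  - kind: ServiceAccount
    name: curl
    namespace: curl
  hosts:
  - httpbin.org
  ports:
  - number: 80
    protocol: http
EOF
```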
diff --git a/content/docs/demos/ingress_contour.md b/content/docs/demos/ingress_contour.md
index f349c788..c0b41705 100644
--- a/content/docs/demos/ingress_contour.md
+++ b/content/docs/demos/ingress_contour.md
@@ -54,24 +54,32 @@ export ingress_port="$(kubectl -n "$osm_namespace" get service osm-contour-envoy
 Next, we will deploy the sample `httpbin` service.

+Create a namespace
 ```bash
-# Create a namespace
 kubectl create ns httpbin
-
-# Add the namespace to the mesh
+```
+Add the namespace to the mesh
+```bash
 osm namespace add httpbin
-
-# Deploy the application
+```
+Deploy the application
+```bash
 kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/httpbin/httpbin.yaml -n httpbin
 ```

 Confirm the `httpbin` service and pod is up and running:

 ```console
-$ kubectl get pods -n httpbin
+kubectl get pods -n httpbin
+```
+The output will be similar to:
+```console
 NAME                       READY   STATUS    RESTARTS   AGE
 httpbin-74677b7df7-zzlm2   2/2     Running   0          11h
+```
-
-$ kubectl get svc -n httpbin
+```bash
+kubectl get svc -n httpbin
+```
+The output will be similar to:
+```console
 NAME      TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)     AGE
 httpbin   ClusterIP   10.0.22.196   <none>        14001/TCP   11h
 ```
@@ -115,7 +123,10 @@ EOF
 Now, we expect external clients to be able to access the `httpbin` service for HTTP requests for the `Host:` header `httpbin.org`:

 ```console
-$ curl -sI http://"$ingress_host":"$ingress_port"/get -H "Host: httpbin.org"
+curl -sI http://"$ingress_host":"$ingress_port"/get -H "Host: httpbin.org"
+```
+The output will be similar to:
+```console
 HTTP/1.1 200 OK
 server: envoy
 date: Fri, 06 Aug 2021 17:39:43 GMT
@@ -194,7 +205,10 @@ EOF
 Now, we expect external clients to be able to access the `httpbin` service for HTTP requests for the `Host:` header `httpbin.org` with HTTPS proxying over mTLS between the ingress gateway and service backend:

 ```console
-$ curl -sI http://"$ingress_host":"$ingress_port"/get -H "Host: httpbin.org"
+curl -sI http://"$ingress_host":"$ingress_port"/get -H "Host: httpbin.org"
+```
+The output will be similar to:
+```console
 HTTP/1.1 200 OK
 server: envoy
 date: Fri, 06 Aug 2021 17:39:43 GMT
@@ -234,7 +248,10 @@ EOF
 Confirm the requests are rejected with an `HTTP 403 Forbidden` response:

 ```console
-$ curl -sI http://"$ingress_host":"$ingress_port"/get -H "Host: httpbin.org"
+curl -sI http://"$ingress_host":"$ingress_port"/get -H "Host: httpbin.org"
+```
+The output will be similar to:
+```console
 HTTP/1.1 403 Forbidden
 content-length: 19
 content-type: text/plain
@@ -269,8 +286,12 @@ EOF
 ```

 Confirm the requests succeed again since untrusted authenticated principals are allowed to connect to the backend:
+
+```bash
+curl -sI http://"$ingress_host":"$ingress_port"/get -H "Host: httpbin.org"
 ```
-$ curl -sI http://"$ingress_host":"$ingress_port"/get -H "Host: httpbin.org"
+The output will be similar to:
+```console
 HTTP/1.1 200 OK
 server: envoy
 date: Fri, 06 Aug 2021 18:51:47 GMT
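The ingress routing itself is configured with a Contour `HTTPProxy` paired with OSM's `IngressBackend` resource (both elided from the hunks above). A sketch of the `IngressBackend` that admits traffic from the Contour gateway to the `httpbin` backend might look like the following (names are assumptions based on this demo):

```bash
# Sketch only: allows the osm-contour-envoy gateway service to send traffic
# to the httpbin backend on port 14001. $osm_namespace is the variable
# exported earlier in this guide.
kubectl apply -f - <<EOF
apiVersion: policy.openservicemesh.io/v1alpha1
kind: IngressBackend
metadata:
  name: httpbin
  namespace: httpbin
spec:
  backends:
  - name: httpbin
    port:
      number: 14001
      protocol: http
  sources:
  - kind: Service
    namespace: $osm_namespace
    name: osm-contour-envoy
EOF
```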
diff --git a/content/docs/demos/ingress_k8s_nginx.md b/content/docs/demos/ingress_k8s_nginx.md
index 83aef8b0..266cc8cb 100644
--- a/content/docs/demos/ingress_k8s_nginx.md
+++ b/content/docs/demos/ingress_k8s_nginx.md
@@ -18,13 +18,26 @@ This guide will demonstrate how to configure HTTP and HTTPS ingress to a service
 ## Demo

 First, note the details regarding OSM and Nginx installations:

+Replace osm-system with the namespace where OSM is installed
+```bash
+osm_namespace=osm-system
+```
+Replace osm with the mesh name (use `osm mesh list` command)
+```bash
+osm_mesh_name=osm
+```
+Replace with the namespace where Nginx is installed
+```bash
+nginx_ingress_namespace=
+```
+Replace with the name of the nginx ingress controller service
+```bash
+nginx_ingress_service=
+```
 ```bash
-osm_namespace=osm-system # Replace osm-system with the namespace where OSM is installed
-osm_mesh_name=osm # replace osm with the mesh name (use `osm mesh list` command)
-
-nginx_ingress_namespace= # replace with the namespace where Nginx is installed
-nginx_ingress_service= # replace with the name of the nginx ingress controller service
 nginx_ingress_host="$(kubectl -n "$nginx_ingress_namespace" get service "$nginx_ingress_service" -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
+```
+```bash
 nginx_ingress_port="$(kubectl -n "$nginx_ingress_namespace" get service "$nginx_ingress_service" -o jsonpath='{.spec.ports[?(@.name=="http")].port}')"
 ```
@@ -35,24 +48,33 @@ osm namespace add "$nginx_ingress_namespace" --mesh-name "$osm_mesh_name" --disa
 Next, we will deploy the sample `httpbin` service.

+Create a namespace
 ```bash
-# Create a namespace
 kubectl create ns httpbin
-
+```
-# Add the namespace to the mesh
+Add the namespace to the mesh
+```bash
 osm namespace add httpbin
-
-# Deploy the application
+```
+Deploy the application
+```bash
 kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/httpbin/httpbin.yaml -n httpbin
 ```

 Confirm the `httpbin` service and pod is up and running:

 ```console
-$ kubectl get pods -n httpbin
+kubectl get pods -n httpbin
+```
+The output will be similar to:
+```console
 NAME                       READY   STATUS    RESTARTS   AGE
 httpbin-74677b7df7-zzlm2   2/2     Running   0          11h
+```
-
-$ kubectl get svc -n httpbin
+```bash
+kubectl get svc -n httpbin
+```
+The output will be similar to:
+```console
 NAME      TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)     AGE
 httpbin   ClusterIP   10.0.22.196   <none>        14001/TCP   11h
 ```
@@ -101,7 +123,10 @@ EOF
 Now, we expect external clients to be able to access the `httpbin` service for HTTP requests:

 ```console
-$ curl -sI http://"$nginx_ingress_host":"$nginx_ingress_port"/get
+curl -sI http://"$nginx_ingress_host":"$nginx_ingress_port"/get
+```
+The output will be similar to:
+```console
 HTTP/1.1 200 OK
 Date: Wed, 18 Aug 2021 18:12:35 GMT
 Content-Type: application/json
@@ -187,8 +212,11 @@ EOF
 ```

 Now, we expect external clients to be able to access the `httpbin` service for requests with HTTPS proxying over mTLS between the ingress gateway and service backend:
-```console
-$ curl -sI http://"$nginx_ingress_host":"$nginx_ingress_port"/get
+```bash
+curl -sI http://"$nginx_ingress_host":"$nginx_ingress_port"/get
+```
+The output will be similar to:
+```console
 HTTP/1.1 200 OK
 Date: Wed, 18 Aug 2021 18:12:35 GMT
 Content-Type: application/json
@@ -227,7 +255,10 @@ EOF
 Confirm the requests are rejected with an `HTTP 403 Forbidden` response:

 ```console
-$ curl -sI http://"$nginx_ingress_host":"$nginx_ingress_port"/get
+curl -sI http://"$nginx_ingress_host":"$nginx_ingress_port"/get
+```
+The output will be similar to:
+```console
 HTTP/1.1 403 Forbidden
 Date: Wed, 18 Aug 2021 18:36:09 GMT
 Content-Type: text/plain
@@ -261,8 +292,11 @@ EOF
 ```

 Confirm the requests succeed again since untrusted authenticated principals are allowed to connect to the backend:
+```bash
+curl -sI http://"$nginx_ingress_host":"$nginx_ingress_port"/get
 ```
-$ curl -sI http://"$nginx_ingress_host":"$nginx_ingress_port"/get
+The output will be similar to:
+```console
 HTTP/1.1 200 OK
 Date: Wed, 18 Aug 2021 18:36:49 GMT
 Content-Type: application/json
diff --git a/content/docs/demos/local_rate_limit_connections.md b/content/docs/demos/local_rate_limit_connections.md
index b1ccca40..142674fb 100644
--- a/content/docs/demos/local_rate_limit_connections.md
+++ b/content/docs/demos/local_rate_limit_connections.md
@@ -21,27 +21,35 @@ This guide demonstrates how to configure rate limiting for L4 TCP connections de
 The following demo shows a client [fortio-client](https://github.com/fortio/fortio) sending TCP traffic to the `fortio` `TCP echo` service. The `fortio` service echoes TCP messages back to the client. We will see the impact of applying local TCP rate limiting policies targeting the `fortio` service to control the throughput of traffic destined to the service backend.

 1. For simplicity, enable [permissive traffic policy mode](/docs/guides/traffic_management/permissive_mode) so that explicit SMI traffic access policies are not required for application connectivity within the mesh.
+
+    Replace osm-system with the namespace where OSM is installed
     ```bash
-    export osm_namespace=osm-system # Replace osm-system with the namespace where OSM is installed
+    export osm_namespace=osm-system
     kubectl patch meshconfig osm-mesh-config -n "$osm_namespace" -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
     ```

 1. Deploy the `fortio` `TCP echo` service in the `demo` namespace after enrolling its namespace to the mesh. The `fortio` `TCP echo` service runs on port `8078`.
+
+    Create the demo namespace
     ```bash
-    # Create the demo namespace
     kubectl create namespace demo
-
-    # Add the namespace to the mesh
+    ```
+    Add the namespace to the mesh
+    ```bash
     osm namespace add demo
-
-    # Deploy fortio TCP echo in the demo namespace
+    ```
+    Deploy fortio TCP echo in the demo namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/fortio/fortio.yaml -n demo
     ```

     Confirm the `fortio` service pod is up and running.

     ```console
-    $ kubectl get pods -n demo
+    kubectl get pods -n demo
+    ```
+    The output will be similar to:
+    ```console
     NAME                     READY   STATUS    RESTARTS   AGE
     fortio-c4bd7857f-7mm6w   2/2     Running   0          22m
     ```
@@ -60,9 +68,12 @@ The following demo shows a client [fortio-client](https://github.com/fortio/fort
 1. Confirm the `fortio-client` app is able to successfully make TCP connections and send data to the `fortio` `TCP echo` service on port `8078`. We call the `fortio` service with `3` concurrent connections (`-c 3`) and send `10` calls (`-n 10`).

     ```console
-    $ fortio_client="$(kubectl get pod -n demo -l app=fortio-client -o jsonpath='{.items[0].metadata.name}')"
+    fortio_client="$(kubectl get pod -n demo -l app=fortio-client -o jsonpath='{.items[0].metadata.name}')"

-    $ $ kubectl exec "$fortio_client" -n demo -c fortio-client -- fortio load -qps -1 -c 3 -n 10 tcp://fortio.demo.svc.cluster.local:8078
+    kubectl exec "$fortio_client" -n demo -c fortio-client -- fortio load -qps -1 -c 3 -n 10 tcp://fortio.demo.svc.cluster.local:8078
+    ```
+    The output will be similar to:
+    ```console
     Fortio 1.32.3 running at -1 queries per second, 8->8 procs, for 10 calls: tcp://fortio.demo.svc.cluster.local:8078
     20:41:47 I tcprunner.go:238> Starting tcp test for tcp://fortio.demo.svc.cluster.local:8078 with 3 threads at -1.0 qps
     Starting at max qps with 3 thread(s) [gomax 8] for exactly 10 calls (3 per thread + 1)
@@ -90,7 +101,7 @@ The following demo shows a client [fortio-client](https://github.com/fortio/fort
     ```

     As seen above, all the TCP connections from the `fortio-client` pod succeeded.
-    ```
+    ```console
     Total Bytes sent: 240, received: 240
     tcp OK : 10 (100.0 %)
     All done 10 calls (plus 0 warmup) 10.966 ms avg, 226.2 qps
@@ -116,15 +127,18 @@ The following demo shows a client [fortio-client](https://github.com/fortio/fort
     Confirm no traffic has been rate limited yet by examining the stats on the `fortio` backend pod.

     ```console
-    $ fortio_server="$(kubectl get pod -n demo -l app=fortio -o jsonpath='{.items[0].metadata.name}')"
+    fortio_server="$(kubectl get pod -n demo -l app=fortio -o jsonpath='{.items[0].metadata.name}')"

-    $ osm proxy get stats "$fortio_server" -n demo | grep fortio.*8078.*rate_limit
+    osm proxy get stats "$fortio_server" -n demo | grep fortio.*8078.*rate_limit
     local_rate_limit.inbound_demo/fortio_8078_tcp.rate_limited: 0
     ```

 1. Confirm TCP connections are rate limited.
```console - $ kubectl exec "$fortio_client" -n demo -c fortio-client -- fortio load -qps -1 -c 3 -n 10 tcp://fortio.demo.svc.cluster.local:8078 + kubectl exec "$fortio_client" -n demo -c fortio-client -- fortio load -qps -1 -c 3 -n 10 tcp://fortio.demo.svc.cluster.local:8078 + ``` + The output will be similar to: + ```comsole Fortio 1.32.3 running at -1 queries per second, 8->8 procs, for 10 calls: tcp://fortio.demo.svc.cluster.local:8078 20:49:38 I tcprunner.go:238> Starting tcp test for tcp://fortio.demo.svc.cluster.local:8078 with 3 threads at -1.0 qps Starting at max qps with 3 thread(s) [gomax 8] for exactly 10 calls (3 per thread + 1) @@ -172,7 +186,7 @@ The following demo shows a client [fortio-client](https://github.com/fortio/fort Examine the sidecar stats to further confirm this. ```console - $ osm proxy get stats "$fortio_server" -n demo | grep 'fortio.*8078.*rate_limit' + osm proxy get stats "$fortio_server" -n demo | grep 'fortio.*8078.*rate_limit' local_rate_limit.inbound_demo/fortio_8078_tcp.rate_limited: 7 ``` @@ -197,7 +211,10 @@ The following demo shows a client [fortio-client](https://github.com/fortio/fort 1. Confirm the burst capability allows a burst of connections within a small window of time. ```console - $ kubectl exec "$fortio_client" -n demo -c fortio-client -- fortio load -qps -1 -c 3 -n 10 tcp://fortio.demo.svc.cluster.local:8078 + kubectl exec "$fortio_client" -n demo -c fortio-client -- fortio load -qps -1 -c 3 -n 10 tcp://fortio.demo.svc.cluster.local:8078 + ``` + The output will be similar to: + ```console Fortio 1.32.3 running at -1 queries per second, 8->8 procs, for 10 calls: tcp://fortio.demo.svc.cluster.local:8078 20:56:56 I tcprunner.go:238> Starting tcp test for tcp://fortio.demo.svc.cluster.local:8078 with 3 threads at -1.0 qps Starting at max qps with 3 thread(s) [gomax 8] for exactly 10 calls (3 per thread + 1) @@ -223,7 +240,7 @@ The following demo shows a client [fortio-client](https://github.com/fortio/fort ``` As seen above, all the TCP connections from the `fortio-client` pod succeeded. - ``` + ```console Total Bytes sent: 240, received: 240 tcp OK : 10 (100.0 %) All done 10 calls (plus 0 warmup) 1.531 ms avg, 1897.1 qps diff --git a/content/docs/demos/outbound_ip_exclusion.md b/content/docs/demos/outbound_ip_exclusion.md index b50fbebc..e4dd404b 100644 --- a/content/docs/demos/outbound_ip_exclusion.md +++ b/content/docs/demos/outbound_ip_exclusion.md @@ -27,28 +27,36 @@ The following demo shows an HTTP `curl` client making HTTP requests to the `http 1. Deploy the `curl` client into the `curl` namespace after enrolling its namespace to the mesh. + Create the curl namespace ```bash - # Create the curl namespace kubectl create namespace curl - - # Add the namespace to the mesh + ``` + Add the namespace to the mesh + ```bash osm namespace add curl - - # Deploy curl client in the curl namespace + ``` + Deploy curl client in the curl namespace + ```bash kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/curl/curl.yaml -n curl ``` Confirm the `curl` client pod is up and running. ```console - $ kubectl get pods -n curl + kubectl get pods -n curl + ``` + The out will be similar to: + ```comsole NAME READY STATUS RESTARTS AGE curl-54ccc6954c-9rlvp 2/2 Running 0 20s ``` 1. Retrieve the public IP address for the `httpbin.org` website. For the purpose of this demo, we will test with a single IP range to be excluded from traffic interception. 
diff --git a/content/docs/demos/outbound_ip_exclusion.md b/content/docs/demos/outbound_ip_exclusion.md
index b50fbebc..e4dd404b 100644
--- a/content/docs/demos/outbound_ip_exclusion.md
+++ b/content/docs/demos/outbound_ip_exclusion.md
@@ -27,28 +27,36 @@ The following demo shows an HTTP `curl` client making HTTP requests to the `http

 1. Deploy the `curl` client into the `curl` namespace after enrolling its namespace to the mesh.

+    Create the curl namespace
     ```bash
-    # Create the curl namespace
     kubectl create namespace curl
-
-    # Add the namespace to the mesh
+    ```
+    Add the namespace to the mesh
+    ```bash
     osm namespace add curl
-
-    # Deploy curl client in the curl namespace
+    ```
+    Deploy curl client in the curl namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/curl/curl.yaml -n curl
     ```

     Confirm the `curl` client pod is up and running.

     ```console
-    $ kubectl get pods -n curl
+    kubectl get pods -n curl
+    ```
+    The output will be similar to:
+    ```console
     NAME                    READY   STATUS    RESTARTS   AGE
     curl-54ccc6954c-9rlvp   2/2     Running   0          20s
     ```

 1. Retrieve the public IP address for the `httpbin.org` website. For the purpose of this demo, we will test with a single IP range to be excluded from traffic interception. In this example, we will use the IP address `54.91.118.50` represented by the IP range `54.91.118.50/32`, to make HTTP requests with and without outbound IP range exclusions configured.

     ```console
-    $ nslookup httpbin.org
+    nslookup httpbin.org
+    ```
+    The output will be similar to:
+    ```console
     Server:         172.23.48.1
     Address:        172.23.48.1#53
@@ -68,7 +76,10 @@ The following demo shows an HTTP `curl` client making HTTP requests to the `http
 1. Confirm the `curl` client is unable to make successful HTTP requests to the `httpbin.org` website running on `http://54.91.118.50:80`.

     ```console
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://54.91.118.50:80
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://54.91.118.50:80
+    ```
+    The output will be similar to:
+    ```console
     curl: (7) Failed to connect to 54.91.118.50 port 80: Connection refused
     command terminated with exit code 7
     ```
@@ -83,7 +94,7 @@ The following demo shows an HTTP `curl` client making HTTP requests to the `http
 1. Confirm the MeshConfig has been updated as expected
     ```console
     # 54.91.118.50 is one of the IP addresses of httpbin.org
-    $ kubectl get meshconfig osm-mesh-config -n "$osm_namespace" -o jsonpath='{.spec.traffic.outboundIPRangeExclusionList}{"\n"}'
+    kubectl get meshconfig osm-mesh-config -n "$osm_namespace" -o jsonpath='{.spec.traffic.outboundIPRangeExclusionList}{"\n"}'
     ["54.91.118.50/32"]
     ```
@@ -97,7 +108,10 @@ The following demo shows an HTTP `curl` client making HTTP requests to the `http
 1. Confirm the `curl` client is able to make successful HTTP requests to the `httpbin.org` website running on `http://54.91.118.50:80`
     ```console
     # 54.91.118.50 is one of the IP addresses for httpbin.org
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://54.91.118.50:80
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://54.91.118.50:80
+    ```
+    The output will be similar to:
+    ```console
     HTTP/1.1 200 OK
     Date: Thu, 18 Mar 2021 23:17:44 GMT
     Content-Type: text/html; charset=utf-8
@@ -109,9 +123,12 @@ The following demo shows an HTTP `curl` client making HTTP requests to the `http
     ```

 1. Confirm that HTTP requests to other IP addresses of the `httpbin.org` website that are not excluded fail
+    34.199.75.4 is one of the IP addresses for httpbin.org
+    ```console
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://34.199.75.4:80
+    ```
+    The output will be similar to:
     ```console
-    # 34.199.75.4 is one of the IP addresses for httpbin.org
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://34.199.75.4:80
     curl: (7) Failed to connect to 34.199.75.4 port 80: Connection refused
     command terminated with exit code 7
     ```
\ No newline at end of file
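The MeshConfig update referenced between the failing and succeeding requests above sets the `outboundIPRangeExclusionList` field. A sketch of the patch command, using the demo's `54.91.118.50/32` range and the same `$osm_namespace` variable used elsewhere in these docs, might look like:

```bash
# Sketch only: traffic to IP ranges in outboundIPRangeExclusionList bypasses
# interception by OSM's sidecar, so the curl client reaches 54.91.118.50 directly.
kubectl patch meshconfig osm-mesh-config -n "$osm_namespace" \
  -p '{"spec":{"traffic":{"outboundIPRangeExclusionList":["54.91.118.50/32"]}}}' --type=merge
```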
diff --git a/content/docs/demos/permissive_traffic_mode.md b/content/docs/demos/permissive_traffic_mode.md
index d08e2cae..3ac195b9 100644
--- a/content/docs/demos/permissive_traffic_mode.md
+++ b/content/docs/demos/permissive_traffic_mode.md
@@ -28,48 +28,61 @@ The following demo shows an HTTP `curl` client making HTTP requests to the `http

 1. Deploy the `httpbin` service into the `httpbin` namespace after enrolling its namespace to the mesh. The `httpbin` service runs on port `14001`.

+    Create the httpbin namespace
     ```bash
-    # Create the httpbin namespace
     kubectl create namespace httpbin
-
-    # Add the namespace to the mesh
+    ```
+    Add the namespace to the mesh
+    ```bash
     osm namespace add httpbin
-
-    # Deploy httpbin service in the httpbin namespace
+    ```
+    Deploy httpbin service in the httpbin namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/httpbin/httpbin.yaml -n httpbin
     ```

     Confirm the `httpbin` service and pods are up and running.

     ```console
-    $ kubectl get svc -n httpbin
+    kubectl get svc -n httpbin
+    ```
+    The output will be similar to:
+    ```console
     NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
     httpbin   ClusterIP   10.96.198.23   <none>        14001/TCP   20s
     ```

     ```console
-    $ kubectl get pods -n httpbin
+    kubectl get pods -n httpbin
+    ```
+    The output will be similar to:
+    ```console
     NAME                     READY   STATUS    RESTARTS   AGE
     httpbin-5b8b94b9-lt2vs   2/2     Running   0          20s
     ```

 1. Deploy the `curl` client into the `curl` namespace after enrolling its namespace to the mesh.

+    Create the curl namespace
     ```bash
-    # Create the curl namespace
     kubectl create namespace curl
-
-    # Add the namespace to the mesh
+    ```
+    Add the namespace to the mesh
+    ```bash
     osm namespace add curl
-
-    # Deploy curl client in the curl namespace
+    ```
+    Deploy curl client in the curl namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/curl/curl.yaml -n curl
     ```

     Confirm the `curl` client pod is up and running.

     ```console
-    $ kubectl get pods -n curl
+    kubectl get pods -n curl
+    ```
+    The output will be similar to:
+    ```console
     NAME                    READY   STATUS    RESTARTS   AGE
     curl-54ccc6954c-9rlvp   2/2     Running   0          20s
     ```
@@ -77,7 +90,10 @@ The following demo shows an HTTP `curl` client making HTTP requests to the `http
 1. Confirm the `curl` client is able to access the `httpbin` service on port `14001`.
     ```console
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://httpbin.httpbin:14001
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://httpbin.httpbin:14001
+    ```
+    The output will be similar to:
+    ```console
     HTTP/1.1 200 OK
     server: envoy
     date: Mon, 15 Mar 2021 22:45:23 GMT
@@ -97,7 +113,10 @@ The following demo shows an HTTP `curl` client making HTTP requests to the `http
     ```

     ```console
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://httpbin.httpbin:14001
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://httpbin.httpbin:14001
+    ```
+    The output will be similar to:
+    ```console
     curl: (7) Failed to connect to httpbin.httpbin port 14001: Connection refused
     command terminated with exit code 7
     ```
\ No newline at end of file
diff --git a/content/docs/demos/prometheus_grafana.md b/content/docs/demos/prometheus_grafana.md
index 77bbec9f..08605e1d 100644
--- a/content/docs/demos/prometheus_grafana.md
+++ b/content/docs/demos/prometheus_grafana.md
@@ -46,9 +46,11 @@ Prometheus needs to be configured to scrape the OSM endpoints and properly handle
 Use `kubectl get configmap` to verify the `stable-prometheus-server` configmap has been created. For example:

-```
+```bash
 $ kubectl get configmap
-
+```
+The output will be similar to:
+```console
 NAME                             DATA   AGE
 ...
 stable-prometheus-alertmanager   1      18m
 stable-prometheus-server         5      18m
 ```

 Create `update-prometheus-configmap.yaml` with the following:

-```
+```yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
@@ -322,15 +324,18 @@ helm install grafana/grafana --generate-name

 Use `kubectl get secret` to display the administrator password for Grafana.

-```
+```bash
 export SECRET_NAME=$(kubectl get secret -l "app.kubernetes.io/name=grafana" -o jsonpath="{.items[0].metadata.name}")
 kubectl get secret $SECRET_NAME -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
 ```

 Use `kubectl port-forward` to forward the traffic between the Grafana's management application and your development computer.

-```
+```bash
 export POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=grafana" -o jsonpath="{.items[0].metadata.name}")

 kubectl port-forward $POD_NAME 3000
 ```
diff --git a/content/docs/demos/statefulsets.md b/content/docs/demos/statefulsets.md
index 7802cacc..61f1d3c5 100644
--- a/content/docs/demos/statefulsets.md
+++ b/content/docs/demos/statefulsets.md
@@ -22,15 +22,15 @@ This guide will illustrate how to configure stateful applications with OSM and S
 First, we need to install Apache Zookeeper, the backing metadata store for Kafka. We're going to start off by creating a namespace for our zookeeper pods and adding that namespace to our OSM mesh:

-```shell
-# Create a namespace for Zookeeper and add it to OSM
+Create a namespace for Zookeeper and add it to OSM
+```bash
 kubectl create ns zookeeper
 osm namespace add zookeeper
 ```

 Next, we need to configure traffic policies that will allow the Zookeepers to talk to each other once they're installed.
 These policies will also allow our eventual Kafka deployment to talk to Zookeeper:

-```shell
+```bash
 kubectl apply -f - <}}/manifests/apps/tcp-echo.yaml -n tcp-demo
 ```

     Confirm the `tcp-echo` service and pod is up and running.

     ```console
-    $ kubectl get svc,po -n tcp-demo
+    kubectl get svc,po -n tcp-demo
+    ```
+    The output will be similar to:
+    ```console
     NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
     service/tcp-echo   ClusterIP   10.0.216.68   <none>        9000/TCP   97s
     ```
@@ -50,21 +56,27 @@ The following demo shows a TCP client sending data to a `tcp-echo` server, which

 1. Deploy the `curl` client into the `curl` namespace.

+    Create the curl namespace
     ```bash
-    # Create the curl namespace
     kubectl create namespace curl
-
-    # Add the namespace to the mesh
+    ```
+    Add the namespace to the mesh
+    ```bash
     osm namespace add curl
+    ```

-    # Deploy curl client in the curl namespace
+    Deploy curl client in the curl namespace
+    ```bash
     kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/curl/curl.yaml -n curl
     ```

     Confirm the `curl` client pod is up and running.

     ```console
-    $ kubectl get pods -n curl
+    kubectl get pods -n curl
+    ```
+    The output will be similar to:
+    ```console
     NAME                    READY   STATUS    RESTARTS   AGE
     curl-54ccc6954c-9rlvp   2/2     Running   0          20s
     ```
@@ -80,7 +92,9 @@ We will enable service discovery using [permissive traffic policy mode](/docs/gu
 1. Confirm the `curl` client is able to send and receive a response from the `tcp-echo` service using TCP routing.

     ```console
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- sh -c 'echo hello | nc tcp-echo.tcp-demo 9000'
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- sh -c 'echo hello | nc tcp-echo.tcp-demo 9000'
+    ```
+    ```console
     echo response: hello
     ```
@@ -97,7 +111,9 @@ When using SMI traffic policy mode, explicit traffic policies must be configured
 1. Confirm the `curl` client is unable to send and receive a response from the `tcp-echo` service in the absence of SMI policies.

     ```console
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- sh -c 'echo hello | nc tcp-echo.tcp-demo 9000'
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- sh -c 'echo hello | nc tcp-echo.tcp-demo 9000'
+    ```
+    ```console
     command terminated with exit code 1
     ```
@@ -138,5 +154,8 @@ When using SMI traffic policy mode, explicit traffic policies must be configured
 1. Confirm the `curl` client is able to send and receive a response from the `tcp-echo` service using SMI TCP route.

     ```console
-    $ kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- sh -c 'echo hello | nc tcp-echo.tcp-demo 9000'
-    echo response: hello
\ No newline at end of file
+    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- sh -c 'echo hello | nc tcp-echo.tcp-demo 9000'
+    ```
+    ```console
+    echo response: hello
+    ```
\ No newline at end of file
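The SMI policies elided between the steps above consist of a TCPRoute for port `9000` and a TrafficTarget allowing the `curl` client to reach `tcp-echo`. A sketch of those resources (the resource and service account names are assumptions based on the demo manifests; the API versions match those listed by `osm mesh list` elsewhere in these docs) might look like:

```bash
# Sketch only: the TCPRoute matches TCP traffic on port 9000, and the
# TrafficTarget authorizes the curl service account to send that traffic
# to the tcp-echo service account.
kubectl apply -f - <<EOF
apiVersion: specs.smi-spec.io/v1alpha4
kind: TCPRoute
metadata:
  name: tcp-echo-route
  namespace: tcp-demo
spec:
  matches:
    ports:
    - 9000
---
apiVersion: access.smi-spec.io/v1alpha3
kind: TrafficTarget
metadata:
  name: tcp-echo-access
  namespace: tcp-demo
spec:
  destination:
    kind: ServiceAccount
    name: tcp-echo
    namespace: tcp-demo
  sources:
  - kind: ServiceAccount
    name: curl
    namespace: curl
  rules:
  - kind: TCPRoute
    name: tcp-echo-route
EOF
```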
diff --git a/content/docs/guides/app_onboarding/_index.md b/content/docs/guides/app_onboarding/_index.md
index 64b38c60..93176157 100644
--- a/content/docs/guides/app_onboarding/_index.md
+++ b/content/docs/guides/app_onboarding/_index.md
@@ -23,7 +23,7 @@ The following guide describes how to onboard a Kubernetes microservice to an OSM
     First get the Kubernetes API server cluster IP:

     ```console
-    $ kubectl get svc -n default
+    kubectl get svc -n default
     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
     kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   1d
     ```
@@ -32,7 +32,7 @@ The following guide describes how to onboard a Kubernetes microservice to an OSM
     Add this IP to the MeshConfig so that outbound traffic to it is excluded from interception by OSM's sidecar:

     ```console
-    $ kubectl patch meshconfig osm-mesh-config -n <osm-namespace> -p '{"spec":{"traffic":{"outboundIPRangeExclusionList":["10.0.0.1/32"]}}}' --type=merge
+    kubectl patch meshconfig osm-mesh-config -n <osm-namespace> -p '{"spec":{"traffic":{"outboundIPRangeExclusionList":["10.0.0.1/32"]}}}' --type=merge
     meshconfig.config.openservicemesh.io/osm-mesh-config patched
     ```
@@ -74,7 +74,7 @@ The following guide describes how to onboard a Kubernetes microservice to an OSM
     To onboard a namespace containing applications to be managed by OSM, run the `osm namespace add` command:

     ```console
-    $ osm namespace add <namespace> --mesh-name <mesh-name>
+    osm namespace add <namespace> --mesh-name <mesh-name>
     ```

     By default, the `osm namespace add` command enables automatic sidecar injection for pods in the namespace.
@@ -93,7 +93,7 @@ The following guide describes how to onboard a Kubernetes microservice to an OSM
 Namespaces can be removed from the OSM mesh with the `osm namespace remove` command:

 ```console
-$ osm namespace remove <namespace>
+osm namespace remove <namespace>
 ```

 > **Please Note:**
diff --git a/content/docs/guides/app_onboarding/sidecar_injection.md b/content/docs/guides/app_onboarding/sidecar_injection.md
index bdcf0ffb..8babf97c 100644
--- a/content/docs/guides/app_onboarding/sidecar_injection.md
+++ b/content/docs/guides/app_onboarding/sidecar_injection.md
@@ -30,12 +30,12 @@ Automatic Sidecar injection can be enabled in the following ways:

   ```console
   # Enable sidecar injection on a namespace
-  $ kubectl annotate namespace <namespace> openservicemesh.io/sidecar-injection=enabled
+  kubectl annotate namespace <namespace> openservicemesh.io/sidecar-injection=enabled
   ```

   ```console
   # Enable sidecar injection on a pod
-  $ kubectl annotate pod <pod> openservicemesh.io/sidecar-injection=enabled
+  kubectl annotate pod <pod> openservicemesh.io/sidecar-injection=enabled
   ```

 - Setting the sidecar injection annotation to `enabled` in the Kubernetes resource spec for a namespace or pod:
@@ -61,7 +61,7 @@ Namespaces can be disabled for automatic sidecar injection in the following ways

   ```console
   # Disable sidecar injection on a namespace
-  $ kubectl annotate namespace <namespace> openservicemesh.io/sidecar-injection=disabled
+  kubectl annotate namespace <namespace> openservicemesh.io/sidecar-injection=disabled
   ```

 ### Explicitly Disabling Automatic Sidecar Injection on Pods
@@ -71,7 +71,7 @@ Individual pods can be explicitly disabled for sidecar injection. This is useful
 - Using `kubectl` to annotate individual pods to disable sidecar injection:
   ```console
   # Disable sidecar injection on a pod
-  $ kubectl annotate pod <pod> openservicemesh.io/sidecar-injection=disabled
+  kubectl annotate pod <pod> openservicemesh.io/sidecar-injection=disabled
   ```
 - Setting the sidecar injection annotation to `disabled` in the Kubernetes resource spec for the pod:
diff --git a/content/docs/guides/cli.md b/content/docs/guides/cli.md
index a3204ef2..9122bea6 100644
--- a/content/docs/guides/cli.md
+++ b/content/docs/guides/cli.md
@@ -90,9 +90,9 @@ Building OSM from source requires more steps but is the best way to test the lat
 You must have a working [Go](https://golang.org/doc/install) environment.

 ```console
-$ git clone git@github.com:openservicemesh/osm.git
-$ cd osm
-$ make build-osm
+git clone git@github.com:openservicemesh/osm.git
+cd osm
+make build-osm
 ```

 `make build-osm` will fetch any required dependencies, compile `osm` and place it in `bin/osm`. Add `bin/osm` to `$PATH` so you can easily use `osm`.
@@ -121,7 +121,7 @@ Run `osm install`.

 ```console
 # Install osm control plane components
-$ osm install
+osm install
 OSM installed successfully in namespace [osm-system] with mesh name [osm]
 ```
diff --git a/content/docs/guides/health_checks/control_plane_health_probes.md b/content/docs/guides/health_checks/control_plane_health_probes.md
index 8fdce679..6f90566c 100644
--- a/content/docs/guides/health_checks/control_plane_health_probes.md
+++ b/content/docs/guides/health_checks/control_plane_health_probes.md
@@ -81,7 +81,7 @@ kubectl port-forward -n osm-system $(kubectl get pods -n osm-system -l app=osm-c
 Then, in a separate terminal instance, `curl` may be used to check the endpoint.
The following example shows a healthy osm-controller: ```console -$ curl -i localhost:9091/health/alive +curl -i localhost:9091/health/alive HTTP/1.1 200 OK Date: Thu, 18 Mar 2021 20:15:29 GMT Content-Length: 16 @@ -100,8 +100,8 @@ If any health probes are consistently failing, perform the following steps to id For example, an osm-controller Pod that includes an Envoy container: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl get pod -n osm-system $(kubectl get pods -n osm-system -l app=osm-controller -o jsonpath='{.items[0].metadata.name}') -o jsonpath='{range .spec.containers[*]}{.image}{"\n"}{end}' + # Assuming OSM is installed in the osm-system namespace: + kubectl get pod -n osm-system $(kubectl get pods -n osm-system -l app=osm-controller -o jsonpath='{.items[0].metadata.name}') -o jsonpath='{range .spec.containers[*]}{.image}{"\n"}{end}' openservicemesh/osm-controller:v0.8.0 envoyproxy/envoy-alpine:v1.17.2 ``` @@ -110,8 +110,8 @@ If any health probes are consistently failing, perform the following steps to id For example, an osm-injector Pod that includes an Envoy container: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl get pod -n osm-system $(kubectl get pods -n osm-system -l app=osm-injector -o jsonpath='{.items[0].metadata.name}') -o jsonpath='{range .spec.containers[*]}{.image}{"\n"}{end}' + # Assuming OSM is installed in the osm-system namespace: + kubectl get pod -n osm-system $(kubectl get pods -n osm-system -l app=osm-injector -o jsonpath='{.items[0].metadata.name}') -o jsonpath='{range .spec.containers[*]}{.image}{"\n"}{end}' openservicemesh/osm-injector:v0.8.0 envoyproxy/envoy-alpine:v1.17.2 ``` @@ -121,7 +121,7 @@ If any health probes are consistently failing, perform the following steps to id For example, for all of the following meshes: ```console - $ osm mesh list + osm mesh list MESH NAME NAMESPACE CONTROLLER PODS VERSION SMI SUPPORTED osm osm-system osm-controller-5494bcffb6-qpjdv v0.8.0 HTTPRouteGroup:specs.smi-spec.io/v1alpha4,TCPRoute:specs.smi-spec.io/v1alpha4,TrafficSplit:split.smi-spec.io/v1alpha2,TrafficTarget:access.smi-spec.io/v1alpha3 @@ -131,7 +131,7 @@ If any health probes are consistently failing, perform the following steps to id Note how `osm-system` (the mesh control plane namespace) is present in the following list of namespaces: ```console - $ osm namespace list --mesh-name osm --osm-namespace osm-system + osm namespace list --mesh-name osm --osm-namespace osm-system NAMESPACE MESH SIDECAR-INJECTION osm-system osm2 enabled bookbuyer osm2 enabled @@ -141,7 +141,7 @@ If any health probes are consistently failing, perform the following steps to id If the OSM namespace is found in any `osm namespace list` command with `SIDECAR-INJECTION` enabled, remove the namespace from the mesh injecting the sidecars. For the example above: ```console - $ osm namespace remove osm-system --mesh-name osm2 --osm-namespace osm-system2 + osm namespace remove osm-system --mesh-name osm2 --osm-namespace osm-system2 ``` 1. Determine if Kubernetes encountered any errors while scheduling or starting the Pod. 
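   Before describing individual Pods, one quick way to surface scheduling or startup failures (a sketch, assuming OSM is installed in the `osm-system` namespace) is to list Warning events in the control plane namespace:

   ```console
   # Failed scheduling, image pulls, and crash loops usually show up as Warning events
   kubectl get events -n osm-system --field-selector type=Warning
   ```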
@@ -151,15 +151,15 @@ If any health probes are consistently failing, perform the following steps to id For osm-controller: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl describe pod -n osm-system $(kubectl get pods -n osm-system -l app=osm-controller -o jsonpath='{.items[0].metadata.name}') + # Assuming OSM is installed in the osm-system namespace: + kubectl describe pod -n osm-system $(kubectl get pods -n osm-system -l app=osm-controller -o jsonpath='{.items[0].metadata.name}') ``` For osm-injector: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl describe pod -n osm-system $(kubectl get pods -n osm-system -l app=osm-injector -o jsonpath='{.items[0].metadata.name}') + # Assuming OSM is installed in the osm-system namespace: + kubectl describe pod -n osm-system $(kubectl get pods -n osm-system -l app=osm-injector -o jsonpath='{.items[0].metadata.name}') ``` Resolve any errors and verify OSM's health again. @@ -171,15 +171,15 @@ If any health probes are consistently failing, perform the following steps to id For osm-controller: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl logs -n osm-system $(kubectl get pods -n osm-system -l app=osm-controller -o jsonpath='{.items[0].metadata.name}') + # Assuming OSM is installed in the osm-system namespace: + kubectl logs -n osm-system $(kubectl get pods -n osm-system -l app=osm-controller -o jsonpath='{.items[0].metadata.name}') ``` For osm-injector: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl logs -n osm-system $(kubectl get pods -n osm-system -l app=osm-injector -o jsonpath='{.items[0].metadata.name}') + # Assuming OSM is installed in the osm-system namespace: + kubectl logs -n osm-system $(kubectl get pods -n osm-system -l app=osm-injector -o jsonpath='{.items[0].metadata.name}') ``` Resolve any errors and verify OSM's health again. diff --git a/content/docs/guides/health_checks/health_probes.md b/content/docs/guides/health_checks/health_probes.md index 3746688a..6769d88f 100644 --- a/content/docs/guides/health_checks/health_probes.md +++ b/content/docs/guides/health_checks/health_probes.md @@ -351,7 +351,7 @@ kubectl port-forward -n bookstore deployment/bookstore-v1 15901 Then, in a separate terminal instance, `curl` may be used to check the endpoint. The following example shows a healthy bookstore-v1: ```console -$ curl -i localhost:15901/osm-liveness-probe +curl -i localhost:15901/osm-liveness-probe HTTP/1.1 200 OK date: Wed, 31 Mar 2021 16:00:01 GMT content-length: 1396 diff --git a/content/docs/guides/install.md b/content/docs/guides/install.md index 1b1ffa81..724eb371 100644 --- a/content/docs/guides/install.md +++ b/content/docs/guides/install.md @@ -32,7 +32,7 @@ Each version of the OSM CLI is designed to work only with the matching version o Run `osm install` to install the OSM control plane. ```console -$ osm install +osm install OSM installed successfully in namespace [osm-system] with mesh name [osm] ``` @@ -64,7 +64,7 @@ You can configure the OSM installation by overriding the values file. Then run the following `helm install` command. The chart version can be found in the Helm chart you wish to install [here](https://github.com/openservicemesh/osm/blob/{{< param osm_branch >}}/charts/osm/Chart.yaml#L17). 
```console -$ helm install osm --repo https://openservicemesh.github.io/osm --version <chart version> --namespace <osm namespace> --values override.yaml +helm install osm --repo https://openservicemesh.github.io/osm --version <chart version> --namespace <osm namespace> --values override.yaml ``` Omit the `--values` flag if you prefer to use the default settings. @@ -105,7 +105,7 @@ A few components will be installed by default. Inspect them by using the followi ```console # Replace osm-system with the namespace where OSM is installed -$ kubectl get pods,svc,secrets,meshconfigs,serviceaccount --namespace osm-system +kubectl get pods,svc,secrets,meshconfigs,serviceaccount --namespace osm-system ``` A few cluster-wide (non-namespaced) components will also be installed. Inspect them using the following `kubectl` command: @@ -118,7 +118,7 @@ Under the hood, `osm` is using [Helm](https://helm.sh) libraries to create a Hel ```console # Replace osm-system with the namespace where OSM is installed -$ helm get manifest osm --namespace osm-system +helm get manifest osm --namespace osm-system ``` ## Next Steps diff --git a/content/docs/guides/integrations/dapr.md b/content/docs/guides/integrations/dapr.md index 1ef22bb6..da12dea9 100644 --- a/content/docs/guides/integrations/dapr.md +++ b/content/docs/guides/integrations/dapr.md @@ -20,7 +20,7 @@ This document walks you through the steps of getting Dapr working with OSM on a - Further [hello-kubernetes](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes) sets up everything in the default namespace, it is **strongly recommended** to set up the entire hello-kubernetes demo in a specific namespace (we will later join this namespace to OSM's mesh). For the purpose of this integration, we have the namespace as `dapr-test` ```console - $ kubectl create namespace dapr-test + kubectl create namespace dapr-test namespace/dapr-test created ``` @@ -61,14 +61,14 @@ This document walks you through the steps of getting Dapr working with OSM on a 2. Install OSM: ```console - $ osm install + osm install OSM installed successfully in namespace [osm-system] with mesh name [osm] ``` 3. Enable permissive mode in OSM: ```console - $ kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge + kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge meshconfig.config.openservicemesh.io/osm-mesh-config patched ``` @@ -78,13 +78,13 @@ This document walks you through the steps of getting Dapr working with OSM on a 1. Get the Kubernetes API server cluster IP: ```console - $ kubectl get svc -n default + kubectl get svc -n default NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1d ``` 2. Add this IP to the MeshConfig so that outbound traffic to it is excluded from interception by OSM's sidecar ```console - $ kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"outboundIPRangeExclusionList":["10.0.0.1/32"]}}}' --type=merge + kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"outboundIPRangeExclusionList":["10.0.0.1/32"]}}}' --type=merge meshconfig.config.openservicemesh.io/osm-mesh-config patched ``` @@ -96,7 +96,7 @@ This document walks you through the steps of getting Dapr working with OSM on a 1.
Get the ports of Dapr's placement server (`dapr-placement-server`): ```console - $ kubectl get svc -n dapr-system + kubectl get svc -n dapr-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dapr-api ClusterIP 10.0.172.245 <none> 80/TCP 2h dapr-dashboard ClusterIP 10.0.80.141 <none> 8080/TCP 2h @@ -109,7 +109,7 @@ This document walks you through the steps of getting Dapr working with OSM on a 3. Add these ports to the MeshConfig so that outbound traffic to them is excluded from interception by OSM's sidecar ```console - $ kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"outboundPortExclusionList":[50005,8201,6379]}}}' --type=merge + kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"outboundPortExclusionList":[50005,8201,6379]}}}' --type=merge meshconfig.config.openservicemesh.io/osm-mesh-config patched ``` @@ -122,7 +122,7 @@ This document walks you through the steps of getting Dapr working with OSM on a 1. Get the ports of Dapr's api and sentry (`dapr-sentry` and `dapr-api`): ```console - $ kubectl get svc -n dapr-system + kubectl get svc -n dapr-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dapr-api ClusterIP 10.0.172.245 <none> 80/TCP 2h dapr-dashboard ClusterIP 10.0.80.141 <none> 8080/TCP 2h @@ -138,38 +138,38 @@ This document walks you through the steps of getting Dapr working with OSM on a 7. Make OSM monitor the namespace that was used for the Dapr hello-kubernetes demo setup: ```console - $ osm namespace add dapr-test + osm namespace add dapr-test Namespace [dapr-test] successfully added to mesh [osm] ``` 8. Delete and re-deploy the Dapr hello-kubernetes pods: ```console - $ kubectl delete -f ./deploy/node.yaml + kubectl delete -f ./deploy/node.yaml service "nodeapp" deleted deployment.apps "nodeapp" deleted ``` ```console - $ kubectl delete -f ./deploy/python.yaml + kubectl delete -f ./deploy/python.yaml deployment.apps "pythonapp" deleted ``` ```console - $ kubectl apply -f ./deploy/node.yaml + kubectl apply -f ./deploy/node.yaml service "nodeapp" created deployment.apps "nodeapp" created ``` ```console - $ kubectl apply -f ./deploy/python.yaml + kubectl apply -f ./deploy/python.yaml deployment.apps "pythonapp" created ``` The pythonapp and nodeapp pods on restart will now have 3 containers each, indicating OSM's proxy sidecar has been successfully injected. ```console - $ kubectl get pods -n dapr-test + kubectl get pods -n dapr-test NAME READY STATUS RESTARTS AGE my-release-redis-master-0 1/1 Running 0 2h my-release-redis-slave-0 1/1 Running 0 2h @@ -193,7 +193,7 @@ This document walks you through the steps of getting Dapr working with OSM on a 1. Disable permissive mode: ```console - $ kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge + kubectl patch meshconfig osm-mesh-config -n osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge meshconfig.config.openservicemesh.io/osm-mesh-config patched ``` @@ -202,12 +202,12 @@ This document walks you through the steps of getting Dapr working with OSM on a 3. Create a service account for nodeapp and pythonapp: ```console - $ kubectl create sa nodeapp -n dapr-test + kubectl create sa nodeapp -n dapr-test serviceaccount/nodeapp created ``` ```console - $ kubectl create sa pythonapp -n dapr-test + kubectl create sa pythonapp -n dapr-test serviceaccount/pythonapp created ``` @@ -289,23 +289,23 @@ This document walks you through the steps of getting Dapr working with OSM on a 1.
To clean up the Dapr hello-kubernetes demo, clean the `dapr-test` namespace ```console - $ kubectl delete ns dapr-test + kubectl delete ns dapr-test ``` 2. To uninstall Dapr, run ```console - $ dapr uninstall --kubernetes + dapr uninstall --kubernetes ``` 3. To uninstall OSM, run ```console - $ osm uninstall mesh + osm uninstall mesh ``` 4. To remove OSM's cluster wide resources after uninstallation, run the following command. See the [uninstall guide](/docs/guides/uninstall/) for more context and information. ```console - $ osm uninstall mesh --delete-cluster-wide-resources + osm uninstall mesh --delete-cluster-wide-resources ``` diff --git a/content/docs/guides/integrations/prometheus.md b/content/docs/guides/integrations/prometheus.md index 5fe0ae6c..3430c32d 100644 --- a/content/docs/guides/integrations/prometheus.md +++ b/content/docs/guides/integrations/prometheus.md @@ -15,38 +15,38 @@ To familiarize yourself on how OSM works with Prometheus, try installing a new m 1. Install OSM with its own Prometheus instance: ```console - $ osm install --set osm.deployPrometheus=true,osm.enablePermissiveTrafficPolicy=true + osm install --set osm.deployPrometheus=true,osm.enablePermissiveTrafficPolicy=true OSM installed successfully in namespace [osm-system] with mesh name [osm] ``` 1. Create a namespace for sample workloads: ```console - $ kubectl create namespace metrics-demo + kubectl create namespace metrics-demo namespace/metrics-demo created ``` 1. Make the new OSM monitor the new namespace: ```console - $ osm namespace add metrics-demo + osm namespace add metrics-demo Namespace [metrics-demo] successfully added to mesh [osm] ``` 1. Configure OSM's Prometheus to scrape metrics from the new namespace: ```console - $ osm metrics enable --namespace metrics-demo + osm metrics enable --namespace metrics-demo Metrics successfully enabled in namespace [metrics-demo] ``` 1. 
Install sample applications: ```console - $ kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/curl/curl.yaml -n metrics-demo + kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/curl/curl.yaml -n metrics-demo serviceaccount/curl created deployment.apps/curl created - $ kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/httpbin/httpbin.yaml -n metrics-demo + kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/{{< param osm_branch >}}/manifests/samples/httpbin/httpbin.yaml -n metrics-demo serviceaccount/httpbin created service/httpbin created deployment.apps/httpbin created @@ -55,7 +55,7 @@ To familiarize yourself on how OSM works with Prometheus, try installing a new m Ensure the new Pods are Running and all containers are ready: ```console - $ kubectl get pods -n metrics-demo + kubectl get pods -n metrics-demo NAME READY STATUS RESTARTS AGE curl-54ccc6954c-q8s89 2/2 Running 0 95s httpbin-8484bfdd46-vq98x 2/2 Running 0 72s @@ -66,7 +66,7 @@ To familiarize yourself on how OSM works with Prometheus, try installing a new m The following command makes the curl Pod make about 1 request per second to the httpbin Pod forever: ```console - $ kubectl exec -n metrics-demo -ti "$(kubectl get pod -n metrics-demo -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- sh -c 'while :; do curl -i httpbin.metrics-demo:14001/status/200; sleep 1; done' + kubectl exec -n metrics-demo -ti "$(kubectl get pod -n metrics-demo -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- sh -c 'while :; do curl -i httpbin.metrics-demo:14001/status/200; sleep 1; done' HTTP/1.1 200 OK server: envoy date: Tue, 23 Mar 2021 17:27:44 GMT @@ -93,7 +93,7 @@ To familiarize yourself on how OSM works with Prometheus, try installing a new m Forward the Prometheus port: ```console - $ kubectl port-forward -n osm-system $(kubectl get pods -n osm-system -l app=osm-prometheus -o jsonpath='{.items[0].metadata.name}') 7070 + kubectl port-forward -n osm-system $(kubectl get pods -n osm-system -l app=osm-prometheus -o jsonpath='{.items[0].metadata.name}') 7070 Forwarding from 127.0.0.1:7070 -> 7070 Forwarding from [::1]:7070 -> 7070 ``` @@ -111,14 +111,14 @@ To familiarize yourself on how OSM works with Prometheus, try installing a new m Once you are done with the demo resources, clean them up by first deleting the application namespace: ```console - $ kubectl delete ns metrics-demo + kubectl delete ns metrics-demo namespace "metrics-demo" deleted ``` Then, uninstall OSM: ``` - $ osm uninstall mesh + osm uninstall mesh Uninstall OSM [mesh name: osm] ? [y/n]: y OSM [mesh name: osm] uninstalled ``` @@ -126,5 +126,5 @@ To familiarize yourself on how OSM works with Prometheus, try installing a new m To remove OSM's cluster wide resources after uninstallation, run the following command. See the [uninstall guide](/docs/guides/uninstall/) for more context and information. 
```console - $ osm uninstall mesh --delete-cluster-wide-resources + osm uninstall mesh --delete-cluster-wide-resources ``` \ No newline at end of file diff --git a/content/docs/guides/troubleshooting/grafana.md b/content/docs/guides/troubleshooting/grafana.md index bcc5213f..3fc4774b 100644 --- a/content/docs/guides/troubleshooting/grafana.md +++ b/content/docs/guides/troubleshooting/grafana.md @@ -19,7 +19,7 @@ If a Grafana instance installed with OSM can't be reached, perform the following If no such Pod is found, verify the OSM Helm chart was installed with the `osm.deployGrafana` parameter set to `true` with `helm`: ```console - $ helm get values -a <mesh name> -n <osm namespace> + helm get values -a <mesh name> -n <osm namespace> ``` If the parameter is set to anything but `true`, reinstall OSM with the `--set=osm.deployGrafana=true` flag on `osm install`. @@ -29,8 +29,8 @@ If a Grafana instance installed with OSM can't be reached, perform the following The Grafana Pod identified above should be both in a Running state and have all containers ready, as shown in the `kubectl get` output: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl get pods -n osm-system -l app=osm-grafana + # Assuming OSM is installed in the osm-system namespace: + kubectl get pods -n osm-system -l app=osm-grafana NAME READY STATUS RESTARTS AGE osm-grafana-7c88b9687d-tlzld 1/1 Running 0 58s ``` @@ -38,8 +38,8 @@ If a Grafana instance installed with OSM can't be reached, perform the following If the Pod is not showing as Running or its containers ready, use `kubectl describe` to look for other potential issues: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl describe pods -n osm-system -l app=osm-grafana + # Assuming OSM is installed in the osm-system namespace: + kubectl describe pods -n osm-system -l app=osm-grafana ``` Once the Grafana Pod is found to be healthy, Grafana should be reachable. @@ -57,7 +57,7 @@ If data appears to be missing from the Grafana dashboards, perform the following Start by opening the Grafana UI in a browser: ```console - $ osm dashboard + osm dashboard [+] Starting Dashboard forwarding [+] Issuing open browser http://localhost:3000 ``` diff --git a/content/docs/guides/troubleshooting/prometheus.md b/content/docs/guides/troubleshooting/prometheus.md index 6b889ee1..b5e161bf 100644 --- a/content/docs/guides/troubleshooting/prometheus.md +++ b/content/docs/guides/troubleshooting/prometheus.md @@ -19,7 +19,7 @@ If a Prometheus instance installed with OSM can't be reached, perform the follow If no such Pod is found, verify the OSM Helm chart was installed with the `osm.deployPrometheus` parameter set to `true` with `helm`: ```console - $ helm get values -a <mesh name> -n <osm namespace> + helm get values -a <mesh name> -n <osm namespace> ``` If the parameter is set to anything but `true`, reinstall OSM with the `--set=osm.deployPrometheus=true` flag on `osm install`.
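For example, a minimal reinstall sketch (assuming the previous mesh has been uninstalled and the default mesh name and namespace are acceptable):

```console
# Redeploy the control plane with the bundled Prometheus instance enabled
osm install --set=osm.deployPrometheus=true
```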
@@ -29,8 +29,8 @@ If a Prometheus instance installed with OSM can't be reached, perform the follow The Prometheus Pod identified above should be both in a Running state and have all containers ready, as shown in the `kubectl get` output: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl get pods -n osm-system -l app=osm-prometheus + # Assuming OSM is installed in the osm-system namespace: + kubectl get pods -n osm-system -l app=osm-prometheus NAME READY STATUS RESTARTS AGE osm-prometheus-5794755b9f-67p6r 1/1 Running 0 27m ``` @@ -38,8 +38,8 @@ If a Prometheus instance installed with OSM can't be reached, perform the follow If the Pod is not showing as Running or its containers ready, use `kubectl describe` to look for other potential issues: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl describe pods -n osm-system -l app=osm-prometheus + # Assuming OSM is installed in the osm-system namespace: + kubectl describe pods -n osm-system -l app=osm-prometheus ``` Once the Prometheus Pod is found to be healthy, Prometheus should be reachable. @@ -59,7 +59,7 @@ If Prometheus is found not to be scraping metrics for any Pods, perform the foll Only Pods with an Envoy sidecar container are expected to have their metrics scraped by Prometheus. Ensure each Pod is running a container from an image with `envoyproxy/envoy` in its name: ```console - $ kubectl get po -n <namespace> -o jsonpath='{.spec.containers[*].image}' + kubectl get po -n <namespace> -o jsonpath='{.spec.containers[*].image}' mynamespace/myapp:v1.0.0 envoyproxy/envoy-alpine:v1.17.2 ``` 1. Verify the proxy's endpoint being scraped by Prometheus is working as expected. @@ -69,7 +69,7 @@ If Prometheus is found not to be scraping metrics for any Pods, perform the foll For each Pod whose metrics are missing, use `kubectl` to forward the Envoy proxy admin interface port and check the metrics: ```console - $ kubectl port-forward <pod> -n <namespace> 15000 + kubectl port-forward <pod> -n <namespace> 15000 ``` Go to http://localhost:15000/stats/prometheus in a browser to check the metrics generated by that Pod. If Prometheus does not seem to be accounting for these metrics, move on to the next step to ensure Prometheus is configured properly. @@ -81,15 +81,15 @@ If Prometheus is found not to be scraping metrics for any Pods, perform the foll Next, check to make sure the namespace is annotated with `openservicemesh.io/metrics: enabled`: ```console - $ # Assuming OSM is installed in the osm-system namespace: - $ kubectl get namespace <namespace> -o jsonpath='{.metadata.annotations.openservicemesh\.io/metrics}' + # Assuming OSM is installed in the osm-system namespace: + kubectl get namespace <namespace> -o jsonpath='{.metadata.annotations.openservicemesh\.io/metrics}' enabled ``` If no such annotation exists on the namespace or it has a different value, fix it with `osm`: ```console - $ osm metrics enable --namespace <namespace> + osm metrics enable --namespace <namespace> Metrics successfully enabled in namespace [<namespace>] ``` @@ -98,7 +98,7 @@ If Prometheus is found not to be scraping metrics for any Pods, perform the foll Custom metrics are currently disabled by default and enabled when the `osm.featureFlags.enableWASMStats` parameter is set to `true`. Verify the current OSM instance has this parameter set for a mesh named `<mesh name>` in the `<osm namespace>` namespace: ```console - $ helm get values -a <mesh name> -n <osm namespace> + helm get values -a <mesh name> -n <osm namespace> ``` > Note: replace `<mesh name>` with the name of the osm mesh and `<osm namespace>` with the namespace where osm was installed.
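The same feature flag can also be read straight from the MeshConfig, following the jsonpath pattern used elsewhere in these guides (a sketch, assuming OSM is installed in the `osm-system` namespace):

```console
# Returns true if WASM-based custom metrics are enabled
kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.featureFlags.enableWASMStats}{"\n"}'
```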
diff --git a/content/docs/guides/troubleshooting/traffic_management/egress.md b/content/docs/guides/troubleshooting/traffic_management/egress.md index d99189ed..8aaab014 100644 --- a/content/docs/guides/troubleshooting/traffic_management/egress.md +++ b/content/docs/guides/troubleshooting/traffic_management/egress.md @@ -15,7 +15,7 @@ If relying on passthrough egress functionality to unknown destinations, confirm ```console # Returns true if global passthrough egress is enabled -$ kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.enableEgress}{"\n"}' +kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.enableEgress}{"\n"}' true ``` @@ -23,7 +23,7 @@ If using [Egress policy](/docs/guides/traffic_management/egress/#1-configuring-e ```console # Returns true if egress policy capability is enabled -$ kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.featureFlags.enableEgressPolicy}{"\n"}' +kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.featureFlags.enableEgressPolicy}{"\n"}' true ``` @@ -48,7 +48,7 @@ Examples: To verify if the pod `curl-7bb5845476-zwxbt` in the namespace `curl` can direct HTTPS traffic to the external `httpbin.org` host on port `443`: ```console -$ osm verify connectivity --from-pod curl/curl-7bb5845476-zwxbt --to-ext-port 443 --to-ext-host httpbin.org --app-protocol https +osm verify connectivity --from-pod curl/curl-7bb5845476-zwxbt --to-ext-port 443 --to-ext-host httpbin.org --app-protocol https --------------------------------------------- [+] Context: Verify if pod "curl/curl-7bb5845476-zwxbt" can access external service on port 443 Status: Success @@ -58,7 +58,7 @@ Status: Success To verify if the pod `curl-7bb5845476-zwxbt` in the namespace `curl` can direct HTTP traffic to the external `httpbin.org` host on port `80`: ```console -$ osm verify connectivity --from-pod curl/curl-7bb5845476-zwxbt --to-ext-port 80 --to-ext-host httpbin.org --app-protocol http +osm verify connectivity --from-pod curl/curl-7bb5845476-zwxbt --to-ext-port 80 --to-ext-host httpbin.org --app-protocol http --------------------------------------------- [+] Context: Verify if pod "curl/curl-7bb5845476-zwxbt" can access external service on port 80 Status: Success diff --git a/content/docs/guides/troubleshooting/traffic_management/ingress.md b/content/docs/guides/troubleshooting/traffic_management/ingress.md index 137e9e03..955a5ea2 100644 --- a/content/docs/guides/troubleshooting/traffic_management/ingress.md +++ b/content/docs/guides/troubleshooting/traffic_management/ingress.md @@ -13,7 +13,7 @@ weight: 3 ```console # Returns true if HTTPS ingress is enabled -$ kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.useHTTPSIngress}{"\n"}' +kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.useHTTPSIngress}{"\n"}' false ``` diff --git a/content/docs/guides/troubleshooting/traffic_management/iptables_redirection.md b/content/docs/guides/troubleshooting/traffic_management/iptables_redirection.md index 7548d414..4602909d 100644 --- a/content/docs/guides/troubleshooting/traffic_management/iptables_redirection.md +++ b/content/docs/guides/troubleshooting/traffic_management/iptables_redirection.md @@ -14,7 +14,7 @@ weight: 1 The application pod should be injected with the Envoy proxy sidecar for traffic redirection to work as expected.
Confirm this by ensuring the application pod is running and has the Envoy proxy sidecar container in ready state. ```console -$ kubectl get pod test-58d4f8ff58-wtz4f -n test +kubectl get pod test-58d4f8ff58-wtz4f -n test NAME READY STATUS RESTARTS AGE test-58d4f8ff58-wtz4f 2/2 Running 0 32s ``` @@ -26,7 +26,7 @@ OSM's init container `osm-init` is responsible for initializing individual appli Confirm OSM's init container has finished running successfully by running `kubectl describe` on the application pod, and verifying the `osm-init` container has terminated with an exit code of 0. The container's `State` property provides this information. ```console -$ kubectl describe pod test-58d4f8ff58-wtz4f -n test +kubectl describe pod test-58d4f8ff58-wtz4f -n test Name: test-58d4f8ff58-wtz4f Namespace: test ... @@ -67,7 +67,7 @@ Confirm the outbound IP ranges to be excluded are set correctly: ```console # Assumes OSM is installed in the osm-system namespace -$ kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.outboundIPRangeExclusionList}{"\n"}' +kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.outboundIPRangeExclusionList}{"\n"}' ["1.1.1.1/32","2.2.2.2/24"] ``` @@ -80,7 +80,7 @@ When outbound IP range exclusions are configured, OSM's `osm-injector` service r Confirm OSM's `osm-init` init container spec has rules corresponding to the configured outbound IP ranges to exclude. ```console -$ kubectl describe pod test-58d4f8ff58-wtz4f -n test +kubectl describe pod test-58d4f8ff58-wtz4f -n test Name: test-58d4f8ff58-wtz4f Namespace: test ... @@ -127,7 +127,7 @@ Confirm the outbound ports to be excluded are set correctly: ```console # Assumes OSM is installed in the osm-system namespace -$ kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.outboundPortExclusionList}{"\n"}' +kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.outboundPortExclusionList}{"\n"}' [6379,7070] ``` @@ -138,7 +138,7 @@ The output shows the ports that are excluded from outbound traffic redirection, Confirm the outbound ports to be excluded on a pod are set correctly: ```console -$ kubectl get pod POD_NAME -o jsonpath='{.metadata.annotations}' -n POD_NAMESPACE' +kubectl get pod POD_NAME -o jsonpath='{.metadata.annotations}' -n POD_NAMESPACE map[openservicemesh.io/outbound-port-exclusion-list:8080] ``` @@ -151,7 +151,7 @@ When outbound port exclusions are configured, OSM's `osm-injector` service reads Confirm OSM's `osm-init` init container spec has rules corresponding to the configured outbound ports to exclude. ```console -$ kubectl describe pod test-58d4f8ff58-wtz4f -n test +kubectl describe pod test-58d4f8ff58-wtz4f -n test Name: test-58d4f8ff58-wtz4f Namespace: test ... @@ -197,7 +197,7 @@ Confirm the network interfaces to be excluded are set correctly: ```console # Assumes OSM is installed in the osm-system namespace -$ kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.networkInterfaceExclusionList}{"\n"}' +kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.networkInterfaceExclusionList}{"\n"}' ["net1","net2"] ``` @@ -210,7 +210,7 @@ When network interface exclusions are configured, OSM's `osm-injector` service r Confirm OSM's `osm-init` init container spec has rules corresponding to the configured network interfaces to exclude.
```console -$ kubectl describe pod server-85f4bc46c5-hprkw +kubectl describe pod server-85f4bc46c5-hprkw Name: server-85f4bc46c5-hprkw Namespace: default ... diff --git a/content/docs/guides/troubleshooting/traffic_management/permissive_traffic_policy_mode.md b/content/docs/guides/troubleshooting/traffic_management/permissive_traffic_policy_mode.md index 707ff5a8..43e09678 100644 --- a/content/docs/guides/troubleshooting/traffic_management/permissive_traffic_policy_mode.md +++ b/content/docs/guides/troubleshooting/traffic_management/permissive_traffic_policy_mode.md @@ -15,7 +15,7 @@ Confirm permissive traffic policy mode is enabled by verifying the value for the ```console # Returns true if permissive traffic policy mode is enabled -$ kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}{"\n"}' +kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}{"\n"}' true ``` @@ -40,7 +40,7 @@ Use the `osm verify connectivity` command to validate that the pods can communic For example, to verify if the pod `curl-7bb5845476-zwxbt` in the namespace `curl` can direct traffic to the pod `httpbin-69dc7d545c-n7pjb` in the `httpbin` namespace using the `httpbin` Kubernetes service: ```console -$ osm verify connectivity --from-pod curl/curl-7bb5845476-zwxbt --to-pod httpbin/httpbin-69dc7d545c-n7pjb --to-service httpbin +osm verify connectivity --from-pod curl/curl-7bb5845476-zwxbt --to-pod httpbin/httpbin-69dc7d545c-n7pjb --to-service httpbin --------------------------------------------- [+] Context: Verify if pod "curl/curl-7bb5845476-zwxbt" can access pod "httpbin/httpbin-69dc7d545c-n7pjb" for service "httpbin/httpbin" Status: Success diff --git a/content/docs/guides/uninstall.md b/content/docs/guides/uninstall.md index 47fc7c28..709b1d4a 100644 --- a/content/docs/guides/uninstall.md +++ b/content/docs/guides/uninstall.md @@ -32,16 +32,16 @@ namespaces have sidecar injection enabled. If there are multiple control planes View namespaces in a mesh: ```console -$ osm namespace list --mesh-name=<mesh-name> -NAMESPACE MESH SIDECAR-INJECTION -<namespace> <mesh-name> enabled -<namespace> <mesh-name> enabled +osm namespace list --mesh-name=<mesh-name> +NAMESPACE MESH SIDECAR-INJECTION +<namespace> <mesh-name> enabled +<namespace> <mesh-name> enabled ``` Remove each namespace from the mesh: ```console -$ osm namespace remove <namespace> --mesh-name=<mesh-name> +osm namespace remove <namespace> --mesh-name=<mesh-name> Namespace [<namespace>] successfully removed from mesh [<mesh-name>] ``` @@ -56,11 +56,11 @@ Restart all pods running with a sidecar: ```console # If pods are running as part of a Kubernetes deployment # Can use this strategy for daemonset as well -$ kubectl rollout restart deployment <deployment name> -n <namespace> +kubectl rollout restart deployment <deployment name> -n <namespace> # If pod is running standalone (not part of a deployment or replica set) -$ kubectl delete pod <pod name> -n namespace -$ k apply -f <pod spec> # if pod is not restarted as part of replicaset +kubectl delete pod <pod name> -n namespace +kubectl apply -f <pod spec> # if pod is not restarted as part of replicaset ``` Now, there should be no OSM Envoy sidecar containers running as part of the applications that were once part of the mesh. Traffic is no @@ -89,7 +89,7 @@ Run `osm uninstall mesh`: ```console # Uninstall osm control plane components -$ osm uninstall mesh --mesh-name=<mesh-name> +osm uninstall mesh --mesh-name=<mesh-name> Uninstall OSM [mesh name: <mesh-name>] ? [y/n]: y OSM [mesh name: <mesh-name>] uninstalled ``` @@ -99,7 +99,7 @@ Run `osm uninstall mesh --help` for more options.
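Once the uninstall completes, a quick check (a sketch, assuming OSM was installed in the `osm-system` namespace) confirms that no control plane Pods remain:

```console
# Expect "No resources found" once the control plane has been removed
kubectl get pods -n osm-system
```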
Alternatively, if you used Helm to install the control plane, run the following `helm uninstall` command: ```console -$ helm uninstall <mesh name> --namespace <osm namespace> +helm uninstall <mesh name> --namespace <osm namespace> ``` Run `helm uninstall --help` for more options. @@ -118,7 +118,7 @@ there may be resources a user created in the namespace that they may not want au If the namespace was only used for OSM and there is nothing that needs to be kept around, the namespace can be deleted at the time of uninstall or later using the following command. ```console -$ osm uninstall mesh --delete-namespace +osm uninstall mesh --delete-namespace ``` > Warning: Only delete the namespace if resources in the namespace are no longer needed. For example, if osm was installed in `kube-system`, deleting the namespace may delete important cluster resources and may have unintended consequences. diff --git a/content/docs/guides/upgrade.md b/content/docs/guides/upgrade.md index 68abec26..07b7c54b 100644 --- a/content/docs/guides/upgrade.md +++ b/content/docs/guides/upgrade.md @@ -68,8 +68,8 @@ The `osm mesh upgrade` command performs a `helm upgrade` of the existing Helm re Basic usage requires no additional arguments or flags: ```console -$ osm mesh upgrade -OSM successfully upgraded mesh osm +osm mesh upgrade +OSM successfully upgraded mesh osm ``` This command will upgrade the mesh with the default mesh name in the default OSM namespace. Values from the previous release will NOT carry over to the new release by default, but may be passed individually with the `--set` flag on `osm mesh upgrade`. @@ -97,7 +97,7 @@ For example, if the `logLevel` field in the MeshConfig was set to `info` prior t #### Helm Upgrade Then run the following `helm upgrade` command. ```console -$ helm upgrade osm --repo https://openservicemesh.github.io/osm --version <chart version> --namespace <osm namespace> --values override.yaml +helm upgrade osm --repo https://openservicemesh.github.io/osm --version <chart version> --namespace <osm namespace> --values override.yaml ``` Omit the `--values` flag if you prefer to use the default settings. diff --git a/content/docs/overview/osm_components.md b/content/docs/overview/osm_components.md index 602614a0..9ea790a6 100644 --- a/content/docs/overview/osm_components.md +++ b/content/docs/overview/osm_components.md @@ -11,7 +11,7 @@ Some OSM components will be installed by default in the chosen namespace, which ```console # Replace osm-system with the namespace where OSM is installed -$ kubectl get pods,svc,secrets,meshconfigs,serviceaccount --namespace osm-system +kubectl get pods,svc,secrets,meshconfigs,serviceaccount --namespace osm-system ``` Some cluster-wide (non-namespaced) OSM components will also be installed. Inspect them using the following `kubectl` command: @@ -23,7 +23,7 @@ kubectl get clusterrolebinding,clusterrole,mutatingwebhookconfiguration Under the hood, `osm` is using [Helm](https://helm.sh) libraries to create a Helm `release` object in the control plane Namespace. The Helm `release` name is the mesh-name. The `helm` CLI can also be used to inspect Kubernetes manifests installed in more detail. See the Helm docs for how to [install Helm](https://helm.sh/docs/intro/install/). ```console -$ helm get manifest osm --namespace <osm-namespace> +helm get manifest osm --namespace <osm-namespace> ``` ## Components