
Applications

Argo CD setup

Sample values yaml for exposing Argo CD

server:
  certificate:
    enabled: true
    domain: argocd.xxxx.lab.kubermatic.io
    issuer:
      group: cert-manager.io
      kind: ClusterIssuer
      name: letsencrypt-prod
    secretName: argocd-tls
  ingress:
    enabled: true
    https: true
    hosts:
    - argocd.xxxx.lab.kubermatic.io
    ingressClassName: 'nginx'
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/tls-acme: 'true'
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    tls: 
    - secretName: argocd-tls
      hosts:
        - argocd.xxxx.lab.kubermatic.io
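
The annotations above reference a ClusterIssuer named letsencrypt-prod, which must already exist on the cluster (the samples below reuse it). A minimal sketch of such an issuer, assuming cert-manager's ACME HTTP-01 solver with the nginx ingress class; the email and account-key secret name are placeholders:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Placeholder; use a real contact address for expiry notices
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx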

Echo Server setup

Sample values yaml for exposing Echo Server

ingress:
  enabled: true
  hosts:
  - host: echo.xxxx.lab.kubermatic.io
    paths:
    - /
  ingressClassName: 'nginx'
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: 'true'
  tls:
  - secretName: echoserver-tls
    hosts:
    - echo.xxxx.lab.kubermatic.io

Eclipse Che setup

Prerequisites: an NGINX ingress controller, cert-manager, and a Dex/OAuth setup must be available on the cluster. Add the Eclipse Che redirect URI "https://eclipse-che.xxxx.lab.kubermatic.io/oauth/callback" to the Dex "kubermaticIssuer" client (see the sketch below). While adding the application, set the namespace value to "default" for the Eclipse Che operator installation, as per the design; the operator internally takes care of creating the "eclipse-che" namespace and the resources within it.
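
A hedged sketch of the corresponding Dex static client entry; the exact key can differ depending on how Dex is deployed (plain Dex config uses staticClients, Helm-based setups may nest it differently), and the secret is a placeholder that must match networking.auth.oAuthSecret in the values below:

staticClients:
- id: kubermaticIssuer
  name: kubermaticIssuer
  # Placeholder; must match oAuthSecret in the Eclipse Che values
  secret: xxxxxxxxxxxxxxxxxxxxxxxxxxx
  redirectURIs:
  - https://eclipse-che.xxxx.lab.kubermatic.io/oauth/callback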

Sample values yaml for exposing Eclipse Che

networking:
  auth:
    identityProviderURL: "https://xxxxx.lab.kubermatic.io/dex"
    oAuthClientName: "kubermaticIssuer"
    oAuthSecret: "xxxxxxxxxxxxxxxxxxxxxxxxxxx"
  domain: eclipse-che.xxxxx.lab.kubermatic.io

Harbor setup

Prerequisites: an NGINX ingress controller and cert-manager must be available on the cluster.

Sample values yaml for exposing Harbor

expose:
  ingress:
    hosts:
      core: harbor.xxxx.lab.kubermatic.io
      notary: notary.xxxx.lab.kubermatic.io
    className: "nginx"
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/tls-acme: 'true'
externalURL: https://harbor.xxxx.lab.kubermatic.io
updateStrategy:
  type: Recreate
harborAdminPassword: xxxxxx
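
With the tls-acme annotation set, cert-manager issues the certificate, but the chart also needs TLS enabled on the expose side. A hedged sketch, assuming the goharbor/harbor chart's expose.tls values (field names vary between chart versions, so check the chart's values.yaml):

expose:
  tls:
    enabled: true
    # Use the cert-manager-provisioned secret instead of auto-generated certs
    certSource: secret
    secret:
      secretName: harbor-tls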

Canal setup

If the user cluster is set up without a CNI (none), Canal can be installed as an application in the kube-system namespace.

Sample values yaml for installing Canal

# Provide the network interface to be used
canalIface: "wt0"
# Adjust the MTU size
vethMTU: "1280"
cluster:
  network:
    # Required. Use the Pods IPv4 CIDR value from Cluster.Network
    podCIDRBlocks: "172.25.0.0/16"
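
The podCIDRBlocks value mirrors the pods CIDR configured on the user cluster. As a hedged sketch, in a KKP Cluster object this comes from spec.clusterNetwork.pods (the API group may differ by KKP version):

apiVersion: kubermatic.k8c.io/v1
kind: Cluster
metadata:
  name: xxxx
spec:
  clusterNetwork:
    pods:
      # This value feeds podCIDRBlocks in the Canal values above
      cidrBlocks:
      - 172.25.0.0/16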

Sysdig Secure Integration

  1. Set up a demo account at: https://sysdig.com/start-free/
  2. After you log in to the UI, open: https://eu1.app.sysdig.com/secure/#/data-sources/agents?setupModalEnv=Kubernetes&installContentDisplayType=tabular
  3. Add the generated access key to the Sysdig Agent via the following values:
    global:
      sysdig:
        # Get the key from the Sysdig portal: https://eu1.app.sysdig.com/secure/#/data-sources/agents?setupModalEnv=Kubernetes&installContentDisplayType=tabular
        accessKey: xxxxx___TODO-ADD-KEY___xxxx
        region: eu1
      clusterConfig:
        # Cluster name to show in the Sysdig portal
        name: xxxxx___TODO-ADD_CLUSTER_NAME___xxxx
      kspm:
        deploy: true
    nodeAnalyzer:
      secure:
        vulnerabilityManagement:
          newEngineOnly: true
      nodeAnalyzer:
        benchmarkRunner:
          deploy: false