Intel Container Experience Kits Setup Scripts

Intel Container Experience Kits Setup Scripts provide a simplified mechanism for installing and configuring Kubernetes clusters on Intel Architecture using Ansible.

The software provided here is for reference only and not intended for production environments.

Quickstart guide

  1. Initialize git submodules to download Kubespray code.

    git submodule update --init
  2. Decide which configuration profile you want to use and optionally export it as an environment variable.

    NOTE: The variable is used only to simplify the commands in the steps below.

    • For Kubernetes Basic Infrastructure deployment:

      export PROFILE=basic
    • For Kubernetes Access Edge Infrastructure deployment:

      export PROFILE=access
    • For Kubernetes Regional Data Center Infrastructure deployment:

      export PROFILE=regional_dc
    • For Kubernetes Remote Forwarding Platform Infrastructure deployment:

      export PROFILE=remote_fp
    • For Kubernetes Infrastructure On Customer Premises deployment:

      export PROFILE=on_prem
    • For Kubernetes Full NFV Infrastructure deployment:

      export PROFILE=full_nfv
    • For Kubernetes Storage Infrastructure deployment:

      export PROFILE=storage
  3. Install dependencies on the Ansible host machine.

    pip3 install -r requirements.txt
  4. Generate example host_vars, group_vars and inventory files for Intel Container Experience Kits profiles.

    NOTE: It is highly recommended to read this file before generating the profiles.

    make examples
  5. Copy the example inventory file to the project root dir.

    cp examples/k8s/${PROFILE}/inventory.ini .

    or, for VM case:

    cp examples/vm/${PROFILE}/inventory.ini .
  6. Update the inventory file with your environment details (a hypothetical example is sketched after this guide).

    For VM case: update details relevant for vm_host

    NOTE: At this stage you can inspect your target environment by running:

    ansible -i inventory.ini -m setup all > all_system_facts.txt

    In the all_system_facts.txt file you will find details about your hardware, operating system and network interfaces, which will help you configure the Ansible variables properly in the next steps.

  7. Copy the group_vars and host_vars directories to the project root dir.

    cp -r examples/k8s/${PROFILE}/group_vars examples/k8s/${PROFILE}/host_vars .
    
    or, for VM case:

    cp -r examples/vm/${PROFILE}/group_vars examples/vm/${PROFILE}/host_vars .
  8. Update the group and host vars to match your desired configuration. Refer to this section for more details (a hypothetical group_vars/host_vars sketch is also shown under Configuration below).

    NOTE: Please pay special attention to the http_proxy, https_proxy and additional_no_proxy vars if you're behind a proxy.

    For VM case:

    • update the details relevant to vm_host (e.g.: dataplane_interfaces, ...)
    • update VMs definition in host_vars/host-for-vms-1.yml
    • update/create host_vars for all defined VMs (e.g.: host_vars/vm-ctrl-1.yml and host_vars/vm-work-1.yml)
      At minimum, dataplane_interfaces must be provided.
      For more details, see the VM case configuration guide.
  9. Recommended: Apply the bug-fix patch for the Kubespray submodule (required for RHEL 8+).

    ansible-playbook -i inventory.ini playbooks/k8s/patch_kubespray.yml
  10. Execute ansible-playbook.

    ansible-playbook -i inventory.ini playbooks/${PROFILE}.yml

    or, for VM case:

    ansible-playbook -i inventory.ini playbooks/vm.yml

    NOTE: The VMs are accessible from the Ansible host via ssh vm-ctrl-1 or ssh vm-work-1.
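
To illustrate step 6, here is a minimal sketch of what an edited inventory.ini could look like. The host names, IP addresses, user and group names below are hypothetical placeholders; keep the groups that are already present in the generated example file rather than copying this sketch verbatim.

    [all]
    node1 ansible_host=192.168.0.10 ip=192.168.0.10 ansible_user=root
    node2 ansible_host=192.168.0.11 ip=192.168.0.11 ansible_user=root

    [kube_control_plane]
    node1

    [etcd]
    node1

    [kube_node]
    node2

    [k8s_cluster:children]
    kube_control_plane
    kube_node

The fact gathering shown in step 6 can also be narrowed to selected facts, for example:

    ansible -i inventory.ini -m setup -a "filter=ansible_default_ipv4" all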

Configuration

Refer to the documentation linked below to see configuration details for selected capabilities and deployment profiles.
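
As a rough illustration of step 8 of the quickstart guide, the fragments below show how the proxy settings and dataplane_interfaces might be filled in. All values, file names, interface names, drivers and PCI addresses are assumptions; the authoritative list of supported variables is in the generated example group_vars and host_vars files and the linked documentation.

    # group_vars/all.yml (fragment; file name and values are assumptions)
    http_proxy: "http://proxy.example.com:911"
    https_proxy: "http://proxy.example.com:912"
    additional_no_proxy: ".example.com,192.168.0.0/24"

    # host_vars/node1.yml (fragment; hypothetical interface and PCI address)
    dataplane_interfaces:
      - name: ens801f0
        bus_info: "18:00.0"
        pf_driver: ice
        default_vf_driver: "iavf"
        sriov_numvfs: 4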

Prerequisites and Requirements

NOTE: The package requirements listed below can be installed in step 3 of the quickstart guide (pip3 install -r requirements.txt).

  • Python present on the target servers (the exact package depends on the target distribution); Python 3 is required.
  • Ansible 3.4.0 and ansible-base 2.10.15 installed on the Ansible host machine (the one you run these playbooks from).
  • python-pip3 installed on the Ansible machine.
  • python-netaddr installed on the Ansible machine.
  • SSH keys copied to all Kubernetes cluster nodes (the ssh-copy-id <user>@<host> command can be used for that; see the sketch after this list).
  • Internet access on all target servers is mandatory. A proxy is supported.
  • At least 8 GB of RAM on the target servers/VMs for a minimal set of functions (some Docker image builds are memory-hungry and may cause OOM kills of the Docker registry; this was observed with 4 GB of RAM), and more if you plan to run heavy workloads such as NFV applications.
  • For RHEL-like OSes, SELinux must be configured prior to the CEK deployment and the required SELinux-related packages should be installed. CEK preserves the initial SELinux state, but SELinux-related packages may be installed during the Kubernetes cluster deployment as a dependency (e.g. for the Docker engine), which can cause OS boot failures or other inconsistencies if SELinux is not configured properly. The preferred SELinux state is permissive. For more details, please refer to the respective OS documentation.
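
A minimal sketch for the SSH-key prerequisite and a quick connectivity check, assuming hypothetical node names and the root user; adjust both to match your inventory.

    # Copy the Ansible host's public key to every cluster node (placeholder hosts/user).
    for host in node1 node2; do
        ssh-copy-id root@"${host}"
    done

    # Verify that Ansible can reach every host defined in the inventory.
    ansible -i inventory.ini -m ping all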
