0.13 Documentation changes (#40)
* Add DTR features to documentation (#37)

* Add DTR features to documentation

Signed-off-by: Kyle Squizzato <[email protected]>

* Remove proxy and engineConfig from dtr role

Signed-off-by: Kyle Squizzato <[email protected]>

* Update configuration-file.md

Co-authored-by: Kimmo Lehto <[email protected]>

* Add Openstack terraform example to provision resources for launchpad (#41)

* add Openstack terraform example to provision resources for launchpad

* Update examples/terraform/openstack/README.md

* resolving tab issues

Co-authored-by: Kimmo Lehto <[email protected]>

* Fixes and updates

Co-authored-by: Kyle Squizzato <[email protected]>
Co-authored-by: Heiko Krämer <[email protected]>
3 people authored Aug 24, 2020
1 parent 423bf5b commit a8d1632
Showing 33 changed files with 829 additions and 51 deletions.
44 changes: 36 additions & 8 deletions docs/configuration-file.md
@@ -7,8 +7,8 @@ Mirantis Launchpad cluster configuration is described in a file that is in YAML
The complete `cluster.yaml` reference for UCP clusters:

```yaml
apiVersion: launchpad.mirantis.com/v1beta2
kind: UCP
apiVersion: launchpad.mirantis.com/v1beta3
kind: DockerEnterprise
metadata:
name: launchpad-ucp
spec:
@@ -32,14 +32,20 @@ spec:
role: worker
winRM:
user: Administrator
password: abcd1234
port: 5986
useHTTPS: true
insecure: false
useNTLM: false
caCertPath: ~/.certs/cacert.pem
certPath: ~/.certs/cert.pem
keyPath: ~/.certs/key.pem
password: abcd1234
- address: 10.0.0.3
role: dtr
ssh:
user: root
port: 22
keyPath: ~/.ssh/id_rsa
ucp:
version: "3.3.0"
imageRepo: "docker.io/docker"
@@ -57,6 +63,13 @@ spec:
configData: |-
[Global]
region=RegionOne
dtr:
version: 2.8.1
imageRepo: "docker.io/docker"
installFlags:
- --dtr-external-url dtr.example.com
- --ucp-insecure-tls
replicaConfig: sequential
engine:
version: "19.03.8"
channel: stable
@@ -69,11 +82,11 @@ We follow Kubernetes-like versioning and grouping of the launchpad configuration, h

## `apiVersion`

Currently `launchpad.mirantis.com/v1beta1` and `launchpad.mirantis.com/v1beta2` are supported. A `v1beta1` configuration will still work unchanged, but `v1beta2` features such as `environment`, `engineConfig` and `winRM` can not be used with `v1beta1`.
Currently `launchpad.mirantis.com/v1beta1`, `v1beta2` and `v1beta3` are supported. Earlier configuration syntaxes still work unchanged, but features introduced in newer versions cannot be used with earlier `apiVersion` values.

## `kind`

Currently only `UCP` is supported.
Currently only `DockerEnterprise` is supported.

## `metadata`

@@ -87,11 +100,11 @@ The specification for the cluster.

Specify the machines for the cluster.

- `address` - Address of the machine. This needs to be an address to which `launchpad` tool can connect to with SSH protocol.
- `address` - Address of the machine. This needs to be an address the `launchpad` tool can connect to using the SSH (or WinRM) protocol.
- `privateInterface` - Discover private network address from the configured network interface (default: `eth0`)
- `ssh` - [SSH](#ssh) connection configuration options
- `winRM` - [WinRM](#winrm) connection configuration options
- `role` - One of `manager` or `worker`, specifies the role of the machine in the cluster
- `role` - One of `manager`, `worker` or `dtr`, specifies the role of the machine in the cluster
- `environment` - Key-value pairs in YAML mapping syntax. The values will be applied to the host's environment. (optional)
- `engineConfig` - Docker Engine configuration in YAML mapping syntax, which will be converted to `daemon.json`. (optional) See the sketch below for an example host entry using these options.
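
As an illustration, a host entry combining these optional fields might look like the following sketch (the address, proxy endpoint and key path are placeholders, not values from this repository):

```yaml
  - address: 10.0.0.4
    role: worker
    ssh:
      user: root
      keyPath: ~/.ssh/id_rsa
    # environment entries are applied to the host's environment
    environment:
      HTTP_PROXY: http://proxy.example.com:3128
      NO_PROXY: localhost,127.0.0.1
    # engineConfig is converted into the host's daemon.json
    engineConfig:
      debug: true
      log-driver: json-file
```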

@@ -115,7 +128,7 @@ Specify the machines for the cluster.

### `ucp`

Specify options for UCP cluster itself.
Specify options for the UCP cluster itself.

- `version` - Which version of UCP we should install or upgrade to (default `3.3.0`)
- `imageRepo` - Which image repository we should use for UCP installation (default `docker.io/docker`)
@@ -133,6 +146,21 @@ Cloud provider configuration.
- `configFile` - Path to cloud provider configuration file on local machine (optional)
- `configData` - Inlined cloud provider configuration (optional)

### `dtr`

Specify options for the DTR cluster itself.

- `version` - Which version of DTR we should install or upgrade to (default `2.8.1`)
- `imageRepo` - Which image repository we should use for DTR installation (default `docker.io/docker`)
- `installFlags` - Custom installation flags for DTR installation. You can get a list of supported installation options for a specific DTR version by running the installer container with `docker run -t -i --rm docker/dtr:2.8.1 install --help`. (optional)

**Note**: `launchpad` will inherit the UCP flags that DTR needs for installing, joining and removing nodes, so there's no need to include the following install flags in the `installFlags` section of `dtr` (see the combined sketch below):
- `--ucp-username` (inherited from UCP's `--admin-username` flag)
- `--ucp-password` (inherited from UCP's `--admin-password` flag)
- `--ucp-url` (inherited from UCP's `--san` flag or intelligently selected based on other configuration variables)

- `replicaConfig` - Set to `sequential` to generate sequential replica IDs for cluster members, for example `000000000001`, `000000000002`, etc. (default: `random`)
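
Putting these options and the note above together, a combined sketch might look like the following (the admin credentials and external URL are placeholders; note that no `--ucp-*` flags appear under `dtr`):

```yaml
spec:
  ucp:
    installFlags:
      - --admin-username=admin
      - --admin-password=passw0rd!
  dtr:
    version: 2.8.1
    installFlags:
      - --dtr-external-url dtr.example.com
    replicaConfig: sequential
```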

### `engine`

Specify options for the Docker EE engine to be installed
15 changes: 8 additions & 7 deletions docs/getting-started.md
@@ -25,7 +25,7 @@ To fully evaluate Docker Enterprise, we recommend installing Launchpad on a Linu
* curl, [Postman](https://www.postman.com/) and/or [client libraries](https://kubernetes.io/docs/reference/using-api/client-libraries/) for accessing the Kubernetes REST API
* [Docker](https://docs.docker.com/get-docker/) and related tools, for using the 'docker swarm' CLI, and for containerizing workloads and accessing local and remote registries

This machine can reside in different contexts from the hosts and connect with them several different ways, depending on the infrastructure and services at your disposal.
This machine can reside in different contexts from the hosts and connect with them in several different ways, depending on the infrastructure and services at your disposal.

Your deployer machine must be able to communicate with your hosts on their IP addresses, using several ports. Depending on your infrastructure and security requirements, this can be relatively simple to achieve for evaluation clusters. See [Networking Considerations](networking-considerations.md) for more.

@@ -90,7 +90,7 @@ To finalize the installation, you'll need to complete the registration. The info
$ launchpad register
name: Luke Skywalker
company: Jedi Corp
email: luke@jedicorp.com
email: luke@example.com
I agree to Mirantis Launchpad Software Evaluation License Agreement https://github.com/Mirantis/launchpad/blob/master/LICENSE [Y/n]: Yes
INFO[0022] Registration completed!
```
@@ -102,8 +102,8 @@ The cluster is configured using [a yaml file](configuration-file.md). In this ex
Open up your favourite editor, and type something similar to the example below. Once done, save the file as `cluster.yaml`. Naturally you need to adjust the example below to match your infrastructure details. This model should work to deploy hosts on most public clouds.

```yaml
apiVersion: launchpad.mirantis.com/v1beta2
kind: UCP
apiVersion: launchpad.mirantis.com/v1beta3
kind: DockerEnterprise
metadata:
name: ucp-kube
spec:
@@ -126,8 +126,8 @@ spec:
If you're deploying on VirtualBox or another desktop virtualization solution and are using ‘bridged’ networking, you’ll need to make a few minor adjustments to your cluster.yaml (see below) — deliberately setting a `--pod-cidr` to ensure that pod IP addresses don’t overlap with node IP addresses (the latter are in the 192.168.x.x private IP network range on such a setup), and supplying appropriate labels for the target nodes’ private IP network cards using the `privateInterface` parameter (this typically defaults to ‘enp0s3’ on Ubuntu 18.04 &mdash; other Linux distributions use similar nomenclature). You may also need to set the username to use for logging into the host.
```yaml
apiVersion: launchpad.mirantis.com/v1beta2
kind: UCP
apiVersion: launchpad.mirantis.com/v1beta3
kind: DockerEnterprise
metadata:
name: my-ucp
spec:
@@ -170,7 +170,8 @@ The `launchpad` tool uses SSH or WinRM to connect to the infrastructure you
At the end of the installation procedure, launchpad will show you the details you can use to connect to your cluster. You will see something like this:
```
INFO[0021] ==> Running phase: UCP cluster info
INFO[0021] Cluster is now configured. You can access your cluster admin UI at: https://test-ucp-cluster-master-lb-895b79a08e57c67b.elb.eu-north-1.amazonaws.com
INFO[0021] Cluster is now configured. You can access your admin UIs at:
INFO[0021] UCP cluster admin UI: https://test-ucp-cluster-master-lb-895b79a08e57c67b.elb.eu-north-1.amazonaws.com
INFO[0021] You can also download the admin client bundle with the following command: launchpad download-bundle --username <username> --password <password>
```
6 changes: 3 additions & 3 deletions docs/host-configuration.md
@@ -17,17 +17,17 @@ Hosts must be configured to allow:
* _For hosts accessed via SSH: remote login using private key:_ &mdash; Launchpad, like most deployment tools, uses encryption keys rather than passwords to authenticate to hosts. You will need to create or use an existing keypair, copy the public key to an appropriate location on each host, configure SSH on hosts to permit keywise authentication (then restart the sshd server), and store the keypair (or just the private key) in an appropriate location on your deployer machine, with appropriate permissions. Google 'enable SSH with keys &lt;your chosen Linux&gt;' for OS-specific tutorials and instructions on creating and using SSH keypairs.
- Keywise login is the default for Linux instances on most public and private cloud platforms. Typically, you can use the platform to create an SSH keypair (or upload a private key created elsewhere, e.g., on your deployer machine), and assign this key to VMs at launch.
- For Linux hosts on desktop virtualization, assuming you're installing a new OS on each VM, you'll need to configure keywise SSH access after installing OpenSSH. This entails creating a private key, copying it to each host, then reconfiguring SSH on each host to use private keys instead of passwords before restarting the sshd service.
- For Windows hosts, access via SSH and keys must be configured manually after first boot, or can be automated. See [system requirements](system-requirements.md) or [this blog](https://www.mirantis.com/blog/today-i-learned-how-to-enable-ssh-with-keypair-login-on-windows-server-2019/).
- For Windows hosts, access via SSH and keys must be configured manually after first boot, or can be automated. See [system requirements](system-requirements.md) or [this blog](https://www.mirantis.com/blog/today-i-learned-how-to-enable-ssh-with-keypair-login-on-windows-server-2019/). It's also possible to use WinRM for connecting to Windows hosts.


* _For Linux hosts: passwordless sudo_ &mdash; Most Linux operating systems now default to enabling login by a privileged user with sudo permissions, rather than by 'root.' This is safer than permitting direct login by root (which is also prevented by the default configuration of most SSH servers). Launchpad requires that the user be allowed to issue 'sudo' commands without being prompted to enter a password.
- This is the default for Linux instances on most public and private cloud platforms. The username you create at VM launch will have passwordless sudo privileges.
- If installing Linux on a desktop (e.g., VirtualBox) VM, you will typically need to configure passwordless sudo after first boot of a newly-installed OS. Google 'configure passwordless sudo &lt;your chosen Linux&gt;' for tutorials and instructions.
- On Windows hosts, the Administrator account is given all privileges by default, and Launchpad can escalate permissions at need without a password.
- On Windows hosts, the Administrator account is given all privileges by default, and Launchpad can escalate permissions at need without a password. If you get an `http 401` error when using WinRM, it is possibly due to a password policy: make sure the password is sufficiently complex, for example `,,UCP..Example123..`.

* _Configure Docker logging to enable auto-rotation and manage retention_ &mdash; Additionally, we recommend configuring evaluation hosts, especially those with smaller SSDs/HDDs, to enable basic Docker log rotation and manage old-file retention, thus avoiding filling up cluster storage with retained logs.

This can be done by setting Docker engine config to cluster.yaml, for example:
This can be done by defining Docker engine configuration in cluster.yaml, for example:

```yaml
...
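    # The original example is truncated in this diff view; the lines below are an
    # illustrative sketch only. `engineConfig` sits under a host entry in `spec.hosts`
    # and is converted into that host's daemon.json, so the standard Docker logging
    # options can be used:
    engineConfig:
      log-driver: json-file
      log-opts:
        max-size: 10m
        max-file: "3"
```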
4 changes: 2 additions & 2 deletions docs/integrations.md
@@ -1,14 +1,14 @@
# Integrating with Mirantis Launchpad

Currently Mirantis Launchpad is distributed only as binary executable. Hence the main integration point with cluster management is the `launchpad apply` command and the input [`cluster.yaml`](configuration-file.md) configuration for the cluster. As the configuration is YAML format it should be pretty easy to integrate other tooling with it. One of the most common use cases is when using some infrastructure management tooling such as Terraform.
Currently Mirantis Launchpad is distributed only as a binary executable. Hence the main integration point with cluster management is the `launchpad apply` command and the input [`cluster.yaml`](configuration-file.md) configuration for the cluster. As the configuration is in YAML format, it is easy to integrate other tooling with it. A common use case is pairing it with infrastructure management tooling such as Terraform.

## Terraform with Mirantis Launchpad

When using cloud environments, many people use [Terraform](https://www.terraform.io/) to manage their infrastructure declaratively. The easiest way to integrate Terraform with Mirantis Launchpad is to use [Terraform output](https://www.terraform.io/docs/configuration/outputs.html) values to specify the whole [`cluster.yaml`](configuration-file.md) structure. For example:
```terraform
output "ucp_cluster" {
value = {
apiVersion = "launchpad.mirantis.com/v1beta2"
apiVersion = "launchpad.mirantis.com/v1beta3"
kind = "UCP"
spec = {
ucp = {
14 changes: 7 additions & 7 deletions docs/networking-considerations.md
@@ -1,24 +1,24 @@
# Networking considerations

Most first-time Launchpad users will likely install Launchpad on a local laptop or VM, and wish to deploy Docker Enterprise onto VMs running on a public or private cloud that supports 'security groups' for IP access control. This makes it fairly simple to configure networking in a way that provides adequate security and convenient access to the cluster for evaluation and experimentation.
Most first-time Launchpad users will likely install Launchpad on a local laptop or a VM, and wish to deploy Docker Enterprise onto VMs running on a public or private cloud that supports 'security groups' for IP access control. This makes it fairly simple to configure networking in a way that provides adequate security and convenient access to the cluster for evaluation and experimentation.

The simplest way to configure networking for a small, temporary evaluation cluster is to:

1. Create a new virtual subnet (or VPC and subnet) for hosts.
1. Create a new security group called 'de_hosts' (or another name of your choice) that permits inbound IPv4 traffic on all ports, either from a) the security group de_hosts, or b) the new virtual subnet only.
1. Create a second new security group (e.g., 'admit_me') that permits inbound IPv4 traffic from your deployer machine's public IP address only (you can use the website [http://whatismyip.com](http://whatismyip.com)) to determine your public IP.
1. When launching hosts, attach them to the newly-created subnet, and apply both new security groups
1. Once you know the (public, or VPN-accessible private) IPv4 addresses of your nodes, if you aren't using local DNS, it makes sense to assign names to your hosts (e.g., manager, worker1, worker2 ... etc.) and insert IP addresses and names in your hostfile, letting you (and Launchpad) refer to hosts by hostname instead of IP address.
2. Create a new security group called `de_hosts` (or another name of your choice) that permits inbound IPv4 traffic on all ports, either from a) the security group `de_hosts`, or b) the new virtual subnet only.
3. Create a second new security group (e.g., `admit_me`) that permits inbound IPv4 traffic from your deployer machine's public IP address only (you can use the website [whatismyip.com](http://whatismyip.com) to determine your public IP).
4. When launching hosts, attach them to the newly-created subnet, and apply both new security groups.
5. Once you know the (public, or VPN-accessible private) IPv4 addresses of your nodes, if you aren't using local DNS, it makes sense to assign names to your hosts (e.g., manager, worker1, worker2, etc.) and insert the IP addresses and names in your hostfile (see the example below), letting you (and Launchpad) refer to hosts by hostname instead of IP address.
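
For example, hostfile entries on the deployer machine (`/etc/hosts` on Linux/macOS) might look like the following sketch, using placeholder addresses:

```
10.0.0.10  manager
10.0.0.11  worker1
10.0.0.12  worker2
```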

Once hosts are booted, you should be able to SSH into them from your deployer machine with your private key, e.g.:
Once the hosts are booted, you should be able to SSH into them from your deployer machine with your private key, e.g.:

```
ssh -i /my/private/keyfile username@mynode
```
... and determine if they can access the internet, perhaps by pinging a Google nameserver:

```
ping 8.8.8.8
$ ping 8.8.8.8
```

Once you can do this, you should be able to proceed with installing Launchpad and configuring a Docker Enterprise deployment. Once completed, you should be able to use your deployer machine to access the Docker Enterprise Universal Control Plane webUI, run kubectl (after authenticating to your cluster) and potentially other utilities (e.g., Postman, curl, etc.).
30 changes: 29 additions & 1 deletion docs/node-management.md
@@ -12,7 +12,7 @@ Raft requires a majority of managers, also called the quorum, to agree on propos

Keep in mind that the manager nodes also host the control plane etcd cluster. Even more importantly, any changes to the cluster require a working etcd cluster with the majority of peers present and working.

As normally with quorum based systems, it is highly advisable to run an odd number of peers. As the control plane only works when a majority can be formed, once you grow the control plane to have more than one node, you can't (automatically) go back to having only one node.
As usual with quorum-based systems, it is highly advisable to run an odd number of peers. As the control plane only works when a majority can be formed, once you grow the control plane to have more than one node, you can't (automatically) go back to having only one node.

### Adding Manager Nodes

@@ -38,6 +38,34 @@ Removing a worker node is currently a multi-step process:
2. Run `launchpad apply --prune ...`
3. Terminate/remove the node in your infrastructure

### Notes on DTR Nodes

Docker Trusted Registry (DTR) nodes are identical to worker nodes. They participate in the UCP swarm, but should be dedicated to DTR rather than used for regular cluster workloads. By default, UCP will prevent scheduling of containers on DTR nodes.

DTR forms its own cluster and quorum in addition to the swarm formed by UCP. It is best practice to limit DTR nodes to 5, but there is no limit on the number of DTR nodes that can be configured. Just like with manager nodes, the decision about how many nodes to deploy is a trade-off between performance and fault-tolerance: adding a larger number of nodes can incur severe performance penalties.

The quorum formed by DTR is backed by RethinkDB, which, like swarm, uses the Raft consensus algorithm.

### Adding DTR Nodes

Adding DTR nodes is as simple as adding them to the `cluster.yaml` file with a host role of `dtr`. When you add a DTR node, make sure you specify both the `--admin-username` and `--admin-password` install flags via the `installFlags` section of `ucp`, so that DTR knows which admin credentials to use:

```yaml
spec:
ucp:
installFlags:
- --admin-username=admin
- --admin-password=passw0rd!
```
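
For reference, the corresponding host entry in the `hosts` list could look like this (the address and key path below are placeholders, mirroring the configuration file reference):

```yaml
  - address: 10.0.0.3
    role: dtr
    ssh:
      user: root
      keyPath: ~/.ssh/id_rsa
```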

Next, re-run `launchpad apply ...`, which will configure the new node and join it to the cluster.

### Removing DTR Nodes

Removing a DTR node is currently a multi-step process:

1. Remove the host from `cluster.yaml`.
2. Run `launchpad apply --prune ...`
3. Terminate/remove the node in your infrastructure

