diff --git a/.docs/bastion/bastion.md b/.docs/bastion/bastion.md
new file mode 100644
index 000000000..3c4f41137
--- /dev/null
+++ b/.docs/bastion/bastion.md
@@ -0,0 +1,132 @@
+# Provisioning a bastion host by using Teleport with Secure Landing Zone
+
+Secure Landing Zone can provision the solution that is described in [Setting up a bastion host that uses Teleport](https://cloud.ibm.com/docs/allowlist/framework-financial-services?topic=framework-financial-services-vpc-architecture-connectivity-bastion-tutorial-teleport) (available by allowlist). This solution configures a bastion host in your VPC by using Teleport Enterprise Edition, and provisions a Cloud Object Storage bucket and an App ID instance for enhanced security.
+
+[App ID](https://cloud.ibm.com/docs/appid) is used to authenticate users to Teleport. Teleport session recordings are stored in the Object Storage bucket. The [cloud-init file](../../teleport_config/cloud-init.tpl) installs Teleport and configures App ID and Object Storage. The Teleport [variables.tf](../../teleport_config/variables.tf) file is used for the configuration.
+
+## Before you begin
+
+You need the following items to deploy and configure a bastion host that uses Teleport:
+
+- A Teleport Enterprise Edition license
+- A generated SSL certificate and key for each of the provisioned virtual server instances, or a wildcard certificate
+
+## Provision with Secure Landing Zone
+
+Secure Landing Zone can provision the bastion host in two locations. You can place the bastion either within the management VPC or in the edge VPC if you're using F5 BIG-IP.
+
+| Management VPC | Edge or Transit VPC |
+| ---------------------------------------------------| ----------------------------- |
+| ![Management](../images/management-teleport.png) | ![Edge](../images/edge-f5.png)|
+
+### Provisioning a bastion host in the management VPC
+
+To provision Teleport within the management VPC, you must set `teleport_management_zones` to the number of bastion hosts to deploy, up to a maximum of 3. For example, if you set the number to `1`, it provisions a bastion host in zone-1 of your management VPC. If you set the number to `2`, it provisions a bastion host in zone-1 and zone-2 of your management VPC. Other variables that are needed for the setup and configuration of Teleport are described in the following sections.
+
+### Provisioning a bastion host on the edge VPC with F5 BIG-IP
+
+The `provision_teleport_in_f5` and `add_edge_vpc` variables must both be set to `true`. For more information about the F5 deployment, see [Provisioning a F5 BIG-IP host by using Secure Landing Zone](../f5-big-ip/f5-big-ip.md). The variables that are needed for the setup and configuration of Teleport are listed in the following section.
+
+Don't set both `create_f5_network_on_management_vpc` to `true` and `teleport_management_zones` to a value greater than `0`.
+
+### Teleport configuration variables
+
+The following variables must be set to provision the bastion host that uses Teleport.
+
+```
+provision_teleport_in_f5 # Provision Teleport in the edge VPC alongside the F5
+use_existing_appid # Use an existing App ID instance. If this is false, one is created automatically.
+appid_name # Name of the App ID instance.
+appid_resource_group # Resource group for the existing App ID instance. This value is ignored if a new instance is created.
+teleport_instance_profile # Machine type for Teleport VSI instances. Use the IBM Cloud CLI command `ibmcloud is instance-profiles` to see available instance profiles.
+teleport_vsi_image_name # Teleport VSI image name. Use the IBM Cloud CLI command `ibmcloud is images` to see available images.
+teleport_license # The contents of the PEM license file
+https_cert # The HTTPS certificate used by the bastion host for Teleport
+https_key # The HTTPS private key used by the bastion host for Teleport
+teleport_hostname # The name of the instance or bastion host
+teleport_domain # The domain of the bastion host
+teleport_version # Version of Teleport Enterprise to use
+message_of_the_day # Banner message that is exposed to the user at authentication time
+teleport_admin_email # Email for the Teleport VSI admin
+teleport_management_zones # Number of zones to create Teleport VSIs on the management VPC if not using F5. If you are using F5, ignore this value
+```
+
+For more details about specifying input variables, see [Customizing your environment](../../README.md#customizing-your-environment). For more information about the Teleport configuration variables, see the following documentation for the pattern:
+
+- [VSI](../../patterns/vsi/README.md#module-variables)
+- [Mixed](../../patterns/mixed/README.md#module-variables)
+- [ROKS](../../patterns/roks/README.md#module-variables)
+
+## Accessing Teleport
+
+After App ID is successfully configured for Teleport, you can log in to Teleport through the web console or the tsh client. tsh is the Teleport command-line client. For more information, see [Installing tsh](https://goteleport.com/docs/server-access/guides/tsh/#installing-tsh). You need the fully qualified domain name (FQDN) of the Teleport server to log in.
+
+### Log in through the web console
+
+1. Access the web console on port 3080 (`https://:3080`).
+1. Start a terminal session under **Servers**. Look for a single server with a **Connect** button. Click **Connect** and select the user that you would like to log in with.
+
+### Log in through the tsh client
+
+1. Install the [Teleport client tool tsh](https://goteleport.com/docs/server-access/guides/tsh/#installing-tsh).
+1. [Log in using tsh](https://goteleport.com/docs/server-access/guides/tsh/#logging-in).
+
+   ```sh
+   tsh login --proxy=:3080
+   ```
+
+1. Run the shell or run a command on a remote SSH node by using the [tsh ssh command](https://goteleport.com/docs/setup/reference/cli/#tsh-ssh).
+
+   ```sh
+   tsh ssh <[user@]host>
+   ```
+
+## Debugging bastion host VSI
+
+You might not be able to access Teleport that is installed on your virtual server after the bastion host is provisioned by Secure Landing Zone. Follow these steps to log in and verify the configuration of your virtual server through SSH.
+
+1. Connect to your bastion host VSI by using [SSH](https://cloud.ibm.com/docs/vpc?topic=vpc-vsi_is_connecting_linux).
+
+   :information_source: **Tip:** SSH is not allowed by default. You must add rules to the [security groups](https://cloud.ibm.com/vpc-ext/network/securityGroups) and [ACLs](https://cloud.ibm.com/vpc-ext/network/acl) on your virtual server.
+
+1. Run each of the following commands and check whether the values match the ones that you configured:
+
+   1. Verify whether the content of the file matches your `teleport_license`:
+
+      ```sh
+      cat ~/license.pem
+      ```
+   1. Verify whether the content of the file matches your `https_cert`:
+
+      ```sh
+      cat ~/cert.pem
+      ```
+
+   1. Verify whether the content of the file equals your `https_key`:
+
+      ```sh
+      cat ~/key.pem
+      ```
+
+   1. 
Verify both that the `redirect_url` value equals `https://.:3080/v1/webapi/oidc/callback` and that the `claims_to_roles` value is `- {claim: "email", value: "", roles: ["teleport-admin"]}`: + + ```sh + cat ~/oidc.yaml + ``` + + 1. Verify whether the `audit_sessions_uri` value contains your `cos_bucket_name`: + + ```sh + cat ~/../etc/teleport.yaml + ``` + 1. Verify that Teleport is running: + + ```sh + systemctl status teleport + ``` + +1. After you verify that Teleport is configured correctly, remove the security group and ACL rules you added in Step 1. Alternatively, you can run the script `/root/install.sh` to run the installation again. + +## ACL and security groups + +By default, Secure Landing Zone provisions ACLs and security groups that are more open and not customer dependent. Use the [override.json](../../README.md#customizing-by-using-the-overridejson-file) file to change, add, or delete rules for your environment. diff --git a/.docs/f5-big-ip/f5-big-ip.md b/.docs/f5-big-ip/f5-big-ip.md new file mode 100644 index 000000000..4b16c6bfa --- /dev/null +++ b/.docs/f5-big-ip/f5-big-ip.md @@ -0,0 +1,106 @@ +# Provisioning a F5 BIG-IP host by using Secure Landing Zone + +Through Secure Landing Zone, you can optionally provision the F5 BIG-IP so that you can set up the implemented solution of a client-to-site VPN or web application firewall (WAF). For more information, see [Deploying and configuring F5 BIG-IP](https://cloud.ibm.com/docs/allowlist/framework-financial-services?topic=framework-financial-services-vpc-architecture-connectivity-f5-tutorial) (available by allowlist). + +## Before you begin + +You need the following items to deploy and configure the reference architecture that is described in Deploying and configuring F5 BIG-IP: + +- F5 BIG-IP Virtual Edition license +- Additional IAM VPC Infrastructure Service service access of `IP Spoofing operator` +- [Contact support](https://cloud.ibm.com/unifiedsupport/cases/form) to increase the quota for subnets for each VPC. Thirty subnets per VPC cover most cases. + + The following chart shows the number of subnets that you need, depending on your F5 BIG-IP deployment. + + | Service | # of subnets without bastion | # of subnets with bastion | + | ----------- | ---------------------------- | ------------------------- | + | VPN and WAF | 21 | 24 | + | Full-tunnel | 18 | 21 + | WAF | 15 | 18 + + The following chart lists the CIDR blocks and the zones that each type is deployed. Additional subnets for VPEs are also provisioned along with bastion host, if that host is used. + + | CIDRs | Zone | WAF | Full-tunnel | VPN-and-WAF | + | ------------ | ----------- | :----: | :------------: | :------------: | + | 10.5.10.0/24 | zone-1 | X | X | X | + | 10.5.20.0/24 | zone-1 | X | X | X | + | 10.5.30.0/24 | zone-1 | X | X | X | + | 10.5.40.0/24 | zone-1 | | X | X | + | 10.5.50.0/24 | zone-1 | | X | X | + | 10.5.60.0/24 | zone-1 | | | X | + | 10.6.10.0/24 | zone-2 | X | X | X | + | 10.6.20.0/24 | zone-2 | X | X | X | + | 10.6.30.0/24 | zone-2 | X | X | X | + | 10.6.40.0/24 | zone-2 | | X | X | + | 10.6.50.0/24 | zone-2 | | X | X | + | 10.6.60.0/24 | zone-2 | | | X | + | 10.7.10.0/24 | zone-3 | X | X | X | + | 10.7.20.0/24 | zone-3 | X | X | X | + | 10.7.30.0/24 | zone-3 | X | X | X | + | 10.7.40.0/24 | zone-3 | | X | X | + | 10.7.50.0/24 | zone-3 | | X | X | + | 10.7.60.0/24 | zone-3 | | | X | + +## Provision with Secure Landing Zone + +The F5 BIG-IP can be provisioned in the management or edge/transit VPC. In this case, use the edge/transit VPC. 
By default, it provisions an F5 BIG-IP within each zone of the region. You can change this setting in the [override.json](../../README.md#customizing-by-using-the-overridejson-file) file.
+
+| Management VPC | Edge/Transit VPC |
+| -------------------------------------------- | ----------------------------- |
+| ![Management](../images/f5-management.png) | ![Edge](../images/edge-f5.png)|
+
+### F5 BIG-IP configuration variables
+
+Some of the configuration variables are optional, but several are needed to provision the F5 BIG-IP. The following variables are important:
+
+```
+add_edge_vpc # Automatically adds the edge/transit VPC along with the F5 BIG-IP
+create_f5_network_on_management_vpc # Provision the F5 BIG-IP in the management VPC
+provision_teleport_on_f5 # Provision Teleport bastion hosts within the edge VPC. See the bastion documentation for more information about bastion hosts
+vpn_firewall_type # The type of service you are using the BIG-IP for (full-tunnel, waf, vpn-and-waf). This value is required if you enable the F5 BIG-IP
+hostname # Hostname of the F5 BIG-IP
+domain # The domain name of the F5 BIG-IP
+tmos_admin_password # The admin password to log in to the management console (requirements: minimum length of 15 characters, including at least 1 number, 1 uppercase letter, and 1 lowercase letter)
+enable_f5_external_fip # Enable a FIP on the external interface. Default is true
+enable_f5_management_fip # Enable a FIP on the management interface. Default is false
+```
+
+The following example shows how to provision an F5 BIG-IP with this configuration:
+
+- Create an edge/transit VPC
+- Provision an F5 BIG-IP with the architecture set up for WAF in each zone
+- Do not provision a bastion host within the edge VPC
+- Set the hostname to `example`
+- Set the domain to `test.com`
+- Set the console login to `Hello12345World`
+- Enable a floating IP on the external interface
+
+  ```
+  add_edge_vpc = true
+  create_f5_network_on_management_vpc = false
+  provision_teleport_on_f5 = false
+  vpn_firewall_type = "waf"
+  hostname = "example"
+  domain = "test.com"
+  tmos_admin_password = "Hello12345World"
+  enable_f5_external_fip = true
+  enable_f5_management_fip = false
+  ```
+
+For more details about specifying input variables, see [Customizing your environment](../../README.md#customizing-your-environment). For more information about the F5 configuration variables, see the following documentation for the pattern:
+
+- [VSI](../../patterns/vsi#module-variables)
+- [Mixed](../../patterns/mixed#module-variables)
+- [ROKS](../../patterns/roks#module-variables)
+
+### Accessing the F5 BIG-IP
+
+You can access the management console through the floating IP address, if one is enabled, on either the management or external interface of your virtual server instance. Log in with the `tmos_admin_password` value that you set earlier.
+
+### Setup of the client-to-site VPN and WAF
+
+For more information about how to set up the client-to-site VPN and WAF, see [Deploying and configuring F5 BIG-IP](https://cloud.ibm.com/docs/allowlist/framework-financial-services?topic=framework-financial-services-vpc-architecture-connectivity-f5-tutorial) (available by allowlist).
+
+### ACL and security groups
+
+By default, Secure Landing Zone provisions ACLs and security groups that are more open and not customer dependent. Use the [override.json](../../README.md#customizing-by-using-the-overridejson-file) file to change, add, or delete rules for your environment.
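For example, a minimal sketch of the pattern input that turns on this customization (the rule definitions themselves then live in the pattern's `override.json` file):

```
override = true   # build the environment, including ACL and security group rules, from override.json
```

A common workflow is to copy the JSON definition that `terraform apply` outputs, adjust only the ACL and security group rules, and paste the result into `override.json` before the next apply. See [Customizing your environment](../../README.md#customizing-your-environment) for details.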
diff --git a/.docs/images/bastion-host.png b/.docs/images/bastion-host.png new file mode 100644 index 000000000..33aa3fa34 Binary files /dev/null and b/.docs/images/bastion-host.png differ diff --git a/.docs/images/edge-f5.png b/.docs/images/edge-f5.png new file mode 100644 index 000000000..9e477aa47 Binary files /dev/null and b/.docs/images/edge-f5.png differ diff --git a/.docs/images/f5-management.png b/.docs/images/f5-management.png new file mode 100644 index 000000000..81428c837 Binary files /dev/null and b/.docs/images/f5-management.png differ diff --git a/.docs/images/flowlogs.png b/.docs/images/flowlogs.png new file mode 100644 index 000000000..6800c5ef1 Binary files /dev/null and b/.docs/images/flowlogs.png differ diff --git a/.docs/images/management-teleport.png b/.docs/images/management-teleport.png new file mode 100644 index 000000000..95394c1f8 Binary files /dev/null and b/.docs/images/management-teleport.png differ diff --git a/.docs/images/mixed.png b/.docs/images/mixed.png new file mode 100644 index 000000000..cf0e528b2 Binary files /dev/null and b/.docs/images/mixed.png differ diff --git a/.docs/images/network.png b/.docs/images/network.png new file mode 100644 index 000000000..1cf7518bc Binary files /dev/null and b/.docs/images/network.png differ diff --git a/.docs/images/patterns/mixed-pattern.png b/.docs/images/patterns/mixed-pattern.png new file mode 100644 index 000000000..86b2e3361 Binary files /dev/null and b/.docs/images/patterns/mixed-pattern.png differ diff --git a/.docs/images/patterns/roks-pattern.png b/.docs/images/patterns/roks-pattern.png new file mode 100644 index 000000000..3eabb812a Binary files /dev/null and b/.docs/images/patterns/roks-pattern.png differ diff --git a/.docs/images/patterns/vsi-pattern.png b/.docs/images/patterns/vsi-pattern.png new file mode 100644 index 000000000..2fbbc712a Binary files /dev/null and b/.docs/images/patterns/vsi-pattern.png differ diff --git a/.docs/images/resources.png b/.docs/images/resources.png new file mode 100644 index 000000000..ec618627c Binary files /dev/null and b/.docs/images/resources.png differ diff --git a/.docs/images/roks.png b/.docs/images/roks.png new file mode 100644 index 000000000..b2e23dc30 Binary files /dev/null and b/.docs/images/roks.png differ diff --git a/.docs/images/vpc-module.png b/.docs/images/vpc-module.png new file mode 100644 index 000000000..7ebae5fb3 Binary files /dev/null and b/.docs/images/vpc-module.png differ diff --git a/.docs/images/vpe.png b/.docs/images/vpe.png new file mode 100644 index 000000000..64449338b Binary files /dev/null and b/.docs/images/vpe.png differ diff --git a/.docs/images/vsi-lb.png b/.docs/images/vsi-lb.png new file mode 100644 index 000000000..f93dacfd2 Binary files /dev/null and b/.docs/images/vsi-lb.png differ diff --git a/.docs/images/vsi.png b/.docs/images/vsi.png new file mode 100644 index 000000000..14e904471 Binary files /dev/null and b/.docs/images/vsi.png differ diff --git a/.docs/pattern-defaults.md b/.docs/pattern-defaults.md new file mode 100644 index 000000000..1ff334c21 --- /dev/null +++ b/.docs/pattern-defaults.md @@ -0,0 +1,213 @@ +# Default Secure Landing Zone configuration + +## Pattern variables + +Each landing zone pattern takes just a few variables, so you can get started with IBM Cloud quickly and easily. Each pattern requires only the `ibmcloud_api_key`, `prefix`, and `region` variables to get started (the `ssh_public_key` must also be provided by users when they create patterns that use virtual servers). 
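For example, a minimal `terraform.tfvars` for a pattern that deploys virtual servers might look like the following sketch. The values are placeholders; the variable names and constraints come from the tables that follow.

```
ibmcloud_api_key = "<your-api-key>"    # IBM Cloud platform API key (sensitive)
prefix           = "slz"               # 16 characters or fewer; begins with a lowercase letter
region           = "us-south"          # list available regions with `ibmcloud is regions`
ssh_public_key   = "ssh-rsa AAAA..."   # required only for patterns that create virtual servers
```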
+ +--- + +### Variables available in all patterns + +The following variables are available in all patterns. + +| Name | Type | Description | Sensitive | Default | +|---|---|---|---|---| +| ibmcloud_api_key | string | The IBM Cloud platform API key needed to deploy IAM enabled resources. | true | | +| TF_VERSION | string | The version of the Terraform engine used in the Schematics workspace. | | 1.0 | +| prefix | string | A unique identifier for resources. Must begin with a lowercase letter and end with a lowercase letter or number. This prefix is added to any resources provisioned by this template. Prefixes must be 16 or fewer characters. | | | +| region | string | Region where VPC is created. To find your VPC region, use `ibmcloud is regions` command to find available regions. | | | +| tags | list(string) | List of tags to apply to resources created by this module. | | [] | +| network_cidr | string | Network CIDR for the VPC that is used to manage network ACL rules for cluster provisioning. | | 10.0.0.0/8 | +| vpcs | list(string) | List of VPCs to create. The first VPC in this list is always considered the `management` VPC and is where the VPN Gateway is connected. VPC names can have a maximum of 16 characters and can contain only lowercase letters, numbers, and `-` characters. VPC names must begin with a letter and end with a letter or number. | | ["management", "workload"] | +| enable_transit_gateway | bool | Whether to create a transit gateway | | true | +| add_atracker_route | bool | Whether to enable creating an Activity Tracker route. Activity Tracker can have only one route per zone. | | true | +| hs_crypto_instance_name | string | Optionally, you can bring you own Hyper Protect Crypto Service instance for key management. If you would like to use that instance, add the name here. Otherwise, leave as null | | null | +| hs_crypto_resource_group | string | If you're using Hyper Protect Crypto services in a resource group other than `Default`, provide the name here. | | null | +| override | bool | Whether to override default values with custom JSON template. When set to `true`, use the `override.json` file to allow users to create a fully customized environment. | | false | + +### Variables for patterns that include virtual servers + +The following variables apply to the [mixed pattern](../patterns/mixed/) and [VSI pattern](../patterns/vsi): + +| Name | Type | Description | Sensitive | Default | +|---|---|---|---|---| +| ssh_public_key | string | Public SSH Key for VSI creation. Must be a valid SSH key that does not exist in the deployment region. | | | +| vsi_image_name | string | VSI image name. Use the IBM Cloud CLI command `ibmcloud is images` to see available images. | | ibm-ubuntu-18-04-6-minimal-amd64-2 | +| vsi_instance_profile | string | VSI image profile. Use the IBM Cloud CLI command `ibmcloud is instance-profiles` to see available image profiles. | | cx2-4x8 | +| vsi_per_subnet | number | Number of Virtual Servers to create on each VSI subnet. | | 1 | + +### Variables for Patterns that include Red Hat OpenShift clusters + +The following variables apply to the [mixed pattern](../patterns/mixed/) and the [ROKS pattern](../patterns/roks/): + +| Name | Type | Description | Sensitive | Default | +|---|---|---|---|---| +| cluster_zones | number | Number of zones to provision clusters for each VPC. At least one zone is required. Can be 1, 2, or 3 zones. | | 3 | +| kube_version | string | Kubernetes version to use for cluster. 
To get available versions, use the IBM Cloud CLI command `ibmcloud ks versions`. To use the default version, leave as `default`. Updates to the default versions might force this to change. | | default | +| flavor | string | Machine type for cluster. Use the IBM Cloud CLI command `ibmcloud ks flavors` to find valid machine types | | bx2.16x64 | +| workers_per_zone | number | Number of workers in each zone of the cluster. Red Hat OpenShift requires at least two workers. | | 1 | +| wait_till | string | To avoid long waiting times when you run your Terraform code, you can specify the stage when you want Terraform to mark the cluster resource creation as completed. Depending on what stage you choose, the cluster creation might not be fully completed and continues to run in the background. However, your Terraform code can continue to run without waiting for the cluster to be fully created. Supported args are `MasterNodeReady`, `OneWorkerNodeReady`, and `IngressReady` | | IngressReady | +| update_all_workers | bool | Whether to update all workers to a new Kubernetes version | | false | +| entitlement | string | Leave as null if you don't have an entitlement. Entitlement reduces additional OpenShift Container Platform license cost in Red Hat OpenShift clusters. Use Cloud Pak with OpenShift Container Platform license entitlement to create the Red Hat OpenShift cluster. Set this argument to `cloud_pak` only if you use the cluster with a Cloud Pak that has an Red Hat OpenShift entitlement.

This variable is set only when you create the cluster. Further modifications are not affected by this setting. | | null | + +## Resource groups + +For each of the following resource groups, the `prefix` variable and a hyphen are added to the name (for example, `slz-management-rg` if `prefix` is `slz`). + +Name | Description +----------------|------------------------------------------------ +`management-rg` | Management virtual infrastructure components +`workload-rg` | Workload virtual infrastructure components +`service-rg` | Cloud service instances + +## Cloud services + +![Services](./images/resources.png) + +### Key management + +A Key Protect instance is created unless the `hs_crypto_instance_name` variable is provided. By default, Key Protect instances are provisioned in the `service-rg` resource group. + +#### Keys + +Name | Description +----------------|------------------------------------------------ +`atracker-key` | Encryption key for the Activity Tracker instance +`slz-key` | Encryption key for landing zone services + +### Cloud Object Storage + +Two Cloud Object Storage instances are created in the `service-rg` by default. + +Name | Description +----------------|------------------------------------------------ +`atracker-cos` | Object storage for Activity Tracker +`cos` | Object storage + +#### Storage buckets + +Name | Instance | Encryption key | Description +--------------------|----------------|----------------|--------------------------------------------- +`atracker-bucket` | `atracker-cos` | `atracker-key` | Bucket for activity tracker logs +`management-bucket` | `cos` | `slz-key` | Bucket for flow logs from Management VPC +`workload-bucket` | `cos` | `slz-key` | Bucket for flow logs from Workload VPC + +#### Storage API keys + +An API key is generated for the `atracker-cos` instance to allow Activity Tracker to connect to Cloud Object Storage. + +### Activity Tracker + +An [Activity Tracker](https://cloud.ibm.com/docs/activity-tracker) instance is provisioned for this architecture. + +## VPC infrastructure + +![network](./images/network.png) + +By default, two VPCs are created `management` and `workload`. All the components for the management VPC are provisioned in the `management-rg` resource group and the workload VPC components are all provisioned in the `workload-rg` resource group. + +### Network access control lists + +An [access control list](https://cloud.ibm.com/docs/vpc?topic=vpc-using-acls) is created for each VPC to allow inbound communication within the network, inbound communication from IBM services, and to allow all outbound traffic. + +Rule | Action | Direction | Source | Destination +----------------------------|--------|-----------|---------------|---------------- +`allow-ibm-inbound` | Allow | Inbound | 161.26.0.0/16 | 10.0.0.0/8 +`allow-all-network-inbound` | Allow | Inbound | 10.0.0.0/8 | 10.0.0.0/8 +`allow-all-outbound` | Allow | Outbound | 0.0.0.0/0 | 0.0.0.0/0 + +#### Cluster rules + +By default, to make sure that clusters can be created on VPCs, the following rules are added to ACLs where clusters are provisioned. For more information about controlling Red Hat OpenShift cluster traffic with ACLs, see the documentation [here](https://cloud.ibm.com/docs/openshift?topic=openshift-vpc-acls). 
+
+Rule | Action | TCP / UDP | Direction | Source | Source Port | Destination | Destination Port
+---------------------------------------------------|--------|-----------|-----------|---------------|---------------|---------------|-------------------
+Create worker nodes | Allow | Any | inbound | 161.26.0.0/16 | Any | 10.0.0.0/8 | Any
+Communicate with service instances | Allow | Any | inbound | 166.8.0.0/14 | Any | 10.0.0.0/8 | Any
+Allow incoming application traffic | Allow | TCP | inbound | 10.0.0.0/8 | 30000 - 32767 | 10.0.0.0/8 | Any
+Expose applications using load balancer or ingress | Allow | TCP | inbound | 10.0.0.0/8 | Any | 10.0.0.0/8 | 443
+Create worker nodes | Allow | Any | outbound | 10.0.0.0/8 | Any | 161.26.0.0/16 | Any
+Communicate with service instances | Allow | Any | outbound | 10.0.0.0/8 | Any | 166.8.0.0/14 | Any
+Allow incoming application traffic | Allow | TCP | outbound | 10.0.0.0/8 | Any | 10.0.0.0/8 | 30000 - 32767
+Expose applications using load balancer or ingress | Allow | TCP | outbound | 10.0.0.0/8 | 443 | 10.0.0.0/8 | Any
+
+### Subnets
+
+Each VPC creates two tiers of subnets, each attached to the network ACL created for that VPC. The management VPC also creates a subnet for the VPN Gateway.
+
+#### Management VPC subnets
+
+Subnet Tier | Zone 1 Subnet Name | Zone 1 CIDR | Zone 2 Subnet Name | Zone 2 CIDR | Zone 3 Subnet Name | Zone 3 CIDR |
+------------|--------------------|---------------|--------------------|---------------|--------------------|---------------|
+`vsi` | `vsi-zone-1` | 10.10.10.0/24 | `vsi-zone-2` | 10.10.20.0/24 | `vsi-zone-3` | 10.10.30.0/24 |
+`vpe` | `vpe-zone-1` | 10.20.10.0/24 | `vpe-zone-2` | 10.20.20.0/24 | `vpe-zone-3` | 10.20.30.0/24 |
+`vpn` | `vpn-zone-1` | 10.30.10.0/24 | | | | |
+
+#### Workload VPC subnets
+
+Subnet Tier | Zone 1 Subnet Name | Zone 1 CIDR | Zone 2 Subnet Name | Zone 2 CIDR | Zone 3 Subnet Name | Zone 3 CIDR |
+------------|--------------------|---------------|--------------------|---------------|--------------------|---------------|
+`vsi` | `vsi-zone-1` | 10.40.10.0/24 | `vsi-zone-2` | 10.40.20.0/24 | `vsi-zone-3` | 10.40.30.0/24 |
+`vpe` | `vpe-zone-1` | 10.50.10.0/24 | `vpe-zone-2` | 10.50.20.0/24 | `vpe-zone-3` | 10.50.30.0/24 |
+
+### Flow logs
+
+A flow log collector is created for each VPC by using the Cloud Object Storage bucket that is provisioned for that VPC network.
+
+![Flow logs](./images/flowlogs.png)
+
+### Virtual private endpoints
+
+Each VPC has a Virtual Private Endpoint address for the `cos` instance in each zone of that VPC's `vpe` subnet tier.
+
+![vpe](./images/vpe.png)
+
+### Default VPC security group
+
+The default VPC security group allows all outbound traffic and inbound traffic from within the security group.
+
+## Virtual server deployments
+
+For the `vsi` pattern, identical virtual server deployments are created in each zone of the `vsi` tier of each VPC. For the `mixed` pattern, virtual servers are created only on the management VPC. The number of these virtual servers can be changed by using the `vsi_per_subnet` variable.
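For example, the size, image, and count of the virtual servers can be tuned with the pattern inputs described earlier. A sketch with illustrative values (check availability with the CLI commands in the sections that follow):

```
vsi_per_subnet       = 2                                     # two virtual servers in each VSI subnet
vsi_image_name       = "ibm-ubuntu-18-04-6-minimal-amd64-2"  # default image; list others with `ibmcloud is images`
vsi_instance_profile = "cx2-4x8"                             # default profile; list others with `ibmcloud is instance-profiles`
```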
+ +### Boot volume encryption + +Boot volumes for each virtual server are encrypted by the `slz-key` + +### Virtual server image + +To find available virtual servers in your region, use the following IBM Cloud CLI command: + +```sh +ibmcloud is images +``` + +### Virtual server profile + +To find available hardware configurations in your region, use the following IBM Cloud CLI command: + +```sh +ibmcloud is instance-profiles +``` + +### Additional components + +Virtual Server components, like additional block storage and load balancers, can be configured by using the `override.json` file. You can find the variable definitions in the [variables.tf](../variables.tf) file. + +## Red Hat OpenShift cluster deployments + +For the ROKS pattern, identical Red Hat OpenShift cluster deployments are created on each zone of the `vsi` tier of each VPC. For the `mixed` pattern, clusters are created only on the workload VPC. Clusters can be deployed across 1, 2, or 3 zones by using the `cluster_zones` variable. + +Clusters deployed use the most recent default cluster version. + +### Workers per zone + +The number of workers in each zone of the cluster can be changed by using the `workers_per_subnet` variable. At least two workers must be available for clusters to successfully provision. + +### Cluster flavor + +To find available hardware configurations in your region, use the following IBM Cloud CLI command: + +```sh +ibmcloud ks flavors +``` diff --git a/.docs/patterns/mixed-pattern.md b/.docs/patterns/mixed-pattern.md new file mode 100644 index 000000000..c6c740664 --- /dev/null +++ b/.docs/patterns/mixed-pattern.md @@ -0,0 +1,30 @@ +# IBM Secure Landing Zone for the mixed pattern + +## Architecture diagram + +![Mixed pattern architecture diagram](../images/patterns/mixed-pattern.png) + +## Configured components and services + +The following common services are created: + +- Resource groups +- Access groups +- Transit gateway + +The following components are configured through automation. 
+ +| Multi-Zone Region (MZR) management | Multi-Zone Region (MZR) workload | +|---|---| +| Management access group | Workload access group | +| Management KMS key | Workload KMS key | +| Management Cloud Object Storage Instance and Cloud Object Storage buckets | Workload Cloud Object Storage instance and Cloud Object Storage buckets | +| Management Cloud Object Storage Authorization for Hyper Protect Crypto Services and KeyProtect | Workload Cloud Object Storage Authorization for Hyper Protect Crypto Services and KeyProtect | +| Management Flow Log, Flow log Cloud Object Storage buckets and authorization | Workload flow log, Flow log Cloud Object Storage buckets and authorization | +| Management VPC | Workload VPC | +| Management VPC VSI | Workload Red Hat OpenShift cluster | +| Management VPC VSI encryption authorization | Workload Kubernetes encryption authorization | +| Management VPC VSI SSH module | Workload subnets for OpenShift Container Platform cluster, VPE, and VPN resources | +| Management subnets for VSI, VPE, and VPN resources | Workload VPE gateway (for Cloud Object Storage) | +| Management VPE gateway (for Cloud Object Storage) | Workload VPE gateway (for Container Registry) | +| Management VPE gateway (for Container Registry) | | diff --git a/.docs/patterns/roks-pattern.md b/.docs/patterns/roks-pattern.md new file mode 100644 index 000000000..327007cb0 --- /dev/null +++ b/.docs/patterns/roks-pattern.md @@ -0,0 +1,35 @@ +# IBM Secure Landing Zone for the IBM Cloud Red Hat OpenShift Kubernetes pattern + +## Architecture diagram + +![ROKS pattern architecture diagram](../images/patterns/roks-pattern.png) + +## Configured components and services + +The following components are configured through automation: + +* Resource groups +* KMS service +* Management access group +* Management KMS key +* Management Cloud Object Storage instance and Cloud Object Storage buckets +* Management Cloud Object Storage authorization for KMS +* Management flow log, Flow log Cloud Object Storage buckets and authorization +* Management VPC +* Management OpenShift Container Platform cluster +* Management VPC Kubernetes encryption authorization +* Management subnets for OpenShift Container Platform cluster, VPE, and VPN resources +* Management VPE gateway (for Cloud Object Storage) +* Management VPE gateway (for Container Registry) +* Workload access group +* Workload KMS key +* Workload Cloud Object Storage instance and Cloud Object Storage buckets +* Workload Cloud Object Storage authorization for KMS +* Workload Flow log, Flow log Cloud Object Storage buckets and authorization +* Workload VPC +* Workload OpenShift Container Platform cluster +* Workload VPC Kubernetes encryption authorization +* Workload subnets for VPC OpenShift Container Platform Cluster, VPE, and VPN resources +* Workload VPE gateway (for Cloud Object Storage) +* Workload VPE gateway (for Container Registry) +* IBM transit gateway diff --git a/.docs/patterns/vsi-pattern.md b/.docs/patterns/vsi-pattern.md new file mode 100644 index 000000000..2abe7a866 --- /dev/null +++ b/.docs/patterns/vsi-pattern.md @@ -0,0 +1,35 @@ +# IBM Secure Landing Zone for the VSI pattern + +## Architecture diagram + +![VSI pattern architecture diagram](../images/patterns/vsi-pattern.png) + +## Configured components and services + +The following components are configured through automation: + +* Resource groups +* KMS service +* Management access group +* Management KMS key +* Management Cloud Object Storage instance and Cloud Object Storage buckets +* 
Management Cloud Object Storage authorization for Hyper Protect Crypto Services +* Management Flow log, Flow log Cloud Object Storage buckets and authorization +* Management VPC +* Management VPC VSI +* Management VPC VSI encryption authorization +* Management VPC VSI SSH module +* Management Subnets for VSI, VPE, and VPN resources +* Management VPE gateway (for Cloud Object Storage) +* Workload access group +* Workload KMS key +* Workload Cloud Object Storage instance and Cloud Object Storage buckets +* Workload Cloud Object Storage authorization for Hyper Protect Crypto Services +* Workload Flow log, Flow log Cloud Object Storage buckets and authorization +* Workload VPC +* Workload VPC VSI +* Workload VPC VSI encryption authorization +* Workload VPC VSI SSH module +* Workload subnets for VPC VSI, VPE, and VPN resources +* Workload VPE gateway (for Cloud Object Storage) +* IBM Transit gateway diff --git a/README.md b/README.md index c25199645..7b6ddc3dd 100644 --- a/README.md +++ b/README.md @@ -11,17 +11,115 @@ -# Secure Landing Zone +The landing zone module can be used to create a fully customizable VPC environment within a single region. The three following patterns are starting templates that can be used to get started quickly with Landing Zone. These patterns are located in the [patterns](/patterns/) directory. -This module creates a secure landing zone within a single region. +Each of these patterns creates the following infrastructure: -## VPC +- A resource group for cloud services and for each VPC. +- Cloud Object Storage instances for flow logs and Activity Tracker +- Encryption keys in either a Key Protect or Hyper Protect Crypto Services instance +- A management and workload VPC connected by a transit gateway +- A flow log collector for each VPC +- All necessary networking rules to allow communication +- Virtual Private Endpoint (VPE) for Cloud Object Storage in each VPC +- A VPN gateway in the management VPC + +Each pattern creates the following infrastructure on the VPC: + +- The virtual server (VSI) pattern deploys identical virtual servers across the VSI subnet tier in each VPC +- The Red Hat OpenShift Kubernetes (ROKS) pattern deploys identical clusters across the VSI subnet tier in each VPC +- The mixed pattern provisions both of these elements + +For more information about the default configuration, see [Default Secure Landing Zone configuration](.docs/pattern-defaults.md). + +| Virtual server pattern | Red Hat OpenShift pattern | Mixed pattern | +| -------------------------------- | -------------------------------- | ---------------------------------- | +| ![VSI](./.docs/images/vsi.png) | ![ROKS](./.docs/images/roks.png) | ![Mixed](./.docs/images/mixed.png) | + +## Customizing your environment + +You can customize your environment with Secure Landing Zone in two ways: by using Terraform input variables and by using the `override.json` file. + +### Customizing by using Terraform input variables + +In the first method, you set a couple of required input variables of your respective pattern, and then provision the environment. + +You can find the list of input variables in the `variables.tf` file of the pattern directory: + +- [VSI pattern input variables](./patterns/vsi/variables.tf) +- [ROKS pattern input variables](./patterns/roks/variables.tf) +- [Mixed pattern input variables](./patterns/mixed/variables.tf) + +Terraform supports multiple ways to set input variables. 
For more information, see [Input Variables](https://www.terraform.io/language/values/variables#assigning-values-to-root-module-variables) in the Terraform language documentation. + +For example, you can add more VPCs by adding the name of the new VPC to the `vpcs` variable in the `variables.tf` file in your patterns directory. + +``` +vpcs = ["management", "workload", ""] +``` + +You can get more specific after you use this method. Running the Terraform outputs a JSON-based file that you can use in `override.json`. + +### Customizing by using the override.json file + +The second route is to use the `override.json` to create a fully customized environment based on the starting template. By default, each pattern's `override.json` is set to contain the default environment configuration. You can use the `override.json` in the respective pattern directory by setting the template input `override` variable to `true`. Each value in `override.json` corresponds directly to a variable value from this root module, which each pattern uses to create your environment. + +#### Supported variables + +Through the `override.json`, you can pass any variable or supported optional variable attributes from this root module, which each pattern uses to provision infrastructure. For a complete list of supported variables and attributes, see the [variables.tf ](variables.tf) file. + +#### Overriding variables + +After every execution of `terraform apply`, a JSON-encoded definition is output. This definition of your environment is based on the defaults for the Landing Zone and any variables that are changed in the `override.json` file. You can then use the output in the `override.json` file. + +You can redirect the contents between the output lines by running the following commands: + +```sh +config = <