Hey all, I want to achieve private GKE Autopilot clusters per project, with the masters reachable via VPN.

In 1-resman I created a team folder (in the stage tfvars) with a corresponding Cloud Build trigger and Cloud Source Repository:

team_folders = {
  ejp = {
    descriptive_name = "EJP"
    group_iam = {
      "[email protected]" = [
        "roles/viewer"
      ]
    }
    impersonation_groups = ["[email protected]"]
    cicd = {
      branch            = "master"
      identity_provider = null
      type              = "sourcerepo"
      name              = "ejp-infra"
    }
  }
}

Using 2-networking I created a subnet via somename.yaml:

region: europe-west3
description: Subnet for project somesuffix-dev-ejpcc-0
ip_cidr_range: 10.120.3.0/24
secondary_ip_ranges:
  pods: 100.70.0.0/16
  services: 100.71.3.0/24

Using 3-project-factory I created a project for that team via somename.yaml:

labels:
  team: ejp
parent: folders/somefolder
services:
  - compute.googleapis.com
  - container.googleapis.com
  - secretmanager.googleapis.com
  - sqladmin.googleapis.com
shared_vpc_service_config:
  host_project: someprefix-dev-net-spoke-0
  service_identity_iam:
    "roles/compute.networkUser":
      - cloudservices
      - container-engine
    "roles/container.hostServiceAgentUser":
      - container-engine

So far, so good: I have a project, a Shared VPC subnet ready for GKE, a CI/CD pipeline created via the team folder, and a service account that can create clusters within that project. Within the Cloud Build / Cloud Source CI/CD pipeline, I create a GKE Autopilot cluster:

module "gke-autopilot" {
source = "git::https://source.developers.google.com/p/someprefix-prod-iac-core-0/r/fast-modules//modules/gke-cluster-autopilot?ref=v1.0"
name = "gke-ejpcc-dev"
project_id = var.project_id
location = var.default_region
deletion_protection = false
labels = { environment = "dev" }
vpc_config = {
host_project_id = var.host_project_ids.dev-spoke-0
network = var.vpc_self_links.dev-spoke-0
subnetwork = var.subnet_self_links.dev-spoke-0["europe-west3/dev-ejp-ew3"]
master_authorized_ranges = {
someprefix_onprem = var.someprefix_onprem
}
master_ipv4_cidr_block = "10.120.4.0/28"
secondary_range_names = {
pods = var.secondary_range_names.pods
services = var.secondary_range_names.services
}
}
private_cluster_config = {
enable_private_endpoint = true
master_global_access = false
peering_config = {
export_routes = true
import_routes = true
project_id = var.host_project_ids.dev-spoke-0
}
}
}

During creation I get a permission error: the service account that runs the Cloud Build trigger doesn't have the compute.networks.updatePeering permission on the VPC in the host project. As a workaround, I manually assigned the FAST fabric custom role "serviceProjectNetworkAdmin" (which has the compute.networks.updatePeering permission) to someprefix-prod-teams-ejp-0@wlc-prod-iac-core-0.iam.gserviceaccount.com (the service account used by the Cloud Build trigger), and with that everything is created fine and I can access the cluster from the on-prem location via VPN.

Without the peering configuration to import/export routes everything works fine, but enabling

peering_config = {
  export_routes = true
  import_routes = true
  project_id    = var.host_project_ids.dev-spoke-0
}

throws the error. Is this a bug or a missing feature?

Best,
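For completeness, the manual grant described above boils down to something like the following sketch; the variable values are placeholders for the names used in this post, and the role reference assumes the FAST custom role is defined at the organization level.

```hcl
# Sketch of the manual workaround: grant the FAST custom role on the Shared
# VPC host project to the team's CI/CD service account so it gets
# compute.networks.updatePeering on that project's networks.
variable "host_project_id" { type = string } # e.g. someprefix-dev-net-spoke-0
variable "organization_id" { type = string } # numeric organization id
variable "cicd_sa_email"   { type = string } # the Cloud Build trigger SA

resource "google_project_iam_member" "cicd_net_admin" {
  project = var.host_project_id
  role    = "organizations/${var.organization_id}/roles/serviceProjectNetworkAdmin"
  member  = "serviceAccount:${var.cicd_sa_email}"
}
```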
This is working as intended. The project factory, GKE, and data platform service accounts are all assigned serviceProjectNetworkAdmin exactly for this reason.
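For context, the module's peering_config is presumably applied through the provider's google_compute_network_peering_routes_config resource on the host VPC, which is the call that needs compute.networks.updatePeering on the host project. A minimal sketch, with placeholder variables rather than the module's actual internals:

```hcl
# Sketch (not the module's literal code): turning on route import/export for
# the peering GKE creates towards the private control plane. This API call
# is what requires compute.networks.updatePeering on the host project.
variable "host_project_id" { type = string } # Shared VPC host project
variable "host_vpc_name"   { type = string } # host VPC name, e.g. dev-spoke-0
variable "master_peering"  { type = string } # peering name reported by the cluster

resource "google_compute_network_peering_routes_config" "gke_master" {
  project              = var.host_project_id
  network              = var.host_vpc_name
  peering              = var.master_peering
  export_custom_routes = true
  import_custom_routes = true
}
```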
What you're doing makes sense (i.e. granting serviceProjectNetworkAdmin to the team's SA). I'd do that directly in resman when the team SA is created. How are you doing it?
Yes and no. As you mention, resman doesn't create any projects or VPCs, but you can still grant permissions on the folders. The easiest approach is to grant the team service accounts serviceProjectNetworkAdmin on one of those folders (net/dev, net/prod, or the parent folder that contains the other two).
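A minimal sketch of what that folder-level grant could look like, assuming the FAST custom role is defined at the organization level and using placeholder values for the folder, organization, and service account:

```hcl
# Sketch: grant the team's automation SA the FAST custom role on a networking
# folder, so it can update peerings on VPCs in any host project below it.
variable "net_folder_id"   { type = string } # e.g. "folders/1234567890" (net/dev)
variable "organization_id" { type = string } # numeric organization id
variable "team_sa_email"   { type = string } # team automation SA

resource "google_folder_iam_member" "team_net_admin" {
  folder = var.net_folder_id
  role   = "organizations/${var.organization_id}/roles/serviceProjectNetworkAdmin"
  member = "serviceAccount:${var.team_sa_email}"
}
```

A folder-level grant is inherited by every current and future host project under that folder, so new spokes don't need per-project bindings.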