Set up the Autoscaler in Cloud Run functions in a distributed deployment using Terraform
Home · Scaler component · Poller component · Forwarder component · Terraform configuration · Monitoring

Cloud Run functions · Google Kubernetes Engine

Per-Project · Centralized · Distributed
- Table of Contents
- Overview
- Architecture
- Before you begin
- Preparing the Autoscaler Project
- Preparing the Application Project
- Verifying your deployment
This directory contains Terraform configuration files to quickly set up the infrastructure for your Autoscaler with a distributed deployment.
In this deployment option, all the components of the Autoscaler reside in a single project, with the exception of Cloud Scheduler (step 1) and the Forwarder topic and function (step 2).
This deployment is the best of both worlds between the per-project and the centralized deployments: Teams who own the Memorystore Cluster instances, called Application teams, are able to manage the Autoscaler configuration parameters for their instances with their own Cloud Scheduler jobs. On the other hand, the rest of the Autoscaler infrastructure is managed by a central team.
For an explanation of the components of the Autoscaler and the interaction flow, please read the main Architecture section.
Cloud Scheduler can only publish messages to topics in the same project. Therefore in step 2, we transparently introduce an intermediate component to make this architecture possible. For more information, see the Forwarder function.
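In practice, each Application project's Cloud Scheduler job publishes its configuration payload to a local Forwarder topic, and the Forwarder function relays it to the Poller topic in the Autoscaler project. As a rough illustration of the first hop, a manual test publish might look like the following sketch (the topic name forwarder-topic and the message body are assumptions for illustration; use the names from your Terraform outputs):

```sh
# Publish a sample message to the local Forwarder topic
# (topic name and message body are hypothetical examples)
gcloud pubsub topics publish forwarder-topic \
  --project=<YOUR_APP_PROJECT_ID> \
  --message='{"test": true}'
```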
The distributed deployment has the following pros and cons:

Pros:

- Configuration and infrastructure: application teams are in control of their configuration and schedules.
- Maintenance: the Scaler infrastructure is centralized, reducing upkeep overhead.
- Policies and audit: best practices across teams might be easier to specify and enact, and audits might be easier to execute.

Cons:

- Configuration: application teams need to provide service accounts able to write to the polling topic.
- Risk: the centralized team itself may become a single point of failure even if the infrastructure is designed with high availability in mind.
- Open the Cloud Console.
- Activate Cloud Shell

  At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Cloud SDK already installed, including the gcloud command-line tool, and with values already set for your current project. It can take a few seconds for the session to initialize.
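  If you want to double-check which project Cloud Shell is currently set to (an optional sanity check, not part of the deployment steps), you can print it:

  ```sh
  # Show the project that gcloud commands will target by default
  gcloud config get-value project
  ```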
- In Cloud Shell, clone this repository:

  ```sh
  gcloud source repos clone memorystore-cluster-autoscaler --project=memorystore-oss-preview
  ```
- Change into the directory of the cloned repository, and check out the main branch:

  ```sh
  cd memorystore-cluster-autoscaler && git checkout main
  ```
- Export variables for the working directories:

  ```sh
  export AUTOSCALER_DIR="$(pwd)/terraform/cloud-functions/distributed/autoscaler-project"
  export APP_DIR="$(pwd)/terraform/cloud-functions/distributed/app-project"
  ```
In this section you prepare the deployment of the project where the centralized Autoscaler infrastructure, with the exception of Cloud Scheduler, lives.
- Go to the project selector page in the Cloud Console. Select or create a Cloud project.
- Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
- In Cloud Shell, set environment variables with the ID of your autoscaler project:

  ```sh
  export AUTOSCALER_PROJECT_ID=<INSERT_YOUR_PROJECT_ID>
  gcloud config set project "${AUTOSCALER_PROJECT_ID}"
  ```
- Choose the region where the Autoscaler infrastructure will be located:

  ```sh
  export AUTOSCALER_REGION=us-central1
  ```
- Enable the required Cloud APIs:

  ```sh
  gcloud services enable \
    appengine.googleapis.com \
    artifactregistry.googleapis.com \
    cloudbuild.googleapis.com \
    cloudfunctions.googleapis.com \
    cloudresourcemanager.googleapis.com \
    compute.googleapis.com \
    eventarc.googleapis.com \
    iam.googleapis.com \
    networkconnectivity.googleapis.com \
    pubsub.googleapis.com \
    logging.googleapis.com \
    monitoring.googleapis.com \
    run.googleapis.com \
    serviceconsumermanagement.googleapis.com
  ```
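  To verify that the APIs were enabled (optional), you can list the enabled services and filter for one of them:

  ```sh
  # Confirm that, for example, the Cloud Run API is now enabled
  gcloud services list --enabled --filter="config.name:run.googleapis.com"
  ```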
- There are two options for deploying the state store for the Autoscaler:

  - For Firestore, follow the steps in Using Firestore for Autoscaler state.
  - For Spanner, follow the steps in Using Spanner for Autoscaler state.
- To use Firestore for the Autoscaler state, enable the additional API:

  ```sh
  gcloud services enable firestore.googleapis.com
  ```
- Create a Google App Engine app to enable the API for Firestore:

  ```sh
  gcloud app create --region="${AUTOSCALER_REGION}"
  ```
- To store the state of the Autoscaler, update the database created with the Google App Engine app to use Firestore native mode:

  ```sh
  gcloud firestore databases update --type=firestore-native
  ```
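  If you want to confirm the database mode afterwards (optional; available in recent gcloud versions), describe the default database and check that its type reads FIRESTORE_NATIVE:

  ```sh
  # Inspect the default Firestore database; look for "type: FIRESTORE_NATIVE"
  gcloud firestore databases describe --database="(default)"
  ```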
- Next, continue to Deploying the Autoscaler.
- To use Spanner for the Autoscaler state, enable the additional API:

  ```sh
  gcloud services enable spanner.googleapis.com
  ```
- If you want Terraform to create a Spanner instance (named memorystore-autoscaler-state by default) to store the state, set the following variable:

  ```sh
  export TF_VAR_terraform_spanner_state=true
  ```

  If you already have a Spanner instance where the state must be stored, set the name of your instance:

  ```sh
  export TF_VAR_spanner_state_name=<INSERT_YOUR_STATE_SPANNER_INSTANCE_NAME>
  ```
  If you want to manage the state of the Autoscaler in your own Cloud Spanner instance, please create the following table in advance:

  ```sql
  CREATE TABLE memorystoreClusterAutoscaler (
    id STRING(MAX),
    lastScalingTimestamp TIMESTAMP,
    createdOn TIMESTAMP,
    updatedOn TIMESTAMP,
    lastScalingCompleteTimestamp TIMESTAMP,
    scalingOperationId STRING(MAX),
    scalingRequestedSize INT64,
    scalingPreviousSize INT64,
    scalingMethod STRING(MAX),
  ) PRIMARY KEY (id)
  ```
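  If you prefer to create this table from the command line instead of the Cloud Console, a sketch along the following lines should work; the database name memorystore-autoscaler-db is an assumption, so substitute your own:

  ```sh
  # Apply the state-store DDL to an existing Spanner database
  # (the database name is a hypothetical example)
  gcloud spanner databases ddl update memorystore-autoscaler-db \
    --instance="${TF_VAR_spanner_state_name}" \
    --ddl='CREATE TABLE memorystoreClusterAutoscaler (
      id STRING(MAX),
      lastScalingTimestamp TIMESTAMP,
      createdOn TIMESTAMP,
      updatedOn TIMESTAMP,
      lastScalingCompleteTimestamp TIMESTAMP,
      scalingOperationId STRING(MAX),
      scalingRequestedSize INT64,
      scalingPreviousSize INT64,
      scalingMethod STRING(MAX),
    ) PRIMARY KEY (id)'
  ```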
- Next, continue to Deploying the Autoscaler.
- Set the project ID and region in the corresponding Terraform environment variables:

  ```sh
  export TF_VAR_project_id="${AUTOSCALER_PROJECT_ID}"
  export TF_VAR_region="${AUTOSCALER_REGION}"
  ```
- Change directory into the Terraform autoscaler-project directory and initialize it:

  ```sh
  cd "${AUTOSCALER_DIR}"
  terraform init
  ```
- Create the Autoscaler infrastructure. Answer yes when prompted, after reviewing the resources that Terraform intends to create:

  ```sh
  terraform apply -parallelism=2
  ```

  - If you are running this command in Cloud Shell and encounter errors of the form "Error: cannot assign requested address", this is a known issue in the Terraform Google provider; please retry with -parallelism=1.
In this section you prepare the deployment of Cloud Scheduler and the Forwarder topic and function in the project where the Memorystore Cluster instances live.
- Go to the project selector page in the Cloud Console. Select or create a Cloud project.
- Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
- In Cloud Shell, set the environment variables with the ID of your application project:

  ```sh
  export APP_PROJECT_ID=<INSERT_YOUR_APP_PROJECT_ID>
  gcloud config set project "${APP_PROJECT_ID}"
  ```
- Choose the region where the Application project will be located:

  ```sh
  export APP_REGION=us-central1
  ```
- Use the following command to enable the Cloud APIs:

  ```sh
  gcloud services enable \
    appengine.googleapis.com \
    artifactregistry.googleapis.com \
    cloudbuild.googleapis.com \
    cloudfunctions.googleapis.com \
    cloudresourcemanager.googleapis.com \
    cloudscheduler.googleapis.com \
    compute.googleapis.com \
    eventarc.googleapis.com \
    iam.googleapis.com \
    networkconnectivity.googleapis.com \
    pubsub.googleapis.com \
    logging.googleapis.com \
    monitoring.googleapis.com \
    redis.googleapis.com \
    run.googleapis.com \
    serviceconsumermanagement.googleapis.com
  ```
- Create an App Engine app to enable Cloud Scheduler, but do not create a Firestore database:

  ```sh
  gcloud app create --region="${APP_REGION}"
  ```
- Set the project ID and region in the corresponding Terraform environment variables:

  ```sh
  export TF_VAR_project_id="${APP_PROJECT_ID}"
  export TF_VAR_region="${APP_REGION}"
  ```
- By default, a new Memorystore Cluster instance will be created for testing. If you want to scale an existing Memorystore Cluster instance, set the following variable:

  ```sh
  export TF_VAR_terraform_memorystore_cluster=false
  ```

  Set the following variable to choose the name of a new or existing cluster to scale:

  ```sh
  export TF_VAR_memorystore_cluster_name=<memorystore-cluster-name>
  ```

  If you do not set this variable, autoscaler-target-memorystore-cluster will be used.
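  If you are targeting an existing cluster, you can confirm its name and current shard count before proceeding (optional):

  ```sh
  # Describe the target cluster to verify it exists and inspect its size
  gcloud redis clusters describe "${TF_VAR_memorystore_cluster_name}" \
    --region="${APP_REGION}"
  ```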
- Set the project ID where the Firestore or Spanner instance resides:

  ```sh
  export TF_VAR_state_project_id="${AUTOSCALER_PROJECT_ID}"
  ```
- To create a testbench VM with utilities for testing Memorystore, including generating load, set the following variable:

  ```sh
  export TF_VAR_terraform_test_vm=true
  ```

  Note that this option can only be selected when you have chosen to create a new Memorystore cluster.
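  Once the VM is created, you could connect to it to run the load-generation utilities; the VM name and zone below are assumptions, so check the Terraform output for the actual values:

  ```sh
  # SSH into the testbench VM (name and zone are hypothetical examples)
  gcloud compute ssh memorystore-test-vm --zone="${APP_REGION}-a"
  ```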
- Change directory into the Terraform app-project directory and initialize it:

  ```sh
  cd "${APP_DIR}"
  terraform init
  ```
- Import the existing App Engine application into the Terraform state, then create the infrastructure in the application project. Answer yes when prompted, after reviewing the resources that Terraform intends to create:

  ```sh
  terraform import module.autoscaler-scheduler.google_app_engine_application.app "${APP_PROJECT_ID}"
  terraform apply -parallelism=2
  ```

  - If you are running this command in Cloud Shell and encounter errors of the form "Error: cannot assign requested address", this is a known issue in the Terraform Google provider; please retry with -parallelism=1.
- Switch back to the Autoscaler project and ensure that the Terraform variables are correctly set:

  ```sh
  cd "${AUTOSCALER_DIR}"
  export TF_VAR_project_id="${AUTOSCALER_PROJECT_ID}"
  export TF_VAR_region="${AUTOSCALER_REGION}"
  ```
- Set the Terraform variables for your Forwarder service accounts, updating and adding your service accounts as needed. Answer yes when prompted, after reviewing the resources that Terraform intends to create:

  ```sh
  export TF_VAR_forwarder_sa_emails='["serviceAccount:forwarder-sa@'"${APP_PROJECT_ID}"'.iam.gserviceaccount.com"]'
  terraform apply -parallelism=2
  ```

  If you are running this command in Cloud Shell and encounter errors of the form "Error: cannot assign requested address", this is a known issue in the Terraform Google provider; please retry with -parallelism=1.
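  If several Application projects feed the same centralized Autoscaler, this variable accepts one entry per Forwarder service account; for example (the second project ID, app-project-2, is a hypothetical example):

  ```sh
  # Authorize Forwarder service accounts from two application projects
  export TF_VAR_forwarder_sa_emails='["serviceAccount:forwarder-sa@'"${APP_PROJECT_ID}"'.iam.gserviceaccount.com","serviceAccount:forwarder-sa@app-project-2.iam.gserviceaccount.com"]'
  ```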
Your Autoscaler infrastructure is ready. Follow the instructions in the main page to configure your Autoscaler. Please take into account that in a distributed deployment:

- Logs from the Poller and Scaler functions will appear in the Logs Viewer for the Autoscaler project.
- Logs about syntax errors in the JSON configuration of the Cloud Scheduler payload will appear in the Logs Viewer of each Application project, so that the team responsible for a specific Memorystore Cluster instance can troubleshoot its configuration issues independently.
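For example, to spot-check recent Poller and Scaler log entries from the command line (a sketch; the filter assumes the functions run on Cloud Run, as in this deployment):

```sh
# Read recent log entries from the Autoscaler project
gcloud logging read 'resource.type="cloud_run_revision"' \
  --project="${AUTOSCALER_PROJECT_ID}" \
  --limit=20 \
  --freshness=1h
```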