# Set up the Autoscaler in Cloud Run functions in a centralized deployment using Terraform
This document shows the centralized deployment of the Autoscaler. In the centralized deployment, all of the Autoscaler components reside in the same project, but the Memorystore Cluster instances may be located in different projects.

This deployment is suited for a team that manages the configuration and infrastructure of one or more Autoscalers in a central place. The Memorystore Cluster instances reside in other projects, called Application projects, which are owned by the same or other teams.
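For concreteness, the following sketch shows the resulting two-project layout. The project IDs are hypothetical placeholders: `APP_PROJECT_ID` is the variable this guide exports later, while the central project name here is purely illustrative.

```sh
# Hypothetical project IDs, for illustration only.
export PROJECT_ID=autoscaler-central-project    # hosts all Autoscaler components
export APP_PROJECT_ID=memorystore-app-project   # hosts the Memorystore Cluster instances
```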
For an explanation of the components of the Autoscaler and the interaction flow, please read the main Architecture section.
The centralized deployment has the following pros and cons:

Pros:

- Configuration and infrastructure: The scheduler parameters and the Autoscaler infrastructure are controlled by a single team. This may be desirable in highly regulated industries.
- Maintenance: Setup and maintenance are expected to require less effort overall compared to a per-project deployment.
- Policies and audit: Best practices across teams might be easier to specify and enact. Audits might be easier to execute.

Cons:

- Configuration: Any change to the Autoscaler parameters needs to go through the centralized team, even though the team requesting the change owns the Memorystore Cluster instance.
- Risk: The centralized team itself may become a single point of failure even if the infrastructure is designed with high availability in mind.
The centralized deployment is a slight departure from the per-project option: in the centralized deployment, the Memorystore Cluster instances and the Autoscaler reside in different projects. Therefore, most of the instructions to set it up are the same.

Follow the instructions for the per-project option, starting with the Before you begin section, and stop before the Deploying the Autoscaler section.
## Prepare the Application project

In this section you configure the project where your Memorystore Cluster resides. This project is called an "Application project" because the Memorystore Cluster serves one or more specific applications. The teams responsible for those applications are assumed to be separate from the team responsible for the Autoscaler infrastructure and configuration.
- Go to the project selector page in the Cloud Console. Select or create a Cloud project.

- Make sure that billing is enabled for your Google Cloud project. Learn how to confirm that billing is enabled for your project.

- In Cloud Shell, set an environment variable with the ID of your Application project. Replace the `<INSERT_YOUR_APP_PROJECT_ID>` placeholder and run the following command (optional checks for this and the next step are sketched after this list):

  ```sh
  export APP_PROJECT_ID=<INSERT_YOUR_APP_PROJECT_ID>
  ```
- Enable the Redis API:

  ```sh
  gcloud services enable --project="${APP_PROJECT_ID}" \
    redis.googleapis.com
  ```
- Set the Application project ID in the corresponding Terraform environment variable (how Terraform picks it up is illustrated after this list):

  ```sh
  export TF_VAR_app_project_id="${APP_PROJECT_ID}"
  ```
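Optionally, you can verify the environment variable and the API from the steps above. These checks are a sketch and assume that your account has permission to view the Application project and list its enabled services:

```sh
# Optional sanity checks for the steps above.

# Confirm the Application project exists and is accessible; prints the
# project ID back on success.
gcloud projects describe "${APP_PROJECT_ID}" --format="value(projectId)"

# Confirm the Redis API is enabled; the command prints an entry for
# redis.googleapis.com only if it is enabled in the project.
gcloud services list --enabled --project="${APP_PROJECT_ID}" \
  --filter="config.name=redis.googleapis.com"
```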
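Terraform automatically maps an environment variable named `TF_VAR_<name>` to the input variable `<name>`, so the export in the last step supplies the `app_project_id` variable to the Terraform configuration. As an illustration, the hypothetical invocation below would have the same effect as the export:

```sh
# Illustrative only: passing the variable on the command line is
# equivalent to exporting TF_VAR_app_project_id beforehand.
terraform plan -var "app_project_id=${APP_PROJECT_ID}"
```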
You have configured your Application project. Please continue from the Deploying the Autoscaler section in the per-project deployment documentation.