
OSS Memorystore Cluster Autoscaler


Set up the Autoscaler in Cloud Run functions in a centralized deployment using Terraform
Home · Scaler component · Poller component · Forwarder component · Terraform configuration · Monitoring
Cloud Run functions · Google Kubernetes Engine
Per-Project · Centralized · Distributed

Table of Contents

  • Overview
  • Architecture
  • Before you begin
  • Configuring your Application project

Overview

This document describes the centralized deployment of the Autoscaler. In the centralized deployment, all the components of the Autoscaler reside in the same project, while the Memorystore Cluster instances may be located in different projects.

This deployment is suited for a team managing the configuration and infrastructure of one or more Autoscalers in a central place. The Memorystore Cluster instances reside in other projects, called Application projects, which are owned by the same or other teams.
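
To make this split concrete, the following sketch shows how the two project IDs could be wired up through Terraform environment variables. The project IDs are hypothetical placeholders, and the project_id variable is assumed to be exposed by the Terraform configuration as in the per-project guide; app_project_id is set later in this document.

    # Central project: hosts the Autoscaler components (Poller, Scaler, Forwarder).
    # Assumes the Terraform configuration exposes a project_id variable,
    # as in the per-project guide.
    export TF_VAR_project_id="autoscaler-central-project"    # hypothetical ID

    # Application project: hosts the Memorystore Cluster being scaled.
    export TF_VAR_app_project_id="memorystore-app-project"   # hypothetical ID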

Architecture

[Architecture diagram: centralized deployment]

For an explanation of the components of the Autoscaler and the interaction flow, please read the main Architecture section.

The centralized deployment has the following pros and cons:

Pros

  • Configuration and infrastructure: The scheduler parameters and the Autoscaler infrastructure are controlled by a single team. This may be desirable in highly regulated industries.
  • Maintenance: Setup and maintenance are expected to require less overall effort compared to a per-project deployment.
  • Policies and audit: Best practices across teams might be easier to specify and enact. Audits might be easier to execute.

Cons

  • Configuration: Any change to the Autoscaler parameters needs to go through the centralized team, even though the team requesting the change owns the Memorystore Cluster instance.
  • Risk: The centralized team itself may become a single point of failure, even if the infrastructure is designed with high availability in mind.

Before you begin

The centralized deployment is a slight departure from the per-project option: in this model, the Memorystore Cluster instances and the Autoscaler reside in different projects. Most of the instructions to set it up are therefore the same.

Follow the instructions for the per-project option starting with the Before you begin section, and stop before the Deploying the Autoscaler section.
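
For orientation, the variables that the per-project guide has you set look roughly like the sketch below. The names and region here are illustrative; follow the per-project guide for the authoritative commands.

    # Illustrative sketch only -- see the per-project guide for the exact steps.
    export PROJECT_ID=<INSERT_YOUR_PROJECT_ID>   # central project hosting the Autoscaler
    gcloud config set project "${PROJECT_ID}"
    export REGION=us-central1                    # example region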

Configuring your Application project

In this section you configure the project where your Memorystore cluster resides. This project is called an "Application project" because the Memorystore Cluster serves one or more specific applications. The teams responsible for those applications are assumed to be separate from the team responsible for the Autoscaler infrastructure and configuration.

  1. Go to the project selector page in the Cloud Console. Select or create a Cloud project.

  2. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  3. In Cloud Shell, set environment variables with the ID of your application project. Replace the <INSERT_YOUR_APP_PROJECT_ID> placeholder and run the following command:

    export APP_PROJECT_ID=<INSERT_YOUR_APP_PROJECT_ID>
  4. Enable the Redis API:

    gcloud services enable --project="${APP_PROJECT_ID}" \
      redis.googleapis.com
  5. Set the Application project ID in the corresponding Terraform environment variable (an optional verification sketch follows these steps):

    export TF_VAR_app_project_id="${APP_PROJECT_ID}"
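
Optionally, verify the result before moving on. The check below is not part of the upstream guide; it uses standard gcloud and shell commands against the variables set above.

    # Confirm the Redis API is enabled in the Application project.
    gcloud services list --enabled --project="${APP_PROJECT_ID}" \
      --filter="config.name=redis.googleapis.com"

    # Confirm the Terraform variable is populated.
    echo "TF_VAR_app_project_id=${TF_VAR_app_project_id}"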

You have configured your Application project. Please continue from the Deploying the Autoscaler section in the per-project deployment documentation.