
Support Cockroach DB multi-region deployments in terraform and tanka configs #1131

Draft
wants to merge 5 commits into base: master

Conversation

callumdmay

As per the CRDB docs, specifying a region in the locality flag is a prerequisite for creating a multi-region deployment. This is a step towards addressing #482
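
For reference, a minimal sketch of how the region key could be added ahead of the existing zone tier in the node start flag. The variable names `var.crdb_region` and `var.crdb_locality` are taken from the review discussion below; the exact resource wiring is assumed, and this is not the PR's actual diff:

```hcl
# Hypothetical excerpt: build the --locality flag passed to `cockroach start`,
# with a region tier added ahead of the existing zone tier.
locals {
  crdb_locality_flag = "--locality=region=${var.crdb_region},zone=${var.crdb_locality}"
}
```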


linux-foundation-easycla bot commented Oct 18, 2024

CLA Not Signed

@callumdmay force-pushed the patch-1 branch 2 times, most recently from 566309a to c20e3f4 on October 18, 2024 16:59
@callumdmay changed the title "Update CockroachDB Tanka template to support multi-region deployments" → "Support Cockroach DB multi-region deployments in terraform and tanka configs" on Oct 18, 2024
@BenjaminPelletier
Member

/easycla

@BenjaminPelletier
Member

To check in on this one and summarize some offline conversations: my concern is that we want to make sure each DSS instance (the unit of deployment hosted by an organization) is prioritized to receive a complete replica of the DAR (i.e., Company 1's DSS instance should receive a full replica, Company 2's DSS instance should receive a full replica, Company 3's DSS instance should receive a full replica).

Since CRDB attempts to spread replicas most widely at the highest level of locality, putting geographical region (region=${var.crdb_region}) before DSS instance identity (zone=${var.crdb_locality}) makes it less likely for each DSS instance to receive a full DAR replica: replicas will be spread across regions rather than across DSS instances.

If the important thing is to use the region key in the locality, it seems like we could simply replace zone= with region=. I'm not sure that's important, though, because the simplicity of our deployment structure means we can use explicit low-level replication constraints rather than the higher-level multi-region capabilities. If we do want both levels of locality hierarchy (DSS instance + geographical region), the DSS instance should be the top level of the hierarchy rather than the geographical region.
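
For illustration only, a hedged sketch of the ordering suggested above, with the DSS instance as the top locality tier. The `dss_instance` key name and the variable wiring are assumptions for this sketch, not part of this PR:

```hcl
# Hypothetical sketch: DSS instance identity first, so CRDB prioritizes spreading
# replicas across DSS instances, with geographical region as a lower tier.
locals {
  crdb_locality_flag = "--locality=dss_instance=${var.crdb_locality},region=${var.crdb_region}"
}
```

CRDB's --locality flag takes ordered, arbitrary key=value tiers from most to least significant, so the key names here are placeholders for whatever the deployment standardizes on.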
