Merge branch 'main' into linea-mainnet-deployment
LayneHaber committed Nov 14, 2023
2 parents 034caea + 30e8549 commit b4bcb03
Showing 28 changed files with 447 additions and 208 deletions.
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/enhancement.md
@@ -9,10 +9,10 @@ assignees: ''
---

## Problem
-_What problem are we solving?_
+_What is the problem or opportunity?_

## Impact
-_Why does this matter? Who does it impact? How much does it impact them? What is the urgency?_
+_Why is it important? Who requested it? Who does it impact? What data do we have to suggest this is a problem? Are there specific timelines that increase urgency?_

## Proposed Solution
_[OPTIONAL] Thoughts on solution design. We could do <insert idea> to solve this problem._
5 changes: 3 additions & 2 deletions .github/ISSUE_TEMPLATE/xERC20.md
@@ -2,8 +2,9 @@
name: New xERC20
about: This is for whitelisting a new xERC20.
title: "[TOKEN] [Mainnet/Testnet] xERC20 Whitelisting"
-labels: "xERC20 🪙"
-assignees: ""
+labels: "xERC20 \U0001FA99"
+assignees: ''
+
---

## Token Details
22 changes: 11 additions & 11 deletions README.md
@@ -11,7 +11,7 @@
<br />

<h3 align="center">About Connext</h3>
-<h4 align="center">Connext is public infrastructure powering fast, trust-minimized communication between blockchains.</h4>
+<h4 align="center">Connext is a public infrastructure powering fast, trust-minimized communication between blockchains.</h4>

<p align="center">
Useful Links
@@ -79,7 +79,7 @@ Connext is a modular stack for trust-minimized, generalized communication betwee

- [adapters](https://github.com/connext/monorepo/tree/main/packages/adapters) - Wrappers around external modules. These adapters can be shared between different packages.

-  - [Cache](https://github.com/connext/monorepo/tree/main/packages/adapters/cache) is a wrapper around all the redis based caches that are used.
+  - [Cache](https://github.com/connext/monorepo/tree/main/packages/adapters/cache) is a wrapper around all the Redis-based caches that are used.
- [Database](https://github.com/connext/monorepo/tree/main/packages/adapters/database) is the implementation of the schema and client for the database.
- [Subgraph](https://github.com/connext/monorepo/tree/main/packages/adapters/subgraph) includes the graphclient implementation and reader functions for the subgraph.
- [TxService](https://github.com/connext/monorepo/tree/main/packages/adapters/txservice) resiliently attempts to send transactions to chain (with retries, etc.) and is used to read from and write to RPC providers; fallback providers can be defined as arrays to provide resiliency in case of failure.
@@ -97,13 +97,13 @@ Connext is a modular stack for trust-minimized, generalized communication betwee
- [deployments](https://github.com/connext/monorepo/tree/main/packages/deployments)

- [Contracts](https://github.com/connext/monorepo/tree/main/packages/deployments/contracts) - The contracts that we deploy and their deployment scripts
-  - [Subgraph](https://github.com/connext/monorepo/tree/main/packages/deployments/subgraph) is all the subgraph source code to define all the mappings and contains all the configurations to deploy to different graph hosted services or third party graph providers
+  - [Subgraph](https://github.com/connext/monorepo/tree/main/packages/deployments/subgraph) is all the subgraph source code to define all the mappings and contains all the configurations to deploy to different graph hosted services or third-party graph providers

-- [examples](https://github.com/connext/monorepo/tree/main/packages/examples) - these are not used in production, but contains ways to use the SDK that are illustrative of how to integrate Connext
+- [examples](https://github.com/connext/monorepo/tree/main/packages/examples) - these are not used in production, but contain ways to use the SDK that are illustrative of how to integrate Connext
- [integration](https://github.com/connext/monorepo/tree/main/packages/integration) - Utilities for integration tests
- [utils](https://github.com/connext/monorepo/tree/main/packages/utils) - Collection of helper functions that are shared throughout the different packages

-<p align="right">(<a href="#top">back to top</a>)</p>
+<p align="right">(<a href="#top">⬆️ back to top</a>)</p>

## First time setup

@@ -134,7 +134,7 @@ To run Redis, execute the following command:

`docker run -it --rm --name redis -p 6379:6379 redis`

-This command will download the latest Redis image and start a container with the name redis.
+This command will download the latest Redis image and start a container with the name Redis.

And now you are all ready to interact with the monorepo.
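Before starting any agents it can help to confirm Redis is actually answering. The sketch below is illustrative (not part of the repo): the probe command is injectable so the retry loop works with or without Docker; in real use the probe would be `docker exec redis redis-cli ping`.

```shell
# Wait for the Redis container started above to answer PING.
# The probe command is passed in so the loop is testable without Docker.
wait_for_redis() {
  PROBE="${1:-docker exec redis redis-cli ping}"
  for i in 1 2 3 4 5; do
    # A healthy Redis replies to PING with the literal string PONG.
    if out=$($PROBE 2>/dev/null) && [ "$out" = "PONG" ]; then
      echo "redis ready"
      return 0
    fi
    sleep 1
  done
  echo "redis not reachable" >&2
  return 1
}
```

With the container running, `wait_for_redis` (no argument) uses `docker exec` against the `redis` container name from the command above.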

@@ -181,7 +181,7 @@ Note: We use `node-lib` as the template for all the packages. There are some oth

- Update the [`CHANGELOG.md`](./CHANGELOG.md).
- Run `yarn version:all X.X.X` where `X.X.X` is the full version string of the NPM version to deploy (i.e. `0.0.1`).
-  - Use `X.X.X-beta.N` for Amarok releases from `production` branch and `X.X.X-alpha.N` for Amarok releases from `main` branch.
+  - Use `X.X.X-beta.N` for Amarok releases from the `production` branch and `X.X.X-alpha.N` for Amarok releases from `main` branch.
- Commit and add a tag matching the version: `git commit -am "<version>" && git tag -am "<version>" "<version>"`
- Run `git push --follow-tags`.
- The [GitHub action will](./.github/workflows/build-docker-image-and-verify.yml) publish the packages by recognizing the version tag.
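The release steps above can be sketched as a small script. `is_release_version` is an illustrative guard, not a helper from this repo; the yarn/git commands mirror the documented flow.

```shell
# Accepts plain versions (0.0.1) plus the beta/alpha pre-release forms
# used for production- and main-branch Amarok releases.
is_release_version() {
  echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+(-(alpha|beta)\.[0-9]+)?$'
}

release() {
  v="$1"
  is_release_version "$v" || { echo "bad version: $v" >&2; return 1; }
  yarn version:all "$v"                      # bump every package version
  git commit -am "$v" && git tag -am "$v" "$v"
  git push --follow-tags                     # the version tag triggers publishing
}
```

Calling `release 0.2.1-beta.3` from a clean `production` checkout would walk the whole list in one shot.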
@@ -190,7 +190,7 @@ Note: We use `node-lib` as the template for all the packages. There are some oth

## Contributing

-Contributions are what makes the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
+Contributions are what makes the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Don't forget to give the project a star! Thanks again!
@@ -201,19 +201,19 @@ Don't forget to give the project a star! Thanks again!
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

-<p align="right">(<a href="#top">back to top</a>)</p>
+<p align="right">(<a href="#top">⬆️ back to top</a>)</p>

<!-- LICENSE -->

## License

Distributed under the MIT License. See `LICENSE.txt` for more information.

-<p align="right">(<a href="#top">back to top</a>)</p>
+<p align="right">(<a href="#top">⬆️ back to top</a>)</p>

Project Link: [https://github.com/connext/monorepo](https://github.com/connext/monorepo)

-<p align="right">(<a href="#top">back to top</a>)</p>
+<p align="right">(<a href="#top">⬆️ back to top</a>)</p>

<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
12 changes: 8 additions & 4 deletions ops/mainnet/prod/core/config.tf
@@ -21,6 +21,10 @@ locals {
{ name = "DD_ENV", value = "${var.environment}-${var.stage}" },
{ name = "GRAPH_API_KEY", value = var.graph_api_key }
]
+  router_publisher_env_vars = concat(
+    local.router_env_vars, [
+      { name = "NODE_OPTIONS", value = "--max-old-space-size=1536" }
+  ])
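The new `NODE_OPTIONS` entry caps the Node.js old-space heap below the ECS task's hard memory limit (raised to 2048 MiB in `main.tf`), leaving headroom for buffers and the runtime itself. A common rule of thumb is roughly 75% of the task memory; the ratio in this illustrative helper is an assumption, not a value taken from this repo.

```shell
# Pick a V8 old-space cap (MiB) that fits inside an ECS task's memory limit.
# 2048 MiB of task memory * 0.75 = 1536, matching --max-old-space-size=1536.
node_heap_mb() {
  awk -v task_mb="$1" 'BEGIN { printf "%d", task_mb * 0.75 }'
}
node_heap_mb 2048   # -> 1536
```

The result would be passed to the container as `NODE_OPTIONS=--max-old-space-size=$(node_heap_mb 2048)`.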
lighthouse_env_vars = {
NXTP_CONFIG = local.local_lighthouse_config,
ENVIRONMENT = var.environment,
@@ -101,22 +105,22 @@ locals {
excludeListFromRelayerFee = ["0x5b9315ce1304df3b2a83b2074cbf849d160642ab"]
},
"1869640809" = {
-      providers = ["https://optimism-mainnet.blastapi.io/${var.blast_key}", "https://rpc.ankr.com/optimism"],
+      providers = ["https://optimism-mainnet.blastapi.io/${var.blast_key}", "https://rpc.ankr.com/optimism"],
excludeListFromRelayerFee = ["0x9D9ce29Dc7812ccb63aB14EA987B52d9aF053Eb3"]
},
"1886350457" = {
-      providers = ["https://polygon-mainnet.blastapi.io/${var.blast_key}", "https://rpc.ankr.com/polygon"],
+      providers = ["https://polygon-mainnet.blastapi.io/${var.blast_key}", "https://rpc.ankr.com/polygon"],
excludeListFromRelayerFee = ["0x83e8Cf4A51035665BAF97DdB0cf03b565AC76B44"]
}
"1634886255" = {
-      providers = ["https://arb-mainnet.g.alchemy.com/v2/${var.arbitrum_alchemy_key_0}", "https://rpc.ankr.com/arbitrum"],
+      providers = ["https://arb-mainnet.g.alchemy.com/v2/${var.arbitrum_alchemy_key_0}", "https://rpc.ankr.com/arbitrum"],
excludeListFromRelayerFee = ["0xE6B7aB9EBCfBF1A72E489ff00CdF9C6473ff6224"]
}
"6450786" = {
providers = ["https://bsc-mainnet.blastapi.io/${var.blast_key}", "https://bsc-dataseed1.binance.org", "https://bsc-dataseed2.binance.org", "https://rpc.ankr.com/bsc"]
}
"6778479" = {
-      providers = ["https://gnosis-mainnet.blastapi.io/${var.blast_key}", "https://rpc.gnosischain.com", "https://rpc.ankr.com/gnosis"],
+      providers = ["https://gnosis-mainnet.blastapi.io/${var.blast_key}", "https://rpc.gnosischain.com", "https://rpc.ankr.com/gnosis"],
excludeListFromRelayerFee = ["0x6D4D82aE73DC9059Ac83B085b2505e00b5eF8511"]
}
"1818848877" = {
46 changes: 25 additions & 21 deletions ops/mainnet/prod/core/main.tf
@@ -70,15 +70,15 @@ module "router_publisher" {
health_check_path = "/ping"
container_port = 8080
loadbalancer_port = 80
-cpu = 512
-memory = 1024
+cpu = 1024
+memory = 2048
instance_count = 1
timeout = 180
ingress_cdir_blocks = ["0.0.0.0/0"]
ingress_ipv6_cdir_blocks = []
service_security_groups = flatten([module.network.allow_all_sg, module.network.ecs_task_sg])
cert_arn = var.certificate_arn
-container_env_vars = local.router_env_vars
+container_env_vars = local.router_publisher_env_vars
}

module "router_executor" {
@@ -209,14 +209,16 @@ module "sequencer_publisher" {
}

module "sequencer_publisher_auto_scaling" {
-source = "../../../modules/auto-scaling"
-stage = var.stage
-environment = var.environment
-domain = var.domain
-ecs_service_name = module.sequencer_publisher.service_name
-ecs_cluster_name = module.ecs.ecs_cluster_name
-min_capacity = 10
-max_capacity = 300
+source = "../../../modules/auto-scaling"
+stage = var.stage
+environment = var.environment
+domain = var.domain
+ecs_service_name = module.sequencer_publisher.service_name
+ecs_cluster_name = module.ecs.ecs_cluster_name
+avg_cpu_utilization_target = 40
+avg_mem_utilization_target = 60
+min_capacity = 10
+max_capacity = 100
}

module "sequencer_subscriber" {
@@ -249,14 +251,16 @@ module "sequencer_subscriber" {
}

module "sequencer_subscriber_auto_scaling" {
-source = "../../../modules/auto-scaling"
-stage = var.stage
-environment = var.environment
-domain = var.domain
-ecs_service_name = module.sequencer_subscriber.service_name
-ecs_cluster_name = module.ecs.ecs_cluster_name
-min_capacity = 10
-max_capacity = 100
+source = "../../../modules/auto-scaling"
+stage = var.stage
+environment = var.environment
+domain = var.domain
+ecs_service_name = module.sequencer_subscriber.service_name
+ecs_cluster_name = module.ecs.ecs_cluster_name
+avg_cpu_utilization_target = 40
+avg_mem_utilization_target = 60
+min_capacity = 10
+max_capacity = 40
}
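The new `avg_cpu_utilization_target` / `avg_mem_utilization_target` inputs presumably feed target-tracking scaling policies (assuming the shared `auto-scaling` module wires them to the AWS Application Auto Scaling service). Under target tracking the service scales the task count so average utilization returns to the target, clamped to the capacity bounds; the helper below is an illustrative sketch of that formula, not code from this repo.

```shell
# Desired task count under target-tracking scaling:
#   ceil(current_tasks * actual_utilization / target_utilization),
# clamped to [min_capacity, max_capacity].
desired_count() { # usage: desired_count <current> <actual%> <target%> <min> <max>
  awk -v c="$1" -v a="$2" -v t="$3" -v lo="$4" -v hi="$5" 'BEGIN {
    d = c * a / t
    if (d > int(d)) d = int(d) + 1          # ceil
    if (d < lo) d = lo; if (d > hi) d = hi  # clamp to capacity bounds
    print d
  }'
}
desired_count 10 80 40 10 100   # CPU at 80% vs a 40% target -> 20 tasks
desired_count 10 10 40 10 100   # underutilized -> clamped at min_capacity 10
```

Raising the targets from 10/15 to 20/40 on the lighthouse prover, as this commit does, therefore makes scale-out roughly half to a third as aggressive for the same load.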


@@ -345,8 +349,8 @@ module "lighthouse_prover_subscriber_auto_scaling" {
ecs_cluster_name = module.ecs.ecs_cluster_name
min_capacity = 10
max_capacity = 200
-avg_cpu_utilization_target = 10
-avg_mem_utilization_target = 15
+avg_cpu_utilization_target = 20
+avg_mem_utilization_target = 40
}

module "lighthouse_process_from_root_cron" {
6 changes: 5 additions & 1 deletion ops/testnet/prod/core/config.tf
@@ -21,6 +21,10 @@ locals {
{ name = "DD_PROFILING_ENABLED", value = "true" },
{ name = "DD_ENV", value = "${var.environment}-${var.stage}" },
]
+  router_publisher_env_vars = concat(
+    local.router_env_vars, [
+      { name = "NODE_OPTIONS", value = "--max-old-space-size=1536" }
+  ])
lighthouse_env_vars = {
NXTP_CONFIG = local.local_lighthouse_config,
ENVIRONMENT = var.environment,
@@ -250,7 +254,7 @@ locals {
"1734439522" = {
providers = ["https://arb-goerli.g.alchemy.com/v2/${var.arbgoerli_alchemy_key_0}", "https://goerli-rollup.arbitrum.io/rpc"]
}
-    "1668247156" = {
+    "1668247156" = {
providers = ["https://linea-goerli.infura.io/v3/${var.infura_key}", "https://rpc.goerli.linea.build", "${var.linea_node}"]
}
}
56 changes: 30 additions & 26 deletions ops/testnet/prod/core/main.tf
@@ -71,15 +71,15 @@ module "router_publisher" {
health_check_path = "/ping"
container_port = 8080
loadbalancer_port = 80
-cpu = 512
-memory = 1024
+cpu = 1024
+memory = 2048
instance_count = 1
timeout = 180
ingress_cdir_blocks = ["0.0.0.0/0"]
ingress_ipv6_cdir_blocks = []
service_security_groups = flatten([module.network.allow_all_sg, module.network.ecs_task_sg])
cert_arn = var.certificate_arn_testnet
-container_env_vars = local.router_env_vars
+container_env_vars = local.router_publisher_env_vars
}

module "router_executor" {
@@ -211,14 +211,16 @@ module "sequencer_publisher" {
}

module "sequencer_publisher_auto_scaling" {
-source = "../../../modules/auto-scaling"
-stage = var.stage
-environment = var.environment
-domain = var.domain
-ecs_service_name = module.sequencer_publisher.service_name
-ecs_cluster_name = module.ecs.ecs_cluster_name
-min_capacity = 10
-max_capacity = 300
+source = "../../../modules/auto-scaling"
+stage = var.stage
+environment = var.environment
+domain = var.domain
+ecs_service_name = module.sequencer_publisher.service_name
+ecs_cluster_name = module.ecs.ecs_cluster_name
+avg_cpu_utilization_target = 40
+avg_mem_utilization_target = 60
+min_capacity = 1
+max_capacity = 100
}

module "sequencer_subscriber" {
@@ -241,7 +243,7 @@ module "sequencer_subscriber" {
loadbalancer_port = 80
cpu = 256
memory = 1024
-instance_count = 10
+instance_count = 1
timeout = 180
ingress_cdir_blocks = ["0.0.0.0/0"]
ingress_ipv6_cdir_blocks = []
@@ -251,14 +253,16 @@ module "sequencer_subscriber" {
}

module "sequencer_subscriber_auto_scaling" {
-source = "../../../modules/auto-scaling"
-stage = var.stage
-environment = var.environment
-domain = var.domain
-ecs_service_name = module.sequencer_subscriber.service_name
-ecs_cluster_name = module.ecs.ecs_cluster_name
-min_capacity = 10
-max_capacity = 100
+source = "../../../modules/auto-scaling"
+stage = var.stage
+environment = var.environment
+domain = var.domain
+ecs_service_name = module.sequencer_subscriber.service_name
+ecs_cluster_name = module.ecs.ecs_cluster_name
+avg_cpu_utilization_target = 40
+avg_mem_utilization_target = 60
+min_capacity = 1
+max_capacity = 40
}


@@ -329,7 +333,7 @@ module "lighthouse_prover_subscriber" {
loadbalancer_port = 80
cpu = 4096
memory = 8192
-instance_count = 10
+instance_count = 5
timeout = 290
ingress_cdir_blocks = ["0.0.0.0/0"]
ingress_ipv6_cdir_blocks = []
@@ -344,10 +348,10 @@
domain = var.domain
ecs_service_name = module.lighthouse_prover_subscriber.service_name
ecs_cluster_name = module.ecs.ecs_cluster_name
-min_capacity = 10
+min_capacity = 5
max_capacity = 200
-avg_cpu_utilization_target = 10
-avg_mem_utilization_target = 15
+avg_cpu_utilization_target = 20
+avg_mem_utilization_target = 40
}

module "lighthouse_process_from_root_cron" {
@@ -359,7 +363,7 @@
stage = var.stage
container_env_vars = merge(local.lighthouse_env_vars, { LIGHTHOUSE_SERVICE = "process" })
schedule_expression = "rate(5 minutes)"
-memory_size = 512
+memory_size = 1536
}


@@ -384,7 +388,7 @@ module "lighthouse_sendoutboundroot_cron" {
stage = var.stage
container_env_vars = merge(local.lighthouse_env_vars, { LIGHTHOUSE_SERVICE = "sendoutboundroot" })
schedule_expression = "rate(30 minutes)"
-memory_size = 512
+memory_size = 2048
}


8 changes: 4 additions & 4 deletions ops/testnet/staging/backend/config.tf
@@ -65,10 +65,10 @@ locals {
local_cartographer_config = jsonencode({
logLevel = "debug"
chains = {
-"1735356532" = {}
-"1735353714" = {}
-"9991" = {}
-"1734439522" = {}
+"1735356532" = { confirmations = 1 }
+"1735353714" = { confirmations = 10 }
+"9991" = { confirmations = 200 }
+"1734439522" = { confirmations = 1 }
}
environment = var.stage
})
2 changes: 1 addition & 1 deletion ops/testnet/staging/backend/main.tf
@@ -62,7 +62,7 @@ module "cartographer-db-alarms" {
enable_free_storage_space_too_low_alarm = true
stage = var.stage
environment = var.environment
-sns_topic_subscription_emails = ["[email protected]", "rahul@connext.network"]
+sns_topic_subscription_emails = ["[email protected]", "rahul@proximalabs.io", "[email protected]", "[email protected]"]
}

module "postgrest" {
