Attempting to use same pool name for multiple VIPs on same node_listen_port causes error due to name collision #2

Open · roncanepa-tufts opened this issue Dec 6, 2024 · 2 comments

@roncanepa-tufts (Contributor)
In the F5 ecosystem, a pool can have more than one VIP pointing at it. This is common when a web environment has a port 80 VIP and a port 443 VIP that both point back to nodes listening on port 80.

When using this module, the following will cause a conflict (with most required fields removed for clarity):

module "my-vip-80" {
  source = "github.com/Tufts-Technology-Services/tf-f5-vip-components-module?ref=v0.0.6"

  vip_name                       = "my-vip.it.tufts.edu"
  vip_destination_ip             = "130.64.x.y"
  vip_port                       = 80

  # both examples have same values here 
  pool_name        = "my-vip.it.tufts.edu"
  node_listen_port = 80
}

module "my-vip-443" {
  source = "github.com/Tufts-Technology-Services/tf-f5-vip-components-module?ref=v0.0.6"

  vip_name                       = "my-vip.it.tufts.edu"
  vip_destination_ip             = "130.64.x.y"
  vip_port                       = 443

  # both examples have same values here
  pool_name        = "my-vip.it.tufts.edu"
  node_listen_port = 80
}

This happens because, in v0.0.6 and earlier, pools are named in the format $pool_name-$node_listen_port. Since both calls above use the same pool_name and the same node_listen_port (80), the module names both resulting pools my-vip.it.tufts.edu-80, and we get a collision:

Error: error retrieving pool (/some-partition/my-vip.it.tufts.edu-80): 01020066:3: 
The requested Pool (/some-partition/my-vip.it.tufts.edu-80) already exists 
in partition some-partition.
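
For context, here is a minimal sketch of what the current naming logic presumably looks like inside the module (the variable and local names below are illustrative, not the module's actual identifiers):

variable "pool_name" {
  type = string
}

variable "node_listen_port" {
  type = number
}

locals {
  # Current scheme: $pool_name-$node_listen_port.
  # Both example module calls above resolve this to "my-vip.it.tufts.edu-80",
  # so the second apply collides with the pool created by the first.
  full_pool_name = "${var.pool_name}-${var.node_listen_port}"
}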

This module makes trade-offs for ease of use, and this is one of them: the module both names and creates the associated pool. Since each module {} block is an entirely separate invocation, with its own set of resources, the pool can't be shared between the two calls in this scenario.

Options here are:

    1. Don't use the module at all and create everything manually.
    • comment: not particularly desirable at the moment, although the usefulness of this module approach over time remains TBD
    2. Don't use the module for "simple port 80 VIP with a redirect" situations, since a VIP that exists solely for a redirect doesn't even need a target pool assigned.
    • comment: while this sidesteps the current issue, it's a very reasonable use case overall, so we need to solve it one way or another
    3. Pass a slightly different pool_name to the second module call when creating the 443 VIP.
    • comment: absolutely works, but requires the user to be aware of this situation and work around it themselves; not ideal when the aim of the module is to abstract away some of the complexity
    4. Make a breaking change to the module so that pools are created with a naming scheme of $pool_name-$vip_port-$node_listen_port (sketched after this list).
    • comment: it's always best to avoid breaking changes when providing functionality meant for reuse by others, but I think this might be the best long-term solution because:
      • it's a solid accommodation of this situation that should work for many future cases
      • it doesn't require a user to understand the inner workings of the module
      • very few things have been built using the module yet
      • the few existing VIPs that have been built can stay pinned at v0.0.6 without issue, and comments can be added to the module calls to raise awareness before upgrading
    • downsides:
      • if someone blindly updates their module version AND blindly runs terraform apply -auto-approve, their traffic will be interrupted while Terraform recreates the pool (and since pools must be uniquely named, I can't necessarily mitigate this by including create_before_destroy, because that can cause conflicts later)
      • we end up with more pools than we would theoretically need if we were building things manually, but since there's no harm or charge per pool, pools don't consume many resources, and it's all automated, this feels like a reasonable trade-off
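
To illustrate option 4, the naming local would change to something like the following (again, a sketch only; the module's real internals may differ):

locals {
  # Proposed scheme: $pool_name-$vip_port-$node_listen_port.
  # The two example calls above would then produce
  # "my-vip.it.tufts.edu-80-80" and "my-vip.it.tufts.edu-443-80",
  # which avoids the collision.
  full_pool_name = "${var.pool_name}-${var.vip_port}-${var.node_listen_port}"
}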
roncanepa-tufts self-assigned this Dec 6, 2024
@roncanepa-tufts (Contributor, Author)

Forgot to add:

  • option 5: do more dependency inversion and pass in an existing pool instead of having the module create one
    • comment: while this is doable, if this turns out to be the best solution, the module is approaching the point where it isn't very useful anymore, so I'm not a fan of this one at the moment

@roncanepa-tufts (Contributor, Author)

Now that I'm thinking about this on Monday, I like option 4 much less (besides it being a breaking change that affects the naming of all future VIPs), especially since a pool doesn't care about VIP listening ports at all: all it's concerned with is the service port of its members. So I'm now leaning toward options 2 and 3 where needed.
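
For reference, the option 3 workaround just means giving the second module call its own pool_name. Using the same example as above (most required fields still omitted for clarity):

module "my-vip-443" {
  source = "github.com/Tufts-Technology-Services/tf-f5-vip-components-module?ref=v0.0.6"

  vip_name           = "my-vip.it.tufts.edu"
  vip_destination_ip = "130.64.x.y"
  vip_port           = 443

  # A distinct pool_name avoids the collision: the generated pool becomes
  # "my-vip.it.tufts.edu-443-80" instead of "my-vip.it.tufts.edu-80".
  pool_name        = "my-vip.it.tufts.edu-443"
  node_listen_port = 80
}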
