
Support for targeting subnets for Load Balancers (Private placement o… #30

Closed

Conversation

@tush4hworks (Contributor) commented Sep 7, 2023

…f load balancers).

With CRB-2275, we are introducing a feature that lets customers choose private subnets for the LBs. Some customers (for example, banks) have stricter rules: they need to keep the workloads completely isolated (non-routable) even from their own network, while still providing access to API/UI endpoints for people on the network by placing the LBs in a routable frontend private subnet. See https://docs.google.com/document/d/1qfdmFKHAN9NrE60ElPZNORvqhD7HwWAyt238wQvvvgo/edit#heading=h.1bz8nyaoflni for more details on how to use this feature.

Implementation
We have introduced a new variable, "cdp_lb_subnet_ids", which is honoured ONLY when the deployment template is "private" and, when provided, overrides "endpoint_access_gateway_subnet_ids".

If the deployment template is "semi-private", the environment is still brought up with "PUBLIC" subnets, even when an overriding "cdp_lb_subnet_ids" is provided.

This is because private placement is intuitively more aligned with the "private" deployment type, where workloads can be even more restricted (non-routable).

With this approach, we don't need to introduce another "private-private" topology type: "private" is used for both, with "cdp_lb_subnet_ids" dictating the private placement of the LBs.
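The override described above could be expressed as a conditional in the module. This is a minimal sketch only, assuming the variable names shown in the testing configs below; the local name and exact structure of the real module code may differ:

```hcl
# Hypothetical sketch of the selection logic (not the actual module code).
# Honour cdp_lb_subnet_ids only for the "private" deployment template;
# otherwise fall back to the public subnets, preserving the existing
# semi-private behaviour.
locals {
  endpoint_access_gateway_subnet_ids = (
    var.deployment_template == "private" && length(var.cdp_lb_subnet_ids) > 0
      ? var.cdp_lb_subnet_ids
      : var.cdp_public_subnet_ids
  )
}
```

Under this assumption, a non-empty "cdp_lb_subnet_ids" is simply ignored for "semi-private" and "public" templates rather than raising an error.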

Testing

  1. Private placement in explicit LB subnets
deployment_template = "private" 
cdp_lb_subnet_ids = [
  "subnet-lb1",
  "subnet-lb2",
  "subnet-lb3"
]

cdp_private_subnet_ids = [
  "subnet-05bbb3a6c02d0a76a",
  "subnet-0ac6a804221711fc2",
  "subnet-0ee2ba7b5bb2fcc29"
]

cdp_public_subnet_ids = [
  "subnet-05bbb3a6c02d0a76a",
  "subnet-0ac6a804221711fc2",
  "subnet-0ee2ba7b5bb2fcc29"
]

Output:

  + resource "cdp_environments_aws_environment" "cdp_env" {
      + authentication                     = {
          + public_key_id = "cdpe2e-keypair"
        }
      + create_private_subnets             = false
      + create_service_endpoints           = false
      + credential_name                    = "isol1-xaccount-cred"
      + crn                                = (known after apply)
      + description                        = (known after apply)
      + enable_tunnel                      = false
      + endpoint_access_gateway_scheme     = "PRIVATE"
      + endpoint_access_gateway_subnet_ids = [
          + "subnet-lb1",
          + "subnet-lb2",
          + "subnet-lb3",
        ]
      + environment_name                   = "isol1-cdp-env"
      + freeipa                            = {
          + instance_count_by_group = 3
          + multi_az                = true
        }
      + id                                 = (known after apply)
      + log_storage                        = {
          + backup_storage_location_base = (known after apply)
          + instance_profile             = (known after apply)
          + storage_location_base        = (known after apply)
        }
      + network_cidr                       = (known after apply)
      + region                             = "us-west-2"
      + report_deployment_logs             = (known after apply)
      + security_access                    = {
          + cidr                       = (known after apply)
          + default_security_group_id  = (known after apply)
          + security_group_id_for_knox = (known after apply)
        }
      + status                             = (known after apply)
      + status_reason                      = (known after apply)
      + subnet_ids                         = [
          + "subnet-05bbb3a6c02d0a76a",
          + "subnet-0ac6a804221711fc2",
          + "subnet-0ee2ba7b5bb2fcc29",
        ]
      + tags                               = (known after apply)
      + tunnel_type                        = (known after apply)
      + vpc_id                             = "vpc-0c55644ba799825a2"
      + workload_analytics                 = true
    }
  2. Semi-private behaviour is unchanged
cdp_private_subnet_ids = [
  "subnet-05bbb3a6c02d0a76a",
  "subnet-0ac6a804221711fc2",
  "subnet-0ee2ba7b5bb2fcc29"
]

cdp_public_subnet_ids = [
  "subnet-05bbb3a6c02d0a76a",
  "subnet-0ac6a804221711fc2",
  "subnet-0ee2ba7b5bb2fcc29"
]
enable_ccm_tunnel = false
deployment_template = "semi-private"

Output:

# module.cdp_deploy.module.cdp_on_aws[0].cdp_environments_aws_environment.cdp_env will be created
  + resource "cdp_environments_aws_environment" "cdp_env" {
      + authentication                     = {
          + public_key_id = "cdpe2e-keypair"
        }
      + create_private_subnets             = false
      + create_service_endpoints           = false
      + credential_name                    = "isol1-xaccount-cred"
      + crn                                = (known after apply)
      + description                        = (known after apply)
      + enable_tunnel                      = false
      + endpoint_access_gateway_scheme     = "PUBLIC"
      + endpoint_access_gateway_subnet_ids = [
          + "subnet-05bbb3a6c02d0a76a",
          + "subnet-0ac6a804221711fc2",
          + "subnet-0ee2ba7b5bb2fcc29",
        ]
      + environment_name                   = "isol1-cdp-env"
      + freeipa                            = {
          + instance_count_by_group = 3
          + multi_az                = true
        }
      + id                                 = (known after apply)
      + log_storage                        = {
          + backup_storage_location_base = (known after apply)
          + instance_profile             = (known after apply)
          + storage_location_base        = (known after apply)
        }
      + network_cidr                       = (known after apply)
      + region                             = "us-west-2"
      + report_deployment_logs             = (known after apply)
      + security_access                    = {
          + cidr                       = (known after apply)
          + default_security_group_id  = (known after apply)
          + security_group_id_for_knox = (known after apply)
        }
      + status                             = (known after apply)
      + status_reason                      = (known after apply)
      + subnet_ids                         = [
          + "subnet-05bbb3a6c02d0a76a",
          + "subnet-0ac6a804221711fc2",
          + "subnet-0ee2ba7b5bb2fcc29",
        ]
      + tags                               = (known after apply)
      + tunnel_type                        = (known after apply)
      + vpc_id                             = "vpc-0c55644ba799825a2"
      + workload_analytics                 = true
    }

@tush4hworks (Contributor Author)

Note: customers need to already have the LB subnets and the private isolated subnets for this feature.
We don't yet have support to create a private-isolated configuration via Terraform (as it involves a PRIVATE NAT GATEWAY), and it is too specific to one customer (someone else may implement it in a different way).

So this is only applicable when customers use existing networks.

@jimright (Contributor) commented Sep 7, 2023

@tush4hworks -

Thanks for this PR.

Some other PRs were recently merged into main, which has introduced conflicts.

Can you rebase this onto the updated main and I will review the code changes?

Jim

@jimright (Contributor) commented Sep 7, 2023

@balazsgaspar, @wmudge

I've added you as reviewers to this PR to get your take on the changes and introduction of the cdp_lb_subnet_ids variable.

@tush4hworks (Contributor Author)

Rebased it.
