
Can I not add an IAM group to my ConfigMap? #176

Open
mrichman opened this issue Nov 21, 2018 · 38 comments

Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@mrichman

I have an IAM user named Alice, and she's a member of the IAM group eks-admin.

The following configuration works, but when I remove Alice from mapUsers, kubectl commands give me the error "error: You must be logged in to the server (Unauthorized)".

Can't I add an IAM group to this ConfigMap, just like I can add a user or role?

aws sts get-caller-identity 
{
    "Account": "123456789012", 
    "UserId": "AIDAxxxxxxxxxxxxxxx", 
    "Arn": "arn:aws:iam::123456789012:user/Alice"
}
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/EKS-WorkerNodes-NodeInstanceRole-1R46GDBD928V5
      username: system:node:{{EC2PrivateDNSName}}
      groups: 
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/Alice
      username: alice
      groups: 
        - system:masters
    - userarn: arn:aws:iam::123456789012:group/eks-admin
      username: eks-admin
      groups: 
        - system:masters
@luthes

luthes commented Nov 28, 2018

Didn't read, you can only add roles and users.

I think this is a duplicate of #157 (which is probably a duplicate of another, honestly)
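For reference, entries in the ConfigMap are matched against the caller identity returned by sts get-caller-identity, which is always a user or assumed-role ARN and never a group ARN, so the eks-admin group entry in the opening ConfigMap can never match anyone. Only shapes like this are effective (ARNs taken from the thread's example):

mapUsers: |
  - userarn: arn:aws:iam::123456789012:user/Alice
    username: alice
    groups:
      - system:masters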

@cschiewek

Can we re-open this as a feature request? Managing permissions would be significantly improved if we could add groups.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 10, 2019
@alexppg

alexppg commented Jun 18, 2019

/remove-lifecycle stale

@jclynny

jclynny commented Jul 2, 2019

Is there any progress on this one? This is significant functionality for managing access to EKS for larger engineering groups; right now it requires me to list out a bunch of users and add to the list every time someone new needs access.

@alexppg

alexppg commented Jul 4, 2019

With Terraform 0.12 this can easily be worked around, thanks to the for expression. Rough code that could certainly be improved:

locals {
  k8s_admins = [
    for user in data.terraform_remote_state.iam.outputs.admin_members[0] :
    {
      user_arn = join("", ["arn:aws:iam::whatever:user/", user])
      username = user
      group    = "system:masters"
    }
  ]

  k8s_developers = [
    for user in data.terraform_remote_state.iam.outputs.developers_members[0] :
    {
      user_arn = join("", ["arn:aws:iam::whatever:user/", user])
      username = user
      group    = "system:developers-write"
    }
  ]
  k8s_map_users = concat(local.k8s_admins, local.k8s_developers)
}

@casey-robertson

> Is there any progress on this one? This is significant functionality for managing access to EKS for larger engineering groups; right now it requires me to list out a bunch of users and add to the list every time someone new needs access.

It's even more fun when you don't have IAM users and everybody accesses an assumed role session via Okta.

@adeakrvbd

> With Terraform 0.12 this can easily be worked around, thanks to the for expression. […]

I'm struggling to put the k8s_admins generated here into the ConfigMap to apply later in automation. How did you manage to do that?

@selslack

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 19, 2019
@mazay

mazay commented Aug 27, 2019

> With Terraform 0.12 this can easily be worked around, thanks to the for expression. […]
>
> I'm struggling to put the k8s_admins generated here into the ConfigMap to apply later in automation. How did you manage to do that?

probably using jsonencode(local.k8s_admins); YAML is a superset of JSON, so the JSON-encoded output is valid in the YAML-formatted mapUsers field
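A minimal, untested sketch of wiring that in (assumes a configured kubernetes provider; yamlencode works just as well):

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # the authenticator expects userarn / username / groups keys in each element
    mapUsers = yamlencode(local.k8s_map_users)
  }
}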

@jclynny

jclynny commented Aug 28, 2019

I would also like to see adeakrvbd's question answered above. +1

@galindro

galindro commented Sep 3, 2019

Me too. It's really weird to not support IAM groups.

@prestonvanloon

Please consider adding IAM group support for EKS.
This would be the easiest way to manage user access control by far.

@amitsaha

amitsaha commented Sep 5, 2019

Until we have this, I have come up with a strategy using AssumeRole, which I describe in my blog post.

@jclynny

jclynny commented Sep 9, 2019

@prestonvanloon I know we discussed this in a different thread, but I basically do the same thing that @amitsaha describes in his blog post, although mine seems a bit simpler:

  1. Create IAM roles for Readonly and Admin.
  2. Attach these role ARNs under the mapRoles section of aws-auth.yaml.
  3. Create an IAM group each for Readonly and Admin.
  4. Add an AssumeRole policy to each group ("Readonly" and "Admin") so members are able to assume the roles created in step 1.
  5. Ensure I have a trust relationship set up to allow account users to assume the 2 roles.
  6. Once I apply, everyone in the IAM groups has their respective permissions. All they have to do is follow the assume-role instructions here: https://aws.amazon.com/premiumsupport/knowledge-center/iam-assume-role-cli/

I'll obviously automate the last step so people don't have to run the commands and set the keys, but yeah, that's pretty much it. It's not pretty, but it allows me to abstract user management into a group, which is really the goal here since IAM groups are still not supported.
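For illustration, a sketch of what steps 1 and 2 might produce under mapRoles (role names, account ID, and the readonly Kubernetes group are placeholders, not from the thread; the readonly group would still need a ClusterRoleBinding to the built-in view ClusterRole):

- rolearn: arn:aws:iam::123456789012:role/eks-admin
  username: eks-admin
  groups:
    - system:masters
- rolearn: arn:aws:iam::123456789012:role/eks-readonly
  username: eks-readonly
  groups:
    - eks-readonly-group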

@iffyuva

iffyuva commented Sep 23, 2019

@adeakrvbd @jclynny I used yamlencode to get this working. You can see more code here dockup/terraform-aws@fd8c679#diff-a338da04c3bdfe4c0e6b5db98bc233bdR93

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 21, 2020
@garo

garo commented Jan 30, 2020

@adeakrvbd @jclynny here's a complete example of how I build the userMap in Terraform 0.12, based on IAM users bound to IAM groups.

data "aws_iam_group" "developer-members" {
  group_name = "developer"
}

data "aws_iam_group" "admin-members" {
  group_name = "admin"
}

locals {
  k8s_admins = [
    for user in data.aws_iam_group.admin-members.users :
    {
      user_arn = user.arn
      username = user.user_name
      groups    = ["system:masters"]
    }
  ]

  k8s_analytics_users = [
    for user in data.aws_iam_group.developer-members.users :
    {
      user_arn = user.arn
      username = user.user_name
      groups    = ["company:developer-users"]
    }
  ]

  k8s_map_users = concat(local.k8s_admins, local.k8s_analytics_users)
}


resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = <<YAML
- rolearn: ${module.eks.eks_worker_node_role_arn}
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
YAML

    mapUsers = yamlencode(local.k8s_map_users)

    mapAccounts = <<YAML
- "${data.aws_caller_identity.current.account_id}"
YAML

  }
}

The downside is that you will need to run "terraform apply" each time you add or remove users from IAM groups, and that a user shouldn't be in more than one group at a time.
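The one-group-at-a-time limitation could be lifted by merging group memberships per user before encoding. An untested sketch, replacing the concat above:

locals {
  all_mappings = concat(local.k8s_admins, local.k8s_analytics_users)

  # one mapUsers entry per distinct user, with all of their groups merged,
  # so membership in several IAM groups no longer yields duplicate entries
  k8s_map_users_merged = [
    for arn in distinct([for m in local.all_mappings : m.user_arn]) : {
      user_arn = arn
      username = [for m in local.all_mappings : m.username if m.user_arn == arn][0]
      groups   = distinct(flatten([for m in local.all_mappings : m.groups if m.user_arn == arn]))
    }
  ]
}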

@jz-wilson

I came across this: https://github.com/ygrene/iam-eks-user-mapper. Maybe this is a viable workaround for you?

@nckturner
Contributor

/kind feature
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Feb 25, 2020
@rmlsun

rmlsun commented Feb 29, 2020

/remove-lifecycle stale

@rmlsun

rmlsun commented Feb 29, 2020

Please reconsider support for IAM groups!

I can get this working with either an IAM role or an IAM user in Terraform. However, for our use case, where we're trying to use HashiCorp Vault to grant dynamic, time-bound access longer than 12 hours (the maximum session duration for the IAM-role-based approach), the capability to map a k8s group to an IAM group is key.

@Sodki

Sodki commented Apr 27, 2020

If your company is using SSO (via Okta, for example), there are no IAM users and everyone is using assumed roles with temporary credentials. This makes it impossible for our developers to use EKS in a sane way and hits enterprise customers the hardest.

@foriequal0

foriequal0 commented Dec 3, 2020

I've created a cluster using @aws-cdk/aws-eks (I believe it would be the same for the quickstart). It creates the cluster with a dedicated role, since EKS has a weird rule:

> When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions).
> https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html

Yesterday, a new console was deployed for EKS. It queries the cluster directly to get nodes and workloads: https://aws.amazon.com/blogs/containers/introducing-the-new-amazon-eks-console/

But as an IAM user I only see the following error:

Error loading Namespaces
Unauthorized: Verify you have access to the Kubernetes cluster

I've tried some approaches, but I have nitpicks about every one of them:

  1. I can switch to the role that created the cluster, or to one enlisted in mapRoles
    -> but I have to attach eks:* policies to it, and switching to a role in the console is a little bit cumbersome.
  2. mapUsers every IAM user that has 'eks:DescribeCluster' to some group, and bind them to the view ClusterRole
    -> but then every user has to be enumerated and kept in sync by hand.
  3. mapAccounts
    -> then I see
    Error loading Namespaces
    namespaces is forbidden: User "arn:aws:iam::<accountId>:user/<userName>" cannot list resource "namespaces" in API group "" at the cluster scope
  4. mapAccounts and bind every mapped account that has 'eks:DescribeCluster' to the view ClusterRole
    -> again, every user needs an individual binding (a sketch follows below).

I wish there were other ways:

  1. Map an IAM group to a k8s group (best).
  2. Apply k8s groups via mapAccounts (would that be too permissive?).
  3. At least some other out-of-the-box way to make these errors go away.
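To illustrate option 4: with mapAccounts, a caller is mapped to a username equal to their full IAM ARN, so a binding like the following (hypothetical name, ARN from the opening comment) is needed for every single user:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-for-alice
subjects:
  - kind: User
    name: arn:aws:iam::123456789012:user/Alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io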

@ZillaG

ZillaG commented Feb 27, 2021

+1 for this feature!

@FreebeJan

+1 for this feature!

@AlKapkone

+1 for this feature!

@ugurcemozturk

+1

@dlahoza

dlahoza commented Jan 18, 2022

+1

@guygrip

guygrip commented Jun 9, 2022

Hey! After a lot of research on this topic, we've decided to write a blog post covering this issue. In it, we cover most of the possible solutions and share the one we found to work best:

Enabling AWS IAM Group Access to an EKS Cluster Using RBAC

@ftasbasi

+1

@dtherhtun

+1

@boris-infinit

Hey hey, community. Five years have passed and we're still asking: when will this be ready? When will AWS allow us to manage groups like normal teams?

@rneto12

rneto12 commented Apr 12, 2023

> @adeakrvbd @jclynny here's a complete example of how I build the userMap in Terraform 0.12, based on IAM users bound to IAM groups. […]
>
> The downside is that you will need to run "terraform apply" each time you add or remove users from IAM groups, and that a user shouldn't be in more than one group at a time.

It worked! I just had to change "user_arn" to "userarn".
Thanks bro!

@darren-recentive

darren-recentive commented Jun 28, 2023

props to @guygrip for the inspo
tf for creating the aws iam / k8s resources for group/role access
notes:

  • this is just hastily put together, so it would need some tweaks to fit your needs
  • Terraform v1.4.6
  • assumes you have the providers installed
    • aws, ver "5.4.0"
    • gavinbunney/kubectl, ver "1.14.0" (could probably use the kubernetes provider's kubernetes_manifest instead)

config/aws-auth

  • my eks cluster was created with this module: terraform-aws-modules
  • thus i just added this to the inputs:
  manage_aws_auth_configmap = true
  aws_auth_roles = [
    {
      rolearn  = aws_iam_role.cluster-admin-access.arn
      username = local.eks-cluster-admin-role-name
      groups = [
        "system:masters",
        "system:bootstrappers",
        "system:nodes",
      ]
    },
    ...
  ]

below creates:

  • iam group
    • attached policy to allow assume role
  • iam role with access to eks cluster
  • k8s manifest
    • clusterrole
    • clusterrolebinding
data "aws_iam_policy_document" "cluster-admin-access" {
  statement {
    sid       = "1"
    effect    = "Allow"
    resources = ["*"]
    actions = [
      "eks:ListClusters",
      "eks:DescribeAddonVersions",
      "eks:CreateCluster"
    ]
  }
  statement {
    sid       = "2"
    effect    = "Allow"
    resources = ["arn:aws:eks:${local.region}:${data.aws_caller_identity.current.account_id}:cluster/${module.eks.cluster_name}"]
    actions = [
      "eks:*"
    ]
  }
}

resource "aws_iam_policy" "cluster-admin-access" {
  name   = "${local.application}-${local.environment}-eks-cluster-admin-access"
  path   = "/"
  policy = data.aws_iam_policy_document.cluster-admin-access.json
  tags   = local.tags
}

resource "aws_iam_role" "cluster-admin-access" {
  name = "${local.application}-${local.environment}-eks-cluster-admin-access"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
        }
      }
    ]
  })
  tags = local.tags
}

resource "aws_iam_role_policy_attachment" "cluster-admin-access" {
  role       = aws_iam_role.cluster-admin-access.name
  policy_arn = aws_iam_policy.cluster-admin-access.arn
}

data "aws_iam_policy_document" "assume-eks-admin-role" {
  statement {
    sid       = "1"
    effect    = "Allow"
    resources = [aws_iam_role.cluster-admin-access.arn]
    actions   = ["sts:AssumeRole"]
  }
}

resource "aws_iam_policy" "assume-eks-admin-role" {
  name   = "${local.application}-${local.environment}-eks-admin-assume-role"
  path   = "/"
  policy = data.aws_iam_policy_document.assume-eks-admin-role.json
  tags   = local.tags
}

resource "aws_iam_group" "cluster-admin-access" {
  name = "eks-${local.environment}-admin-access"
  path = "/"
}

resource "aws_iam_group_policy_attachment" "cluster-admin-access" {
  group      = aws_iam_group.cluster-admin-access.name
  policy_arn = aws_iam_policy.assume-eks-admin-role.arn
}

resource "kubectl_manifest" "iam-user-group-admin-cluster-role" {
  yaml_body = <<-YAML
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: ${local.eks-cluster-admin-role-name}
    rules:
      - apiGroups: [""]
        resources: ["namespaces"]
        verbs: ["list"]
  YAML
}

resource "kubectl_manifest" "iam-user-group-admin-cluster-role-binding" {
  yaml_body = <<-YAML
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: ${local.eks-cluster-admin-role-name}
    subjects:
      - kind: User
        name: ${local.eks-cluster-admin-role-name}
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: ${local.eks-cluster-admin-role-name}
      apiGroup: rbac.authorization.k8s.io
  YAML
}


@doldoldol21

There are ways to do this with Terraform or SSO, but providing group-level RBAC would have a huge effect on EKS management.

@KacperBlaz

Hello everyone. Based on previous comments, I found a way to add users from an IAM group to the EKS API (access entries). Terraform code:

locals {
  k8s_developers = toset([for user in data.aws_iam_group.developer_members.users : user.arn])
}

resource "aws_iam_group" "developers_group" {
  name = "developers_group_dev"
}

data "aws_iam_group" "developer_members" {
  group_name = aws_iam_group.developers_group.name
}

resource "aws_eks_access_entry" "developer" {
  for_each          = local.k8s_developers
  cluster_name      = var.cluster_name
  principal_arn     = each.value
  kubernetes_groups = ["read-only"]
}

Of course you still have to create an IAM policy for the group; I just wanted to show a way to implement it.
It doesn't use the ConfigMap but goes through the EKS API, so it will appear in the AWS EKS console as access entries. Be aware that the ClusterRole and ClusterRoleBinding need to be created inside the cluster and mapped to the access entry's kubernetes_groups in Terraform.
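A hypothetical sketch of that group policy (the policy name and action list are assumptions, not from the thread; members need at least eks:DescribeCluster so that aws eks update-kubeconfig works):

resource "aws_iam_group_policy" "developers_eks" {
  name  = "developers-eks-access"
  group = aws_iam_group.developers_group.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["eks:DescribeCluster", "eks:ListClusters"]
      Resource = "*"
    }]
  })
}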

Applying the YAML manifests through Terraform:

data "kubectl_path_documents" "test" {
pattern = "./manifests/rbac/*.yaml"
}

resource "kubectl_manifest" "config" {
for_each = toset(data.kubectl_path_documents.test.documents)
yaml_body = each.value
}

Providing also the ClusterRole and ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: developers-cluster-role
rules:
  - apiGroups: ["", "apps"] # deployments/statefulsets/daemonsets live in the apps API group
    resources: ["nodes", "pods", "deployments", "statefulsets", "daemonsets", "services", "configmaps", "secrets"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers-cl-role-binding
roleRef:
  kind: ClusterRole
  name: developers-cluster-role
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: Group
    name: read-only # Name is case sensitive; must match kubernetes_groups in the access entry
    apiGroup: rbac.authorization.k8s.io
