In my last post, I covered how to deploy a Kubernetes cluster to AWS GovCloud (US). In this post, I will cover how to add authentication with AWS IAM using aws-iam-authenticator.

The AWS IAM Authenticator runs as a DaemonSet on the master nodes within the cluster and integrates with the Kubernetes API Server through a token authentication webhook. It also runs on your local machine as an exec plugin in your kubeconfig, generating a temporary token that is used to authenticate to the cluster.
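
To make that flow concrete, here is a rough sketch of what happens on the client side. The cluster ID k8s.local is a placeholder; we will wire this into the kubeconfig at the end of the post.

# a minimal sketch of the client-side flow; k8s.local is a placeholder cluster ID
# kubectl invokes aws-iam-authenticator as an exec credential plugin, which embeds a
# pre-signed STS GetCallerIdentity request into a short-lived bearer token
aws-iam-authenticator token -i k8s.local

# the command prints an ExecCredential JSON document containing the token; the webhook
# running on the masters verifies the STS request and maps the caller's IAM identity
# to a Kubernetes username and set of groups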

To get started, we need to copy the aws-iam-authenticator image to our GovCloud account so that our cluster nodes can pull the image. You will need access to a commercial account, CLI profiles configured for each account, and Docker installed to complete the following steps.
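
The commands in this post assume your default CLI profile points at the commercial account and a profile named govcloud points at the GovCloud account. A minimal ~/.aws/config along those lines might look like this; adjust the profile names and regions to match your setup.

# file: ~/.aws/config (example layout only)
[default]
region = us-west-2

[profile govcloud]
region = us-gov-west-1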

# create variables for the two registries we will be interacting with
AWS_REGISTRY="602401143452.dkr.ecr.us-west-2.amazonaws.com"
GOV_REGISTRY="$(aws sts get-caller-identity --query="Account" --region=us-gov-west-1 --profile=govcloud --output=text).dkr.ecr.us-gov-west-1.amazonaws.com"

# create a repository for aws-iam-authenticator in our GovCloud registry
aws ecr create-repository \
  --region=us-gov-west-1 \
  --profile=govcloud \
  --repository-name "aws-iam-authenticator" \
  --image-scanning-configuration "{ \"scanOnPush\": true }"

# log in to the official AWS registry and our GovCloud registry
aws ecr get-login-password --region=us-west-2 | docker login -u AWS --password-stdin $AWS_REGISTRY
aws ecr get-login-password --region=us-gov-west-1 --profile=govcloud | docker login -u AWS --password-stdin $GOV_REGISTRY

# pull the latest aws-iam-authenticator image
docker pull $AWS_REGISTRY/amazon/aws-iam-authenticator:v0.5.0-scratch

# retag the image for our registry
docker tag \
  $AWS_REGISTRY/amazon/aws-iam-authenticator:v0.5.0-scratch \
  $GOV_REGISTRY/aws-iam-authenticator:v0.5.0-scratch

# push the image to our registry
docker push $GOV_REGISTRY/aws-iam-authenticator:v0.5.0-scratch
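
Optionally, you can confirm the image landed in the GovCloud repository before moving on:

# optional: list the images in the new GovCloud repository
aws ecr describe-images \
  --repository-name="aws-iam-authenticator" \
  --region=us-gov-west-1 \
  --profile=govcloud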

Once we have the image copied to GovCloud, we need to update our master node configurations.

kops supports configuring aws-iam-authenticator directly; however, that support had issues with the new version of aws-iam-authenticator. The steps below are similar to how kops deploys aws-iam-authenticator.

To get started, edit each of your master instance group configurations by running one command at a time:

kops edit ig master-us-gov-west-1a --name="$KOPS_NAME" --state="s3://$KOPS_BUCKET_NAME"
kops edit ig master-us-gov-west-1b --name="$KOPS_NAME" --state="s3://$KOPS_BUCKET_NAME"
kops edit ig master-us-gov-west-1c --name="$KOPS_NAME" --state="s3://$KOPS_BUCKET_NAME"

Once your editor is open, add the additionalUserData field to the bottom of the file under the spec object. This script runs as user data on the instance and configures aws-iam-authenticator ahead of time so that the API Server does not need to be restarted later.

spec:
  additionalUserData:
  - name: aws-iam-authenticator.sh
    type: text/x-shellscript
    content: |
      #!/bin/bash -ex

      # log file
      LOG_FILE=/var/log/aws-iam-authenticator-init.log

      # create all of the directories
      mkdir -p /tmp/aws-iam-authenticator >> $LOG_FILE
      mkdir -p /var/aws-iam-authenticator >> $LOG_FILE
      mkdir -p /srv/kubernetes/aws-iam-authenticator >> $LOG_FILE

      # create the aws-iam-authenticator user with a hard-coded UID of 10000
      echo "Creating an AWS IAM Authenticator User..."  >> $LOG_FILE 
      useradd -s /sbin/nologin -d /srv/kubernetes/aws-iam-authenticator -u 10000 aws-iam-authenticator >> $LOG_FILE

      # download the aws-iam-authenticator binary
      echo "Downloading AWS IAM Authenticator Binary..."  >> $LOG_FILE
      curl -fL -o /usr/bin/aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.5.0/aws-iam-authenticator_0.5.0_linux_amd64 >> $LOG_FILE
      chmod +x /usr/bin/aws-iam-authenticator >> $LOG_FILE

      # run the init task
      pushd /tmp/aws-iam-authenticator
      echo "Initializing aws-iam-authenticator..."  >> $LOG_FILE 
      aws-iam-authenticator init --cluster-id="k8s.local" >> $LOG_FILE
      popd

      # move the files into the right place
      echo "Moving aws-iam-authenticator files into the right directory..."  >> $LOG_FILE 
      mv /tmp/aws-iam-authenticator/key.pem /var/aws-iam-authenticator/key.pem >> $LOG_FILE
      mv /tmp/aws-iam-authenticator/cert.pem /var/aws-iam-authenticator/cert.pem >> $LOG_FILE
      mv /tmp/aws-iam-authenticator/aws-iam-authenticator.kubeconfig /srv/kubernetes/aws-iam-authenticator/kubeconfig.yaml >> $LOG_FILE

      # change the permissions
      echo "Setting Permissions..."  >> $LOG_FILE 
      chown -R aws-iam-authenticator:aws-iam-authenticator /var/aws-iam-authenticator >> $LOG_FILE
      chown -R aws-iam-authenticator:aws-iam-authenticator /srv/kubernetes/aws-iam-authenticator >> $LOG_FILE

      # clean up
      echo "Cleaning up temporary directory..."  >> $LOG_FILE 
      rm -rf /tmp/aws-iam-authenticator >> $LOG_FILE

      echo "Complete!"  >> $LOG_FILE       

Once you have updated all of the instance groups, it is time to update the cluster configuration to add the webhook configuration. The webhook is configured with the --authentication-token-webhook-config-file flag on the API Server. To add this flag, edit the cluster configuration:

kops edit cluster --name="$KOPS_NAME" --state="s3://$KOPS_BUCKET_NAME"

Add the following to your kops cluster configuration. This points the API Server at the webhook kubeconfig created by our user data script.

spec:
  kubeAPIServer:
    authenticationTokenWebhookConfigFile: /srv/kubernetes/aws-iam-authenticator/kubeconfig.yaml

Next, it is time to update the cluster configuration and roll out the changes to all of the master nodes.

kops update cluster --name="$KOPS_NAME" --state="s3://$KOPS_BUCKET_NAME" --yes
kops --name="$KOPS_NAME" --state="s3://$KOPS_BUCKET_NAME" rolling-update cluster --instance-group=master-us-gov-west-1a --yes
kops --name="$KOPS_NAME" --state="s3://$KOPS_BUCKET_NAME" rolling-update cluster --instance-group=master-us-gov-west-1b --yes
kops --name="$KOPS_NAME" --state="s3://$KOPS_BUCKET_NAME" rolling-update cluster --instance-group=master-us-gov-west-1c --yes
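
Rolling the masters can take some time. Before moving on, it is worth confirming the cluster came back healthy and, if anything looks off, checking the log file written by our user data script on one of the masters:

# confirm the cluster is healthy after the rolling update
kops validate cluster --name="$KOPS_NAME" --state="s3://$KOPS_BUCKET_NAME"

# if something looks wrong, SSH to a master and review the init log from our user data script
cat /var/log/aws-iam-authenticator-init.log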

Once all of the master nodes are updated, it is time to deploy the aws-iam-authenticator DaemonSet to the cluster. You will need to update the clusterID, image, and ARNs to match your own AWS account IDs. I recommend committing this file to the repository that houses the rest of your cluster configuration.

# file: aws-iam-authenticator.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: aws-iam-authenticator
rules:
- apiGroups:
  - iamauthenticator.k8s.aws
  resources:
  - iamidentitymappings
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - iamauthenticator.k8s.aws
  resources:
  - iamidentitymappings/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - update
  - patch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-iam-authenticator
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: aws-iam-authenticator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aws-iam-authenticator
subjects:
- kind: ServiceAccount
  name: aws-iam-authenticator
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
data:
  config.yaml: |
    clusterID: k8s.local
    server:
      port: 21362
      stateDir: /var/aws-iam-authenticator
      mapAccounts: []
      mapUsers: []
      mapRoles:
        - roleARN: arn:aws-us-gov:iam::123456789123:role/nodes.k8s.local
          username: aws:{{AccountID}}:instance:{{SessionName}}
          groups:
          - system:bootstrappers
          - aws:instances
        - roleARN: arn:aws-us-gov:iam::123456789123:role/masters.k8s.local
          username: aws:{{AccountID}}:instance:{{SessionName}}
          groups:
          - system:bootstrappers
          - aws:instances    
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
spec:
  selector:
    matchLabels:
      k8s-app: aws-iam-authenticator
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        k8s-app: aws-iam-authenticator
    spec:

      # use the service account with access to iamidentitymappings and ConfigMaps
      serviceAccountName: aws-iam-authenticator

      # run on the host network (don't depend on CNI)
      hostNetwork: true

      # run on each master node
      nodeSelector:
        node-role.kubernetes.io/master: ""

      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists

      # run `aws-iam-authenticator server` with three volumes
      # - config (mounted from the ConfigMap at /etc/aws-iam-authenticator/config.yaml)
      # - state (persisted TLS certificate and keys, mounted from the host)
      # - output (output kubeconfig to plug into your apiserver configuration, mounted from the host)
      containers:
      - name: aws-iam-authenticator
        image: 123456789123.dkr.ecr.us-gov-west-1.amazonaws.com/aws-iam-authenticator:v0.5.0-scratch
        args:
        - server
        - --backend-mode=ConfigMap,File
        - --config=/etc/aws-iam-authenticator/config.yaml
        - --state-dir=/var/aws-iam-authenticator
        - --kubeconfig-pregenerated=true
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
          limits:
            memory: 20Mi
            cpu: 100m

        volumeMounts:
        - name: config
          mountPath: /etc/aws-iam-authenticator/
        - name: state
          mountPath: /var/aws-iam-authenticator/
        - name: output
          mountPath: /etc/kubernetes/aws-iam-authenticator/

      volumes:
      - name: config
        configMap:
          name: aws-iam-authenticator
      - name: output
        hostPath:
          path: /srv/kubernetes/aws-iam-authenticator/
      - name: state
        hostPath:
          path: /var/aws-iam-authenticator/

When you have updated the file to match your settings, deploy the configuration to the cluster.

kubectl apply -f ./aws-iam-authenticator.yml
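
The DaemonSet should schedule one pod on each master node. A quick way to confirm the pods are running and that the server started cleanly:

# one aws-iam-authenticator pod should be running per master node
kubectl get pods -n kube-system -l k8s-app=aws-iam-authenticator -o wide

# check the logs to confirm the server started and loaded its configuration
kubectl logs -n kube-system -l k8s-app=aws-iam-authenticator --tail=20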

Once we have aws-iam-authenticator configured, we need to create the IAM roles we will use for authentication. I recommend creating a role for each type of user access pattern. Most likely, you will have an administrator role, a CI/CD role, a power user role, a read-only role, and maybe one for each application team. In this post, I will show how to create the administrator role.

# capture our govcloud account id
export ACCOUNT_ID=$(aws sts get-caller-identity --region=us-gov-west-1 --profile=govcloud --output=text --query='Account')

# create the policy document that allows users to assume this role
export POLICY=$(echo -n '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws-us-gov:iam::'; echo -n "$ACCOUNT_ID"; echo -n ':root"},"Action":"sts:AssumeRole","Condition":{}}]}')

# create the role
aws iam create-role \
  --role-name K8s-Admin \
  --description "Kubernetes administrator role (for AWS IAM Authenticator for AWS)." \
  --assume-role-policy-document "$POLICY" \
  --output text \
  --query 'Role.Arn' \
  --profile=govcloud \
  --region=us-gov-west-1
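
Before wiring this role into the cluster, you can sanity-check that your GovCloud profile is allowed to assume it:

# sanity check: assume the new role with the govcloud profile
aws sts assume-role \
  --role-arn="arn:aws-us-gov:iam::$ACCOUNT_ID:role/K8s-Admin" \
  --role-session-name="admin-test" \
  --profile=govcloud \
  --region=us-gov-west-1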

Now that we have an IAM role and aws-iam-authenticator running, it is time to deploy a ConfigMap that will house our mappings. Using a ConfigMap allows you to update your mappings without having to restart the aws-iam-authenticator pods. I mapped the K8s-Admin role we created to system:masters and appended the AWS IAM SessionName to the username so that we can trace Kubernetes users back to IAM users.

# file: aws-auth.yml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: aws-auth
data:
  mapRoles: |
    - roleARN: arn:aws-us-gov:iam::123456789123:role/K8s-Admin
      username: iam-admin-{{SessionName}}
      groups:
        - system:masters

Apply the ConfigMap to the cluster:

kubectl apply -f ./aws-auth.yml

Now that we have the role configured, we can update our local kubeconfig to use the role to authenticate. Open your local kubeconfig, most likely located at $HOME/.kube/config, locate the user for your kops cluster, and update its configuration. My cluster is named k8s.local:

users:
- name: k8s.local
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - k8s.local
      - -r
      - arn:aws-us-gov:iam::123456789123:role/K8s-Admin
      command: aws-iam-authenticator
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      - name: AWS_DEFAULT_REGION
        value: us-gov-west-1
      - name: AWS_PROFILE
        value: govcloud

Make sure you update the account ID and AWS profile to match your configuration. The critical component here is the -r flag and the role ARN. This uses the local AWS profile we have configured and switches role (this will not affect your local CLI) into the dedicated Kubernetes role. By using switch role, we can allow multiple users to share the same role while still maintaining traceability. Once this is updated, we can authenticate to our cluster using the role.
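
To verify the setup end to end, you can generate a token by hand with the same arguments and environment variables as the exec block above, then make a request through kubectl (the account ID is a placeholder):

# generate a token manually; this should print an ExecCredential JSON document
AWS_PROFILE=govcloud AWS_DEFAULT_REGION=us-gov-west-1 AWS_STS_REGIONAL_ENDPOINTS=regional \
  aws-iam-authenticator token -i k8s.local -r arn:aws-us-gov:iam::123456789123:role/K8s-Admin

# then confirm kubectl can authenticate to the cluster through the webhook
kubectl get nodes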

Now, you can give users the ability to assume this role via IAM so that they can access your cluster.
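
As a rough sketch, an IAM policy along these lines (attached to a group or user, with the account ID as a placeholder) is enough to allow assuming the K8s-Admin role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws-us-gov:iam::123456789123:role/K8s-Admin"
    }
  ]
}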