     1  # Setting up ExternalDNS for Services on AWS
     2  
This tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster on AWS. Make sure to use ExternalDNS version **>= 0.11.0** for this tutorial.
     4  
     5  ## IAM Policy
     6  
     7  The following IAM Policy document allows ExternalDNS to update Route53 Resource
     8  Record Sets and Hosted Zones. You'll want to create this Policy in IAM first. In
     9  our example, we'll call the policy `AllowExternalDNSUpdates` (but you can call
    10  it whatever you prefer).
    11  
    12  If you prefer, you may fine-tune the policy to permit updates only to explicit
    13  Hosted Zone IDs.
    14  
    15  ```json
    16  {
    17    "Version": "2012-10-17",
    18    "Statement": [
    19      {
    20        "Effect": "Allow",
    21        "Action": [
    22          "route53:ChangeResourceRecordSets"
    23        ],
    24        "Resource": [
    25          "arn:aws:route53:::hostedzone/*"
    26        ]
    27      },
    28      {
    29        "Effect": "Allow",
    30        "Action": [
    31          "route53:ListHostedZones",
    32          "route53:ListResourceRecordSets",
    33          "route53:ListTagsForResource"
    34        ],
    35        "Resource": [
    36          "*"
    37        ]
    38      }
    39    ]
    40  }
    41  ```
    42  
If you are using the AWS CLI, you can run the following to install the above policy (saved as `policy.json`).  This policy can be used in subsequent steps to allow ExternalDNS to access Route53 zones.
    44  
    45  ```bash
    46  aws iam create-policy --policy-name "AllowExternalDNSUpdates" --policy-document file://policy.json
    47  
    48  # example: arn:aws:iam::XXXXXXXXXXXX:policy/AllowExternalDNSUpdates
    49  export POLICY_ARN=$(aws iam list-policies \
    50   --query 'Policies[?PolicyName==`AllowExternalDNSUpdates`].Arn' --output text)
    51  ```
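
To confirm the policy was created and that `POLICY_ARN` is populated, a quick sanity check:

```bash
# show the policy details; this fails if the policy does not exist
aws iam get-policy --policy-arn $POLICY_ARN
```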
    52  
    53  ## Provisioning a Kubernetes cluster
    54  
    55  You can use [eksctl](https://eksctl.io) to easily provision an [Amazon Elastic Kubernetes Service](https://aws.amazon.com/eks) ([EKS](https://aws.amazon.com/eks)) cluster that is suitable for this tutorial.  See [Getting started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html).
    56  
    57  
    58  ```bash
    59  export EKS_CLUSTER_NAME="my-externaldns-cluster"
    60  export EKS_CLUSTER_REGION="us-east-2"
    61  export KUBECONFIG="$HOME/.kube/${EKS_CLUSTER_NAME}-${EKS_CLUSTER_REGION}.yaml"
    62  
    63  eksctl create cluster --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION
    64  ```
    65  
    66  Feel free to use other provisioning tools or an existing cluster.  If [Terraform](https://www.terraform.io/) is used, [vpc](https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/) and [eks](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/) modules are recommended for standing up an EKS cluster.  Amazon has a workshop called [Amazon EKS Terraform Workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/afee4679-89af-408b-8108-44f5b1065cc7/) that may be useful for this process.
    67  
    68  ## Permissions to modify DNS zone
    69  
    70  You will need to use the above policy (represented by the `POLICY_ARN` environment variable) to allow ExternalDNS to update records in Route53 DNS zones. Here are three common ways this can be accomplished:
    71  
    72  * [Node IAM Role](#node-iam-role)
    73  * [Static credentials](#static-credentials)
    74  * [IAM Roles for Service Accounts](#iam-roles-for-service-accounts)
    75  
For this tutorial, ExternalDNS will use the environment variable `EXTERNALDNS_NS` to represent the namespace, defaulted to `default`.  Feel free to change this to something else, such as `externaldns` or `kube-addons`.  Make sure to edit the `subjects[0].namespace` for the `ClusterRoleBinding` resource when deploying ExternalDNS with RBAC enabled.  See [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled) for more information.
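
For example, to run everything in a dedicated namespace (an illustrative choice):

```bash
export EXTERNALDNS_NS="externaldns"
```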
    77  
Additionally, throughout this tutorial, the example domain `example.com` is used.  Change this to an appropriate domain under your control.  See the [Set up a hosted zone](#set-up-a-hosted-zone) section.
    79  
    80  ### Node IAM Role
    81  
In this method, you can attach a policy to the Node IAM Role.  This will allow nodes in the Kubernetes cluster to access Route53 zones, which allows ExternalDNS to update DNS records.  Because this grants Route53 access to all containers running on those nodes, not just ExternalDNS, this method is not recommended, and is only suitable for limited test environments.
    83  
If you are using eksctl to provision a new cluster, you can add the policy at creation time with:
    85  
    86  ```bash
    87  eksctl create cluster --external-dns-access \
  --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION
    89  ```
    90  
:warning: **WARNING**: This will allow read-write access for all nodes in the cluster, not just ExternalDNS.  For this reason, this method is only suitable for limited test environments.
    92  
    93  If you already provisioned a cluster or use other provisioning tools like Terraform, you can use AWS CLI to attach the policy to the Node IAM Role.
    94  
    95  #### Get the Node IAM role name
    96  
You will need the name of the IAM role associated with the node(s) where ExternalDNS will run.  An easy way to get the role name is to use the AWS web console (https://console.aws.amazon.com/eks/): find any instance in the target node group and copy the role name associated with that instance.
    98  
    99  ##### Get role name with a single managed nodegroup
   100  
From the command line, if you have a single managed node group (the default with `eksctl create cluster`), you can find the role name with the following:
   102  
   103  ```bash
   104  # get managed node group name (assuming there's only one node group)
   105  GROUP_NAME=$(aws eks list-nodegroups --cluster-name $EKS_CLUSTER_NAME \
   106    --query nodegroups --out text)
   107  # fetch role arn given node group name
   108  ROLE_ARN=$(aws eks describe-nodegroup --cluster-name $EKS_CLUSTER_NAME \
   109    --nodegroup-name $GROUP_NAME --query nodegroup.nodeRole --out text)
   110  # extract just the name part of role arn
NODE_ROLE_NAME=${ROLE_ARN##*/}
   112  ```
   113  
   114  ##### Get role name with other configurations
   115  
If you have multiple node groups or any unmanaged node groups, the process gets more complex.  The first step is to get the instance host name of the node where ExternalDNS will be deployed or is already deployed:
   117  
   118  ```bash
# node instance name of one of the external-dns pods, if ExternalDNS is already deployed
   120  INSTANCE_NAME=$(kubectl get pods --all-namespaces \
   121    --selector app.kubernetes.io/instance=external-dns \
   122    --output jsonpath='{.items[0].spec.nodeName}')
   123  
# alternatively, instance name of one of the nodes (change if the node group is different)
   125  INSTANCE_NAME=$(kubectl get nodes --output name | cut -d'/' -f2 | tail -1)
   126  ```
   127  
   128  With the instance host name, you can then get the instance id:
   129  
   130  ```bash
   131  get_instance_id() {
   132    INSTANCE_NAME=$1 # example: ip-192-168-74-34.us-east-2.compute.internal
   133  
   134    # get list of nodes
   135    # ip-192-168-74-34.us-east-2.compute.internal	aws:///us-east-2a/i-xxxxxxxxxxxxxxxxx
   136    # ip-192-168-86-105.us-east-2.compute.internal	aws:///us-east-2a/i-xxxxxxxxxxxxxxxxx
   137    NODES=$(kubectl get nodes \
   138     --output jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}')
   139  
   140    # print instance id from matching node
   141    grep $INSTANCE_NAME <<< "$NODES" | cut -d'/' -f5
   142  }
   143  
   144  INSTANCE_ID=$(get_instance_id $INSTANCE_NAME)
   145  ```
   146  
   147  With the instance id, you can get the associated role name:
   148  
   149  ```bash
   150  findRoleName() {
   151    INSTANCE_ID=$1
   152  
   153    # get all of the roles
   154    ROLES=($(aws iam list-roles --query Roles[*].RoleName --out text))
   155    for ROLE in ${ROLES[*]}; do
   156      # get instance profile arn
   157      PROFILE_ARN=$(aws iam list-instance-profiles-for-role \
   158        --role-name $ROLE --query InstanceProfiles[0].Arn --output text)
   159      # if there is an instance profile
   160      if [[ "$PROFILE_ARN" != "None" ]]; then
   161        # get all the instances with this associated instance profile
   162        INSTANCES=$(aws ec2 describe-instances \
   163          --filters Name=iam-instance-profile.arn,Values=$PROFILE_ARN \
   164          --query Reservations[*].Instances[0].InstanceId --out text)
    # find instances that match the instance profile
   166        for INSTANCE in ${INSTANCES[*]}; do
   167          # set role name value if there is a match
   168          if [[ "$INSTANCE_ID" == "$INSTANCE" ]]; then ROLE_NAME=$ROLE; fi
   169        done
   170      fi
   171    done
   172  
   173    echo $ROLE_NAME
   174  }
   175  
   176  NODE_ROLE_NAME=$(findRoleName $INSTANCE_ID)
   177  ```
   178  
   179  Using the role name, you can associate the policy that was created earlier:
   180  
   181  ```bash
   182  # attach policy arn created earlier to node IAM role
   183  aws iam attach-role-policy --role-name $NODE_ROLE_NAME --policy-arn $POLICY_ARN
   184  ```
   185  
:warning: **WARNING**: This will allow read-write access for all pods running on the same node pool, not just the ExternalDNS pod(s).
   187  
   188  #### Deploy ExternalDNS with attached policy to Node IAM Role
   189  
   190  If ExternalDNS is not yet deployed, follow the steps under [Deploy ExternalDNS](#deploy-externaldns) using either RBAC or non-RBAC.
   191  
**NOTE**: Before deleting the cluster, be sure to run `aws iam detach-role-policy`.  Otherwise, there can be errors, as the provisioning system, such as `eksctl` or `terraform`, will not be able to delete the roles with the attached policy.
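
For example, mirroring the cleanup step at the end of this tutorial (assuming the `NODE_ROLE_NAME` and `POLICY_ARN` variables from earlier):

```bash
# detach the policy so the role can be deleted by eksctl/terraform later
aws iam detach-role-policy --role-name $NODE_ROLE_NAME --policy-arn $POLICY_ARN
```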
   193  
   194  ### Static credentials
   195  
In this method, the policy is attached to an IAM user, and the credentials for that IAM user are then made available through a Kubernetes secret.
   197  
This method is not preferred, as the secrets in the credentials file could be copied and used by an unauthorized threat actor.  However, if the Kubernetes cluster is not hosted on AWS, it may be the only method available.  In that situation, it is important to limit the associated privileges to the minimum required, i.e. read-write access to Route53, and not to use a credentials file that has extra privileges beyond what is required.
   199  
   200  #### Create IAM user and attach the policy
   201  
   202  ```bash
   203  # create IAM user
   204  aws iam create-user --user-name "externaldns"
   205  
   206  # attach policy arn created earlier to IAM user
   207  aws iam attach-user-policy --user-name "externaldns" --policy-arn $POLICY_ARN
   208  ```
   209  
   210  #### Create the static credentials
   211  
   212  ```bash
   213  SECRET_ACCESS_KEY=$(aws iam create-access-key --user-name "externaldns")
cat <<-EOF > /local/path/to/credentials
[default]
   217  aws_access_key_id = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId')
   218  aws_secret_access_key = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey')
   219  EOF
   220  ```
   221  
   222  #### Create Kubernetes secret from credentials
   223  
   224  ```bash
   225  kubectl create secret generic external-dns \
   226    --namespace ${EXTERNALDNS_NS:-"default"} --from-file /local/path/to/credentials
   227  ```
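
You can verify that the secret exists and contains a `credentials` key with:

```bash
kubectl describe secret external-dns --namespace ${EXTERNALDNS_NS:-"default"}
```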
   228  
   229  #### Deploy ExternalDNS using static credentials
   230  
   231  Follow the steps under [Deploy ExternalDNS](#deploy-externaldns) using either RBAC or non-RBAC.  Make sure to uncomment the section that mounts volumes, so that the credentials can be mounted.
   232  
   233  ### IAM Roles for Service Accounts
   234  
   235  [IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) ([IAM roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)) allows cluster operators to map AWS IAM Roles to Kubernetes Service Accounts.  This essentially allows only ExternalDNS pods to access Route53 without exposing any static credentials.
   236  
This is the preferred method as it implements [PoLP](https://csrc.nist.gov/glossary/term/principle_of_least_privilege) ([Principle of Least Privilege](https://csrc.nist.gov/glossary/term/principle_of_least_privilege)).
   238  
   239  **IMPORTANT**: This method requires using KSA (Kubernetes service account) and RBAC.
   240  
   241  This method requires deploying with RBAC.  See [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled) when ready to deploy ExternalDNS.
   242  
**NOTE**: Similar methods to IRSA on AWS are [kiam](https://github.com/uswitch/kiam), which is in maintenance mode and has [instructions](https://github.com/uswitch/kiam/blob/HEAD/docs/IAM.md) for creating an IAM role, and [kube2iam](https://github.com/jtblin/kube2iam).  IRSA is the officially supported method for EKS clusters, so these other tools could be an option for non-EKS clusters on AWS.
   244  
   245  #### Verify OIDC is supported
   246  
   247  ```bash
   248  aws eks describe-cluster --name $EKS_CLUSTER_NAME \
   249    --query "cluster.identity.oidc.issuer" --output text
   250  ```
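
This should print the cluster's OIDC issuer URL, similar to the following (the ID here is illustrative):

```
https://oidc.eks.us-east-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE
```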
   251  
   252  #### Associate OIDC to cluster
   253  
   254  Configure the cluster with an OIDC provider and add support for [IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) ([IAM roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)).
   255  
   256  If you used `eksctl` to provision the EKS cluster, you can update it with the following command:
   257  
   258  ```bash
   259  eksctl utils associate-iam-oidc-provider \
   260    --cluster $EKS_CLUSTER_NAME --approve
   261  ```
   262  
   263  If the cluster was provisioned with Terraform, you can use the `iam_openid_connect_provider` resource ([ref](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_openid_connect_provider)) to associate to the OIDC provider.
   264  
   265  #### Create an IAM role bound to a service account
   266  
For the next steps in this process, we will need to associate the `external-dns` service account with a role that grants access to Route53.  This requires the following steps:
   268  
   269  1. Create a role with a trust relationship to the cluster's OIDC provider
   270  2. Attach the `AllowExternalDNSUpdates` policy to the role
   271  3. Create the `external-dns` service account
   272  4. Add annotation to the service account with the role arn
   273  
   274  ##### Use eksctl with eksctl created EKS cluster
   275  
   276  If `eksctl` was used to provision the EKS cluster, you can perform all of these steps with the following command:
   277  
   278  ```bash
   279  eksctl create iamserviceaccount \
   280    --cluster $EKS_CLUSTER_NAME \
   281    --name "external-dns" \
   282    --namespace ${EXTERNALDNS_NS:-"default"} \
   283    --attach-policy-arn $POLICY_ARN \
   284    --approve
   285  ```
   286  
   287  ##### Use aws cli with any EKS cluster
   288  
   289  Otherwise, we can do the following steps using `aws` commands (also see [Creating an IAM role and policy for your service account](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html)):
   290  
   291  ```bash
   292  ACCOUNT_ID=$(aws sts get-caller-identity \
   293    --query "Account" --output text)
   294  OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME \
   295    --query "cluster.identity.oidc.issuer" --output text | sed -e 's|^https://||')
   296  
   297  cat <<-EOF > trust.json
   298  {
   299      "Version": "2012-10-17",
   300      "Statement": [
   301          {
   302              "Effect": "Allow",
   303              "Principal": {
   304                  "Federated": "arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER"
   305              },
   306              "Action": "sts:AssumeRoleWithWebIdentity",
   307              "Condition": {
   308                  "StringEquals": {
   309                      "$OIDC_PROVIDER:sub": "system:serviceaccount:${EXTERNALDNS_NS:-"default"}:external-dns",
   310                      "$OIDC_PROVIDER:aud": "sts.amazonaws.com"
   311                  }
   312              }
   313          }
   314      ]
   315  }
   316  EOF
   317  
   318  IRSA_ROLE="external-dns-irsa-role"
   319  aws iam create-role --role-name $IRSA_ROLE --assume-role-policy-document file://trust.json
   320  aws iam attach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN
   321  
   322  ROLE_ARN=$(aws iam get-role --role-name $IRSA_ROLE --query Role.Arn --output text)
   323  
# Create service account (skip if already created)
   325  kubectl create serviceaccount "external-dns" --namespace ${EXTERNALDNS_NS:-"default"}
   326  
   327  # Add annotation referencing IRSA role
   328  kubectl patch serviceaccount "external-dns" --namespace ${EXTERNALDNS_NS:-"default"} --patch \
   329   "{\"metadata\": { \"annotations\": { \"eks.amazonaws.com/role-arn\": \"$ROLE_ARN\" }}}"
   330  ```
   331  
If any part of this step is misconfigured, such as a role with the incorrect namespace configured in the trust relationship, an annotation pointing to the wrong role, etc., you will see errors like `WebIdentityErr: failed to retrieve credentials`. Check the configuration and make corrections.
   333  
When the service account annotations are updated, the currently running pods will have to be terminated, so that new pod(s) with the proper configuration (environment variables) will be created automatically.
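
One way to do this is with a rollout restart (assuming ExternalDNS runs as a `Deployment` named `external-dns`):

```bash
kubectl rollout restart deployment external-dns --namespace ${EXTERNALDNS_NS:-"default"}
```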
   335  
Once the annotation is added to the service account, newly scheduled ExternalDNS pod(s) will have the `AWS_ROLE_ARN`, `AWS_STS_REGIONAL_ENDPOINTS`, and `AWS_WEB_IDENTITY_TOKEN_FILE` environment variables injected automatically.
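
You can spot-check the injected variables on a running pod (a quick sanity check; assumes the `Deployment` name `external-dns`):

```bash
kubectl exec deploy/external-dns --namespace ${EXTERNALDNS_NS:-"default"} -- env | grep ^AWS
```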
   337  
   338  #### Deploy ExternalDNS using IRSA
   339  
   340  Follow the steps under [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled).  Make sure to comment out the service account section if this has been created already.
   341  
If you deployed ExternalDNS before adding the service account annotation and the corresponding role, you will likely see errors like `failed to list hosted zones: AccessDenied: User`.  You can delete the currently running ExternalDNS pod(s) after updating the annotation, so that newly scheduled pods will have the appropriate configuration to access Route53.
   343  
   344  
   345  ## Set up a hosted zone
   346  
*If you prefer to try out ExternalDNS in one of your existing hosted zones, you can skip this step.*
   348  
   349  Create a DNS zone which will contain the managed DNS records.  This tutorial will use the fictional domain of `example.com`.
   350  
   351  ```bash
   352  aws route53 create-hosted-zone --name "example.com." \
   353    --caller-reference "external-dns-test-$(date +%s)"
   354  ```
   355  
   356  Make a note of the nameservers that were assigned to your new zone.
   357  
   358  ```bash
   359  ZONE_ID=$(aws route53 list-hosted-zones-by-name --output json \
   360    --dns-name "example.com." --query HostedZones[0].Id --out text)
   361  
   362  aws route53 list-resource-record-sets --output text \
   363   --hosted-zone-id $ZONE_ID --query \
   364   "ResourceRecordSets[?Type == 'NS'].ResourceRecords[*].Value | []" | tr '\t' '\n'
   365  ```
   366  
This should yield something similar to this:
   368  
   369  ```
   370  ns-695.awsdns-22.net.
   371  ns-1313.awsdns-36.org.
   372  ns-350.awsdns-43.com.
   373  ns-1805.awsdns-33.co.uk.
   374  ```
   375  
If you are using your own domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values from the list above.  Please consult your registrar's documentation on how to do that.
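
Once the delegation has propagated, you can confirm it by querying the NS records through public resolvers (substitute your own domain):

```bash
dig +short NS example.com.
```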
   377  
   378  ## Deploy ExternalDNS
   379  
   380  Connect your `kubectl` client to the cluster you want to test ExternalDNS with.
Then apply one of the following manifest files to deploy ExternalDNS. You can check whether your cluster has RBAC with `kubectl api-versions | grep rbac.authorization.k8s.io`.
   382  
For clusters with RBAC enabled, be sure to choose the correct `namespace`.  For this tutorial, the environment variable `EXTERNALDNS_NS` will refer to the namespace.  You can set this to a value of your choice:
   384  
   385  ```bash
   386  export EXTERNALDNS_NS="default" # externaldns, kube-addons, etc
   387  
   388  # create namespace if it does not yet exist
   389  kubectl get namespaces | grep -q $EXTERNALDNS_NS || \
   390    kubectl create namespace $EXTERNALDNS_NS
   391  ```
   392  
   393  ### Manifest (for clusters without RBAC enabled)
   394  
Save the following as `externaldns-no-rbac.yaml`.
   396  
   397  ```yaml
   398  apiVersion: apps/v1
   399  kind: Deployment
   400  metadata:
   401    name: external-dns
   402    labels:
   403      app.kubernetes.io/name: external-dns
   404  spec:
   405    strategy:
   406      type: Recreate
   407    selector:
   408      matchLabels:
   409        app.kubernetes.io/name: external-dns
   410    template:
   411      metadata:
   412        labels:
   413          app.kubernetes.io/name: external-dns
   414      spec:
   415        containers:
   416          - name: external-dns
   417            image: registry.k8s.io/external-dns/external-dns:v0.14.0
   418            args:
   419              - --source=service
   420              - --source=ingress
   421              - --domain-filter=example.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
   422              - --provider=aws
   423              - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
   424              - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
   425              - --registry=txt
   426              - --txt-owner-id=my-hostedzone-identifier
   427            env:
   428              - name: AWS_DEFAULT_REGION
   429                value: us-east-1 # change to region where EKS is installed
            # Uncomment below if using static credentials
            # - name: AWS_SHARED_CREDENTIALS_FILE
            #   value: /.aws/credentials
          # volumeMounts:
          #   - name: aws-credentials
          #     mountPath: /.aws
          #     readOnly: true
      # volumes:
      #   - name: aws-credentials
      #     secret:
      #       secretName: external-dns
   441  ```
   442  
When ready, you can deploy:
   444  
   445  ```bash
   446  kubectl create --filename externaldns-no-rbac.yaml \
   447    --namespace ${EXTERNALDNS_NS:-"default"}
   448  ```
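
Then verify that the pod is up and running (the label below matches the manifest above):

```bash
kubectl get pods --namespace ${EXTERNALDNS_NS:-"default"} \
  --selector app.kubernetes.io/name=external-dns
```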
   449  
   450  ### Manifest (for clusters with RBAC enabled)
   451  
Save the following as `externaldns-with-rbac.yaml`.
   453  
   454  ```yaml
   455  # comment out sa if it was previously created
   456  apiVersion: v1
   457  kind: ServiceAccount
   458  metadata:
   459    name: external-dns
   460    labels:
   461      app.kubernetes.io/name: external-dns
   462  ---
   463  apiVersion: rbac.authorization.k8s.io/v1
   464  kind: ClusterRole
   465  metadata:
   466    name: external-dns
   467    labels:
   468      app.kubernetes.io/name: external-dns
   469  rules:
   470    - apiGroups: [""]
   471      resources: ["services","endpoints","pods","nodes"]
   472      verbs: ["get","watch","list"]
   473    - apiGroups: ["extensions","networking.k8s.io"]
   474      resources: ["ingresses"]
   475      verbs: ["get","watch","list"]
   476  ---
   477  apiVersion: rbac.authorization.k8s.io/v1
   478  kind: ClusterRoleBinding
   479  metadata:
   480    name: external-dns-viewer
   481    labels:
   482      app.kubernetes.io/name: external-dns
   483  roleRef:
   484    apiGroup: rbac.authorization.k8s.io
   485    kind: ClusterRole
   486    name: external-dns
   487  subjects:
   488    - kind: ServiceAccount
   489      name: external-dns
   490      namespace: default # change to desired namespace: externaldns, kube-addons
   491  ---
   492  apiVersion: apps/v1
   493  kind: Deployment
   494  metadata:
   495    name: external-dns
   496    labels:
   497      app.kubernetes.io/name: external-dns
   498  spec:
   499    strategy:
   500      type: Recreate
   501    selector:
   502      matchLabels:
   503        app.kubernetes.io/name: external-dns
   504    template:
   505      metadata:
   506        labels:
   507          app.kubernetes.io/name: external-dns
   508      spec:
   509        serviceAccountName: external-dns
   510        containers:
   511          - name: external-dns
   512            image: registry.k8s.io/external-dns/external-dns:v0.14.0
   513            args:
   514              - --source=service
   515              - --source=ingress
   516              - --domain-filter=example.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
   517              - --provider=aws
   518              - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
   519              - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
   520              - --registry=txt
   521              - --txt-owner-id=external-dns
   522            env:
   523              - name: AWS_DEFAULT_REGION
   524                value: us-east-1 # change to region where EKS is installed
            # Uncomment below if using static credentials
            # - name: AWS_SHARED_CREDENTIALS_FILE
            #   value: /.aws/credentials
          # volumeMounts:
          #   - name: aws-credentials
          #     mountPath: /.aws
          #     readOnly: true
      # volumes:
      #   - name: aws-credentials
      #     secret:
      #       secretName: external-dns
   536  ```
   537  
When ready, deploy:
   539  
   540  ```bash
   541  kubectl create --filename externaldns-with-rbac.yaml \
   542    --namespace ${EXTERNALDNS_NS:-"default"}
   543  ```
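
Then check the logs to confirm that ExternalDNS can reach Route53; errors such as `AccessDenied` indicate a permissions problem with one of the methods above:

```bash
kubectl logs --namespace ${EXTERNALDNS_NS:-"default"} deployment/external-dns
```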
   544  
   545  ## Arguments
   546  
This is not the full list, but a few arguments that were chosen.
   548  
   549  ### aws-zone-type
   550  
`aws-zone-type` allows filtering for private and public zones.
   552  
   553  ## Annotations
   554  
   555  Annotations which are specific to AWS.
   556  
   557  ### alias
   558  
`external-dns.alpha.kubernetes.io/alias`: if set to `true` on an ingress, ExternalDNS will create an ALIAS record when the target is an ALIAS as well. To make the target an alias, the ingress needs to be configured correctly as described in [the docs](./nginx-ingress.md#with-a-separate-tcp-load-balancer). In particular, the argument `--publish-service=default/nginx-ingress-controller` has to be set on the `nginx-ingress-controller` container. If one uses the `nginx-ingress` Helm chart, this flag can be set with the `controller.publishService.enabled` configuration option.
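
For example, to set the annotation on an existing ingress with `kubectl` (the ingress name `nginx` is illustrative):

```bash
kubectl annotate ingress nginx "external-dns.alpha.kubernetes.io/alias=true"
```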
   560  
   561  ### target-hosted-zone
   562  
   563  `external-dns.alpha.kubernetes.io/aws-target-hosted-zone` can optionally be set to the ID of a Route53 hosted zone. This will force external-dns to use the specified hosted zone when creating an ALIAS target.
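
For example (the zone ID below is a placeholder for the canonical hosted zone of your target):

```bash
kubectl annotate service nginx \
  "external-dns.alpha.kubernetes.io/aws-target-hosted-zone=Z0000000000AAAAAAAAAA"
```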
   564  
   565  ### aws-zone-match-parent

`aws-zone-match-parent` allows support for subdomains within the same zone by using their parent domain, e.g. `--domain-filter=x.example.com` would create a DNS entry for `x.example.com` (and subdomains thereof).
   567  
```bash
   569  ## hosted zone domain: example.com
   570  --domain-filter=x.example.com,example.com
   571  --aws-zone-match-parent
   572  ```
   573  
   574  ## Verify ExternalDNS works (Service example)
   575  
   576  Create the following sample application to test that ExternalDNS works.
   577  
   578  > For services ExternalDNS will look for the annotation `external-dns.alpha.kubernetes.io/hostname` on the service and use the corresponding value.
   579  
> If you want to give multiple names to the service, set the `external-dns.alpha.kubernetes.io/hostname` annotation to a comma-separated list of hostnames.
   581  
For this verification phase, you can use the default namespace or another namespace for the nginx demo, for example:
   583  
   584  ```bash
   585  NGINXDEMO_NS="nginx"
   586  kubectl get namespaces | grep -q $NGINXDEMO_NS || kubectl create namespace $NGINXDEMO_NS
   587  ```
   588  
Save the following manifest as `nginx.yaml`:
   590  
   591  ```yaml
   592  apiVersion: v1
   593  kind: Service
   594  metadata:
   595    name: nginx
   596    annotations:
   597      external-dns.alpha.kubernetes.io/hostname: nginx.example.com
   598  spec:
   599    type: LoadBalancer
   600    ports:
   601    - port: 80
   602      name: http
   603      targetPort: 80
   604    selector:
   605      app: nginx
   606  ---
   607  apiVersion: apps/v1
   608  kind: Deployment
   609  metadata:
   610    name: nginx
   611  spec:
   612    selector:
   613      matchLabels:
   614        app: nginx
   615    template:
   616      metadata:
   617        labels:
   618          app: nginx
   619      spec:
   620        containers:
   621        - image: nginx
   622          name: nginx
   623          ports:
   624          - containerPort: 80
   625            name: http
   626  ```
   627  
   628  Deploy the nginx deployment and service with:
   629  
   630  ```bash
   631  kubectl create --filename nginx.yaml --namespace ${NGINXDEMO_NS:-"default"}
   632  ```
   633  
   634  Verify that the load balancer was allocated with:
   635  
   636  ```bash
   637  kubectl get service nginx --namespace ${NGINXDEMO_NS:-"default"}
   638  ```
   639  
   640  This should show something like:
   641  
   642  ```bash
   643  NAME    TYPE           CLUSTER-IP     EXTERNAL-IP                                                                   PORT(S)        AGE
   644  nginx   LoadBalancer   10.100.47.41   ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.   80:32749/TCP   12m
   645  ```
   646  
After roughly two minutes, check that a corresponding DNS record for your service was created:
   648  
   649  ```bash
   650  aws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \
   651    --query "ResourceRecordSets[?Name == 'nginx.example.com.']|[?Type == 'A']"
   652  ```
   653  
   654  This should show something like:
   655  
   656  ```json
   657  [
   658      {
   659          "Name": "nginx.example.com.",
   660          "Type": "A",
   661          "AliasTarget": {
   662              "HostedZoneId": "ZEWFWZ4R16P7IB",
   663              "DNSName": "ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.",
   664              "EvaluateTargetHealth": true
   665          }
   666      }
   667  ]
   668  ```
   669  
   670  You can also fetch the corresponding text records:
   671  
   672  ```bash
   673  aws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \
   674    --query "ResourceRecordSets[?Name == 'nginx.example.com.']|[?Type == 'TXT']"
   675  ```
   676  
   677  This will show something like:
   678  
   679  ```json
   680  [
   681      {
   682          "Name": "nginx.example.com.",
   683          "Type": "TXT",
   684          "TTL": 300,
   685          "ResourceRecords": [
   686              {
   687                  "Value": "\"heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=service/default/nginx\""
   688              }
   689          ]
   690      }
   691  ]
   692  ```
   693  
   694  Note created TXT record alongside ALIAS record. TXT record signifies that the corresponding ALIAS record is managed by ExternalDNS. This makes ExternalDNS safe for running in environments where there are other records managed via other means.
   695  
For more information about ALIAS records, see [Choosing between alias and non-alias records](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html).
   697  
   698  Let's check that we can resolve this DNS name. We'll ask the nameservers assigned to your zone first.
   699  
   700  ```bash
   701  dig +short @ns-5514.awsdns-53.org. nginx.example.com.
   702  ```
   703  
This should return one or more IP addresses that correspond to the ELB FQDN, i.e. `ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.`.
   705  
Next, try the public nameservers configured by the DNS client on your system:
   707  
   708  ```bash
   709  dig +short nginx.example.com.
   710  ```
   711  
If you hooked up your DNS zone with its parent zone correctly, you can use `curl` to access your site:
   713  
   714  ```bash
   715  curl nginx.example.com.
   716  ```
   717  
   718  This should show something like:
   719  
   720  ```html
   721  <!DOCTYPE html>
   722  <html>
   723  <head>
   724  <title>Welcome to nginx!</title>
   725  ...
   726  </head>
   727  <body>
   728  <h1>Welcome to nginx!</h1>
   729  ...
   730  </body>
   731  </html>
   732  ```
   733  
   734  ## Verify ExternalDNS works (Ingress example)
   735  
   736  With the previous `deployment` and `service` objects deployed, we can add an `ingress` object and configure a FQDN value for the `host` key.  The ingress controller will match incoming HTTP traffic, and route it to the appropriate backend service based on the `host` key.
   737  
   738  > For ingress objects ExternalDNS will create a DNS record based on the host specified for the ingress object.
   739  
For this tutorial, we have two endpoints: the service with `LoadBalancer` type and an ingress.  For practical purposes, if an ingress is used, the service type can be changed to `ClusterIP`, as two endpoints are unnecessary in this scenario.
   741  
   742  **IMPORTANT**: This requires that an ingress controller has been installed in your Kubernetes cluster.  EKS does not come with an ingress controller by default.  A popular ingress controller is [ingress-nginx](https://github.com/kubernetes/ingress-nginx/), which can be installed by a [helm chart](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx) or by [manifests](https://kubernetes.github.io/ingress-nginx/deploy/#aws).
   743  
   744  Create an ingress resource manifest file named `ingress.yaml` with the contents below:
   745  
   746  ```yaml
   747  ---
   748  apiVersion: networking.k8s.io/v1
   749  kind: Ingress
   750  metadata:
   751    name: nginx
   752  spec:
   753    ingressClassName: nginx
   754    rules:
   755      - host: server.example.com
   756        http:
   757          paths:
   758            - backend:
   759                service:
   760                  name: nginx
   761                  port:
   762                    number: 80
   763              path: /
   764              pathType: Prefix
   765  ```
   766  
   767  When ready, you can deploy this with:
   768  
   769  ```bash
   770  kubectl create --filename ingress.yaml --namespace ${NGINXDEMO_NS:-"default"}
   771  ```
   772  
   773  Watch the status of the ingress until the ADDRESS field is populated.
   774  
   775  ```bash
   776  kubectl get ingress --watch --namespace ${NGINXDEMO_NS:-"default"}
   777  ```
   778  
   779  You should see something like this:
   780  
   781  ```
   782  NAME    CLASS    HOSTS                ADDRESS   PORTS   AGE
   783  nginx   <none>   server.example.com             80      47s
   784  nginx   <none>   server.example.com   ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.   80      54s
   785  ```
   786  
   787  
For the ingress test, run through similar checks, but use the domain name configured for the ingress:
   789  
   790  ```bash
   791  # check records on route53
   792  aws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \
   793    --query "ResourceRecordSets[?Name == 'server.example.com.']"
   794  
   795  # query using a route53 name server
   796  dig +short @ns-5514.awsdns-53.org. server.example.com.
   797  # query using the default name server
   798  dig +short server.example.com.
   799  
   800  # connect to the nginx web server through the ingress
   801  curl server.example.com.
   802  ```
   803  
   804  ## More service annotation options
   805  
   806  ### Custom TTL
   807  
   808  The default DNS record TTL (Time-To-Live) is 300 seconds. You can customize this value by setting the annotation `external-dns.alpha.kubernetes.io/ttl`.
For example, modify the service manifest YAML file above:
   810  
   811  ```yaml
   812  apiVersion: v1
   813  kind: Service
   814  metadata:
   815    name: nginx
   816    annotations:
   817      external-dns.alpha.kubernetes.io/hostname: nginx.example.com
   818      external-dns.alpha.kubernetes.io/ttl: "60"
   819  spec:
   820      ...
   821  ```
   822  
   823  This will set the DNS record's TTL to 60 seconds.
   824  
   825  ### Routing policies
   826  
   827  Route53 offers [different routing policies](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html). The routing policy for a record can be controlled with the following annotations:
   828  
   829  * `external-dns.alpha.kubernetes.io/set-identifier`: this **needs** to be set to use any of the following routing policies
   830  
For any given DNS name, only **one** of the following routing policies can be used (see the example after this list):
   832  
* Weighted records: `external-dns.alpha.kubernetes.io/aws-weight`
* Latency-based routing: `external-dns.alpha.kubernetes.io/aws-region`
* Failover: `external-dns.alpha.kubernetes.io/aws-failover`
* Geolocation-based routing:
  * `external-dns.alpha.kubernetes.io/aws-geolocation-continent-code`
  * `external-dns.alpha.kubernetes.io/aws-geolocation-country-code`
  * `external-dns.alpha.kubernetes.io/aws-geolocation-subdivision-code`
* Multi-value answer: `external-dns.alpha.kubernetes.io/aws-multi-value-answer`
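
For example, a sketch of weighted routing across two services, where the service names and weights are illustrative; each record for the same DNS name needs a distinct `set-identifier`:

```bash
kubectl annotate service nginx-blue \
  "external-dns.alpha.kubernetes.io/set-identifier=blue" \
  "external-dns.alpha.kubernetes.io/aws-weight=100"
kubectl annotate service nginx-green \
  "external-dns.alpha.kubernetes.io/set-identifier=green" \
  "external-dns.alpha.kubernetes.io/aws-weight=0"
```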
   841  
   842  ### Associating DNS records with healthchecks
   843  
You can configure Route53 to associate DNS records with healthchecks for automated DNS failover using the
`external-dns.alpha.kubernetes.io/aws-health-check-id: <health-check-id>` annotation.
   846  
   847  Note: ExternalDNS does not support creating healthchecks, and assumes that `<health-check-id>` already exists.
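
For example (the health check ID is a placeholder for one you created beforehand):

```bash
kubectl annotate service nginx \
  "external-dns.alpha.kubernetes.io/aws-health-check-id=abcdef11-2222-3333-4444-555555fedcba"
```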
   848  
   849  ## Canonical Hosted Zones
   850  
   851  When creating ALIAS type records in Route53 it is required that external-dns be aware of the canonical hosted zone in which
   852  the specified hostname is created. External-dns is able to automatically identify the canonical hosted zone for many
   853  hostnames based upon known hostname suffixes which are defined in [aws.go](../../provider/aws/aws.go). If a hostname
   854  does not have a known suffix then the suffix can be added into `aws.go` or the [target-hosted-zone annotation](#target-hosted-zone)
   855  can be used to manually define the ID of the canonical hosted zone.
   856  
   857  ## Govcloud caveats
   858  
Due to the special nature of how Route53 runs in GovCloud, a few tweaks to the deployment settings are needed.
   860  
* An environment variable named `AWS_REGION` set to either `us-gov-west-1` or `us-gov-east-1` is required. Otherwise ExternalDNS tries to look up a region that does not exist in GovCloud and errors out.
   862  
   863  ```yaml
   864  env:
   865  - name: AWS_REGION
   866    value: us-gov-west-1
   867  ```
   868  
* Route53 in GovCloud does not allow aliases. Therefore, container args must be set so that CNAME records are used instead, and a `txt-prefix` must be set. Otherwise, ExternalDNS will try to create a TXT record with the same value as the CNAME itself, which is not allowed.
   870  
   871  ```yaml
   872  args:
   873  - --aws-prefer-cname
   874  - --txt-prefix={{ YOUR_PREFIX }}
   875  ```
   876  
* The first two changes are needed if you use Route53 in GovCloud, which only supports private zones. There is also no cross-account IAM whatsoever between GovCloud and commercial AWS accounts. If services and ingresses need to create Route 53 entries in a public zone in a commercial account, you will have to set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables with a key and secret for a commercial account that has sufficient rights.
   878  
   879  ```yaml
   880  env:
   881  - name: AWS_ACCESS_KEY_ID
   882    value: XXXXXXXXX
   883  - name: AWS_SECRET_ACCESS_KEY
   884    valueFrom:
   885      secretKeyRef:
   886        name: {{ YOUR_SECRET_NAME }}
   887        key: {{ YOUR_SECRET_KEY }}
   888  ```
   889  
   890  ## Clean up
   891  
   892  Make sure to delete all Service objects before terminating the cluster so all load balancers get cleaned up correctly.
   893  
   894  ```bash
   895  kubectl delete service nginx
   896  ```
   897  
**IMPORTANT**: If you attached a policy to the Node IAM Role, you will want to detach it before deleting the EKS cluster.  Otherwise, the role resource will be locked and the cluster cannot be deleted, especially if it was provisioned by automation like `terraform` or `eksctl`.
   899  
   900  ```bash
   901  aws iam detach-role-policy --role-name $NODE_ROLE_NAME --policy-arn $POLICY_ARN
   902  ```
   903  
   904  If the cluster was provisioned using `eksctl`, you can delete the cluster with:
   905  
   906  ```bash
   907  eksctl delete cluster --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION
   908  ```
   909  
Give ExternalDNS some time to clean up the DNS records for you. Then delete the hosted zone if you created one for testing purposes.
   911  
   912  ```bash
aws route53 delete-hosted-zone --id $ZONE_ID # e.g. /hostedzone/ZEWFWZ4R16P7IB
   914  ```
   915  
   916  If IAM user credentials were used, you can remove the user with:
   917  
   918  ```bash
   919  aws iam detach-user-policy --user-name "externaldns" --policy-arn $POLICY_ARN
   920  aws iam delete-user --user-name "externaldns"
   921  ```
   922  
   923  If IRSA was used, you can remove the IRSA role with:
   924  
   925  ```bash
   926  aws iam detach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN
   927  aws iam delete-role --role-name $IRSA_ROLE
   928  ```
   929  
   930  Delete any unneeded policies:
   931  
   932  ```bash
   933  aws iam delete-policy --policy-arn $POLICY_ARN
   934  ```
   935  
   936  ## Throttling
   937  
   938  Route53 has a [5 API requests per second per account hard quota](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests-route-53).
Running several fast-polling ExternalDNS instances in a given account can easily hit that limit. Some ways to reduce the request rate include:
* Increase the polling loop's synchronization interval, at the possible cost of slower change propagation (but see `--events` below to reduce the impact).
   941    * `--interval=5m` (default `1m`)
   942  * Trigger the polling loop on changes to K8s objects, rather than only at `interval`, to have responsive updates with long poll intervals
   943    * `--events`
* Limit the [sources watched](https://github.com/kubernetes-sigs/external-dns/blob/master/pkg/apis/externaldns/types.go#L364) to specific types, namespaces, labels, or annotations when the `--events` flag is specified
   945    * `--source=ingress --source=service` - specify multiple times for multiple sources
   946    * `--namespace=my-app`
   947    * `--label-filter=app in (my-app)`
   948    * `--ingress-class=nginx-external`
   949  * Limit services watched by type (not applicable to ingress or other types)
  * `--service-type-filter=LoadBalancer` (default `all`)
   951  * Limit the hosted zones considered
   952    * `--zone-id-filter=ABCDEF12345678` - specify multiple times if needed
   953    * `--domain-filter=example.com` by domain suffix - specify multiple times if needed
   954    * `--regex-domain-filter=example*` by domain suffix but as a regex - overrides domain-filter
   955    * `--exclude-domains=ignore.this.example.com` to exclude a domain or subdomain
  * `--regex-domain-exclusion=ignore*` subtracts its matches from `regex-domain-filter`'s matches
   957    * `--aws-zone-type=public` only sync zones of this type `[public|private]`
   958    * `--aws-zone-tags=owner=k8s` only sync zones with this tag
   959  * If the list of zones managed by ExternalDNS doesn't change frequently, cache it by setting a TTL.
   960    * `--aws-zones-cache-duration=3h` (default `0` - disabled)
   961  * Increase the number of changes applied to Route53 in each batch
   962    * `--aws-batch-change-size=4000` (default `1000`)
   963  * Increase the interval between changes
   964    * `--aws-batch-change-interval=10s` (default `1s`)
* Introduce some jitter into the pod initialization, so that when multiple instances of ExternalDNS are updated at the same time they do not make their requests in the same second.
   966  
   967  A simple way to implement randomised startup is with an init container:
   968  
   969  ```
   970  ...
   971      spec:
   972        initContainers:
   973        - name: init-jitter
   974          image: registry.k8s.io/external-dns/external-dns:v0.14.0
   975          command:
   976          - /bin/sh
   977          - -c
   978          - 'FOR=$((RANDOM % 10))s;echo "Sleeping for $FOR";sleep $FOR'
   979        containers:
   980  ...
   981  ```
   982  
   983  ### EKS
   984  
   985  An effective starting point for EKS with an ingress controller might look like:
   986  
   987  ```bash
   988  --interval=5m
   989  --events
   990  --source=ingress
   991  --domain-filter=example.com
   992  --aws-zones-cache-duration=1h
   993  ```
   994  
   995  ### Batch size options
   996  
After external-dns generates all changes, it groups those changes into batches. Each change is validated against the batch-change-size limits. If at least one of those limits is out of range, the change is moved to a separate batch. If a change can't fit into any batch, *it will be skipped.*

There are 3 options to control batch size for the AWS provider:
   999  * Maximum amount of changes added to one batch
  1000    * `--aws-batch-change-size` (default `1000`)
  1001  * Maximum size of changes in bytes added to one batch
  1002    * `--aws-batch-change-size-bytes` (default `32000`)
  1003  * Maximum value count of changes added to one batch
  * `--aws-batch-change-size-values` (default `1000`)
  1005  
`--aws-batch-change-size` can be very useful for throttling purposes and can be set to any value.
  1007  
Default values for the flags `--aws-batch-change-size-bytes` and `--aws-batch-change-size-values` are taken from the [AWS documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests) for the Route53 API. **You should not change those values unless you really have to.**

Because those limits are in place, `--aws-batch-change-size` can be set to any value: even if your batch size is `4000` records, the change will be split into separate batches due to the bytes/values limits, and the apply request will finish without issues.
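
For example, a hedged throttling-oriented combination that only adjusts the batch count and interval, leaving the bytes/values limits at their API-mandated defaults:

```bash
--aws-batch-change-size=250
--aws-batch-change-interval=10s
```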