
# Upgrading to 0.3.0

In 0.3.0, the tagging scheme for identifying AWS resources changed. To avoid losing track of existing resources, a partial migration tool is included in 0.3.0's `clusterawsadm`.

The migration path is as follows:

1. `kubectl scale statefulset -n aws-provider-system aws-provider-controller-manager --replicas=0`
2. `clusterawsadm migrate -n CLUSTER_NAME 0.3.0`
3. Update the image for the aws-provider-controller-manager (see the sketch after this list)
4. `kubectl scale statefulset -n aws-provider-system aws-provider-controller-manager --replicas=1`
5. Wait ~2 minutes for all of the security group changes to settle
   - All of the nodes and control plane machines should have exactly one security group tagged with `kubernetes.io/cluster/<CLUSTER_NAME>=owned`: the new `CLUSTER_NAME-lb` group. A verification sketch follows this list.
6. Find the names of your controller-manager pods, and run `kubectl exec -n kube-system -it CONTROLLER_MANAGER_POD_NAME -- sh -c 'kill 1'` as a workaround for [kubernetes/kubernetes#77019](https://github.com/kubernetes/kubernetes/issues/77019) (a looped variant follows this list)
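One way to perform step 3 is with `kubectl set image`. This is a sketch, not the documented upgrade command: the container name (`manager`) and the image repository/tag below are assumptions, so substitute the values from your own manifests.

```bash
# Point the controller-manager StatefulSet at the 0.3.0 image.
# "manager" and the image reference are assumed names; check your
# StatefulSet spec for the actual container name and repository.
kubectl set image statefulset/aws-provider-controller-manager \
  -n aws-provider-system \
  manager=gcr.io/cluster-api-provider-aws/cluster-api-aws-controller:0.3.0
```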
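To verify the state described under step 5, one option is the AWS CLI. A minimal sketch, assuming your credentials and region are already configured and that the cluster's instances carry the `kubernetes.io/cluster/<CLUSTER_NAME>` tag (the Kubernetes AWS cloud provider requires it on nodes); replace `CLUSTER_NAME` with your cluster's name:

```bash
# Security groups carrying the cluster ownership tag. After the
# migration settles, only the CLUSTER_NAME-lb group should be listed.
aws ec2 describe-security-groups \
  --filters "Name=tag:kubernetes.io/cluster/CLUSTER_NAME,Values=owned" \
  --query "SecurityGroups[].[GroupId,GroupName]" --output text

# Security groups attached to each cluster instance, to cross-check
# that every machine has exactly one of the tagged group IDs above.
aws ec2 describe-instances \
  --filters "Name=tag:kubernetes.io/cluster/CLUSTER_NAME,Values=owned" \
  --query "Reservations[].Instances[].[InstanceId,SecurityGroups[].GroupId]" \
  --output text
```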
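For step 6, the per-pod `kubectl exec` can be wrapped in a small loop. A sketch assuming kubeadm-style pod names (`kube-controller-manager-<node>`); the `-it` flags from the interactive command are dropped since no TTY is needed here:

```bash
# Kill PID 1 in every kube-controller-manager pod so each one restarts
# and re-reads the security groups (workaround for
# kubernetes/kubernetes#77019).
for pod in $(kubectl get pods -n kube-system -o name | grep kube-controller-manager); do
  kubectl exec -n kube-system "$pod" -- sh -c 'kill 1'
done
```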