<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>

SPDX-License-Identifier: CC-BY-4.0
-->

# Identity Based Authentication for Crossplane Providers

- Owner: Alper Rifat Uluçınar (@ulucinar)
- Reviewers: Crossplane Maintainers
- Status: Draft

## Background

Crossplane providers need to authenticate themselves to their respective Cloud
providers. This establishes an identity for the Crossplane provider that's
later used by the Cloud provider to authorize the requests made by the
Crossplane provider, and for various other purposes such as audit logging. Each
Crossplane provider supports a subset of the underlying Cloud provider's
authentication mechanisms, and this subset is currently implemented in-tree:
in the Crossplane provider's repo, there exists a CRD conventionally named
`ProviderConfig`, and each managed resource of the provider has a
[v1.Reference](https://docs.crossplane.io/v1.12/concepts/managed-resources/#providerconfigref)
to a `ProviderConfig` CR. This `ProviderConfig` holds the authentication
configuration (the chosen authentication method, any required credentials for
that method, etc.) together with any other provider-specific configuration.
Different authentication methods and/or different sets of credentials can be
configured using separate cluster-scoped `ProviderConfig` CRs and by having
different managed resources refer to these `ProviderConfig` instances.
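
For illustration, a managed resource selects its authentication configuration
solely through this reference. A minimal sketch, assuming `provider-aws`'s
`VPC` managed resource (the kind, API group and `forProvider` fields vary per
provider):

```yaml
apiVersion: ec2.aws.crossplane.io/v1beta1
kind: VPC
metadata:
  name: example-vpc
spec:
  forProvider:
    region: us-east-1
    cidrBlock: 10.0.0.0/16
  # All Cloud API requests issued for this resource are authenticated with
  # the configuration held by the referenced cluster-scoped ProviderConfig.
  providerConfigRef:
    name: default
```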

The Crossplane provider establishes an identity for the requests it will issue
to the Cloud provider in the
[managed.ExternalConnecter](https://pkg.go.dev/github.com/crossplane/crossplane-runtime@v0.19.2/pkg/reconciler/managed#ExternalConnecter)'s
`Connect` implementation. This involves calling the associated authentication
functions from the Cloud SDK libraries (such as the [AWS SDK for Go][aws-sdk]
or the [Azure SDK for Go][azure-sdk]) with the supplied configuration and
credentials from the referenced `ProviderConfig` instance.

Managed resources and `ProviderConfig`s are cluster-scoped, i.e., they do not
exist within a Kubernetes namespace but rather exist at the global (cluster)
scope. This does not fit well into a namespace-based multi-tenancy model, where
each tenant is confined to its own namespace. The cluster scope is shared
between all namespaces. In the namespace-based multi-tenancy model, the common
approach is to have Role-Based Access Control ([RBAC]) rules that disallow a
tenant from accessing API resources that do not reside in its namespace.
Another dimension to consider here is that all namespaced tenants are serviced
by a shared Crossplane provider deployment, typically running in the
`crossplane-system` namespace. This shared provider instance (or more
precisely, the [Kubernetes ServiceAccount][k8s-sa] that the provider's pod
uses) is allowed, via RBAC, to `get` the (cluster-scoped) `ProviderConfig`
resources. If tenant subjects (groups, users, ServiceAccounts) are allowed to
directly `create` managed resources, then we cannot constrain them from
referring to any `ProviderConfig` (thus to any Cloud provider credential set)
in the cluster solely using RBAC. This is because:

1. RBAC rules allow designated verbs (`get`, `list`, `create`, `update`, etc.)
   on the specified API resources for the specified subjects. If a subject,
   e.g., a `ServiceAccount`, is allowed to `create` a managed resource, RBAC
   alone cannot be used to constrain the set of `ProviderConfig`s that can be
   referenced by the created managed resource (see the sketch after this
   list).
1. The tenant subject itself does not directly access the `ProviderConfig`,
   and in turn the Cloud provider credential set referred to by the
   `ProviderConfig`. It's the Crossplane provider's `ServiceAccount` that
   accesses these resources, and as mentioned above, this `ServiceAccount`
   currently serves all tenants. This implies that we cannot isolate Cloud
   provider credentials among namespaced tenants by only using RBAC rules if
   we allow tenant subjects to have `edit` access (`create`, `update`,
   `patch`) to managed resources. Although it's possible to prevent them from
   reading other tenants' Cloud provider credentials via RBAC rules, it's not
   possible to prevent them from _using_ those credentials solely with RBAC.
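
A minimal sketch of the first point, again assuming `provider-aws`'s `VPC`
managed resource and a hypothetical `tenant1` role name: the following
`ClusterRole` grants the right to create VPCs, but RBAC offers no vocabulary
for restricting which `ProviderConfig` those VPCs may reference.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tenant1-vpc-editor
rules:
  - apiGroups: ["ec2.aws.crossplane.io"]
    resources: ["vpcs"]
    # RBAC constrains verbs on resource endpoints only; it cannot constrain
    # object contents such as spec.providerConfigRef.name.
    verbs: ["get", "list", "watch", "create", "update", "patch"]
```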

As discussed in detail in the
[Crossplane Multi-tenancy Guide](https://docs.crossplane.io/knowledge-base/guides/multi-tenant/),
Crossplane is opinionated about the different personas in an organization
adopting Crossplane. We make a distinction between the _infrastructure
operators_ (or _platform builders_), who are expected to manage cluster-scoped
resources (like `ProviderConfig`s, XRDs and `Composition`s), and _application
operators_, who are expected to consume the infrastructure for their
applications. Tenant subjects are classified as _application operators_, i.e.,
it's the infrastructure operator's responsibility to manage the infrastructure
_across_ the tenants via cluster-scoped Crossplane resources, and it's
possible, and desirable from an isolation perspective, to disallow application
operators, who are tenant subjects, from directly accessing these shared
cluster-scoped resources. This distinction is currently possible with
Crossplane because:

1. Crossplane `Claim` types are defined via cluster-scoped XRDs by
   infrastructure operators and _namespaced_ `Claim` instances are used by the
   tenant subjects. This allows infrastructure operators to define RBAC rules
   that allow tenant subjects to only access resources in their respective
   namespaces, e.g., `Claim`s.
1. However, item 1 is not sufficient on its own, as the scheme is still prone
   to privilege escalation attacks if the API exposed by the XR is not well
   designed. The (shared) provider `ServiceAccount` has access to all Cloud
   provider credentials in the cluster, and if the exposed XR API allows a
   `Claim` to reference cross-tenant `ProviderConfig`s, then a misbehaving
   tenant subject can `create` a `Claim` which references some other tenant's
   credential set. Thus in our multi-tenancy
   [guide](https://docs.crossplane.io/knowledge-base/guides/multi-tenant/), we
   propose a security scheme where:
   1. The infrastructure operator follows a specific naming convention for the
      `ProviderConfig`s she provisions: the `ProviderConfig`s for different
      tenants are named after those tenants' namespaces.
   2. The infrastructure operator carefully designs `Composition`s that patch
      `spec.providerConfigRef` of composed resources using the `Claim`'s
      namespace (see the sketch after this list).
   3. Tenant subjects are **not** allowed to provision managed resources
      directly (and also XRDs or `Composition`s) but only `Claim`s in their
      namespaces. And any `Composition` they can select with their `Claim`s
      will compose resources that refer to a `ProviderConfig` provisioned for
      their tenant (the `ProviderConfig` with the same name as the tenant's
      namespace).
   4. We also suggest that the naming conventions imposed by this scheme on
      `ProviderConfig`s can be relaxed to some degree by using `Composition`'s
      [patching capabilities](https://docs.crossplane.io/v1.12/concepts/composition/#compositions).
      For instance, a string [transform][patch-transform] of type `Format` can
      be used to combine the `Claim`'s namespace with an XR field's value to
      allow multiple `ProviderConfig`s per tenant and to allow selection of
      the `ProviderConfig` with the `Claim`.
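
A minimal sketch of such a `Composition` patch, assuming the composed resource
is a `provider-aws` `VPC`. It relies on Crossplane propagating the `Claim`'s
namespace to the `crossplane.io/claim-namespace` label on the XR; the
commented variant shows how a string transform of type `Format` could relax
the naming convention (the `-default` suffix is hypothetical):

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: claimresourcegroups.example.org
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1
    kind: CompositeResourceGroup
  resources:
    - name: vpc
      base:
        apiVersion: ec2.aws.crossplane.io/v1beta1
        kind: VPC
        spec:
          forProvider:
            region: us-east-1
            cidrBlock: 10.0.0.0/16
      patches:
        # The Claim's namespace is propagated to this label on the XR, so the
        # composed resource always refers to the tenant's own ProviderConfig.
        - fromFieldPath: metadata.labels[crossplane.io/claim-namespace]
          toFieldPath: spec.providerConfigRef.name
        # Variant: allow multiple ProviderConfigs per tenant by formatting
        # the namespace into a prefix, e.g., "tenant1-default":
        # - fromFieldPath: metadata.labels[crossplane.io/claim-namespace]
        #   toFieldPath: spec.providerConfigRef.name
        #   transforms:
        #     - type: string
        #       string:
        #         type: Format
        #         fmt: "%s-default"
```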

As explained above, RBAC rules can only impose restrictions on the actions
(`get`, `update`, etc.) performed on the API resource endpoints, but they
cannot impose constraints on the API resources themselves (the objects)
available at these endpoints. Thus, we also discuss using one of the available
policy engines that can run integrated with the Kubernetes API server to
further impose restrictions on the resources. For example, the following
[kyverno] [policy][kyverno-policy] prevents a tenant subject
(`tenant1/user1`) from specifying, in its `Claim`s, any `ProviderConfig` name
that does not have the prefix `tenant1` (_please do not use this example
policy in production environments, as it has a security vulnerability that we
will discuss shortly_):

```yaml
# XRD
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: compositeresourcegroups.example.org
spec:
  group: example.org
  names:
    kind: CompositeResourceGroup
    plural: compositeresourcegroups
  claimNames:
    kind: ClaimResourceGroup
    plural: claimresourcegroups
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                name:
                  type: string
                providerConfigName:
                  type: string
              required:
                - name

---
# kyverno ClusterPolicy
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant1
spec:
  validationFailureAction: enforce
  background: false
  rules:
    - name: check-for-providerconfig-ref
      match:
        any:
          - resources:
              kinds:
                # G/V/K for the Claim type
                - example.org/v1alpha1/ClaimResourceGroup
            subjects:
              - kind: User
                name: tenant1/user1
      validate:
        message:
          "Only ProviderConfig names that have the prefix tenant1 are allowed
          for users under tenant1"
        pattern:
          spec:
            providerConfigName: tenant1*

---
# related patch in a Composition
patches:
  - fromFieldPath: spec.providerConfigName
    toFieldPath: spec.providerConfigRef.name
```

### Limitations of Naming Convention-based or Admission Controller-based Approaches

The naming convention-based or admission controller-based approaches described
above are not straightforward to configure, especially if you also consider
that, in addition to the RBAC configurations needed to isolate the tenants
(restricting access to the cluster-wide resources), resource quotas and
network policies are also needed to properly isolate and fairly distribute the
worker node resources and the network resources, respectively. Also, due to
the associated complexity, it's easy to misconfigure the cluster and difficult
to verify that a given security configuration guarantees proper isolation
between the tenants.

As an example, consider the Kyverno `ClusterPolicy` given above: while the
intent is to restrict the users under `tenant1` to using only the
`ProviderConfig`s installed for them (e.g., those with names `tenant1*`), the
scheme is broken if there exists a tenant in the system with `tenant1` as a
prefix of its name, such as `tenant10`.

Organizations, especially those with hard multi-tenancy requirements (i.e.,
with tenants assumed to be untrustworthy or actively malicious), may disfavor
or strictly forbid such approaches. The architectural problem here, from a
security perspective, is that the Crossplane provider (and also the core
Crossplane components) is a shared resource itself, and it requires
cross-tenant privileges such as accessing cluster-wide resources and accessing
each tenant's namespaced resources (especially tenant Cloud credentials). This
increases the attack surface along the following dimensions:

- Logical vulnerabilities (see the above example for a misconfiguration)
- Isolation vulnerabilities: For instance, controller *workqueue*s become
  shared resources between the tenants. How can we ensure, for instance, that
  the workqueue capacity is fairly shared between the tenants?
- Code vulnerabilities: As an example, consider a hypothetical Crossplane
  provider bug in which the provider fetches a different `ProviderConfig` than
  the one declared in the managed resource, or other credentials than the ones
  declared in the referenced `ProviderConfig`. Although the logical barriers
  enforced by the `Composition`s or the admission controllers as described
  above are not broken, the overly privileged provider itself breaks the
  cross-tenant barrier.

In the current Crossplane provider deployment model, when a Crossplane
provider package is installed, there can be a single _active_
`ProviderRevision` associated with it, which owns (via an owner reference) the
Kubernetes deployment for running the provider. This single deployment, in
turn, specifies a single Kubernetes service account under which the provider
runs.

Apart from the vulnerability perspective, this architecture also has some
other limitations related to identity-based authentication.

> [!NOTE]
> The [multi-tenancy guide](https://docs.crossplane.io/knowledge-base/guides/multi-tenant/)
> also mentions multi-cluster multi-tenancy, where tenants are run on their
> respective Kubernetes clusters. This form of multi-tenancy is out of scope
> in this document.

### Identity-based Authentication Schemes

Various Cloud providers, such as AWS, Azure and GCP, have some means of
identity-based authentication. With identity-based authentication, an entity
such as a Cloud service (a database server, a Kubernetes cluster, etc.) or a
workload (an executable running in a VM, a pod running in a Kubernetes
cluster) is assigned a Cloud identity, and further authorization checks are
performed against this identity. The advantage of identity-based
authentication is that no manually provisioned credentials are required.

The traditional way of authenticating a Crossplane provider to the Cloud
provider is to first provision a Cloud identity, such as an AWS IAM user, a
GCP service account or an Azure AD service principal, and a set of credentials
associated with that identity (such as an AWS access key, a GCP service
account key or an Azure client ID & secret), and then to provision a
Kubernetes secret containing these credentials. A `ProviderConfig` then refers
to this Kubernetes secret. There are some undesirable consequences of this
flow:

- The associated Cloud credentials are generally long-term credentials and
  require manual rotation.
- For fine-grained access control, you need multiple identities with such
  credentials to be manually managed & rotated.
- These burdens generally result in the reuse of such credentials, which in
  turn prevents fine-grained access control and promotes aggregation of
  privileges.

Different Cloud providers have different identity-based authentication
implementations:

**AWS**: [EKS node IAM roles][aws-eks-node-iam] and IAM roles for service
accounts ([IRSA]) both allow for identity-based authentication. IRSA has
eliminated the need for some third-party solutions such as [kiam] or
[kube2iam], and it associates an IAM role with a Kubernetes service account.
Using IRSA for authenticating `provider-aws` is [possible][provider-aws-irsa].
IRSA leverages the [service account token volume
projection][k8s-sa-projection] support introduced with Kubernetes 1.12. When
enabled, `kubelet` [projects][k8s-volume-projection] a signed OIDC JWT for a
pod's service account at the requested volume mount path in a container and
periodically rotates the token. An AWS client can then exchange this token
(issued by the API server) for _temporary_ credentials of an IAM role via the
AWS Security Token Service ([STS]) [AssumeRoleWithWebIdentity] API operation.
The IAM role to be associated with the Kubernetes service account can be
specified via an annotation on the service account
(`eks.amazonaws.com/role-arn`). As we will discuss later, this can also be
used in conjunction with IAM role chaining to implement fine-grained access
control.
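
For illustration, under IRSA the association is expressed as an annotation on
the provider's Kubernetes service account; a minimal sketch (the account ID
and role name are hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: provider-aws
  namespace: crossplane-system
  annotations:
    # The IAM role whose temporary credentials are obtained by exchanging the
    # projected service account token via STS AssumeRoleWithWebIdentity.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/crossplane-provider-aws
```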

As of this writing, `provider-aws` [supports][provider-aws-auth] `IRSA`, role
chaining (via the [STS] [AssumeRole] API operation), and the [STS
AssumeRoleWithWebIdentity][AssumeRoleWithWebIdentity] API operation. This
allows us to authenticate `provider-aws` using the projected service account
token by exchanging it for a set of temporary credentials associated with an
IAM role. This set of temporary credentials consists of an access key ID, a
secret access key and a security token. Also, the target IAM role ARN (Amazon
Resource Name) is configurable via `provider-aws`'s `ProviderConfig` API. This
allows Crossplane users to implement a fine-grained access policy for
different tenants, possibly using different AWS accounts (see the sketch after
the following list):

- The initial IAM role, which is the target IAM role for the `IRSA`
  authentication (via the `AssumeRoleWithWebIdentity` STS API operation), does
  not need privileges on the managed external resources when role chaining is
  used.
- `provider-aws` then assumes another IAM role by exchanging the initial set
  of temporary credentials via STS role chaining. However, currently the
  `ProviderConfig` API does not allow chains of length greater than one, i.e.,
  `provider-aws` can only call the STS `AssumeRole` API once in a given chain.
  This is currently an artificial limitation in `provider-aws` imposed by the
  `ProviderConfig` API.
- The target role ARN for the initial IRSA `AssumeRoleWithWebIdentity`
  operation is configurable via the `ProviderConfig` API. Thus, if a proper
  cross-AWS-account trust policy exists between the EKS cluster's OIDC
  provider and a target IAM role in a different account (than the account
  owning the EKS cluster and the OIDC provider), then it's possible to switch
  to an IAM role in that target AWS account.
- Privileges on the managed external resources need to be defined on the
  target IAM roles of the STS `Assume*` operations. And as mentioned,
  fine-grained access policies can be defined on these target roles, which are
  configurable with the `ProviderConfig` API.
- When combined with the already available single-cluster multi-tenancy
  techniques discussed above, this allows `provider-aws` users to isolate
  their tenant identities and the privileges required for those identities.
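
A minimal `ProviderConfig` sketch of this flow; the field names are modeled on
`provider-aws`'s `ProviderConfig` API at the time of writing (see
[AUTHENTICATION.md][provider-aws-auth] for the authoritative shape), and the
role ARNs are hypothetical:

```yaml
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: tenant1-default
spec:
  credentials:
    # Exchange the projected service account token for temporary credentials
    # of the initial IAM role (STS AssumeRoleWithWebIdentity).
    source: WebIdentity
    webIdentity:
      roleARN: arn:aws:iam::111111111111:role/initial-role
  # Chain (STS AssumeRole) into the possibly cross-account role that holds
  # the actual privileges on the managed external resources.
  assumeRole:
    roleARN: arn:aws:iam::222222222222:role/tenant1-role
```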

From the relevant discussions on `provider-aws` surveyed for this writing,
this level of tenant isolation has mostly been sufficient for `provider-aws`
users. But as discussed above, a deeper isolation is still possible.
Especially in the currently feasible `provider-aws` authentication scheme, the
initial `AssumeRoleWithWebIdentity` target IAM role is still shared by the
tenants, although it does not require privileges on the managed external
resources. But due to the vulnerabilities discussed in the
[Limitations of Naming Convention-based or Admission Controller-based Approaches](#limitations-of-naming-convention-based-or-admission-controller-based-approaches)
section above, it could still be possible for a tenant to assume an IAM role
with more privileges than it needs, starting with the shared
`AssumeRoleWithWebIdentity` target IAM role. A deeper isolation between
tenants would be possible if each tenant could be assigned its own Kubernetes
service account and an associated (initial) non-shared IAM role.

As of this writing, `provider-jet-aws` supports IRSA authentication with
support for role chaining via the STS `AssumeRole` API operation. Similar to
`provider-aws`, only chains of length `1` are allowed. Also, `provider-jet-aws`
does not currently support specifying the target `AssumeRoleWithWebIdentity`
IAM role via the `ProviderConfig` API. And unlike `provider-aws`,
`provider-jet-aws` does not support specifying external IDs, session tags or
transitive tag keys for the `AssumeRole` operation, or specifying session
names for the `AssumeRoleWithWebIdentity` operation.

**Azure**: Azure has the notion of system-assigned or user-assigned [managed
identities][azure-msi], which allow authentication to any resource that
supports Azure AD authentication. Some Azure services, such as AKS, allow a
managed identity to be enabled directly on a service's instance
(system-assigned), or a user-assigned managed identity can be provisioned and
assigned to the service instance. Similar to AWS IRSA, Azure has also
introduced [Azure AD workload identities][azure-wi]:

|                                                      |
| :--------------------------------------------------: |
|   <img src="images/azure-wi.png" alt="drawing" />    |
| Azure AD Workload Identities (reproduced from [[1]]) |

In Azure AD workload identities, similar to IRSA, a Kubernetes service account
is associated with an Azure AD application client ID via the
`azure.workload.identity/client-id` annotation on the service account object.
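
A minimal sketch of that association (the client ID is hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: provider-azure
  namespace: crossplane-system
  annotations:
    # The Azure AD application (client) ID whose identity the workload
    # exchanges its projected service account token for.
    azure.workload.identity/client-id: 00000000-1111-2222-3333-444444444444
```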

As of this writing, neither `provider-azure` nor `provider-jet-azure` supports
Azure AD workload identities. The Terraform-native `azurerm` provider itself
currently does _not_ support workload identities, thus there are technical
challenges if we would like to introduce support for workload identities in
`provider-jet-azure`. However, using lower-level APIs (than the
[Azure Identity SDK for Go][azidentity]), it should be possible to
[implement][azure-329] workload identities for `provider-azure`.

Both `provider-azure` and `provider-jet-azure` support system-assigned and
user-assigned managed identities as an alternate form of identity-based
authentication (with `provider-azure` support being introduced by this
[PR][azure-330]).

Using system-assigned managed identities, it's _not_ possible to implement an
isolation between tenants (see the discussion above for `provider-aws`) by
using separate Azure AD (AAD) applications (service principals) for them,
because the system-assigned managed identity is shared between those tenants
and currently it's not possible to switch identities within the Crossplane
Azure providers\*. However, using user-assigned managed identities and
per-tenant `ProviderConfig`s as discussed above in the context of
single-cluster multi-tenancy, it's possible to implement fine-grained access
control for tenants, again with the same limitations mentioned there.

\*: Whether there exists an Azure service (similar to AWS [STS]) that allows
us to exchange the credentials of one AAD application for (temporary)
credentials of another AAD application needs further investigation.

**GCP**: GCP also [recommends][gcp-wi] workload identities for assigning
identities to workloads running in GKE clusters. With GKE workload identities,
a Kubernetes service account is associated with a GCP IAM service account. And
similar to AWS and Azure, GCP also uses an annotation on the Kubernetes
service account object (`iam.gke.io/gcp-service-account`), which specifies the
GCP service account to be impersonated.
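
A minimal sketch of that association (the project and service account names
are hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: provider-gcp
  namespace: crossplane-system
  annotations:
    # The GCP IAM service account impersonated by this Kubernetes service
    # account under GKE workload identity.
    iam.gke.io/gcp-service-account: provider-gcp@my-project.iam.gserviceaccount.com
```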

As of this writing, both `provider-gcp` and `provider-jet-gcp` support
workload identities, which are based on Kubernetes service accounts similar to
AWS IRSA and Azure AD workload identities. Thus, the current implementations
share the same limitations detailed in
[Limitations of Naming Convention-based or Admission Controller-based Approaches](#limitations-of-naming-convention-based-or-admission-controller-based-approaches).

**Summary for the existing Crossplane AWS, Azure & GCP providers**:

In all three Kubernetes workload identity schemes introduced above, a
Kubernetes service account is mapped to a Cloud provider identity (an IAM
role/service account, an AD application, etc.). And as explained in depth
above, the current Crossplane provider deployment model allows the provider to
run under only a single Kubernetes service account.

Users of `provider-aws` have so far combined [IRSA] with AWS STS role chaining
(the `AssumeRoleWithWebIdentity` and `AssumeRole` STS API operations) to meet
their organizational requirements around least privilege and fine-grained
access control, and they have isolated their tenants sharing the same
Crossplane control plane using the single-cluster multi-tenancy techniques
described above. However, lacking similar "role chaining" semantics, users of
AKS and GKE workload identities cannot, to the best of our knowledge,
implement similarly fine-grained access control scenarios, because the
Crossplane provider runs as a single Kubernetes deployment, which in turn is
associated with a single Kubernetes service account. And for `provider-aws`
users who would like to have stricter tenant isolation, we need more
flexibility in the Crossplane deployment model.

## Decoupling Crossplane Provider Deployment

Flexibility in Crossplane provider deployment has been discussed especially in
[[2]] and [[3]]. [[2]] proposes a provider partitioning scheme on
`ProviderConfig`s, and [[3]] calls for a _Provider Runtime Interface_ for
decoupling the runtime aspects of a provider (where & how a provider is
deployed & run) from the core Crossplane package manager. We can combine these
two approaches to have an extensible, flexible and future-proof deployment
model for Crossplane providers that would also better meet the requirements
around tenant isolation. Instead of partitioning based on `ProviderConfig`s,
we could alternatively have an explicit partitioning API based on provider
runtime configurations specified in `Provider.pkg`s:

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: crossplane-provider-azure
spec:
  package: crossplane/provider-azure:v0.19.0
  ...
  runtimeConfigs:
  - name: deploy-1
    runtime:
      apiVersion: runtime.crossplane.io/v1alpha1
      kind: KubernetesDeployment
      spec:
        # ControllerConfig reference that defines the corresponding Kubernetes deployment
        controllerConfigRef:
          name: cc-1
  - name: deploy-2
    runtime:
      apiVersion: runtime.crossplane.io/v1alpha1
      kind: KubernetesDeployment
      spec:
        # ControllerConfig reference that defines the corresponding Kubernetes deployment
        controllerConfigRef:
          name: cc-2
  - name: container-1
    runtime:
      apiVersion: runtime.crossplane.io/v1alpha1
      kind: DockerContainer
      spec:
        # some Docker client options
        host: /var/run/docker.sock
        config: ...
        # some docker run options
        runOptions:
          user: ...
          network: ...
  - ...
```

In the proposed scheme, the `PackageRevision` controller would no longer
directly manage a Kubernetes deployment for the active revision. Instead, it
would provision, for the active revision, a number of Kubernetes resources
corresponding to each runtime configuration specified in the `runtimeConfigs`
array. For the above example, the `PackageRevision` controller would provision
two `KubernetesDeployment` and one `DockerContainer` _runtime configuration_
resources for the active revision. An example `KubernetesDeployment` object
provisioned by the `PackageRevision` controller could look like the following:

```yaml
apiVersion: runtime.crossplane.io/v1alpha1
kind: KubernetesDeployment
metadata:
  name: deploy-1
  ownerReferences:
    - apiVersion: pkg.crossplane.io/v1
      controller: true
      kind: ProviderRevision
      name: crossplane-provider-azure-91818efefdbe
      uid: 3a58c719-019f-43eb-b338-d6116e299974
spec:
  crossplaneProvider: crossplane/provider-azure-controller:v0.19.0
  # ControllerConfig reference that defines the corresponding Kubernetes deployment
  controllerConfigRef:
    name: cc-1
```

As an alternative, in order to deprecate the `ControllerConfig` API, the
`KubernetesDeployment` could also be defined as follows:

```yaml
---
runtimeConfigs:
  - name: deploy-1
    runtime:
      apiVersion: runtime.crossplane.io/v1alpha1
      kind: KubernetesDeployment
      spec:
        template:
          # metadata that defines the corresponding Kubernetes deployment's metadata
          metadata: ...
          # spec that defines the corresponding Kubernetes deployment's spec
          spec: ...
```

This scheme makes the runtime implementation pluggable, i.e., in different
environments we can have different _provider runtime configuration_
controllers running (as Kubernetes controllers) with different capabilities.
For instance, the existing deployment implementation embedded into the
`PackageRevision` controller can still be shipped with core Crossplane with a
corresponding runtime configuration object, while another runtime
configuration controller, also based on Kubernetes deployments, can implement
advanced isolation semantics.

[1]: https://azure.github.io/azure-workload-identity/docs/introduction.html
[2]: https://github.com/crossplane/crossplane/issues/2411
[3]: https://github.com/crossplane/crossplane/issues/2671
[aws-sdk]: https://github.com/aws/aws-sdk-go-v2
[azure-sdk]: https://github.com/Azure/azure-sdk-for-go
[RBAC]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[k8s-sa]:
  https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
[patch-transform]:
  https://github.com/crossplane/crossplane/blob/6c1b06507db47801c7a1c7d91704783e8d13856f/apis/apiextensions/v1/composition_transforms.go#L64
[kyverno]: https://kyverno.io/
[kyverno-policy]: https://kyverno.io/docs/kyverno-policies/
[aws-eks-node-iam]:
  https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
[IRSA]:
  https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
[kiam]: https://github.com/uswitch/kiam
[kube2iam]: https://github.com/jtblin/kube2iam
[provider-aws-auth]:
  https://github.com/crossplane/provider-aws/blob/36299026cd9435c260ad13b32223d2e5fef3c443/AUTHENTICATION.md
[provider-aws-irsa]:
  https://github.com/crossplane/provider-aws/blob/36299026cd9435c260ad13b32223d2e5fef3c443/AUTHENTICATION.md#using-iam-roles-for-serviceaccounts
[k8s-sa-projection]:
  https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection
[azure-msi]:
  https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
[azure-wi]:
  https://azure.github.io/azure-workload-identity/docs/introduction.html
[k8s-volume-projection]:
  https://kubernetes.io/docs/concepts/storage/projected-volumes/
[STS]: https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html
[AssumeRoleWithWebIdentity]:
  https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html
[AssumeRole]:
  https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
[gcp-wi]:
  https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
[azidentity]: https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/azidentity
[azure-329]: https://github.com/crossplane/provider-azure/issues/329
[azure-330]: https://github.com/crossplane/provider-azure/pull/330