Introducing New CRDs
====================

Cilium uses a combination of code generation tools to facilitate adding
CRDs to the Kubernetes instance it is installed on.

These CRDs are made available through the generated Kubernetes client
that Cilium uses.

Defining And Generating CRDs
----------------------------

Currently, two API versions exist: ``v2`` and ``v2alpha1``.

Paths:

::

   pkg/k8s/apis/cilium.io/v2/
   pkg/k8s/apis/cilium.io/v2alpha1/

CRDs are defined via Golang structures, annotated with ``marks``, and
generated with Cilium Makefile targets.

Marks
~~~~~

Marks are used to tell ``controller-gen`` *how* to generate the CRD.
This includes defining the CRD's various names (singular, plural,
group), its scope (Cluster, Namespaced), its short names, etc.

An example:

::

   // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

   // +kubebuilder:resource:categories={cilium},singular="ciliumendpointslice",path="ciliumendpointslices",scope="Cluster",shortName={ces}

   // +kubebuilder:storageversion

You can find CRD generation ``marks`` documentation
`here <https://book.kubebuilder.io/reference/markers/crd.html>`__.

Marks are also used to generate json-schema validation. You can define
validation criteria such as "format=cidr" and "required" via validation
``marks`` in your struct's comments.

An example:

.. code-block:: go

   type CiliumBGPPeeringConfiguration struct {
       // PeerAddress is the IP address of the peer.
       // This must be in CIDR notation and use a /32 to express
       // a single host.
       //
       // +kubebuilder:validation:Required
       // +kubebuilder:validation:Format=cidr
       PeerAddress string `json:"peerAddress"`
   }

You can find CRD validation ``marks`` documentation
`here <https://book.kubebuilder.io/reference/markers/crd-validation.html>`__.
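
For reference, marks like the two shown above render into the generated
CRD YAML as a json-schema fragment roughly like the following
(abbreviated and hand-written here for illustration; inspect the real
generated manifest for the exact output):

.. code-block:: yaml

   properties:
     peerAddress:
       format: cidr
       type: string
   required:
   - peerAddress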

Defining CRDs
~~~~~~~~~~~~~

Paths:

::

   pkg/k8s/apis/cilium.io/v2/
   pkg/k8s/apis/cilium.io/v2alpha1/

The portion of the directory after ``apis/`` makes up the CRD's
``Group`` and ``Version``; for example, types defined in
``pkg/k8s/apis/cilium.io/v2alpha1/`` belong to group ``cilium.io`` and
version ``v2alpha1``. See
`KubeBuilder-GVK <https://book.kubebuilder.io/cronjob-tutorial/gvks.html>`__.

You can begin defining your ``CRD`` structure, creating any subtypes you
need to adequately express your data model and using ``marks`` to control
the CRD generation process.

Here is a brief example, omitting the definitions of the sub-types that
make up the CRD data model:

.. code-block:: go

   // +genclient
   // +genclient:nonNamespaced
   // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
   // +kubebuilder:resource:categories={cilium,ciliumbgp},singular="ciliumbgppeeringpolicy",path="ciliumbgppeeringpolicies",scope="Cluster",shortName={bgpp}
   // +kubebuilder:printcolumn:JSONPath=".metadata.creationTimestamp",name="Age",type=date
   // +kubebuilder:storageversion

   // CiliumBGPPeeringPolicy is a Kubernetes third-party resource for instructing
   // Cilium's BGP control plane to create peers.
   type CiliumBGPPeeringPolicy struct {
       // +k8s:openapi-gen=false
       // +deepequal-gen=false
       metav1.TypeMeta `json:",inline"`
       // +k8s:openapi-gen=false
       // +deepequal-gen=false
       metav1.ObjectMeta `json:"metadata"`

       // Spec is a human readable description of a BGP peering policy
       //
       // +kubebuilder:validation:Required
       Spec CiliumBGPPeeringPolicySpec `json:"spec,omitempty"`
   }
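
Cilium's client code generation also expects a companion ``List`` type
for each CRD; note that the ``register.go`` diff further below registers
``CiliumBGPPeeringPolicyList`` alongside the policy type itself. A
minimal sketch following the same pattern (shown for illustration; check
the existing ``v2alpha1`` types for the exact marks used in-tree):

.. code-block:: go

   // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
   // +deepequal-gen=false

   // CiliumBGPPeeringPolicyList is a list of
   // CiliumBGPPeeringPolicy objects.
   type CiliumBGPPeeringPolicyList struct {
       // +k8s:openapi-gen=false
       metav1.TypeMeta `json:",inline"`
       // +k8s:openapi-gen=false
       metav1.ListMeta `json:"metadata"`

       // Items is a list of CiliumBGPPeeringPolicies.
       Items []CiliumBGPPeeringPolicy `json:"items"`
   }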

Integrating CRDs Into Cilium
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once you've coded your CRD data model, you can use Cilium's ``make``
infrastructure to generate and integrate your CRD into Cilium.

There are several make targets and a script which revolve around
generating the CRDs and their associated generated code (clients,
informers, ``DeepCopy`` implementations, ``DeepEqual`` implementations,
etc.).

Each of the next sections details the steps you should take to
integrate your CRD into Cilium.

Generating CRD YAML
~~~~~~~~~~~~~~~~~~~

To generate the CRDs and copy them into the correct location, you must
perform two tasks:

* Update the ``Makefile`` to edit the ``CRDS_CILIUM_V2`` or
  ``CRDS_CILIUM_V2ALPHA1`` variable (depending on the version of your new CRD)
  to contain the plural name of your new CRD, as sketched after this list.
* Run ``make manifests``

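For illustration, a hypothetical ``Makefile`` diff for a new ``v2alpha1``
CRD (the real variable lists every existing plural name, abbreviated
here):

.. code-block:: diff

   -CRDS_CILIUM_V2ALPHA1 := ciliumendpointslices
   +CRDS_CILIUM_V2ALPHA1 := ciliumendpointslices \
   +    ciliumbgppeeringpolicies
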
This will generate CRD manifests from your Golang structs and copy them
into the appropriate ``Version`` directory under
``./pkg/k8s/apis/cilium.io/client/crds/``.

You can inspect the generated ``CRDs`` to confirm they look OK.

Additionally, ``./contrib/scripts/check-k8s-code-gen.sh`` is a script
which will generate the CRD manifests along with the K8s API changes
necessary to use your CRDs via the K8s client in the Cilium source code.

Generating Client Code
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: shell-session

    make generate-k8s-api

This make target will perform the necessary code-gen to integrate your
CRD into Cilium's ``client-go`` client and create listers, watchers, and
informers.

Again, multiple steps must be taken to fully integrate your CRD into
Cilium.

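To illustrate what the generated code provides, here is a hedged sketch
of listing the new resources through the generated clientset (the import
path and accessor names follow the usual code-generator layout and may
differ in-tree):

.. code-block:: go

   import (
       "context"
       "fmt"

       metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

       ciliumclient "github.com/cilium/cilium/pkg/k8s/client/clientset/versioned"
   )

   func listBGPPolicies(cs ciliumclient.Interface) error {
       // The generated clientset exposes one typed accessor per
       // group/version; cluster-scoped resources take no namespace.
       policies, err := cs.CiliumV2alpha1().CiliumBGPPeeringPolicies().List(
           context.TODO(), metav1.ListOptions{})
       if err != nil {
           return err
       }
       for _, policy := range policies.Items {
           fmt.Println(policy.Name)
       }
       return nil
   }
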
Register With API Scheme
~~~~~~~~~~~~~~~~~~~~~~~~

Paths:

::

    pkg/k8s/apis/cilium.io/v2alpha1/register.go

Make a change similar to this diff to register your CRDs with the API
scheme.

.. code-block:: diff

   diff --git a/pkg/k8s/apis/cilium.io/v2alpha1/register.go b/pkg/k8s/apis/cilium.io/v2alpha1/register.go
   index 9650e32f8d..0d85c5a233 100644
   --- a/pkg/k8s/apis/cilium.io/v2alpha1/register.go
   +++ b/pkg/k8s/apis/cilium.io/v2alpha1/register.go
   @@ -55,6 +55,34 @@ const (

           // CESName is the full name of Cilium Endpoint Slice
           CESName = CESPluralName + "." + CustomResourceDefinitionGroup
   +
   +       // Cilium BGP Peering Policy (BGPP)
   +
   +       // BGPPPluralName is the plural name of Cilium BGP Peering Policy
   +       BGPPPluralName = "ciliumbgppeeringpolicies"
   +
   +       // BGPPKindDefinition is the kind name of Cilium BGP Peering Policy
   +       BGPPKindDefinition = "CiliumBGPPeeringPolicy"
   +
   +       // BGPPName is the full name of Cilium BGP Peering Policy
   +       BGPPName = BGPPPluralName + "." + CustomResourceDefinitionGroup
   +
   +       // Cilium BGP Load Balancer IP Pool (BGPPool)
   +
   +       // BGPPoolPluralName is the plural name of Cilium BGP Load Balancer IP Pool
   +       BGPPoolPluralName = "ciliumbgploadbalancerippools"
   +
   +       // BGPPoolKindDefinition is the kind name of Cilium BGP Load Balancer IP Pool
   +       BGPPoolKindDefinition = "CiliumBGPLoadBalancerIPPool"
   +
   +       // BGPPoolName is the full name of Cilium BGP Load Balancer IP Pool
   +       BGPPoolName = BGPPoolPluralName + "." + CustomResourceDefinitionGroup
    )

    // SchemeGroupVersion is group version used to register these objects
   @@ -102,6 +130,10 @@ func addKnownTypes(scheme *runtime.Scheme) error {
                   &CiliumEndpointSlice{},
                   &CiliumEndpointSliceList{},
   +               &CiliumBGPPeeringPolicy{},
   +               &CiliumBGPPeeringPolicyList{},
   +               &CiliumBGPLoadBalancerIPPool{},
   +               &CiliumBGPLoadBalancerIPPoolList{},
           )

           metav1.AddToGroupVersion(scheme, SchemeGroupVersion)

You should also bump the ``CustomResourceDefinitionSchemaVersion``
variable in ``register.go`` to instruct Cilium that new CRDs have been
added to the system.

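For example, a hedged sketch of that bump (the constant's actual current
value in ``register.go`` will differ; the value below is illustrative
only):

.. code-block:: go

   // CustomResourceDefinitionSchemaVersion is the semver-conformant version
   // of the CRD schema; Cilium compares it against the version recorded on
   // the CRDs already in the cluster to decide whether they must be updated.
   CustomResourceDefinitionSchemaVersion = "1.26.0"
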
Register With Client
~~~~~~~~~~~~~~~~~~~~

``pkg/k8s/apis/cilium.io/client/register.go``

Make a change similar to the following to register CRD types with the
client.

.. code-block:: diff

   diff --git a/pkg/k8s/apis/cilium.io/client/register.go b/pkg/k8s/apis/cilium.io/client/register.go
   index ede134d7d9..ec82169270 100644
   --- a/pkg/k8s/apis/cilium.io/client/register.go
   +++ b/pkg/k8s/apis/cilium.io/client/register.go
   @@ -60,6 +60,12 @@ const (

           // CESCRDName is the full name of the CES CRD.
           CESCRDName = k8sconstv2alpha1.CESKindDefinition + "/" + k8sconstv2alpha1.CustomResourceDefinitionVersion
   +
   +       // BGPPCRDName is the full name of the BGPP CRD.
   +       BGPPCRDName = k8sconstv2alpha1.BGPPKindDefinition + "/" + k8sconstv2alpha1.CustomResourceDefinitionVersion
   +
   +       // BGPPoolCRDName is the full name of the BGPPool CRD.
   +       BGPPoolCRDName = k8sconstv2alpha1.BGPPoolKindDefinition + "/" + k8sconstv2alpha1.CustomResourceDefinitionVersion
    )

    var (
   @@ -86,6 +92,7 @@ func CreateCustomResourceDefinitions(clientset apiextensionsclient.Interface) er
                   synced.CRDResourceName(k8sconstv2.CLRPName):       createCRD(CLRPCRDName, k8sconstv2.CLRPName),
                   synced.CRDResourceName(k8sconstv2.CEGPName):       createCRD(CEGPCRDName, k8sconstv2.CEGPName),
                   synced.CRDResourceName(k8sconstv2alpha1.CESName):  createCRD(CESCRDName, k8sconstv2alpha1.CESName),
   +               synced.CRDResourceName(k8sconstv2alpha1.BGPPName): createCRD(BGPPCRDName, k8sconstv2alpha1.BGPPName),
           }
           for _, r := range synced.AllCiliumCRDResourceNames() {
                   fn, ok := resourceToCreateFnMapping[r]
   @@ -127,6 +134,12 @@ var (

           //go:embed crds/v2alpha1/ciliumendpointslices.yaml
           crdsv2Alpha1Ciliumendpointslices []byte
   +
   +       //go:embed crds/v2alpha1/ciliumbgppeeringpolicies.yaml
   +       crdsv2Alpha1Ciliumbgppeeringpolicies []byte
   +
   +       //go:embed crds/v2alpha1/ciliumbgploadbalancerippools.yaml
   +       crdsv2Alpha1Ciliumbgploadbalancerippools []byte
    )

    // GetPregeneratedCRD returns the pregenerated CRD based on the requested CRD

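The same file's ``GetPregeneratedCRD`` function maps each CRD name to its
embedded YAML, so add cases for the new names there as well; roughly (a
sketch, assuming the in-tree switch over ``crdName``):

.. code-block:: go

   case BGPPCRDName:
       crdBytes = crdsv2Alpha1Ciliumbgppeeringpolicies
   case BGPPoolCRDName:
       crdBytes = crdsv2Alpha1Ciliumbgploadbalancerippools
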
``pkg/k8s/watchers/watcher.go``

Also, configure the watcher for this resource (or tell the agent not to
watch it):

.. code-block:: diff

   diff --git a/pkg/k8s/watchers/watcher.go b/pkg/k8s/watchers/watcher.go
   index eedf397b6b..8419eb90fd 100644
   --- a/pkg/k8s/watchers/watcher.go
   +++ b/pkg/k8s/watchers/watcher.go
   @@ -398,6 +398,7 @@ var ciliumResourceToGroupMapping = map[string]watcherInfo{
        synced.CRDResourceName(v2.CECName):           {afterNodeInit, k8sAPIGroupCiliumEnvoyConfigV2},
        synced.CRDResourceName(v2alpha1.BGPPName):    {skip, ""}, // Handled in BGP control plane
        synced.CRDResourceName(v2alpha1.BGPPoolName): {skip, ""}, // Handled in BGP control plane
   +     synced.CRDResourceName(v2.CCOName):           {skip, ""}, // Handled by init directly

Getting Your CRDs Installed
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Your new CRDs must be installed into Kubernetes. This is controlled in
the ``pkg/k8s/synced/crd.go`` file.

Here is an example diff which installs the CRDs ``v2alpha1.BGPPName``
and ``v2alpha1.BGPPoolName``:

.. code-block:: diff

   diff --git a/pkg/k8s/synced/crd.go b/pkg/k8s/synced/crd.go
   index 52d975c449..10c554cf8a 100644
   --- a/pkg/k8s/synced/crd.go
   +++ b/pkg/k8s/synced/crd.go
   @@ -42,6 +42,10 @@ func agentCRDResourceNames() []string {
                   CRDResourceName(v2.CCNPName),
                   CRDResourceName(v2.CNName),
                   CRDResourceName(v2.CIDName),
   +               // TODO(louis) make this a conditional install
   +               // based on --enable-bgp-control-plane flag
   +               CRDResourceName(v2alpha1.BGPPName),
   +               CRDResourceName(v2alpha1.BGPPoolName),
           }

Updating RBAC Roles
~~~~~~~~~~~~~~~~~~~

Cilium is installed with a service account, and this service account
should be given RBAC permissions to access your new CRDs. The following
files should be updated to include permissions to create, read, update,
and delete your new CRD.

::

   install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
   install/kubernetes/cilium/templates/cilium-operator/clusterrole.yaml
   install/kubernetes/cilium/templates/cilium-preflight/clusterrole.yaml

Here is a diff of updating the Agent's cluster role template to include
our new BGP CRDs:

.. code-block:: diff

   diff --git a/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml b/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
   index 9878401a81..5ba6c30cd7 100644
   --- a/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
   +++ b/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
   @@ -102,6 +102,8 @@ rules:
      - ciliumlocalredirectpolicies/finalizers
      - ciliumendpointslices
   +  - ciliumbgppeeringpolicies
   +  - ciliumbgploadbalancerippools
      verbs:
      - '*'
    {{- end }}

It's important to note that neither the Agent nor the Operator installs
these manifests into the Kubernetes cluster. This means that, when
testing your CRD out, the updated ``clusterrole`` must be written to the
cluster manually.

Also note that you should be specific about which ``verbs`` are granted
in the Agent's cluster role; this ensures a good security posture and
follows best practice.
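
For instance, a hypothetical rule granting only the verbs the Agent
actually needs for the new resources (adjust to your CRD's real access
pattern) might look like:

.. code-block:: yaml

   - apiGroups:
     - cilium.io
     resources:
     - ciliumbgppeeringpolicies
     - ciliumbgploadbalancerippools
     verbs:
     - get
     - list
     - watch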

A convenient script for this follows:

.. code-block:: bash

   createTemplate(){
       if [ -z "${1}" ]; then
           echo "Commit SHA not set"
           return
       fi
       CILIUM_CI_TAG="${1}"
       # MODIFY THIS LINE: cd to the root of your Cilium source tree.
       cd install/kubernetes
       helm template cilium ./cilium \
         --namespace kube-system \
         --set image.repository=quay.io/cilium/cilium-ci \
         --set image.tag=$CILIUM_CI_TAG \
         --set operator.image.repository=quay.io/cilium/operator \
         --set operator.image.suffix=-ci \
         --set operator.image.tag=$CILIUM_CI_TAG \
         --set clustermesh.apiserver.image.repository=quay.io/cilium/clustermesh-apiserver-ci \
         --set clustermesh.apiserver.image.tag=$CILIUM_CI_TAG \
         --set hubble.relay.image.repository=quay.io/cilium/hubble-relay-ci \
         --set hubble.relay.image.tag=$CILIUM_CI_TAG > /tmp/cilium.yaml
       echo "run kubectl apply -f /tmp/cilium.yaml"
   }

The above script will render Cilium's chart, including the newest
``clusterrole`` manifests, into ``/tmp/cilium.yaml``; applying that file
installs Cilium to wherever your ``kubectl`` context points.
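
Usage, assuming the function has been sourced into your shell (the
commit SHA argument is a placeholder):

.. code-block:: shell-session

   $ createTemplate <commit-sha>
   run kubectl apply -f /tmp/cilium.yaml
   $ kubectl apply -f /tmp/cilium.yaml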