k8s.io/kubernetes@v1.29.3/test/conformance/testdata/conformance.yaml

     1  - testname: Priority and Fairness FlowSchema API
     2    codename: '[sig-api-machinery] API priority and fairness should support FlowSchema
     3      API operations [Conformance]'
     4    description: ' The flowcontrol.apiserver.k8s.io API group MUST exist in the /apis
     5      discovery document. The flowcontrol.apiserver.k8s.io/v1 API group/version MUST
     6      exist in the /apis/flowcontrol.apiserver.k8s.io discovery document. The flowschemas
     7      and flowschemas/status resources MUST exist in the /apis/flowcontrol.apiserver.k8s.io/v1
     8      discovery document. The flowschema resource MUST support create, get, list, watch,
     9      update, patch, delete, and deletecollection.'
    10    release: v1.29
    11    file: test/e2e/apimachinery/flowcontrol.go
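
A minimal client-go sketch of the verbs this test exercises — illustrative only, not the test code; the FlowSchema name "example-fs" and the referenced "global-default" priority level are assumptions:

package main

import (
	"context"
	"fmt"

	flowcontrolv1 "k8s.io/api/flowcontrol/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default path; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	fsClient := cs.FlowcontrolV1().FlowSchemas()

	// Create — the name and priority level reference are illustrative.
	fs := &flowcontrolv1.FlowSchema{
		ObjectMeta: metav1.ObjectMeta{Name: "example-fs"},
		Spec: flowcontrolv1.FlowSchemaSpec{
			PriorityLevelConfiguration: flowcontrolv1.PriorityLevelConfigurationReference{Name: "global-default"},
		},
	}
	created, err := fsClient.Create(ctx, fs, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created:", created.Name)

	// Get, list, delete — the remaining verbs follow the same pattern.
	_, _ = fsClient.Get(ctx, "example-fs", metav1.GetOptions{})
	_, _ = fsClient.List(ctx, metav1.ListOptions{})
	_ = fsClient.Delete(ctx, "example-fs", metav1.DeleteOptions{})
}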
    12  - testname: Priority and Fairness PriorityLevelConfiguration API
    13    codename: '[sig-api-machinery] API priority and fairness should support PriorityLevelConfiguration
    14      API operations [Conformance]'
    15    description: ' The flowcontrol.apiserver.k8s.io API group MUST exist in the /apis
    16      discovery document. The flowcontrol.apiserver.k8s.io/v1 API group/version MUST
    17      exist in the /apis/flowcontrol.apiserver.k8s.io discovery document. The prioritylevelconfiguration
    18      and prioritylevelconfiguration/status resources MUST exist in the /apis/flowcontrol.apiserver.k8s.io/v1
    19      discovery document. The prioritylevelconfiguration resource MUST support create,
    20      get, list, watch, update, patch, delete, and deletecollection.'
    21    release: v1.29
    22    file: test/e2e/apimachinery/flowcontrol.go
    23  - testname: Admission webhook, list mutating webhooks
    24    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing
    25      mutating webhooks should work [Conformance]'
    26    description: Create 10 mutating webhook configurations, all with a label. Attempt
    27      to list the webhook configurations matching the label; all the created webhook
    28      configurations MUST be present. Attempt to create an object; the object MUST be
    29      mutated. Attempt to remove the webhook configurations matching the label with
    30      deletecollection; all webhook configurations MUST be deleted. Attempt to create
    31      an object; the object MUST NOT be mutated.
    32    release: v1.16
    33    file: test/e2e/apimachinery/webhook.go
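
A sketch of the list-by-label and deletecollection flow this test describes, assuming a *kubernetes.Clientset built as in the FlowSchema sketch above; the label selector value is illustrative:

// List and bulk-delete mutating webhook configurations by label.
func listAndDeleteMutatingWebhooks(ctx context.Context, cs *kubernetes.Clientset) error {
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=true"} // illustrative label
	hooks := cs.AdmissionregistrationV1().MutatingWebhookConfigurations()

	list, err := hooks.List(ctx, sel)
	if err != nil {
		return err
	}
	fmt.Printf("found %d matching configurations\n", len(list.Items))

	// deletecollection removes everything matching the selector in one call.
	return hooks.DeleteCollection(ctx, metav1.DeleteOptions{}, sel)
}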
    34  - testname: Admission webhook, list validating webhooks
    35    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing
    36      validating webhooks should work [Conformance]'
    37    description: Create 10 validating webhook configurations, all with a label. Attempt
    38      to list the webhook configurations matching the label; all the created webhook
    39      configurations MUST be present. Attempt to create an object; the create MUST be
    40      denied. Attempt to remove the webhook configurations matching the label with deletecollection;
    41      all webhook configurations MUST be deleted. Attempt to create an object; the create
    42      MUST NOT be denied.
    43    release: v1.16
    44    file: test/e2e/apimachinery/webhook.go
    45  - testname: Admission webhook, update mutating webhook
    46    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating
    47      a mutating webhook should work [Conformance]'
    48    description: Register a mutating admission webhook configuration. Update the webhook
    49      to not apply to the create operation and attempt to create an object; the webhook
    50      MUST NOT mutate the object. Patch the webhook to apply to the create operation
    51      again and attempt to create an object; the webhook MUST mutate the object.
    52    release: v1.16
    53    file: test/e2e/apimachinery/webhook.go
    54  - testname: Admission webhook, update validating webhook
    55    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating
    56      a validating webhook should work [Conformance]'
    57    description: Register a validating admission webhook configuration. Update the webhook
    58      to not apply to the create operation and attempt to create an object; the webhook
    59      MUST NOT deny the create. Patch the webhook to apply to the create operation again
    60      and attempt to create an object; the webhook MUST deny the create.
    61    release: v1.16
    62    file: test/e2e/apimachinery/webhook.go
    63  - testname: Admission webhook, deny attach
    64    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
    65      be able to deny attaching pod [Conformance]'
    66    description: Register an admission webhook configuration that denies connecting
    67      to a pod's attach sub-resource. Attempts to attach MUST be denied.
    68    release: v1.16
    69    file: test/e2e/apimachinery/webhook.go
    70  - testname: Admission webhook, deny custom resource create and delete
    71    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
    72      be able to deny custom resource creation, update and deletion [Conformance]'
    73    description: Register an admission webhook configuration that denies creation, update
    74      and deletion of custom resources. Attempts to create, update and delete custom
    75      resources MUST be denied.
    76    release: v1.16
    77    file: test/e2e/apimachinery/webhook.go
    78  - testname: Admission webhook, deny create
    79    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
    80      be able to deny pod and configmap creation [Conformance]'
    81    description: Register an admission webhook configuration that admits pods and configmaps.
    82      Attempts to create non-compliant pods and configmaps, or to update/patch compliant
    83      pods and configmaps to be non-compliant, MUST be denied. An attempt to create a
    84      pod that causes a webhook to hang MUST result in a webhook timeout error, and
    85      the pod creation MUST be denied. An attempt to create a non-compliant configmap
    86      in a whitelisted namespace based on the webhook namespace selector MUST be allowed.
    87    release: v1.16
    88    file: test/e2e/apimachinery/webhook.go
    89  - testname: Admission webhook, deny custom resource definition
    90    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
    91      deny crd creation [Conformance]'
    92    description: Register a webhook that denies custom resource definition create. Attempt
    93      to create a custom resource definition; the create request MUST be denied.
    94    release: v1.16
    95    file: test/e2e/apimachinery/webhook.go
    96  - testname: Admission webhook, honor timeout
    97    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
    98      honor timeout [Conformance]'
    99    description: Using a webhook that waits 5 seconds before admitting objects, configure
   100      the webhook with combinations of timeouts and failure policy values. Attempt to
   101      create a config map with each combination. Requests MUST timeout if the configured
   102      webhook timeout is less than 5 seconds and failure policy is fail. Requests MUST
   103      NOT timeout if the failure policy is ignore. Requests MUST NOT timeout if configured
   104      webhook timeout is 10 seconds (much longer than the webhook wait duration).
   105    release: v1.16
   106    file: test/e2e/apimachinery/webhook.go
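
A sketch of the webhook shape the timeout/failure-policy matrix exercises; the service reference, path, and CA bundle are placeholder assumptions (import admissionregistrationv1 "k8s.io/api/admissionregistration/v1"):

// Build a validating webhook wired for a given timeout and failure policy.
func timeoutWebhook(timeoutSeconds int32, policy admissionregistrationv1.FailurePolicyType) *admissionregistrationv1.ValidatingWebhookConfiguration {
	none := admissionregistrationv1.SideEffectClassNone
	path := "/always-slow" // hypothetical backend that sleeps 5s before answering
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "sample-timeout-webhook"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "slow.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default", Name: "slow-webhook", Path: &path, // placeholders
				},
				CABundle: []byte("<ca-bundle>"), // placeholder
			},
			// A timeout under 5s with policy Fail means requests time out;
			// with policy Ignore they must not.
			TimeoutSeconds:          &timeoutSeconds,
			FailurePolicy:           &policy,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}

The object would be created via cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create, as in the earlier sketches.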
   107  - testname: Admission webhook, discovery document
   108    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
   109      include webhook resources in discovery documents [Conformance]'
   110    description: The admissionregistration.k8s.io API group MUST exist in the /apis
   111      discovery document. The admissionregistration.k8s.io/v1 API group/version MUST
   112      exist in the /apis discovery document. The mutatingwebhookconfigurations and
   113      validatingwebhookconfigurations resources MUST exist in the /apis/admissionregistration.k8s.io/v1
   114      discovery document.
   115    release: v1.16
   116    file: test/e2e/apimachinery/webhook.go
   117  - testname: Admission webhook, ordered mutation
   118    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
   119      mutate configmap [Conformance]'
   120    description: Register a mutating webhook configuration with two webhooks that admit
   121      configmaps, one that adds a data key if the configmap already has a specific key,
   122      and another that adds a key if the key added by the first webhook is present.
   123      Attempt to create a config map; both keys MUST be added to the config map.
   124    release: v1.16
   125    file: test/e2e/apimachinery/webhook.go
   126  - testname: Admission webhook, mutate custom resource
   127    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
   128      mutate custom resource [Conformance]'
   129    description: Register a webhook that mutates a custom resource. Attempt to create
   130      custom resource object; the custom resource MUST be mutated.
   131    release: v1.16
   132    file: test/e2e/apimachinery/webhook.go
   133  - testname: Admission webhook, mutate custom resource with different stored version
   134    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
   135      mutate custom resource with different stored version [Conformance]'
   136    description: Register a webhook that mutates custom resources on create and update.
   137      Register a custom resource definition using v1 as stored version. Create a custom
   138      resource. Patch the custom resource definition to use v2 as the stored version.
   139      Attempt to patch the custom resource with a new field and value; the patch MUST
   140      be applied successfully.
   141    release: v1.16
   142    file: test/e2e/apimachinery/webhook.go
   143  - testname: Admission webhook, mutate custom resource with pruning
   144    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
   145      mutate custom resource with pruning [Conformance]'
   146    description: Register mutating webhooks that add fields to custom objects. Register
   147      a custom resource definition with a schema that includes only one of the data
   148      keys added by the webhooks. Attempt to create a custom resource; the fields included
   149      in the schema MUST be present and fields not included in the schema MUST NOT be
   150      present.
   151    release: v1.16
   152    file: test/e2e/apimachinery/webhook.go
   153  - testname: Admission webhook, mutation with defaulting
   154    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
   155      mutate pod and apply defaults after mutation [Conformance]'
   156    description: Register a mutating webhook that adds an InitContainer to pods. Attempt
   157      to create a pod; the InitContainer MUST be added and the TerminationMessagePolicy
   158      MUST be defaulted.
   159    release: v1.16
   160    file: test/e2e/apimachinery/webhook.go
   161  - testname: Admission webhook, admission control not allowed on webhook configuration
   162      objects
   163    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
   164      not be able to mutate or prevent deletion of webhook configuration objects [Conformance]'
   165    description: Register webhooks that mutate and deny deletion of webhook configuration
   166      objects. Attempt to create and delete a webhook configuration object; both operations
   167      MUST be allowed and the webhook configuration object MUST NOT be mutated by the webhooks.
   168    release: v1.16
   169    file: test/e2e/apimachinery/webhook.go
   170  - testname: Admission webhook, fail closed
   171    codename: '[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should
   172      unconditionally reject operations on fail closed webhook [Conformance]'
   173    description: Register a webhook with a fail closed policy and without CA bundle
   174      so that it cannot be called. Attempt operations that require the admission webhook;
   175      all MUST be denied.
   176    release: v1.16
   177    file: test/e2e/apimachinery/webhook.go
   178  - testname: aggregator-supports-the-sample-apiserver
   179    codename: '[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample
   180      API Server using the current Aggregator [Conformance]'
   181    description: Ensure that the sample-apiserver code from 1.17 and compiled against
   182      1.17 will work on the current Aggregator/API-Server.
   183    release: v1.17, v1.21, v1.27
   184    file: test/e2e/apimachinery/aggregator.go
   185  - testname: Custom Resource Definition Conversion Webhook, convert mixed version list
   186    codename: '[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
   187      should be able to convert a non homogeneous list of CRs [Conformance]'
   188    description: Register a conversion webhook and a custom resource definition. Create
   189      a custom resource stored at v1. Change the custom resource definition storage
   190      to v2. Create a custom resource stored at v2. Attempt to list the custom resources
   191      at v2; the list result MUST contain both custom resources at v2.
   192    release: v1.16
   193    file: test/e2e/apimachinery/crd_conversion_webhook.go
   194  - testname: Custom Resource Definition Conversion Webhook, conversion custom resource
   195    codename: '[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
   196      should be able to convert from CR v1 to CR v2 [Conformance]'
   197    description: Register a conversion webhook and a custom resource definition. Create
   198      a v1 custom resource. Attempts to read it at v2 MUST succeed.
   199    release: v1.16
   200    file: test/e2e/apimachinery/crd_conversion_webhook.go
   201  - testname: Custom Resource Definition, watch
   202    codename: '[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
   203      CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]'
   204    description: Create a Custom Resource Definition. Attempt to watch it; the watch
   205      MUST observe create, modify and delete events.
   206    release: v1.16
   207    file: test/e2e/apimachinery/crd_watch.go
   208  - testname: Custom Resource Definition, create
   209    codename: '[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
   210      Simple CustomResourceDefinition creating/deleting custom resource definition objects
   211      works [Conformance]'
   212    description: Create an API extension client and define a random custom resource definition.
   213      Create the custom resource definition and then delete it. The creation and deletion
   214      MUST be successful.
   215    release: v1.9
   216    file: test/e2e/apimachinery/custom_resource_definition.go
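
A sketch of CRD create/delete with the apiextensions client; the group and kind names are illustrative (imports: apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1" and clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"):

// Create a CustomResourceDefinition, then delete it again.
func createAndDeleteCRD(ctx context.Context, ac *clientset.Clientset) error {
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // illustrative
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	if _, err := ac.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{}); err != nil {
		return err
	}
	return ac.ApiextensionsV1().CustomResourceDefinitions().Delete(ctx, crd.Name, metav1.DeleteOptions{})
}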
   217  - testname: Custom Resource Definition, status sub-resource
   218    codename: '[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
   219      Simple CustomResourceDefinition getting/updating/patching custom resource definition
   220      status sub-resource works [Conformance]'
   221    description: Create a custom resource definition. Attempt to read, update and patch
   222      its status sub-resource; all mutating sub-resource operations MUST be visible
   223      to subsequent reads.
   224    release: v1.16
   225    file: test/e2e/apimachinery/custom_resource_definition.go
   226  - testname: Custom Resource Definition, list
   227    codename: '[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
   228      Simple CustomResourceDefinition listing custom resource definition objects works
   229      [Conformance]'
   230    description: Create an API extension client, define 10 labeled custom resource definitions
   231      and list them using a label selector; the list result MUST contain only the labeled
   232      custom resource definitions. Delete the labeled custom resource definitions via
   233      delete collection; the delete MUST be successful and MUST delete only the labeled
   234      custom resource definitions.
   235    release: v1.16
   236    file: test/e2e/apimachinery/custom_resource_definition.go
   237  - testname: Custom Resource Definition, defaulting
   238    codename: '[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
   239      custom resource defaulting for requests and from storage works [Conformance]'
   240    description: Create a custom resource definition without a default. Create a CR. Add
   241      a default and read the CR until the default is applied. Create another CR. Remove
   242      the default, add a default for another field, and read the CR until the new field
   243      is defaulted but the old default remains.
   244    release: v1.17
   245    file: test/e2e/apimachinery/custom_resource_definition.go
   246  - testname: Custom Resource Definition, discovery
   247    codename: '[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
   248      should include custom resource definition resources in discovery documents [Conformance]'
   249    description: Fetch /apis, /apis/apiextensions.k8s.io, and /apis/apiextensions.k8s.io/v1
   250      discovery documents, and ensure they indicate CustomResourceDefinition apiextensions.k8s.io/v1
   251      resources are available.
   252    release: v1.16
   253    file: test/e2e/apimachinery/custom_resource_definition.go
   254  - testname: Custom Resource OpenAPI Publish, stop serving version
   255    codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
   256      removes definition from spec when one version gets changed to not be served [Conformance]'
   257    description: Register a custom resource definition with multiple versions. OpenAPI
   258      definitions MUST be published for custom resource definitions. Update the custom
   259      resource definition to not serve one of the versions. OpenAPI definitions MUST
   260      be updated to not contain the version that is no longer served.
   261    release: v1.16
   262    file: test/e2e/apimachinery/crd_publish_openapi.go
   263  - testname: Custom Resource OpenAPI Publish, version rename
   264    codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
   265      updates the published spec when one version gets renamed [Conformance]'
   266    description: Register a custom resource definition with multiple versions; OpenAPI
   267      definitions MUST be published for custom resource definitions. Rename one of the
   268      versions of the custom resource definition via a patch; OpenAPI definitions MUST
   269      update to reflect the rename.
   270    release: v1.16
   271    file: test/e2e/apimachinery/crd_publish_openapi.go
   272  - testname: Custom Resource OpenAPI Publish, with x-kubernetes-preserve-unknown-fields
   273      at root
   274    codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
   275      works for CRD preserving unknown fields at the schema root [Conformance]'
   276    description: Register a custom resource definition with x-kubernetes-preserve-unknown-fields
   277      in the schema root. Attempt to create and apply a change to a custom resource, via
   278      kubectl; kubectl validation MUST accept unknown properties. Attempt kubectl explain;
   279      the output MUST show the custom resource KIND.
   280    release: v1.16
   281    file: test/e2e/apimachinery/crd_publish_openapi.go
   282  - testname: Custom Resource OpenAPI Publish, with x-kubernetes-preserve-unknown-fields
   283      in embedded object
   284    codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
   285      works for CRD preserving unknown fields in an embedded object [Conformance]'
   286    description: Register a custom resource definition with x-kubernetes-preserve-unknown-fields
   287      in an embedded object. Attempt to create and apply a change to a custom resource,
   288      via kubectl; kubectl validation MUST accept unknown properties. Attempt kubectl
   289      explain; the output MUST show that x-preserve-unknown-properties is used on the
   290      nested field.
   291    release: v1.16
   292    file: test/e2e/apimachinery/crd_publish_openapi.go
   293  - testname: Custom Resource OpenAPI Publish, with validation schema
   294    codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
   295      works for CRD with validation schema [Conformance]'
   296    description: Register a custom resource definition with a validating schema consisting
   297      of objects, arrays and primitives. Attempt to create and apply a change to a custom
   298      resource using valid properties, via kubectl; kubectl validation MUST pass. Attempt
   299      both operations with unknown properties and without required properties; kubectl
   300      validation MUST reject the operations. Attempt kubectl explain; the output MUST
   301      explain the custom resource properties. Attempt kubectl explain on custom resource
   302      properties; the output MUST explain the nested custom resource properties. All
   303      validation should be the same.
   304    release: v1.16
   305    file: test/e2e/apimachinery/crd_publish_openapi.go
   306  - testname: Custom Resource OpenAPI Publish, with x-kubernetes-preserve-unknown-fields
   307      in object
   308    codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
   309      works for CRD without validation schema [Conformance]'
   310    description: Register a custom resource definition with x-kubernetes-preserve-unknown-fields
   311      in the top level object. Attempt to create and apply a change to a custom resource,
   312      via kubectl; kubectl validation MUST accept unknown properties. Attempt kubectl
   313      explain; the output MUST contain a valid DESCRIPTION stanza.
   314    release: v1.16
   315    file: test/e2e/apimachinery/crd_publish_openapi.go
   316  - testname: Custom Resource OpenAPI Publish, varying groups
   317    codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
   318      works for multiple CRDs of different groups [Conformance]'
   319    description: Register multiple custom resource definitions spanning different groups
   320      and versions; OpenAPI definitions MUST be published for custom resource definitions.
   321    release: v1.16
   322    file: test/e2e/apimachinery/crd_publish_openapi.go
   323  - testname: Custom Resource OpenAPI Publish, varying kinds
   324    codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
   325      works for multiple CRDs of same group and version but different kinds [Conformance]'
   326    description: Register multiple custom resource definitions in the same group and
   327      version but spanning different kinds; OpenAPI definitions MUST be published for
   328      custom resource definitions.
   329    release: v1.16
   330    file: test/e2e/apimachinery/crd_publish_openapi.go
   331  - testname: Custom Resource OpenAPI Publish, varying versions
   332    codename: '[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
   333      works for multiple CRDs of same group but different versions [Conformance]'
   334    description: Register a custom resource definition with multiple versions; OpenAPI
   335      definitions MUST be published for custom resource definitions.
   336    release: v1.16
   337    file: test/e2e/apimachinery/crd_publish_openapi.go
   338  - testname: Discovery, confirm the groupVersion and a resource from each apiGroup
   339    codename: '[sig-api-machinery] Discovery should locate the groupVersion and a resource
   340      within each APIGroup [Conformance]'
   341    description: A resourceList MUST be found for each apiGroup that is retrieved. For
   342      each apiGroup the groupVersion MUST equal the groupVersion as reported by the
   343      schema. From each resourceList a valid resource MUST be found.
   344    release: v1.28
   345    file: test/e2e/apimachinery/discovery.go
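
A sketch of the discovery walk this test performs, assuming cs as in the earlier sketches; treating an empty resource list as an error mirrors the check described above:

// Walk every APIGroup, fetch its preferred groupVersion's resources,
// and confirm a resource can be found.
func checkDiscovery(cs *kubernetes.Clientset) error {
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		return err
	}
	for _, g := range groups.Groups {
		gv := g.PreferredVersion.GroupVersion
		resources, err := cs.Discovery().ServerResourcesForGroupVersion(gv)
		if err != nil {
			return err
		}
		if len(resources.APIResources) == 0 {
			return fmt.Errorf("no resources found for %s", gv)
		}
		// resources.GroupVersion mirrors the groupVersion reported by the schema.
		fmt.Println(gv, "->", resources.APIResources[0].Name)
	}
	return nil
}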
   346  - testname: Discovery, confirm the PreferredVersion for each api group
   347    codename: '[sig-api-machinery] Discovery should validate PreferredVersion for each
   348      APIGroup [Conformance]'
   349    description: Ensure that a list of apis is retrieved. Each api group found MUST
   350      return a valid PreferredVersion unless the group suffix is example.com.
   351    release: v1.19
   352    file: test/e2e/apimachinery/discovery.go
   353  - testname: Server side field validation, unknown fields CR no validation schema
   354    codename: '[sig-api-machinery] FieldValidation should create/apply a CR with unknown
   355      fields for CRD with no validation schema [Conformance]'
   356    description: When a CRD does not have a validation schema, it should succeed when
   357      a CR with unknown fields is applied.
   358    release: v1.27
   359    file: test/e2e/apimachinery/field_validation.go
   360  - testname: Server side field validation, valid CR with validation schema
   361    codename: '[sig-api-machinery] FieldValidation should create/apply a valid CR for
   362      CRD with validation schema [Conformance]'
   363    description: When a CRD has a validation schema, it should succeed when a valid
   364      CR is applied.
   365    release: v1.27
   366    file: test/e2e/apimachinery/field_validation.go
   367  - testname: Server side field validation, unknown fields CR fails validation
   368    codename: '[sig-api-machinery] FieldValidation should create/apply an invalid CR
   369      with extra properties for CRD with validation schema [Conformance]'
   370    description: When a CRD does have a validation schema, it should reject CRs with
   371      unknown fields.
   372    release: v1.27
   373    file: test/e2e/apimachinery/field_validation.go
   374  - testname: Server side field validation, CR duplicates
   375    codename: '[sig-api-machinery] FieldValidation should detect duplicates in a CR
   376      when preserving unknown fields [Conformance]'
   377    description: The server should reject CRs with duplicate fields even when preserving
   378      unknown fields.
   379    release: v1.27
   380    file: test/e2e/apimachinery/field_validation.go
   381  - testname: Server side field validation, typed object
   382    codename: '[sig-api-machinery] FieldValidation should detect unknown and duplicate
   383      fields of a typed object [Conformance]'
   384    description: It should reject the request if a typed object has unknown or duplicate
   385      fields.
   386    release: v1.27
   387    file: test/e2e/apimachinery/field_validation.go
   388  - testname: Server side field validation, unknown metadata
   389    codename: '[sig-api-machinery] FieldValidation should detect unknown metadata fields
   390      in both the root and embedded object of a CR [Conformance]'
   391    description: The server should reject CRs with unknown metadata fields in both the
   392      root and embedded objects of a CR.
   393    release: v1.27
   394    file: test/e2e/apimachinery/field_validation.go
   395  - testname: Server side field validation, typed unknown metadata
   396    codename: '[sig-api-machinery] FieldValidation should detect unknown metadata fields
   397      of a typed object [Conformance]'
   398    description: It should reject the request if a typed object has unknown fields in
   399      the metadata.
   400    release: v1.27
   401    file: test/e2e/apimachinery/field_validation.go
   402  - testname: Garbage Collector, delete deployment,  propagation policy background
   403    codename: '[sig-api-machinery] Garbage collector should delete RS created by deployment
   404      when not orphaning [Conformance]'
   405    description: Create a deployment with a replicaset. Once the replicaset is created,
   406      delete the deployment with deleteOptions.PropagationPolicy set to Background.
   407      Deleting the deployment MUST delete the replicaset created by the deployment, and
   408      the Pods that belong to the deployment MUST also be deleted.
   409    release: v1.9
   410    file: test/e2e/apimachinery/garbage_collector.go
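
A sketch of the deletion call this test describes; the deployment name and namespace are illustrative:

// Delete a Deployment with background propagation so the garbage
// collector removes its ReplicaSet and Pods.
func deleteWithBackgroundPropagation(ctx context.Context, cs *kubernetes.Clientset) error {
	policy := metav1.DeletePropagationBackground
	return cs.AppsV1().Deployments("default").Delete(ctx, "sample-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy})
}

Swapping in metav1.DeletePropagationForeground or metav1.DeletePropagationOrphan selects the behaviors covered by the neighboring garbage-collector entries.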
   411  - testname: Garbage Collector, delete replication controller, propagation policy background
   412    codename: '[sig-api-machinery] Garbage collector should delete pods created by rc
   413      when not orphaning [Conformance]'
   414    description: Create a replication controller with 2 Pods. Once RC is created and
   415      the first Pod is created, delete RC with deleteOptions.PropagationPolicy set to
   416      Background. Deleting the Replication Controller MUST cause pods created by that
   417      RC to be deleted.
   418    release: v1.9
   419    file: test/e2e/apimachinery/garbage_collector.go
   420  - testname: Garbage Collector, delete replication controller, after owned pods
   421    codename: '[sig-api-machinery] Garbage collector should keep the rc around until
   422      all its pods are deleted if the deleteOptions says so [Conformance]'
   423    description: Create a replication controller with maximum allocatable Pods between
   424      10 and 100 replicas. Once RC is created and all the Pods are created, delete RC
   425      with deleteOptions.PropagationPolicy set to Foreground. Deleting the Replication
   426      Controller MUST cause pods created by that RC to be deleted before the RC is deleted.
   427    release: v1.9
   428    file: test/e2e/apimachinery/garbage_collector.go
   429  - testname: Garbage Collector, dependency cycle
   430    codename: '[sig-api-machinery] Garbage collector should not be blocked by dependency
   431      circle [Conformance]'
   432    description: Create three pods, patch them with Owner references such that pod1
   433      has pod3, pod2 has pod1 and pod3 has pod2 as owner references respectively. Deleting
   434      pod1 MUST delete all pods. The dependency cycle MUST NOT block the garbage collection.
   435    release: v1.9
   436    file: test/e2e/apimachinery/garbage_collector.go
   437  - testname: Garbage Collector, multiple owners
   438    codename: '[sig-api-machinery] Garbage collector should not delete dependents that
   439      have both valid owner and owner that''s waiting for dependents to be deleted [Conformance]'
   440    description: Create a replication controller RC1, with maximum allocatable Pods
   441      between 10 and 100 replicas. Create second replication controller RC2 and set
   442      RC2 as owner for half of those replicas. Once RC1 is created and all the Pods
   443      are created, delete RC1 with deleteOptions.PropagationPolicy set to Foreground.
   444      Half of the Pods that have RC2 as owner MUST NOT be deleted or have a deletion
   445      timestamp. Deleting the Replication Controller MUST NOT delete Pods that are owned
   446      by multiple replication controllers.
   447    release: v1.9
   448    file: test/e2e/apimachinery/garbage_collector.go
   449  - testname: Garbage Collector, delete deployment, propagation policy orphan
   450    codename: '[sig-api-machinery] Garbage collector should orphan RS created by deployment
   451      when deleteOptions.PropagationPolicy is Orphan [Conformance]'
   452    description: Create a deployment with a replicaset. Once the replicaset is created,
   453      delete the deployment with deleteOptions.PropagationPolicy set to Orphan. Deleting
   454      the deployment MUST cause the replicaset created by the deployment to be orphaned,
   455      and the Pods created by the deployment MUST also be orphaned.
   456    release: v1.9
   457    file: test/e2e/apimachinery/garbage_collector.go
   458  - testname: Garbage Collector, delete replication controller, propagation policy orphan
   459    codename: '[sig-api-machinery] Garbage collector should orphan pods created by rc
   460      if delete options say so [Conformance]'
   461    description: Create a replication controller with maximum allocatable Pods between
   462      10 and 100 replicas. Once RC is created and all the Pods are created, delete RC
   463      with deleteOptions.PropagationPolicy set to Orphan. Deleting the Replication Controller
   464      MUST cause pods created by that RC to be orphaned.
   465    release: v1.9
   466    file: test/e2e/apimachinery/garbage_collector.go
   467  - testname: Namespace, apply finalizer to a namespace
   468    codename: '[sig-api-machinery] Namespaces [Serial] should apply a finalizer to a
   469      Namespace [Conformance]'
   470    description: Attempt to create a Namespace, which MUST succeed. Updating the namespace
   471      with a fake finalizer MUST succeed. The fake finalizer MUST be found. Removing the
   472      fake finalizer from the namespace MUST succeed, and the finalizer MUST NOT be found afterwards.
   473    release: v1.26
   474    file: test/e2e/apimachinery/namespace.go
   475  - testname: Namespace, apply update to a namespace
   476    codename: '[sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace
   477      [Conformance]'
   478    description: When updating the namespace it MUST succeed and the field MUST equal
   479      the new value.
   480    release: v1.26
   481    file: test/e2e/apimachinery/namespace.go
   482  - testname: Namespace, apply changes to a namespace status
   483    codename: '[sig-api-machinery] Namespaces [Serial] should apply changes to a namespace
   484      status [Conformance]'
   485    description: Getting the current namespace status MUST succeed. The reported status
   486      phase MUST be active. Given the patching of the namespace status, the fields MUST
   487      equal the new values. Given the updating of the namespace status, the fields MUST
   488      equal the new values.
   489    release: v1.25
   490    file: test/e2e/apimachinery/namespace.go
   491  - testname: namespace-deletion-removes-pods
   492    codename: '[sig-api-machinery] Namespaces [Serial] should ensure that all pods are
   493      removed when a namespace is deleted [Conformance]'
   494    description: Ensure that if a namespace is deleted then all pods are removed from
   495      that namespace.
   496    release: v1.11
   497    file: test/e2e/apimachinery/namespace.go
   498  - testname: namespace-deletion-removes-services
   499    codename: '[sig-api-machinery] Namespaces [Serial] should ensure that all services
   500      are removed when a namespace is deleted [Conformance]'
   501    description: Ensure that if a namespace is deleted then all services are removed
   502      from that namespace.
   503    release: v1.11
   504    file: test/e2e/apimachinery/namespace.go
   505  - testname: Namespace patching
   506    codename: '[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]'
   507    description: A Namespace is created. The Namespace is patched. The Namespace
   508      MUST now include the new Label.
   509    release: v1.18
   510    file: test/e2e/apimachinery/namespace.go
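
A sketch of the patch this test describes, assuming cs as above; the label key/value are illustrative (types is k8s.io/apimachinery/pkg/types):

// Patch a Namespace with a new label and read the result back.
func patchNamespaceLabel(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`) // illustrative label
	ns, err := cs.CoreV1().Namespaces().Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		return err
	}
	fmt.Println("labels now:", ns.Labels)
	return nil
}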
   511  - testname: ResourceQuota, apply changes to a ResourceQuota status
   512    codename: '[sig-api-machinery] ResourceQuota should apply changes to a resourcequota
   513      status [Conformance]'
   514    description: Attempt to create a ResourceQuota for CPU and Memory quota limits.
   515      Creation MUST be successful. Updating the hard status values MUST succeed and
   516      the new values MUST be found. The reported hard status values MUST equal the spec
   517      hard values. Patching the spec hard values MUST succeed and the new values MUST
   518      be found. Patching the hard status values MUST succeed. The reported hard status
   519      values MUST equal the new spec hard values. Getting the /status MUST succeed and
   520      the reported hard status values MUST equal the spec hard values. Repatching the
   521      hard status values MUST succeed. The spec MUST NOT be changed when patching /status.
   522    release: v1.26
   523    file: test/e2e/apimachinery/resource_quota.go
   524  - testname: ResourceQuota, update and delete
   525    codename: '[sig-api-machinery] ResourceQuota should be able to update and delete
   526      ResourceQuota. [Conformance]'
   527    description: Create a ResourceQuota for CPU and Memory quota limits. Creation MUST
   528      be successful. When ResourceQuota is updated to modify CPU and Memory quota limits,
   529      update MUST succeed with updated values for CPU and Memory limits. When ResourceQuota
   530      is deleted, it MUST not be available in the namespace.
   531    release: v1.16
   532    file: test/e2e/apimachinery/resource_quota.go
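
A sketch of the create/update/delete flow this test describes; quota name and quantities are illustrative (imports: corev1 "k8s.io/api/core/v1", "k8s.io/apimachinery/pkg/api/resource"):

// Create a ResourceQuota with CPU and memory hard limits, update it, delete it.
func quotaLifecycle(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	q := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{Hard: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("1"),
			corev1.ResourceMemory: resource.MustParse("500Mi"),
		}},
	}
	created, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, q, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Update: raise the CPU limit, then write the object back.
	created.Spec.Hard[corev1.ResourceCPU] = resource.MustParse("2")
	if _, err := cs.CoreV1().ResourceQuotas(ns).Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().ResourceQuotas(ns).Delete(ctx, "test-quota", metav1.DeleteOptions{})
}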
   533  - testname: ResourceQuota, object count quota, configmap
   534    codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
   535      the life of a configMap. [Conformance]'
   536    description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
   537      MUST match to expected used and total allowed resource quota count within namespace.
   538      Create a ConfigMap. Its creation MUST be successful and resource usage count against
   539      the ConfigMap object MUST be captured in ResourceQuotaStatus of the ResourceQuota.
   540      Delete the ConfigMap. Deletion MUST succeed and resource usage count against the
   541      ConfigMap object MUST be released from ResourceQuotaStatus of the ResourceQuota.
   542    release: v1.16
   543    file: test/e2e/apimachinery/resource_quota.go
   544  - testname: ResourceQuota, object count quota, pod
   545    codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
   546      the life of a pod. [Conformance]'
   547    description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
   548      MUST match to expected used and total allowed resource quota count within namespace.
   549      Create a Pod with resource request count for CPU, Memory, EphemeralStorage and
   550      ExtendedResourceName. Pod creation MUST be successful and respective resource
   551      usage count MUST be captured in ResourceQuotaStatus of the ResourceQuota. Create
   552      another Pod with resource request exceeding remaining quota. Pod creation MUST
   553      fail as the request exceeds ResourceQuota limits. Update the successfully created
   554      pod's resource requests. The update MUST fail as a Pod cannot dynamically update
   555      its resource requirements. Delete the successfully created Pod. Pod Deletion MUST
   556      be successful and it MUST release the allocated resource counts from ResourceQuotaStatus
   557      of the ResourceQuota.
   558    release: v1.16
   559    file: test/e2e/apimachinery/resource_quota.go
   560  - testname: ResourceQuota, object count quota, replicaSet
   561    codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
   562      the life of a replica set. [Conformance]'
   563    description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
   564      MUST match to expected used and total allowed resource quota count within namespace.
   565      Create a ReplicaSet. Its creation MUST be successful and resource usage count
   566      against the ReplicaSet object MUST be captured in ResourceQuotaStatus of the ResourceQuota.
   567      Delete the ReplicaSet. Deletion MUST succeed and resource usage count against
   568      the ReplicaSet object MUST be released from ResourceQuotaStatus of the ResourceQuota.
   569    release: v1.16
   570    file: test/e2e/apimachinery/resource_quota.go
   571  - testname: ResourceQuota, object count quota, replicationController
   572    codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
   573      the life of a replication controller. [Conformance]'
   574    description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
   575      MUST match to expected used and total allowed resource quota count within namespace.
   576      Create a ReplicationController. Its creation MUST be successful and resource usage
   577      count against the ReplicationController object MUST be captured in ResourceQuotaStatus
   578      of the ResourceQuota. Delete the ReplicationController. Deletion MUST succeed
   579      and resource usage count against the ReplicationController object MUST be released
   580      from ResourceQuotaStatus of the ResourceQuota.
   581    release: v1.16
   582    file: test/e2e/apimachinery/resource_quota.go
   583  - testname: ResourceQuota, object count quota, secret
   584    codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
   585      the life of a secret. [Conformance]'
   586    description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
   587      MUST match to expected used and total allowed resource quota count within namespace.
   588      Create a Secret. Its creation MUST be successful and resource usage count against
   589      the Secret object and resourceQuota object MUST be captured in ResourceQuotaStatus
   590      of the ResourceQuota. Delete the Secret. Deletion MUST succeed and resource usage
   591      count against the Secret object MUST be released from ResourceQuotaStatus of the
   592      ResourceQuota.
   593    release: v1.16
   594    file: test/e2e/apimachinery/resource_quota.go
   595  - testname: ResourceQuota, object count quota, service
   596    codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture
   597      the life of a service. [Conformance]'
   598    description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
   599      MUST match to expected used and total allowed resource quota count within namespace.
   600      Create a Service. Its creation MUST be successful and resource usage count against
   601      the Service object and resourceQuota object MUST be captured in ResourceQuotaStatus
   602      of the ResourceQuota. Delete the Service. Deletion MUST succeed and resource usage
   603      count against the Service object MUST be released from ResourceQuotaStatus of
   604      the ResourceQuota.
   605    release: v1.16
   606    file: test/e2e/apimachinery/resource_quota.go
   607  - testname: ResourceQuota, object count quota, resourcequotas
   608    codename: '[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure
   609      its status is promptly calculated. [Conformance]'
   610    description: Create a ResourceQuota. Creation MUST be successful and its ResourceQuotaStatus
   611      MUST match to expected used and total allowed resource quota count within namespace.
   612    release: v1.16
   613    file: test/e2e/apimachinery/resource_quota.go
   614  - testname: ResourceQuota, manage lifecycle of a ResourceQuota
   615    codename: '[sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota
   616      [Conformance]'
   617    description: Attempt to create a ResourceQuota for CPU and Memory quota limits.
   618      Creation MUST be successful. Attempt to list ResourceQuotas across all namespaces
   619      with a label selector, which MUST succeed; exactly one MUST be found. The ResourceQuota when patched MUST
   620      succeed. Given the patching of the ResourceQuota, the fields MUST equal the new
   621      values. It MUST succeed at deleting a collection of ResourceQuota via a label
   622      selector.
   623    release: v1.25
   624    file: test/e2e/apimachinery/resource_quota.go
   625  - testname: ResourceQuota, quota scope, BestEffort and NotBestEffort scope
   626    codename: '[sig-api-machinery] ResourceQuota should verify ResourceQuota with best
   627      effort scope. [Conformance]'
   628    description: Create two ResourceQuotas, one with 'BestEffort' scope and another
   629      with 'NotBestEffort' scope. Creation MUST be successful and their ResourceQuotaStatus
   630      MUST match to expected used and total allowed resource quota count within namespace.
   631      Create a 'BestEffort' Pod by not explicitly specifying resource limits and requests.
   632      Pod creation MUST be successful and usage count MUST be captured in ResourceQuotaStatus
   633      of 'BestEffort' scoped ResourceQuota but MUST NOT be in 'NotBestEffort' scoped ResourceQuota.
   634      Delete the Pod. Pod deletion MUST succeed and Pod resource usage count MUST be
   635      released from ResourceQuotaStatus of 'BestEffort' scoped ResourceQuota. Create
   636      a 'NotBestEffort' Pod by explicitly specifying resource limits and requests. Pod
   637      creation MUST be successful and usage count MUST be captured in ResourceQuotaStatus
   638      of 'NotBestEffort' scoped ResourceQuota but MUST NOT be in 'BestEffort' scoped ResourceQuota.
   639      Delete the Pod. Pod deletion MUST succeed and Pod resource usage count MUST be
   640      released from ResourceQuotaStatus of 'NotBestEffort' scoped ResourceQuota.
   641    release: v1.16
   642    file: test/e2e/apimachinery/resource_quota.go
   643  - testname: ResourceQuota, quota scope, Terminating and NotTerminating scope
   644    codename: '[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating
   645      scopes. [Conformance]'
   646    description: Create two ResourceQuotas, one with 'Terminating' scope and another
   647      with 'NotTerminating' scope. Request and limit counts for CPU and Memory resources
   648      are set for the ResourceQuota. Creation MUST be successful and their ResourceQuotaStatus
   649      MUST match to expected used and total allowed resource quota count within namespace.
   650      Create a Pod with specified CPU and Memory ResourceRequirements that fall within quota
   651      limits. Pod creation MUST be successful and usage count MUST be captured in ResourceQuotaStatus
   652      of 'NotTerminating' scoped ResourceQuota but MUST NOT be in 'Terminating' scoped
   653      ResourceQuota. Delete the Pod. Pod deletion MUST succeed and Pod resource usage
   654      count MUST be released from ResourceQuotaStatus of 'NotTerminating' scoped ResourceQuota.
   655      Create a pod with specified activeDeadlineSeconds and resourceRequirements for
   656      CPU and Memory that fall within quota limits. Pod creation MUST be successful and usage
   657      count MUST be captured in ResourceQuotaStatus of 'Terminating' scoped ResourceQuota
   658      but MUST NOT be in 'NotTerminating' scoped ResourceQuota. Delete the Pod. Pod deletion
   659      MUST succeed and Pod resource usage count MUST be released from ResourceQuotaStatus
   660      of 'Terminating' scoped ResourceQuota.
   661    release: v1.16
   662    file: test/e2e/apimachinery/resource_quota.go
   663  - testname: API Chunking, server should return chunks of results for list calls
   664    codename: '[sig-api-machinery] Servers with support for API chunking should return
   665      chunks of results for list calls [Conformance]'
   666    description: Create a large number of PodTemplates. Attempt to retrieve the first
   667      chunk with limit set; the server MUST return the chunk of the size not exceeding
   668      the limit with RemainingItems set in the response. Attempt to retrieve the remaining
   669      items by providing the received continuation token and limit; the server MUST
   670      return the remaining items in chunks of the size not exceeding the limit, with
   671      appropriately set RemainingItems field in the response and with the ResourceVersion
   672      returned in the first response. Attempt to list all objects at once without setting
   673      the limit; the server MUST return all items in a single response.
   674    release: v1.29
   675    file: test/e2e/apimachinery/chunking.go
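
A sketch of the paging loop this test describes, assuming cs and a namespace ns; the page size of 25 is illustrative:

// Page through PodTemplates using Limit and the continue token.
func listInChunks(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	opts := metav1.ListOptions{Limit: 25}
	for {
		page, err := cs.CoreV1().PodTemplates(ns).List(ctx, opts)
		if err != nil {
			return err
		}
		fmt.Printf("got %d items, remaining: %v\n", len(page.Items), page.RemainingItemCount)
		if page.Continue == "" {
			return nil // final chunk
		}
		opts.Continue = page.Continue // resume from the last key
	}
}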
   676  - testname: API Chunking, server should support continue listing from the last key
   677      even if the original version has been compacted away
   678    codename: '[sig-api-machinery] Servers with support for API chunking should support
   679      continue listing from the last key if the original version has been compacted
   680      away, though the list is inconsistent [Slow] [Conformance]'
   681    description: Create a large number of PodTemplates. Attempt to retrieve the first
   682      chunk with limit set; the server MUST return the chunk of the size not exceeding
   683      the limit with RemainingItems set in the response. Attempt to retrieve the second
   684      page until the continuation token expires; the server MUST return a continuation
   685      token for inconsistent list continuation. Attempt to retrieve the second page
   686      with the received inconsistent list continuation token; the server MUST return
   687      the number of items not exceeding the limit, a new continuation token and appropriately
   688      set RemainingItems field in the response. Attempt to retrieve the remaining pages
   689      by passing the received continuation token; the server MUST return the remaining
   690      items in chunks of the size not exceeding the limit, with appropriately set RemainingItems
   691      field in the response and with the ResourceVersion returned as part of the inconsistent
   692      list.
   693    release: v1.29
   694    file: test/e2e/apimachinery/chunking.go
   695  - testname: API metadata HTTP return
   696    codename: '[sig-api-machinery] Servers with support for Table transformation should
   697      return a 406 for a backend which does not implement metadata [Conformance]'
   698    description: Issue an HTTP request to the API. The HTTP request MUST return an HTTP
   699      status code of 406.
   700    release: v1.16
   701    file: test/e2e/apimachinery/table_conversion.go
   702  - testname: watch-configmaps-closed-and-restarted
   703    codename: '[sig-api-machinery] Watchers should be able to restart watching from
   704      the last resource version observed by the previous watch [Conformance]'
   705    description: Ensure that a watch can be reopened from the last resource version
   706      observed by the previous watch, and it will continue delivering notifications
   707      from that point in time.
   708    release: v1.11
   709    file: test/e2e/apimachinery/watch.go
   710  - testname: watch-configmaps-from-resource-version
   711    codename: '[sig-api-machinery] Watchers should be able to start watching from a
   712      specific resource version [Conformance]'
   713    description: Ensure that a watch can be opened from a particular resource version
   714      in the past and only notifications happening after that resource version are observed.
   715    release: v1.11
   716    file: test/e2e/apimachinery/watch.go
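
A sketch of opening a watch at a previously observed resourceVersion, assuming cs, a namespace ns, and a resourceVersion rv captured from an earlier response:

// Watch ConfigMaps starting at rv, so only later events are delivered.
func watchFromResourceVersion(ctx context.Context, cs *kubernetes.Clientset, ns, rv string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		fmt.Println("event:", event.Type) // ADDED, MODIFIED, DELETED, ...
	}
	return nil
}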
   717  - testname: watch-configmaps-with-multiple-watchers
   718    codename: '[sig-api-machinery] Watchers should observe add, update, and delete watch
   719      notifications on configmaps [Conformance]'
   720    description: Ensure that multiple watchers are able to receive all add, update,
   721      and delete notifications on configmaps that match a label selector and do not
   722      receive notifications for configmaps which do not match that label selector.
   723    release: v1.11
   724    file: test/e2e/apimachinery/watch.go
   725  - testname: watch-configmaps-label-changed
   726    codename: '[sig-api-machinery] Watchers should observe an object deletion if it
   727      stops meeting the requirements of the selector [Conformance]'
   728    description: Ensure that when a watched object stops meeting the requirements of a watch's
   729      selector, the watch will observe a delete, and will not observe notifications
   730      for that object until it meets the selector's requirements again.
   731    release: v1.11
   732    file: test/e2e/apimachinery/watch.go
   733  - testname: watch-consistency
   734    codename: '[sig-api-machinery] Watchers should receive events on concurrent watches
   735      in same order [Conformance]'
   736    description: Ensure that concurrent watches are consistent with each other by initiating
   737      an additional watch for events received from the first watch, initiated at the
   738      resource version of the event, and checking that all resource versions of all
   739      events match. Events are produced from writes on a background goroutine.
   740    release: v1.15
   741    file: test/e2e/apimachinery/watch.go
   742  - testname: Confirm a server version
   743    codename: '[sig-api-machinery] server version should find the server version [Conformance]'
   744    description: Ensure that an API server version can be retrieved. Both the major
   745      and minor versions MUST each be an integer.
   746    release: v1.19
   747    file: test/e2e/apimachinery/server_version.go
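
A sketch of the version check this test describes, assuming cs as above (import "strconv"):

// Fetch the server version and verify major/minor parse as integers.
func checkServerVersion(cs *kubernetes.Clientset) error {
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	if _, err := strconv.Atoi(v.Major); err != nil {
		return fmt.Errorf("major %q is not an integer: %w", v.Major, err)
	}
	if _, err := strconv.Atoi(v.Minor); err != nil {
		return fmt.Errorf("minor %q is not an integer: %w", v.Minor, err)
	}
	fmt.Println("server version:", v.GitVersion)
	return nil
}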
   748  - testname: ControllerRevision, resource lifecycle
   749    codename: '[sig-apps] ControllerRevision [Serial] should manage the lifecycle of
   750      a ControllerRevision [Conformance]'
   751    description: Creating a DaemonSet MUST succeed. Listing all ControllerRevisions
   752      with a label selector MUST find only one. After patching the ControllerRevision
   753      with a new label, the label MUST be found. Creating a new ControllerRevision for
   754      the DaemonSet MUST succeed. Listing the ControllerRevisions by label selector
   755      MUST find only two. Deleting a ControllerRevision MUST succeed. Listing the ControllerRevisions
   756      by label selector MUST find only one. After updating the ControllerRevision with
   757      a new label, the label MUST be found. Patching the DaemonSet MUST succeed. Listing
   758      the ControllerRevisions by label selector MUST find only two. Deleting a collection
   759      of ControllerRevision via a label selector MUST succeed. Listing the ControllerRevisions
   760      by label selector MUST find only one. The current ControllerRevision revision
   761      MUST be 3.
   762    release: v1.25
   763    file: test/e2e/apps/controller_revision.go
   764  - testname: CronJob Suspend
   765    codename: '[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]'
   766    description: CronJob MUST support suspension, which suppresses creation of new jobs.
   767    release: v1.21
   768    file: test/e2e/apps/cronjob.go
   769  - testname: CronJob ForbidConcurrent
   770    codename: '[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent
   771      [Slow] [Conformance]'
   772    description: CronJob MUST support the ForbidConcurrent policy, allowing only a single,
   773      previously started job to run at a time.
   774    release: v1.21
   775    file: test/e2e/apps/cronjob.go
   776  - testname: CronJob ReplaceConcurrent
   777    codename: '[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]'
   778    description: CronJob MUST support the ReplaceConcurrent policy, allowing only a single,
   779      newer job to run at a time.
   780    release: v1.21
   781    file: test/e2e/apps/cronjob.go
   782  - testname: CronJob AllowConcurrent
   783    codename: '[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]'
   784    description: CronJob MUST support the AllowConcurrent policy, allowing multiple
   785      jobs to run at the same time.
   786    release: v1.21
   787    file: test/e2e/apps/cronjob.go
   788  - testname: CronJob API Operations
   789    codename: '[sig-apps] CronJob should support CronJob API operations [Conformance]'
   790    description: ' CronJob MUST support create, get, list, watch, update, patch, delete,
   791      and deletecollection. CronJob/status MUST support get, update and patch.'
   792    release: v1.21
   793    file: test/e2e/apps/cronjob.go
   794  - testname: DaemonSet, list and delete a collection of DaemonSets
   795    codename: '[sig-apps] Daemon set [Serial] should list and delete a collection of
   796      DaemonSets [Conformance]'
   797    description: When a DaemonSet is created it MUST succeed. It MUST succeed when listing
   798      DaemonSets via a label selector. It MUST succeed when deleting the DaemonSet via
   799      deleteCollection.
   800    release: v1.22
   801    file: test/e2e/apps/daemon_set.go
   802  - testname: DaemonSet-FailedPodCreation
   803    codename: '[sig-apps] Daemon set [Serial] should retry creating failed daemon pods
   804      [Conformance]'
   805    description: A conformant Kubernetes distribution MUST create new DaemonSet Pods
   806      when they fail.
   807    release: v1.10
   808    file: test/e2e/apps/daemon_set.go
   809  - testname: DaemonSet-Rollback
   810    codename: '[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts
   811      [Conformance]'
   812    description: A conformant Kubernetes distribution MUST support automated, minimally
   813      disruptive rollback of updates to a DaemonSet.
   814    release: v1.10
   815    file: test/e2e/apps/daemon_set.go
   816  - testname: DaemonSet-NodeSelection
   817    codename: '[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]'
   818    description: A conformant Kubernetes distribution MUST support DaemonSet Pod node
   819      selection via label selectors.
   820    release: v1.10
   821    file: test/e2e/apps/daemon_set.go
   822  - testname: DaemonSet-Creation
   823    codename: '[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]'
   824    description: A conformant Kubernetes distribution MUST support the creation of DaemonSets.
   825      When a DaemonSet Pod is deleted, the DaemonSet controller MUST create a replacement
   826      Pod.
   827    release: v1.10
   828    file: test/e2e/apps/daemon_set.go
   829  - testname: DaemonSet-RollingUpdate
   830    codename: '[sig-apps] Daemon set [Serial] should update pod when spec was updated
   831      and update strategy is RollingUpdate [Conformance]'
   832    description: A conformant Kubernetes distribution MUST support DaemonSet RollingUpdates.
   833    release: v1.10
   834    file: test/e2e/apps/daemon_set.go
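        # For illustration (excerpt of the relevant fields, hypothetical values): the
        # RollingUpdate behavior above is selected through the DaemonSet update strategy:
        #
        #   apiVersion: apps/v1
        #   kind: DaemonSet
        #   spec:
        #     updateStrategy:
        #       type: RollingUpdate
        #       rollingUpdate:
        #         maxUnavailable: 1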
   835  - testname: DaemonSet, status sub-resource
   836    codename: '[sig-apps] Daemon set [Serial] should verify changes to a daemon set
   837      status [Conformance]'
   838    description: When a DaemonSet is created it MUST succeed. Attempt to read, update
   839      and patch its status sub-resource; all mutating sub-resource operations MUST be
   840      visible to subsequent reads.
   841    release: v1.22
   842    file: test/e2e/apps/daemon_set.go
   843  - testname: Deployment, completes the scaling of a Deployment subresource
   844    codename: '[sig-apps] Deployment Deployment should have a working scale subresource
   845      [Conformance]'
   846    description: Create a Deployment with a single Pod. The Pod MUST be verified to
   847      be running. The Deployment MUST get and verify the scale subresource count. The
   848      Deployment MUST update and verify the scale subresource. The Deployment MUST patch
   849      and verify a scale subresource.
   850    release: v1.21
   851    file: test/e2e/apps/deployment.go
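        # For illustration (hypothetical names): the scale subresource exercised above
        # can also be reached by hand:
        #
        #   kubectl scale deployment/example --replicas=2
        #   kubectl get --raw /apis/apps/v1/namespaces/default/deployments/example/scale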
   852  - testname: Deployment Recreate
   853    codename: '[sig-apps] Deployment RecreateDeployment should delete old pods and create
   854      new ones [Conformance]'
   855    description: A conformant Kubernetes distribution MUST support the Deployment with
   856      Recreate strategy.
   857    release: v1.12
   858    file: test/e2e/apps/deployment.go
   859  - testname: Deployment RollingUpdate
   860    codename: '[sig-apps] Deployment RollingUpdateDeployment should delete old pods
   861      and create new ones [Conformance]'
   862    description: A conformant Kubernetes distribution MUST support the Deployment with
   863      RollingUpdate strategy.
   864    release: v1.12
   865    file: test/e2e/apps/deployment.go
   866  - testname: Deployment RevisionHistoryLimit
   867    codename: '[sig-apps] Deployment deployment should delete old replica sets [Conformance]'
   868    description: A conformant Kubernetes distribution MUST clean up Deployment's ReplicaSets
   869      based on the Deployment's `.spec.revisionHistoryLimit`.
   870    release: v1.12
   871    file: test/e2e/apps/deployment.go
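        # For illustration (excerpt, hypothetical values): the strategy and history
        # cleanup behaviors in the Deployment entries above are configured on the spec:
        #
        #   apiVersion: apps/v1
        #   kind: Deployment
        #   spec:
        #     revisionHistoryLimit: 2   # older ReplicaSets beyond this are cleaned up
        #     strategy:
        #       type: RollingUpdate     # or Recreate
        #       rollingUpdate:
        #         maxSurge: 25%
        #         maxUnavailable: 25%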
   872  - testname: Deployment Proportional Scaling
   873    codename: '[sig-apps] Deployment deployment should support proportional scaling
   874      [Conformance]'
   875    description: A conformant Kubernetes distribution MUST support Deployment proportional
   876      scaling, i.e. proportionally scale a Deployment's ReplicaSets when a Deployment
   877      is scaled.
   878    release: v1.12
   879    file: test/e2e/apps/deployment.go
   880  - testname: Deployment Rollover
   881    codename: '[sig-apps] Deployment deployment should support rollover [Conformance]'
   882    description: A conformant Kubernetes distribution MUST support Deployment rollover,
   883      i.e. allow arbitrary number of changes to desired state during rolling update
   884      before the rollout finishes.
   885    release: v1.12
   886    file: test/e2e/apps/deployment.go
   887  - testname: Deployment, completes the lifecycle of a Deployment
   888    codename: '[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]'
   889    description: When a Deployment is created it MUST succeed with the required number
   890      of replicas. It MUST succeed when the Deployment is patched. When scaling the
   891      Deployment it MUST succeed. When fetching and patching the DeploymentStatus it
   892      MUST succeed. It MUST succeed when deleting the Deployment.
   893    release: v1.20
   894    file: test/e2e/apps/deployment.go
   895  - testname: Deployment, status sub-resource
   896    codename: '[sig-apps] Deployment should validate Deployment Status endpoints [Conformance]'
   897    description: When a Deployment is created it MUST succeed. Attempt to read, update
   898      and patch its status sub-resource; all mutating sub-resource operations MUST be
   899      visible to subsequent reads.
   900    release: v1.22
   901    file: test/e2e/apps/deployment.go
   902  - testname: 'PodDisruptionBudget: list and delete collection'
   903    codename: '[sig-apps] DisruptionController Listing PodDisruptionBudgets for all
   904      namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]'
   905    description: PodDisruptionBudget API must support list and deletecollection operations.
   906    release: v1.21
   907    file: test/e2e/apps/disruption.go
   908  - testname: 'PodDisruptionBudget: block an eviction until the PDB is updated to allow
   909      it'
   910    codename: '[sig-apps] DisruptionController should block an eviction until the PDB
   911      is updated to allow it [Conformance]'
   912    description: Eviction API must block an eviction until the PDB is updated to allow
   913      it.
   914    release: v1.22
   915    file: test/e2e/apps/disruption.go
   916  - testname: 'PodDisruptionBudget: create, update, patch, and delete object'
   917    codename: '[sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]'
   918    description: PodDisruptionBudget API must support create, update, patch, and delete
   919      operations.
   920    release: v1.21
   921    file: test/e2e/apps/disruption.go
   922  - testname: 'PodDisruptionBudget: Status updates'
   923    codename: '[sig-apps] DisruptionController should observe PodDisruptionBudget status
   924      updated [Conformance]'
   925    description: Disruption controller MUST update the PDB status with how many disruptions
   926      are allowed.
   927    release: v1.21
   928    file: test/e2e/apps/disruption.go
   929  - testname: 'PodDisruptionBudget: update and patch status'
   930    codename: '[sig-apps] DisruptionController should update/patch PodDisruptionBudget
   931      status [Conformance]'
   932    description: PodDisruptionBudget API must support update and patch operations on
   933      status subresource.
   934    release: v1.21
   935    file: test/e2e/apps/disruption.go
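        # For illustration (hypothetical manifest): a minimal PodDisruptionBudget of the
        # kind exercised by the entries above; evictions are blocked while fewer than
        # minAvailable matching pods would remain:
        #
        #   apiVersion: policy/v1
        #   kind: PodDisruptionBudget
        #   metadata:
        #     name: example-pdb
        #   spec:
        #     minAvailable: 1
        #     selector:
        #       matchLabels:
        #         app: example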
   936  - testname: Jobs, orphan pods, re-adoption
   937    codename: '[sig-apps] Job should adopt matching orphans and release non-matching
   938      pods [Conformance]'
   939    description: Create a parallel job. The number of Pods MUST equal the level of parallelism.
   940      Orphan a Pod by modifying its owner reference. The Job MUST re-adopt the orphan
   941      pod. Modify the labels of one of the Job's Pods. The Job MUST release the Pod.
   942    release: v1.16
   943    file: test/e2e/apps/job.go
   944  - testname: Jobs, apply changes to status
   945    codename: '[sig-apps] Job should apply changes to a job status [Conformance]'
   946    description: Attempt to create a running Job which MUST succeed. Attempt to patch
   947      the Job status to include a new start time which MUST succeed. An annotation for
   948      the job that was patched MUST be found. Attempt to replace the job status with
   949      a new start time which MUST succeed. Attempt to read its status sub-resource which
   950      MUST succeed.
   951    release: v1.24
   952    file: test/e2e/apps/job.go
   953  - testname: Ensure Pods of an Indexed Job get a unique index.
   954    codename: '[sig-apps] Job should create pods for an Indexed job with completion
   955      indexes and specified hostname [Conformance]'
   956    description: Create an Indexed job. Job MUST complete successfully. Ensure that
   957      created pods have completion index annotation and environment variable.
   958    release: v1.24
   959    file: test/e2e/apps/job.go
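        # For illustration (excerpt): an Indexed Job as exercised above; each pod carries
        # the batch.kubernetes.io/job-completion-index annotation and a matching
        # JOB_COMPLETION_INDEX environment variable:
        #
        #   apiVersion: batch/v1
        #   kind: Job
        #   spec:
        #     completions: 3
        #     parallelism: 3
        #     completionMode: Indexed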
   960  - testname: Jobs, active pods, graceful termination
   961    codename: '[sig-apps] Job should delete a job [Conformance]'
   962    description: Create a job. Ensure the active pods reflect parallelism in the namespace
   963      and delete the job. Job MUST be deleted successfully.
   964    release: v1.15
   965    file: test/e2e/apps/job.go
   966  - testname: Jobs, manage lifecycle
   967    codename: '[sig-apps] Job should manage the lifecycle of a job [Conformance]'
   968    description: Attempt to create a suspended Job which MUST succeed. Attempt to patch
   969      the Job to include a new label which MUST succeed. The label MUST be found. Attempt
   970      to replace the Job to include a new annotation which MUST succeed. The annotation
   971      MUST be found. Attempt to list all namespaces with a label selector which MUST
   972      succeed. One list MUST be found. It MUST succeed at deleting a collection of jobs
   973      via a label selector.
   974    release: v1.25
   975    file: test/e2e/apps/job.go
   976  - testname: Jobs, completion after task failure
   977    codename: '[sig-apps] Job should run a job to completion when tasks sometimes fail
   978      and are locally restarted [Conformance]'
   979    description: Explicitly cause the tasks to fail once initially. After restarting,
   980      the Job MUST execute to completion.
   981    release: v1.16
   982    file: test/e2e/apps/job.go
   983  - testname: ReplicaSet, is created, Replaced and Patched
   984    codename: '[sig-apps] ReplicaSet Replace and Patch tests [Conformance]'
   985    description: Create a ReplicaSet (RS) with a single Pod. The Pod MUST be verified
   986      to be running. The RS MUST scale to two replicas and verify the scale count.
   987      The RS MUST be patched and verify that the patch succeeded.
   988    release: v1.21
   989    file: test/e2e/apps/replica_set.go
   990  - testname: ReplicaSet, completes the scaling of a ReplicaSet subresource
   991    codename: '[sig-apps] ReplicaSet Replicaset should have a working scale subresource
   992      [Conformance]'
   993    description: Create a ReplicaSet (RS) with a single Pod. The Pod MUST be verified
   994      to be running. The RS MUST get and verify the scale subresource count. The RS
   995      MUST update and verify the scale subresource. The RS MUST patch and verify a
   996      scale subresource.
   997    release: v1.21
   998    file: test/e2e/apps/replica_set.go
   999  - testname: Replica Set, adopt matching pods and release non matching pods
  1000    codename: '[sig-apps] ReplicaSet should adopt matching pods on creation and release
  1001      no longer matching pods [Conformance]'
  1002    description: A Pod is created, then a Replica Set (RS) whose label selector will
  1003      match the Pod. The RS MUST either adopt the Pod or delete and replace it with
  1004      a new Pod. When the labels on one of the Pods owned by the RS change to no longer
  1005      match the RS's label selector, the RS MUST release the Pod and update the Pod's
  1006      owner references.
  1007    release: v1.13
  1008    file: test/e2e/apps/replica_set.go
  1009  - testname: ReplicaSet, list and delete a collection of ReplicaSets
  1010    codename: '[sig-apps] ReplicaSet should list and delete a collection of ReplicaSets
  1011      [Conformance]'
  1012    description: When a ReplicaSet is created it MUST succeed. It MUST succeed when
  1013      listing ReplicaSets via a label selector. It MUST succeed when deleting the ReplicaSet
  1014      via deleteCollection.
  1015    release: v1.22
  1016    file: test/e2e/apps/replica_set.go
  1017  - testname: Replica Set, run basic image
  1018    codename: '[sig-apps] ReplicaSet should serve a basic image on each replica with
  1019      a public image [Conformance]'
  1020    description: Create a ReplicaSet with a Pod and a single Container. Make sure that
  1021      the Pod is running. Pod SHOULD send a valid response when queried.
  1022    release: v1.9
  1023    file: test/e2e/apps/replica_set.go
  1024  - testname: ReplicaSet, status sub-resource
  1025    codename: '[sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]'
  1026    description: Create a ReplicaSet resource which MUST succeed. Attempt to read, update
  1027      and patch its status sub-resource; all mutating sub-resource operations MUST be
  1028      visible to subsequent reads.
  1029    release: v1.22
  1030    file: test/e2e/apps/replica_set.go
  1031  - testname: Replication Controller, adopt matching pods
  1032    codename: '[sig-apps] ReplicationController should adopt matching pods on creation
  1033      [Conformance]'
  1034    description: An ownerless Pod is created, then a Replication Controller (RC) is
  1035      created whose label selector will match the Pod. The RC MUST either adopt the
  1036      Pod or delete and replace it with a new Pod.
  1037    release: v1.13
  1038    file: test/e2e/apps/rc.go
  1039  - testname: Replication Controller, get and update ReplicationController scale
  1040    codename: '[sig-apps] ReplicationController should get and update a ReplicationController
  1041      scale [Conformance]'
  1042    description: A ReplicationController is created which MUST succeed. It MUST succeed
  1043      when reading the ReplicationController scale. When updating the ReplicationController
  1044      scale it MUST succeed and the field MUST equal the new value.
  1045    release: v1.26
  1046    file: test/e2e/apps/rc.go
  1047  - testname: Replication Controller, release pods
  1048    codename: '[sig-apps] ReplicationController should release no longer matching pods
  1049      [Conformance]'
  1050    description: A Replication Controller (RC) is created, and its Pods are created.
  1051      When the labels on one of the Pods change to no longer match the RC's label selector,
  1052      the RC MUST release the Pod and update the Pod's owner references.
  1053    release: v1.13
  1054    file: test/e2e/apps/rc.go
  1055  - testname: Replication Controller, run basic image
  1056    codename: '[sig-apps] ReplicationController should serve a basic image on each replica
  1057      with a public image [Conformance]'
  1058    description: Replication Controller MUST create a Pod with a basic image and MUST
  1059      run the service with the provided image. The image MUST be tested by dialing into
  1060      the service listening through TCP, UDP and HTTP.
  1061    release: v1.9
  1062    file: test/e2e/apps/rc.go
  1063  - testname: Replication Controller, check for issues like exceeding allocated quota
  1064    codename: '[sig-apps] ReplicationController should surface a failure condition on
  1065      a common issue like exceeded quota [Conformance]'
  1066    description: Attempt to create a Replication Controller with pods exceeding the
  1067      namespace quota. The creation MUST fail.
  1068    release: v1.15
  1069    file: test/e2e/apps/rc.go
  1070  - testname: Replication Controller, lifecycle
  1071    codename: '[sig-apps] ReplicationController should test the lifecycle of a ReplicationController
  1072      [Conformance]'
  1073    description: A Replication Controller (RC) is created, read, patched, and deleted
  1074      with verification.
  1075    release: v1.20
  1076    file: test/e2e/apps/rc.go
  1077  - testname: StatefulSet, Burst Scaling
  1078    codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
  1079      Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]'
  1080    description: StatefulSet MUST support the Parallel PodManagementPolicy for burst
  1081      scaling. This test does not depend on a preexisting default StorageClass or a
  1082      dynamic provisioner.
  1083    release: v1.9
  1084    file: test/e2e/apps/statefulset.go
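        # For illustration (excerpt): the burst scaling above corresponds to the Parallel
        # pod management policy on the StatefulSet spec:
        #
        #   apiVersion: apps/v1
        #   kind: StatefulSet
        #   spec:
        #     podManagementPolicy: Parallel   # default is OrderedReady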
  1085  - testname: StatefulSet, Scaling
  1086    codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
  1087      Scaling should happen in predictable order and halt if any stateful pod is unhealthy
  1088      [Slow] [Conformance]'
  1089    description: StatefulSet MUST create Pods in ascending order by ordinal index when
  1090      scaling up, and delete Pods in descending order when scaling down. Scaling up
  1091      or down MUST pause if any Pods belonging to the StatefulSet are unhealthy. This
  1092      test does not depend on a preexisting default StorageClass or a dynamic provisioner.
  1093    release: v1.9
  1094    file: test/e2e/apps/statefulset.go
  1095  - testname: StatefulSet, Recreate Failed Pod
  1096    codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
  1097      Should recreate evicted statefulset [Conformance]'
  1098    description: StatefulSet MUST delete and recreate Pods it owns that go into a Failed
  1099      state, such as when they are rejected or evicted by a Node. This test does not
  1100      depend on a preexisting default StorageClass or a dynamic provisioner.
  1101    release: v1.9
  1102    file: test/e2e/apps/statefulset.go
  1103  - testname: StatefulSet resource Replica scaling
  1104    codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
  1105      should have a working scale subresource [Conformance]'
  1106    description: Create a StatefulSet resource. Newly created StatefulSet resource MUST
  1107      have a scale of one. Bring the scale of the StatefulSet resource up to two. StatefulSet
  1108      scale MUST be at two replicas.
  1109    release: v1.16, v1.21
  1110    file: test/e2e/apps/statefulset.go
  1111  - testname: StatefulSet, list, patch and delete a collection of StatefulSets
  1112    codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
  1113      should list, patch and delete a collection of StatefulSets [Conformance]'
  1114    description: When a StatefulSet is created it MUST succeed. It MUST succeed when
  1115      listing StatefulSets via a label selector. It MUST succeed when patching a StatefulSet.
  1116      It MUST succeed when deleting the StatefulSet via deleteCollection.
  1117    release: v1.22
  1118    file: test/e2e/apps/statefulset.go
  1119  - testname: StatefulSet, Rolling Update with Partition
  1120    codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
  1121      should perform canary updates and phased rolling updates of template modifications
  1122      [Conformance]'
  1123    description: StatefulSet's RollingUpdate strategy MUST support the Partition parameter
  1124      for canaries and phased rollouts. If a Pod is deleted while a rolling update is
  1125      in progress, StatefulSet MUST restore the Pod without violating the Partition.
  1126      This test does not depend on a preexisting default StorageClass or a dynamic provisioner.
  1127    release: v1.9
  1128    file: test/e2e/apps/statefulset.go
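        # For illustration (excerpt, hypothetical value): the canary/phased behavior
        # above is driven by the partition field; only pods with an ordinal greater than
        # or equal to the partition are updated:
        #
        #   apiVersion: apps/v1
        #   kind: StatefulSet
        #   spec:
        #     updateStrategy:
        #       type: RollingUpdate
        #       rollingUpdate:
        #         partition: 2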
  1129  - testname: StatefulSet, Rolling Update
  1130    codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
  1131      should perform rolling updates and roll backs of template modifications [Conformance]'
  1132    description: StatefulSet MUST support the RollingUpdate strategy to automatically
  1133      replace Pods one at a time when the Pod template changes. The StatefulSet's status
  1134      MUST indicate the CurrentRevision and UpdateRevision. If the template is changed
  1135      to match a prior revision, StatefulSet MUST detect this as a rollback instead
  1136      of creating a new revision. This test does not depend on a preexisting default
  1137      StorageClass or a dynamic provisioner.
  1138    release: v1.9
  1139    file: test/e2e/apps/statefulset.go
  1140  - testname: StatefulSet, status sub-resource
  1141    codename: '[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
  1142      should validate Statefulset Status endpoints [Conformance]'
  1143    description: When a StatefulSet is created it MUST succeed. Attempt to read, update
  1144      and patch its status sub-resource; all mutating sub-resource operations MUST be
  1145      visible to subsequent reads.
  1146    release: v1.22
  1147    file: test/e2e/apps/statefulset.go
  1148  - testname: Conformance tests minimum number of nodes.
  1149    codename: '[sig-architecture] Conformance Tests should have at least two untainted
  1150      nodes [Conformance]'
  1151    description: Conformance tests require at least two untainted nodes where pods
  1152      can be scheduled.
  1153    release: v1.23
  1154    file: test/e2e/architecture/conformance.go
  1155  - testname: CertificateSigningRequest API
  1156    codename: '[sig-auth] Certificates API [Privileged:ClusterAdmin] should support
  1157      CSR API operations [Conformance]'
  1158    description: ' The certificates.k8s.io API group MUST exist in the /apis discovery
  1159      document. The certificates.k8s.io/v1 API group/version MUST exist in the /apis/certificates.k8s.io
  1160      discovery document. The certificatesigningrequests, certificatesigningrequests/approval,
  1161      and certificatesigningrequests/status resources MUST exist in the /apis/certificates.k8s.io/v1
  1162      discovery document. The certificatesigningrequests resource must support create,
  1163      get, list, watch, update, patch, delete, and deletecollection. The certificatesigningrequests/approval
  1164      resource must support get, update, patch. The certificatesigningrequests/status
  1165      resource must support get, update, patch.'
  1166    release: v1.19
  1167    file: test/e2e/auth/certificates.go
  1168  - testname: OIDC Discovery (ServiceAccountIssuerDiscovery)
  1169    codename: '[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support
  1170      OIDC discovery of service account issuer [Conformance]'
  1171    description: Ensure kube-apiserver serves correct OIDC discovery endpoints by deploying
  1172      a Pod that verifies its own token against these endpoints.
  1173    release: v1.21
  1174    file: test/e2e/auth/service_accounts.go
  1175  - testname: Service account tokens auto mount optionally
  1176    codename: '[sig-auth] ServiceAccounts should allow opting out of API token automount
  1177      [Conformance]'
  1178    description: Ensure that Service Account keys are mounted into the Pod only when
  1179      AutomountServiceAccountToken is not set to false. We test the following scenarios
  1180      here. 1. Create Pod, Pod Spec has AutomountServiceAccountToken set to nil a) Service
  1181      Account with default value, b) Service Account is configured with AutomountServiceAccountToken
  1182      set to true, c) Service Account is configured with AutomountServiceAccountToken
  1183      set to false 2. Create Pod, Pod Spec has AutomountServiceAccountToken set to true
  1184      a) Service Account with default value, b) Service Account is configured with AutomountServiceAccountToken
  1185      set to true, c) Service Account is configured with AutomountServiceAccountToken
  1186      set to false 3. Create Pod, Pod Spec has AutomountServiceAccountToken set to false
  1187      a) Service Account with default value, b) Service Account is configured with AutomountServiceAccountToken
  1188      set to true, c) Service Account is configured with AutomountServiceAccountToken
  1189      set to false. The Containers running in these pods MUST verify that the ServiceTokenVolume
  1190      path is auto mounted only when the Pod Spec has AutomountServiceAccountToken not
  1191      set to false and the ServiceAccount object has AutomountServiceAccountToken not
  1192      set to false; this includes test cases 1a, 1b, 2a, 2b and 2c. In test cases 1c,
  1193      3a, 3b and 3c the ServiceTokenVolume MUST NOT be auto mounted.
  1194    release: v1.9
  1195    file: test/e2e/auth/service_accounts.go
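        # For illustration (excerpts, hypothetical names): the automount behavior above
        # is controlled at both levels, with the Pod field taking precedence:
        #
        #   apiVersion: v1
        #   kind: ServiceAccount
        #   metadata:
        #     name: example-sa
        #   automountServiceAccountToken: false
        #
        #   apiVersion: v1
        #   kind: Pod
        #   spec:
        #     serviceAccountName: example-sa
        #     automountServiceAccountToken: true   # overrides the ServiceAccount setting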
  1196  - testname: RootCA ConfigMap test
  1197    codename: '[sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in
  1198      any namespace [Conformance]'
  1199    description: Ensure every namespace has a ConfigMap for the root CA cert. 1. Created
  1200      automatically 2. Recreated if deleted 3. Reconciled if modified
  1201    release: v1.21
  1202    file: test/e2e/auth/service_accounts.go
  1203  - testname: Service Account Tokens Must AutoMount
  1204    codename: '[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]'
  1205    description: Ensure that Service Account keys are mounted into the Container. The
  1206      Pod contains three containers, each of which will read the Service Account token,
  1207      root CA and default namespace respectively from the default API Token Mount path.
  1208      All three files MUST exist and the Service Account mount path MUST be auto mounted
  1209      to the Container.
  1210    release: v1.9
  1211    file: test/e2e/auth/service_accounts.go
  1212  - testname: TokenRequestProjection should mount a projected volume with token using
  1213      TokenRequest API.
  1214    codename: '[sig-auth] ServiceAccounts should mount projected service account token
  1215      [Conformance]'
  1216    description: Ensure that a projected service account token is mounted.
  1217    release: v1.20
  1218    file: test/e2e/auth/service_accounts.go
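        # For illustration (excerpt, hypothetical names): a projected service account
        # token volume of the kind mounted above:
        #
        #   apiVersion: v1
        #   kind: Pod
        #   spec:
        #     containers:
        #     - name: app
        #       image: busybox
        #       volumeMounts:
        #       - name: sa-token
        #         mountPath: /var/run/secrets/tokens
        #     volumes:
        #     - name: sa-token
        #       projected:
        #         sources:
        #         - serviceAccountToken:
        #             path: token
        #             expirationSeconds: 3600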
  1219  - testname: ServiceAccount lifecycle test
  1220    codename: '[sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount
  1221      [Conformance]'
  1222    description: Create a ServiceAccount with a static Label; the Label MUST be added
  1223      as shown in the watch event. Patching the ServiceAccount MUST return its new property.
  1224      Listing the ServiceAccounts MUST return the test ServiceAccount with its patched
  1225      values. The ServiceAccount MUST be deleted and a deleted watch event MUST be found.
  1226    release: v1.19
  1227    file: test/e2e/auth/service_accounts.go
  1228  - testname: ServiceAccount, update a ServiceAccount
  1229    codename: '[sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]'
  1230    description: A ServiceAccount is created which MUST succeed. When updating the ServiceAccount
  1231      it MUST succeed and the field MUST equal the new value.
  1232    release: v1.26
  1233    file: test/e2e/auth/service_accounts.go
  1234  - testname: SubjectReview, API Operations
  1235    codename: '[sig-auth] SubjectReview should support SubjectReview API operations
  1236      [Conformance]'
  1237    description: A ServiceAccount is created which MUST succeed. A clientset is created
  1238      to impersonate the ServiceAccount. A SubjectAccessReview is created for the ServiceAccount
  1239      which MUST succeed. The allowed status for the SubjectAccessReview MUST match
  1240      the expected allowed for the impersonated client call. A LocalSubjectAccessReview
  1241      is created for the ServiceAccount which MUST succeed. The allowed status for the
  1242      LocalSubjectAccessReview MUST match the expected allowed for the impersonated
  1243      client call.
  1244    release: v1.27
  1245    file: test/e2e/auth/subjectreviews.go
  1246  - testname: Kubectl, guestbook application
  1247    codename: '[sig-cli] Kubectl client Guestbook application should create and stop
  1248      a working application [Conformance]'
  1249    description: Create a Guestbook application that contains an agnhost primary server,
  1250      2 agnhost replicas, a frontend application, a frontend service, an agnhost primary
  1251      service and an agnhost replica service. Using the frontend service, the test will
  1252      write an entry into the guestbook application which will store the entry into the
  1253      backend agnhost store. The application flow MUST work as expected and the data
  1254      written MUST be available to read.
  1255    release: v1.9
  1256    file: test/e2e/kubectl/kubectl.go
  1257  - testname: Kubectl, check version v1
  1258    codename: '[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in
  1259      available api versions [Conformance]'
  1260    description: Run kubectl to get api versions; the output MUST contain the returned
  1261      versions with 'v1' listed.
  1262    release: v1.9
  1263    file: test/e2e/kubectl/kubectl.go
  1264  - testname: Kubectl, cluster info
  1265    codename: '[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes
  1266      control plane services is included in cluster-info [Conformance]'
  1267    description: Call kubectl to get cluster-info; the output MUST contain the cluster-info
  1268      and the Kubernetes control plane SHOULD be reported as running.
  1269    release: v1.9
  1270    file: test/e2e/kubectl/kubectl.go
  1271  - testname: Kubectl, describe pod or rc
  1272    codename: '[sig-cli] Kubectl client Kubectl describe should check if kubectl describe
  1273      prints relevant information for rc and pods [Conformance]'
  1274    description: Deploy an agnhost controller and an agnhost service. Kubectl describe
  1275      pods SHOULD return the name, namespace, labels, state and other information as
  1276      expected. Kubectl describe on rc, service, node and namespace SHOULD also return
  1277      proper information.
  1278    release: v1.9
  1279    file: test/e2e/kubectl/kubectl.go
  1280  - testname: Kubectl, diff Deployment
  1281    codename: '[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds
  1282      a difference for Deployments [Conformance]'
  1283    description: Create a Deployment with httpd image. Declare the same Deployment with
  1284      a different image, busybox. Diff of live Deployment with declared Deployment MUST
  1285      include the difference between live and declared image.
  1286    release: v1.19
  1287    file: test/e2e/kubectl/kubectl.go
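        # For illustration (hypothetical file names): the diff flow above, by hand:
        #
        #   kubectl apply -f deployment-httpd.yaml
        #   kubectl diff -f deployment-busybox.yaml   # output includes the image change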
  1288  - testname: Kubectl, create service, replication controller
  1289    codename: '[sig-cli] Kubectl client Kubectl expose should create services for rc
  1290      [Conformance]'
  1291    description: Create a Pod running agnhost listening on port 6379. Using kubectl,
  1292      expose the agnhost primary replication controller at port 1234. Validate that
  1293      the replication controller is listening on port 1234 with the target port set
  1294      to 6379, the port on which agnhost primary is listening. Using kubectl, expose
  1295      the agnhost primary as a service at port 2345. The service MUST listen on port
  1296      2345 with the target port set to 6379, the port on which agnhost primary is listening.
  1297    release: v1.9
  1298    file: test/e2e/kubectl/kubectl.go
  1299  - testname: Kubectl, label update
  1300    codename: '[sig-cli] Kubectl client Kubectl label should update the label on a resource
  1301      [Conformance]'
  1302    description: When a Pod is running, update a Label using the 'kubectl label' command.
  1303      The label MUST be created on the Pod. A 'kubectl get pod' with the -l option MUST
  1304      verify that the label can be read back. Use 'kubectl label' with a trailing '-'
  1305      on the label key to remove the label. 'kubectl get pod' with the -l option SHOULD
  1306      NOT list the Pod once the label is removed.
  1307    release: v1.9
  1308    file: test/e2e/kubectl/kubectl.go
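        # For illustration (hypothetical names): the label round trip above, by hand:
        #
        #   kubectl label pod example-pod testing-label=true
        #   kubectl get pod -l testing-label=true
        #   kubectl label pod example-pod testing-label-   # trailing '-' removes the label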
  1309  - testname: Kubectl, patch to annotate
  1310    codename: '[sig-cli] Kubectl client Kubectl patch should add annotations for pods
  1311      in rc [Conformance]'
  1312    description: Start running agnhost and a replication controller. When the pod is
  1313      running, add annotations using the 'kubectl patch' command. The annotation MUST
  1314      be added to the running pods and it SHOULD be possible to read the added annotations
  1315      from each of the Pods running under the replication controller.
  1316    release: v1.9
  1317    file: test/e2e/kubectl/kubectl.go
  1318  - testname: Kubectl, replace
  1319    codename: '[sig-cli] Kubectl client Kubectl replace should update a single-container
  1320      pod''s image [Conformance]'
  1321    description: Command 'kubectl replace' on an existing Pod with a new spec MUST update
  1322      the image of the container running in the Pod. A -f option to 'kubectl replace'
  1323      SHOULD force re-creation of the resource. The new Pod SHOULD have the container
  1324      with the change to the image applied.
  1325    release: v1.9
  1326    file: test/e2e/kubectl/kubectl.go
  1327  - testname: Kubectl, run pod
  1328    codename: '[sig-cli] Kubectl client Kubectl run pod should create a pod from an
  1329      image when restart is Never [Conformance]'
  1330    description: Command 'kubectl run' MUST create a pod when an image name is specified
  1331      in the run command. After the run command, a pod SHOULD exist with one container
  1332      running the specified image.
  1333    release: v1.9
  1334    file: test/e2e/kubectl/kubectl.go
  1335  - testname: Kubectl, server-side dry-run Pod
  1336    codename: '[sig-cli] Kubectl client Kubectl server-side dry-run should check if
  1337      kubectl can dry-run update Pods [Conformance]'
  1338    description: The command 'kubectl run' must create a pod with the specified image
  1339      name. Afterwards, the command 'kubectl patch pod -p {...} --dry-run=server' should
  1340      update the Pod with the new image name and server-side dry-run enabled. The image
  1341      name must not change.
  1342    release: v1.19
  1343    file: test/e2e/kubectl/kubectl.go
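        # For illustration (hypothetical names): a server-side dry-run patch as above;
        # the change is validated by the server but never persisted:
        #
        #   kubectl patch pod example-pod --dry-run=server \
        #     -p '{"spec":{"containers":[{"name":"example","image":"busybox"}]}}'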
  1344  - testname: Kubectl, version
  1345    codename: '[sig-cli] Kubectl client Kubectl version should check is all data is
  1346      printed [Conformance]'
  1347    description: The command 'kubectl version' MUST return the major and minor versions,
  1348      GitCommit, etc. of the Client and of the Server that kubectl is configured to connect to.
  1349    release: v1.9
  1350    file: test/e2e/kubectl/kubectl.go
  1351  - testname: Kubectl, proxy socket
  1352    codename: '[sig-cli] Kubectl client Proxy server should support --unix-socket=/path
  1353      [Conformance]'
  1354    description: Start a proxy server by running 'kubectl proxy' with --unix-socket=<some
  1355      path>. Call the proxy server by requesting api versions from http://localhost:0/api.
  1356      The proxy server MUST provide at least one version string.
  1357    release: v1.9
  1358    file: test/e2e/kubectl/kubectl.go
  1359  - testname: Kubectl, proxy port zero
  1360    codename: '[sig-cli] Kubectl client Proxy server should support proxy with --port
  1361      0 [Conformance]'
  1362    description: Start a proxy server on port zero by running 'kubectl proxy' with --port=0.
  1363      Call the proxy server by requesting api versions from the unix socket. The proxy server
  1364      MUST provide at least one version string.
  1365    release: v1.9
  1366    file: test/e2e/kubectl/kubectl.go
  1367  - testname: Kubectl, replication controller
  1368    codename: '[sig-cli] Kubectl client Update Demo should create and stop a replication
  1369      controller [Conformance]'
  1370    description: Create a Pod and a container with a given image. Configure replication
  1371      controller to run 2 replicas. The number of running instances of the Pod MUST
  1372      equal the number of replicas set on the replication controller which is 2.
  1373    release: v1.9
  1374    file: test/e2e/kubectl/kubectl.go
  1375  - testname: Kubectl, scale replication controller
  1376    codename: '[sig-cli] Kubectl client Update Demo should scale a replication controller
  1377      [Conformance]'
  1378    description: Create a Pod and a container with a given image. Configure replication
  1379      controller to run 2 replicas. The number of running instances of the Pod MUST
  1380      equal the number of replicas set on the replication controller which is 2. Update
  1381      the replica count to 1. The number of running instances of the Pod MUST be 1. Update
  1382      the replica count to 2. The number of running instances of the Pod MUST be 2.
  1383    release: v1.9
  1384    file: test/e2e/kubectl/kubectl.go
  1385  - testname: Kubectl, logs
  1386    codename: '[sig-cli] Kubectl logs logs should be able to retrieve and filter logs
  1387      [Conformance]'
  1388    description: When a Pod is running it MUST generate logs. Starting a Pod should
  1389      produce an expected log line. Log command options MUST also work as expected and
  1390      described below. 'kubectl logs --tail=1' should output one line, the last line
  1391      in the log. 'kubectl logs --limit-bytes=1' should output a single byte. 'kubectl
  1392      logs --tail=1 --timestamps' should output one line with a timestamp in RFC3339
  1393      format. 'kubectl logs --since=1s' should output only logs that are at most 1 second
  1394      old. 'kubectl logs --since=24h' should output only logs that are at most 1 day old.
  1395    release: v1.9
  1396    file: test/e2e/kubectl/logs.go
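        # For illustration (hypothetical pod name): the log options exercised above:
        #
        #   kubectl logs example-pod --tail=1
        #   kubectl logs example-pod --limit-bytes=1
        #   kubectl logs example-pod --tail=1 --timestamps
        #   kubectl logs example-pod --since=1s
        #   kubectl logs example-pod --since=24h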
  1397  - testname: New Event resource lifecycle, testing a list of events
  1398    codename: '[sig-instrumentation] Events API should delete a collection of events
  1399      [Conformance]'
  1400    description: Create a list of events, the events MUST exist. The events are deleted
  1401      and MUST NOT show up when listing all events.
  1402    release: v1.19
  1403    file: test/e2e/instrumentation/events.go
  1404  - testname: New Event resource lifecycle, testing a single event
  1405    codename: '[sig-instrumentation] Events API should ensure that an event can be fetched,
  1406      patched, deleted, and listed [Conformance]'
  1407    description: Create an event, the event MUST exist. The event is patched with a
  1408      new note, the check MUST have the updated note. The event is updated with a new
  1409      series, the check MUST have the updated series. The event is deleted and MUST NOT
  1410      show up when listing all events.
  1411    release: v1.19
  1412    file: test/e2e/instrumentation/events.go
  1413  - testname: Event, delete a collection
  1414    codename: '[sig-instrumentation] Events should delete a collection of events [Conformance]'
  1415    description: A set of events is created with a label selector which MUST be found
  1416      when listed. The set of events is deleted and MUST NOT show up when listed by
  1417      its label selector.
  1418    release: v1.20
  1419    file: test/e2e/instrumentation/core_events.go
  1420  - testname: Event, manage lifecycle of an Event
  1421    codename: '[sig-instrumentation] Events should manage the lifecycle of an event
  1422      [Conformance]'
  1423    description: Attempt to create an event which MUST succeed. Attempt to list all
  1424      namespaces with a label selector which MUST succeed. One list MUST be found. The
  1425      event is patched with a new message, the check MUST have the updated message. The
  1426      event is updated with a new series of events, the check MUST confirm this update.
  1427      The event is deleted and MUST NOT show up when listing all events.
  1428    release: v1.25
  1429    file: test/e2e/instrumentation/core_events.go
  1430  - testname: DNS, cluster
  1431    codename: '[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]'
  1432    description: When a Pod is created, the pod MUST be able to resolve cluster dns
  1433      entries such as kubernetes.default via /etc/hosts.
  1434    release: v1.14
  1435    file: test/e2e/network/dns.go
  1436  - testname: DNS, for ExternalName Services
  1437    codename: '[sig-network] DNS should provide DNS for ExternalName services [Conformance]'
  1438    description: Create a service with externalName. Pod MUST be able to resolve the
  1439      address for this service via CNAME. When externalName of this service is changed,
  1440      the Pod MUST resolve to the new DNS entry for the service. Change the service type from
  1441      externalName to ClusterIP, Pod MUST resolve DNS to the service by serving A records.
  1442    release: v1.15
  1443    file: test/e2e/network/dns.go
  1444  - testname: DNS, resolve the hostname
  1445    codename: '[sig-network] DNS should provide DNS for pods for Hostname [Conformance]'
  1446    description: Create a headless service with a label. Create a Pod with a label matching
  1447      the service's label, with a hostname and a subdomain the same as the service name.
  1448      The Pod MUST be able to resolve its fully qualified domain name as well as its hostname
  1449      by serving an A record at that name.
  1450    release: v1.15
  1451    file: test/e2e/network/dns.go
  1452  - testname: DNS, resolve the subdomain
  1453    codename: '[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]'
  1454    description: Create a headless service with a label. Create a Pod with a label matching
  1455      the service's label, with a hostname and a subdomain the same as the service name.
  1456      The Pod MUST be able to resolve its fully qualified domain name as well as the subdomain
  1457      by serving an A record at that name.
  1458    release: v1.15
  1459    file: test/e2e/network/dns.go
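        # For illustration (excerpts, hypothetical names): the hostname/subdomain
        # resolution above; the Pod becomes resolvable as
        # <hostname>.<subdomain>.<namespace>.svc.<cluster-domain>:
        #
        #   apiVersion: v1
        #   kind: Service
        #   metadata:
        #     name: sub            # headless service; its name doubles as the subdomain
        #   spec:
        #     clusterIP: None
        #     selector:
        #       app: example
        #     ports:
        #     - port: 80
        #   ---
        #   apiVersion: v1
        #   kind: Pod
        #   spec:
        #     hostname: host-1
        #     subdomain: sub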
  1460  - testname: DNS, services
  1461    codename: '[sig-network] DNS should provide DNS for services [Conformance]'
  1462    description: When a headless service is created, the service MUST be able to resolve
  1463      all the required service endpoints. When the service is created, any pod in the
  1464      same namespace MUST be able to resolve the service by all of the expected DNS
  1465      names.
  1466    release: v1.9
  1467    file: test/e2e/network/dns.go
  1468  - testname: DNS, cluster
  1469    codename: '[sig-network] DNS should provide DNS for the cluster [Conformance]'
  1470    description: When a Pod is created, the pod MUST be able to resolve cluster dns
  1471      entries such as kubernetes.default via DNS.
  1472    release: v1.9
  1473    file: test/e2e/network/dns.go
  1474  - testname: DNS, PQDN for services
  1475    codename: '[sig-network] DNS should resolve DNS of partial qualified names for services
  1476      [LinuxOnly] [Conformance]'
  1477    description: 'Create a headless service and a normal service. Both services MUST
  1478      be able to resolve partial qualified DNS entries of their service endpoints by
  1479      serving A records and SRV records. [LinuxOnly]: As Windows currently does not
  1480      support resolving PQDNs.'
  1481    release: v1.17
  1482    file: test/e2e/network/dns.go
  1483  - testname: DNS, custom dnsConfig
  1484    codename: '[sig-network] DNS should support configurable pod DNS nameservers [Conformance]'
  1485    description: Create a Pod with DNSPolicy as None and custom DNS configuration, specifying
  1486      nameservers and search path entries. Pod creation MUST be successful and the provided
  1487      DNS configuration MUST be configured in the Pod.
  1488    release: v1.17
  1489    file: test/e2e/network/dns.go
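        # For illustration (excerpt): a Pod with DNSPolicy None and a custom dnsConfig as
        # exercised above:
        #
        #   apiVersion: v1
        #   kind: Pod
        #   spec:
        #     dnsPolicy: "None"
        #     dnsConfig:
        #       nameservers:
        #       - 1.2.3.4
        #       searches:
        #       - ns1.svc.cluster-domain.example
        #       options:
        #       - name: ndots
        #         value: "2"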
  1490  - testname: EndpointSlice API
  1491    codename: '[sig-network] EndpointSlice should create Endpoints and EndpointSlices
  1492      for Pods matching a Service [Conformance]'
  1493    description: The discovery.k8s.io API group MUST exist in the /apis discovery document.
  1494      The discovery.k8s.io/v1 API group/version MUST exist in the /apis/discovery.k8s.io
  1495      discovery document. The endpointslices resource MUST exist in the /apis/discovery.k8s.io/v1
  1496      discovery document. The endpointslice controller must create EndpointSlices for
  1497      Pods matching a Service.
  1498    release: v1.21
  1499    file: test/e2e/network/endpointslice.go
  1500  - testname: EndpointSlice API
  1501    codename: '[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices
  1502      for a Service with a selector specified [Conformance]'
  1503    description: The discovery.k8s.io API group MUST exist in the /apis discovery document.
  1504      The discovery.k8s.io/v1 API group/version MUST exist in the /apis/discovery.k8s.io
  1505      discovery document. The endpointslices resource MUST exist in the /apis/discovery.k8s.io/v1
  1506      discovery document. The endpointslice controller should create and delete EndpointSlices
  1507      for Pods matching a Service.
  1508    release: v1.21
  1509    file: test/e2e/network/endpointslice.go
  1510  - testname: EndpointSlice API
  1511    codename: '[sig-network] EndpointSlice should have Endpoints and EndpointSlices
  1512      pointing to API Server [Conformance]'
  1513    description: The discovery.k8s.io API group MUST exist in the /apis discovery document.
  1514      The discovery.k8s.io/v1 API group/version MUST exist in the /apis/discovery.k8s.io
  1515      discovery document. The endpointslices resource MUST exist in the /apis/discovery.k8s.io/v1
  1516      discovery document. The cluster MUST have a service named "kubernetes" in the
  1517      default namespace referencing the API servers. The "kubernetes.default" service
  1518      MUST have Endpoints and EndpointSlices pointing to each API server instance.
  1519    release: v1.21
  1520    file: test/e2e/network/endpointslice.go
  1521  - testname: EndpointSlice API
  1522    codename: '[sig-network] EndpointSlice should support creating EndpointSlice API
  1523      operations [Conformance]'
  1524    description: The discovery.k8s.io API group MUST exist in the /apis discovery document.
  1525      The discovery.k8s.io/v1 API group/version MUST exist in the /apis/discovery.k8s.io
  1526      discovery document. The endpointslices resource MUST exist in the /apis/discovery.k8s.io/v1
  1527      discovery document. The endpointslices resource must support create, get, list,
  1528      watch, update, patch, delete, and deletecollection.
  1529    release: v1.21
  1530    file: test/e2e/network/endpointslice.go
  1531  - testname: EndpointSlice Mirroring
  1532    codename: '[sig-network] EndpointSliceMirroring should mirror a custom Endpoints
  1533      resource through create update and delete [Conformance]'
  1534    description: The discovery.k8s.io API group MUST exist in the /apis discovery document.
  1535      The discovery.k8s.io/v1 API group/version MUST exist in the /apis/discovery.k8s.io
  1536      discovery document. The endpointslices resource MUST exist in the /apis/discovery.k8s.io/v1
  1537      discovery document. The endpointslices mirroring must mirror endpoint create,
  1538      update, and delete actions.
  1539    release: v1.21
  1540    file: test/e2e/network/endpointslicemirroring.go
  1541  - testname: Scheduling, HostPort matching and HostIP and Protocol not-matching
  1542    codename: '[sig-network] HostPort validates that there is no conflict between pods
  1543      with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]'
  1544    description: Pods with the same HostPort value MUST be able to be scheduled to the
  1545      same node if the HostIP or Protocol is different. This test is marked LinuxOnly
  1546      since hostNetwork is not supported on Windows.
  1547    release: v1.16, v1.21
  1548    file: test/e2e/network/hostport.go
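        # For illustration (excerpt, hypothetical values): two pods of the kind above may
        # share a hostPort as long as the hostIP or protocol differs:
        #
        #   apiVersion: v1
        #   kind: Pod
        #   spec:
        #     containers:
        #     - name: app
        #       image: busybox
        #       ports:
        #       - containerPort: 8080
        #         hostPort: 8080      # a second pod may reuse this hostPort with a
        #         hostIP: 127.0.0.1   # different hostIP or a different protocol
        #         protocol: TCP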
  1549  - testname: Ingress API
  1550    codename: '[sig-network] Ingress API should support creating Ingress API operations
  1551      [Conformance]'
  1552    description: ' The networking.k8s.io API group MUST exist in the /apis discovery
  1553      document. The networking.k8s.io/v1 API group/version MUST exist in the /apis/networking.k8s.io
  1554      discovery document. The ingresses resource MUST exist in the /apis/networking.k8s.io/v1
  1555      discovery document. The ingresses resource must support create, get, list, watch,
  1556      update, patch, delete, and deletecollection. The ingresses/status resource must
  1557      support update and patch.'
  1558    release: v1.19
  1559    file: test/e2e/network/ingress.go
  1560  - testname: IngressClass API
  1561    codename: '[sig-network] IngressClass API should support creating IngressClass API
  1562      operations [Conformance]'
  1563    description: ' - The networking.k8s.io API group MUST exist in the /apis discovery
  1564      document. - The networking.k8s.io/v1 API group/version MUST exist in the /apis/networking.k8s.io
  1565      discovery document. - The ingressclasses resource MUST exist in the /apis/networking.k8s.io/v1
  1566      discovery document. - The ingressclass resource must support create, get, list,
  1567      watch, update, patch, delete, and deletecollection.'
  1568    release: v1.19
  1569    file: test/e2e/network/ingressclass.go
  1570  - testname: Networking, intra pod http
  1571    codename: '[sig-network] Networking Granular Checks: Pods should function for intra-pod
  1572      communication: http [NodeConformance] [Conformance]'
  1573    description: Create a hostexec pod that is capable of running curl and netcat commands.
  1574      Create a test Pod that will act as a webserver front end exposing ports 8080 for
  1575      tcp and 8081 for udp. The netserver service proxies are created on a specified
  1576      number of nodes. The kubectl exec on the webserver container MUST reach a http
  1577      port on each of the service proxy endpoints in the cluster and the request MUST
  1578      be successful. The container will execute a curl command to reach the service port
  1579      within a specified max retry limit and MUST result in reporting unique hostnames.
  1580    release: v1.9, v1.18
  1581    file: test/e2e/common/network/networking.go
  1582  - testname: Networking, intra pod udp
  1583    codename: '[sig-network] Networking Granular Checks: Pods should function for intra-pod
  1584      communication: udp [NodeConformance] [Conformance]'
  1585    description: Create a hostexec pod that is capable of running curl and netcat commands.
  1586      Create a test Pod that will act as a webserver front end exposing ports 8080 for
  1587      tcp and 8081 for udp. The netserver service proxies are created on a specified
  1588      number of nodes. The kubectl exec on the webserver container MUST reach a udp
  1589      port on each of the service proxy endpoints in the cluster and the request MUST
  1590      be successful. The container will execute a curl command to reach the service port
  1591      within a specified max retry limit and MUST result in reporting unique hostnames.
  1592    release: v1.9, v1.18
  1593    file: test/e2e/common/network/networking.go
  1594  - testname: Networking, intra pod http, from node
  1595    codename: '[sig-network] Networking Granular Checks: Pods should function for node-pod
  1596      communication: http [LinuxOnly] [NodeConformance] [Conformance]'
  1597    description: Create a hostexec pod that is capable of running curl and netcat commands.
  1598      Create a test Pod that will act as a webserver front end exposing ports 8080 for
  1599      tcp and 8081 for udp. The netserver service proxies are created on a specified
  1600      number of nodes. The kubectl exec on the webserver container MUST reach a http
  1601      port on each of the service proxy endpoints in the cluster using a http post (protocol=tcp)
  1602      and the request MUST be successful. The container will execute a curl command to
  1603      reach the service port within a specified max retry limit and MUST result in reporting
  1604      unique hostnames. This test is marked LinuxOnly since it breaks when using Overlay
  1605      networking with Windows.
  1606    release: v1.9
  1607    file: test/e2e/common/network/networking.go
  1608  - testname: Networking, intra pod udp, from node
  1609    codename: '[sig-network] Networking Granular Checks: Pods should function for node-pod
  1610      communication: udp [LinuxOnly] [NodeConformance] [Conformance]'
  1611    description: Create a hostexec pod that is capable of running curl and netcat commands.
  1612      Create a test Pod that will act as a webserver front end exposing ports 8080 for
  1613      tcp and 8081 for udp. The netserver service proxies are created on a specified
  1614      number of nodes. The kubectl exec on the webserver container MUST reach a udp
  1615      port on each of the service proxy endpoints in the cluster using a http post (protocol=udp)
  1616      and the request MUST be successful. The container will execute a curl command to
  1617      reach the service port within a specified max retry limit and MUST result in reporting
  1618      unique hostnames. This test is marked LinuxOnly since it breaks when using Overlay
  1619      networking with Windows.
  1620    release: v1.9
  1621    file: test/e2e/common/network/networking.go
  1622  - testname: Proxy, validate Proxy responses
  1623    codename: '[sig-network] Proxy version v1 A set of valid responses are returned
  1624      for both pod and service Proxy [Conformance]'
  1625    description: Attempt to create a pod and a service. A set of pod and service endpoints
  1626      MUST be accessed via Proxy using a list of http methods. A valid response MUST
  1627      be returned for each endpoint.
  1628    release: v1.24
  1629    file: test/e2e/network/proxy.go
  1630  - testname: Proxy, validate ProxyWithPath responses
  1631    codename: '[sig-network] Proxy version v1 A set of valid responses are returned
  1632      for both pod and service ProxyWithPath [Conformance]'
  1633    description: Attempt to create a pod and a service. A set of pod and service endpoints
  1634      MUST be accessed via ProxyWithPath using a list of http methods. A valid response
  1635      MUST be returned for each endpoint.
  1636    release: v1.21
  1637    file: test/e2e/network/proxy.go
  1638  - testname: Proxy, logs service endpoint
  1639    codename: '[sig-network] Proxy version v1 should proxy through a service and a pod
  1640      [Conformance]'
  1641    description: Select any node in the cluster and invoke its /logs endpoint using
  1642      the /nodes/proxy subresource from the kubelet port. This endpoint MUST be reachable.
  1643    release: v1.9
  1644    file: test/e2e/network/proxy.go
  1645  - testname: Service endpoint latency, thresholds
  1646    codename: '[sig-network] Service endpoints latency should not be very high [Conformance]'
  1647    description: Run 100 iterations of creating a service with the Pod running the pause
  1648      image, measuring the time from creating the service until the endpoint with the
  1649      service name is available. These durations are captured for 100 iterations, then
  1650      the durations are sorted to compute the 50th, 90th and 99th percentiles. The single
  1651      server latency MUST NOT exceed liberally set thresholds of 20s for the 50th percentile
  1652      and 50s for the 90th percentile.
  1653    release: v1.9
  1654    file: test/e2e/network/service_latency.go
  1655  - testname: Service, change type, ClusterIP to ExternalName
  1656    codename: '[sig-network] Services should be able to change the type from ClusterIP
  1657      to ExternalName [Conformance]'
  1658    description: Create a service of type ClusterIP. Service creation MUST be successful
  1659      by assigning ClusterIP to the service. Update the service type from ClusterIP to
  1660      ExternalName by setting a CNAME entry as the externalName. Service update MUST be
  1661      successful and the service MUST not have an associated ClusterIP. Service MUST be able
  1662      to resolve to an IP address by returning A records, ensuring the service points to the provided externalName.
  1663    release: v1.16
  1664    file: test/e2e/network/service.go
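        # For illustration, a minimal sketch of the Service shape this test updates to;
        # the name and CNAME target below are hypothetical, not values used by the e2e
        # test itself.
        #
        #   apiVersion: v1
        #   kind: Service
        #   metadata:
        #     name: example-svc
        #   spec:
        #     type: ExternalName
        #     externalName: example.example.com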
  1665  - testname: Service, change type, ExternalName to ClusterIP
  1666    codename: '[sig-network] Services should be able to change the type from ExternalName
  1667      to ClusterIP [Conformance]'
  1668    description: Create a service of type ExternalName, pointing to external DNS. ClusterIP
  1669      MUST not be assigned to the service. Update the service from ExternalName to ClusterIP
  1670      by removing the ExternalName entry, assigning port 80 as the service port and TCP as
  1671      the protocol. Service update MUST be successful by assigning ClusterIP to the service
  1672      and it MUST be reachable over serviceName and ClusterIP on the provided service port.
  1673    release: v1.16
  1674    file: test/e2e/network/service.go
  1675  - testname: Service, change type, ExternalName to NodePort
  1676    codename: '[sig-network] Services should be able to change the type from ExternalName
  1677      to NodePort [Conformance]'
  1678    description: Create a service of type ExternalName, pointing to external DNS. ClusterIP
  1679      MUST not be assigned to the service. Update the service from ExternalName to NodePort,
  1680      assigning port 80 as the service port and TCP as the protocol. Service update MUST be
  1681      successful by exposing the service on every node's IP on a dynamically assigned NodePort,
  1682      and ClusterIP MUST be assigned to route service requests. Service MUST be reachable
  1683      over serviceName and the ClusterIP on servicePort. Service MUST also be reachable
  1684      over the node's IP on NodePort.
  1685    release: v1.16
  1686    file: test/e2e/network/service.go
  1687  - testname: Service, change type, NodePort to ExternalName
  1688    codename: '[sig-network] Services should be able to change the type from NodePort
  1689      to ExternalName [Conformance]'
  1690    description: Create a service of type NodePort. Service creation MUST be successful
  1691      by exposing the service on every node's IP on a dynamically assigned NodePort, and
  1692      ClusterIP MUST be assigned to route service requests. Update the service type from
  1693      NodePort to ExternalName by setting a CNAME entry as the externalName. Service update
  1694      MUST be successful, the service MUST not have an associated ClusterIP, and the allocated
  1695      NodePort MUST be released. Service MUST be able to resolve to an IP address by returning
  1696      A records, ensuring the service points to the provided externalName.
  1697    release: v1.16
  1698    file: test/e2e/network/service.go
  1699  - testname: Service, NodePort Service
  1700    codename: '[sig-network] Services should be able to create a functioning NodePort
  1701      service [Conformance]'
  1702    description: Create a TCP NodePort service, and test reachability from a client
  1703      Pod. The client Pod MUST be able to access the NodePort service by service name
  1704      and cluster IP on the service port, and on nodes' internal and external IPs on
  1705      the NodePort.
  1706    release: v1.16
  1707    file: test/e2e/network/service.go
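        # For illustration, a minimal NodePort Service sketch of the kind this test
        # creates; the name, selector and ports are hypothetical. The nodePort field is
        # left unset so the apiserver assigns one dynamically.
        #
        #   apiVersion: v1
        #   kind: Service
        #   metadata:
        #     name: example-nodeport-svc
        #   spec:
        #     type: NodePort
        #     selector:
        #       app: example
        #     ports:
        #     - protocol: TCP
        #       port: 80
        #       targetPort: 8080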
  1708  - testname: Service, NodePort type, session affinity to None
  1709    codename: '[sig-network] Services should be able to switch session affinity for
  1710      NodePort service [LinuxOnly] [Conformance]'
  1711    description: 'Create a service of type "NodePort" and provide the service port and
  1712      protocol. Service''s sessionAffinity is set to "ClientIP". Service creation MUST be
  1713      successful by assigning a "ClusterIP" to the service and allocating NodePort on all
  1714      the nodes. Create a Replication Controller to ensure that 3 pods are running and are
  1715      targeted by the service to serve the hostname of the pod when requests are sent to
  1716      the service. Create another pod to make requests to the service. Update the service''s
  1717      sessionAffinity to "None". Service update MUST be successful. When requests are made
  1718      to the service on the node''s IP and NodePort, the service MUST be able to serve the
  1719      hostname from any pod of the replica. When the service''s sessionAffinity is updated
  1720      back to "ClientIP", the service MUST serve the hostname from the same pod of the
  1721      replica for all consecutive requests. Service MUST be reachable over serviceName and
  1722      the ClusterIP on servicePort. Service MUST also be reachable over the node''s IP on
  1723      NodePort. [LinuxOnly]: Windows does not support session affinity.'
  1724    release: v1.19
  1725    file: test/e2e/network/service.go
  1726  - testname: Service, ClusterIP type, session affinity to None
  1727    codename: '[sig-network] Services should be able to switch session affinity for
  1728      service with type clusterIP [LinuxOnly] [Conformance]'
  1729    description: 'Create a service of type "ClusterIP". Service''s sessionAffinity is
  1730      set to "ClientIP". Service creation MUST be successful by assigning "ClusterIP"
  1731      to the service. Create a Replication Controller to ensure that 3 pods are running
  1732      and are targeted by the service to serve the hostname of the pod when requests
  1733      are sent to the service. Create another pod to make requests to the service. Update
  1734      the service''s sessionAffinity to "None". Service update MUST be successful. When
  1735      requests are made to the service, it MUST be able to serve the hostname from any
  1736      pod of the replica. When the service''s sessionAffinity is updated back to "ClientIP",
  1737      the service MUST serve the hostname from the same pod of the replica for all consecutive
  1738      requests. Service MUST be reachable over serviceName and the ClusterIP on servicePort.
  1739      [LinuxOnly]: Windows does not support session affinity.'
  1740    release: v1.19
  1741    file: test/e2e/network/service.go
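        # For illustration, the Service spec fields these session-affinity tests toggle,
        # shown as a hypothetical fragment rather than the tests' own manifests.
        #
        #   spec:
        #     type: ClusterIP
        #     sessionAffinity: ClientIP   # switched to "None" and back by the tests
        #     sessionAffinityConfig:
        #       clientIP:
        #         timeoutSeconds: 10800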
  1742  - testname: Service, complete ServiceStatus lifecycle
  1743    codename: '[sig-network] Services should complete a service status lifecycle [Conformance]'
  1744    description: Create a service, the service MUST exist. When retrieving /status the
  1745      action MUST be validated. When patching /status the action MUST be validated.
  1746      When updating /status the action MUST be validated. When patching a service the
  1747      action MUST be validated.
  1748    release: v1.21
  1749    file: test/e2e/network/service.go
  1750  - testname: Service, deletes a collection of services
  1751    codename: '[sig-network] Services should delete a collection of services [Conformance]'
  1752    description: Create three services with the required labels and ports. It MUST locate
  1753      three services in the test namespace. It MUST succeed at deleting a collection
  1754      of services via a label selector. It MUST locate only one service after deleting
  1755      the service collection.
  1756    release: v1.23
  1757    file: test/e2e/network/service.go
  1758  - testname: Find Kubernetes Service in default Namespace
  1759    codename: '[sig-network] Services should find a service from listing all namespaces
  1760      [Conformance]'
  1761    description: List all Services in all Namespaces, response MUST include a Service
  1762      named Kubernetes with the Namespace of default.
  1763    release: v1.18
  1764    file: test/e2e/network/service.go
  1765  - testname: Service, NodePort type, session affinity to ClientIP
  1766    codename: '[sig-network] Services should have session affinity work for NodePort
  1767      service [LinuxOnly] [Conformance]'
  1768    description: 'Create a service of type "NodePort" and provide the service port and
  1769      protocol. Service''s sessionAffinity is set to "ClientIP". Service creation MUST be
  1770      successful by assigning a "ClusterIP" to the service and allocating NodePort on all
  1771      nodes. Create a Replication Controller to ensure that 3 pods are running and are
  1772      targeted by the service to serve the hostname of the pod when requests are sent to
  1773      the service. Create another pod to make requests to the service on the node''s IP
  1774      and NodePort. Service MUST serve the hostname from the same pod of the replica for
  1775      all consecutive requests. Service MUST be reachable over serviceName and the ClusterIP
  1776      on servicePort. Service MUST also be reachable over the node''s IP on NodePort.
  1777      [LinuxOnly]: Windows does not support session affinity.'
  1778    release: v1.19
  1779    file: test/e2e/network/service.go
  1780  - testname: Service, ClusterIP type, session affinity to ClientIP
  1781    codename: '[sig-network] Services should have session affinity work for service
  1782      with type clusterIP [LinuxOnly] [Conformance]'
  1783    description: 'Create a service of type "ClusterIP". Service''s sessionAffinity is
  1784      set to "ClientIP". Service creation MUST be successful by assigning "ClusterIP"
  1785      to the service. Create a Replication Controller to ensure that 3 pods are running
  1786      and are targeted by the service to serve the hostname of the pod when requests are
  1787      sent to the service. Create another pod to make requests to the service. Service
  1788      MUST serve the hostname from the same pod of the replica for all consecutive requests.
  1789      Service MUST be reachable over serviceName and the ClusterIP on servicePort. [LinuxOnly]:
  1790      Windows does not support session affinity.'
  1791    release: v1.19
  1792    file: test/e2e/network/service.go
  1793  - testname: Kubernetes Service
  1794    codename: '[sig-network] Services should provide secure master service [Conformance]'
  1795    description: By default, when a kubernetes cluster is running, there MUST be a 'kubernetes'
  1796      service running in the cluster.
  1797    release: v1.9
  1798    file: test/e2e/network/service.go
  1799  - testname: Service, endpoints
  1800    codename: '[sig-network] Services should serve a basic endpoint from pods [Conformance]'
  1801    description: Create a service with an endpoint without any Pods; the service MUST
  1802      run and show empty endpoints. Add a pod to the service and the service MUST validate
  1803      to show all the endpoints for the ports exposed by the Pod. Add another Pod; then
  1804      the list of all Ports exposed by both the Pods MUST be valid and have corresponding
  1805      service endpoints. Once the second Pod is deleted, the set of endpoints MUST be
  1806      validated to show only the exposed ports from the first container. Once both
  1807      pods are deleted the endpoints from the service MUST be empty.
  1808    release: v1.9
  1809    file: test/e2e/network/service.go
  1810  - testname: Service, should serve endpoints on same port and different protocols.
  1811    codename: '[sig-network] Services should serve endpoints on same port and different
  1812      protocols [Conformance]'
  1813    description: Create one service with two ports that use the same port number but
  1814      different protocols, TCP and UDP. It MUST be able to forward traffic to both ports.
  1815      Update the Service to expose only the TCP port; it MUST succeed to connect to the
  1816      TCP port and fail to connect to the UDP port. Update the Service to expose only the
  1817      UDP port; it MUST succeed to connect to the UDP port and fail to connect to the TCP port.
  1818    release: v1.29
  1819    file: test/e2e/network/service.go
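        # For illustration, a hypothetical sketch of a Service exposing one port number
        # over both protocols, as this test does; port names are required to
        # disambiguate the two entries.
        #
        #   spec:
        #     ports:
        #     - name: example-tcp
        #       port: 53
        #       protocol: TCP
        #     - name: example-udp
        #       port: 53
        #       protocol: UDP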
  1820  - testname: Service, endpoints with multiple ports
  1821    codename: '[sig-network] Services should serve multiport endpoints from pods [Conformance]'
  1822    description: Create a service with two ports but no Pods added to the service yet.
  1823      The service MUST run and show an empty set of endpoints. Add a Pod to the first
  1824      port; the service MUST list one endpoint for the Pod on that port. Add another Pod
  1825      to the second port; the service MUST list both endpoints. Delete the first Pod
  1826      and the service MUST list only the endpoint to the second Pod. Delete the second
  1827      Pod and the service must now have an empty set of endpoints.
  1828    release: v1.9
  1829    file: test/e2e/network/service.go
  1830  - testname: Endpoint resource lifecycle
  1831    codename: '[sig-network] Services should test the lifecycle of an Endpoint [Conformance]'
  1832    description: Create an endpoint, the endpoint MUST exist. The endpoint is updated
  1833      with a new label, and a check after the update MUST find the changes. The endpoint
  1834      is then patched with a new IPv4 address and port, and a check after the patch MUST
  1835      find the changes. The endpoint is deleted by its label, and a watch listens for
  1836      the deleted watch event.
  1837    release: v1.19
  1838    file: test/e2e/network/service.go
  1839  - testname: ConfigMap, from environment field
  1840    codename: '[sig-node] ConfigMap should be consumable via environment variable [NodeConformance]
  1841      [Conformance]'
  1842    description: Create a Pod with an environment variable value set using a value from
  1843      ConfigMap. A ConfigMap value MUST be accessible in the container environment.
  1844    release: v1.9
  1845    file: test/e2e/common/node/configmap.go
  1846  - testname: ConfigMap, from environment variables
  1847    codename: '[sig-node] ConfigMap should be consumable via the environment [NodeConformance]
  1848      [Conformance]'
  1849    description: Create a Pod with an environment source from ConfigMap. All ConfigMap
  1850      values MUST be available as environment variables in the container.
  1851    release: v1.9
  1852    file: test/e2e/common/node/configmap.go
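        # For illustration, hypothetical container fragments showing the two consumption
        # styles the ConfigMap env tests above cover: a single key via configMapKeyRef
        # and the whole ConfigMap via envFrom.
        #
        #   env:
        #   - name: EXAMPLE_VALUE
        #     valueFrom:
        #       configMapKeyRef:
        #         name: example-config
        #         key: example-key
        #   envFrom:
        #   - configMapRef:
        #       name: example-config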
  1853  - testname: ConfigMap, with empty-key
  1854    codename: '[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]'
  1855    description: Attempt to create a ConfigMap with an empty key. The creation MUST
  1856      fail.
  1857    release: v1.14
  1858    file: test/e2e/common/node/configmap.go
  1859  - testname: ConfigMap lifecycle
  1860    codename: '[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]'
  1861    description: Attempt to create a ConfigMap. Patch the created ConfigMap. Fetching
  1862      the ConfigMap MUST reflect changes. By fetching all the ConfigMaps via a Label
  1863      selector it MUST find the ConfigMap by its static label and updated value. The
  1864      ConfigMap must be deleted by Collection.
  1865    release: v1.19
  1866    file: test/e2e/common/node/configmap.go
  1867  - testname: Pod Lifecycle, post start exec hook
  1868    codename: '[sig-node] Container Lifecycle Hook when create a pod with lifecycle
  1869      hook should execute poststart exec hook properly [NodeConformance] [Conformance]'
  1870    description: When a post start handler is specified in the container lifecycle using
  1871      an 'Exec' action, the handler MUST be invoked after the start of the container. A
  1872      server pod is created that will serve http requests; a second pod is created with
  1873      a container lifecycle specifying a post start hook that invokes the server pod using
  1874      ExecAction, to validate that the post start hook is executed.
  1875    release: v1.9
  1876    file: test/e2e/common/node/lifecycle_hook.go
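        # For illustration, the lifecycle stanza shape these postStart tests rely on,
        # with a hypothetical command and server address rather than the tests' own.
        #
        #   lifecycle:
        #     postStart:
        #       exec:
        #         command: ["/bin/sh", "-c", "curl http://example-server:8080/echo"]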
  1877  - testname: Pod Lifecycle, post start http hook
  1878    codename: '[sig-node] Container Lifecycle Hook when create a pod with lifecycle
  1879      hook should execute poststart http hook properly [NodeConformance] [Conformance]'
  1880    description: When a post start handler is specified in the container lifecycle using
  1881      an HttpGet action, the handler MUST be invoked after the start of the container.
  1882      A server pod is created that will serve http requests; a second pod is created on
  1883      the same node with a container lifecycle specifying a post start hook that invokes
  1884      the server pod, to validate that the post start hook is executed.
  1885    release: v1.9
  1886    file: test/e2e/common/node/lifecycle_hook.go
  1887  - testname: Pod Lifecycle, prestop exec hook
  1888    codename: '[sig-node] Container Lifecycle Hook when create a pod with lifecycle
  1889      hook should execute prestop exec hook properly [NodeConformance] [Conformance]'
  1890    description: When a pre-stop handler is specified in the container lifecycle using
  1891      an 'Exec' action, the handler MUST be invoked before the container is terminated.
  1892      A server pod is created that will serve http requests; a second pod is created with
  1893      a container lifecycle specifying a pre-stop hook that invokes the server pod using
  1894      ExecAction, to validate that the pre-stop hook is executed.
  1895    release: v1.9
  1896    file: test/e2e/common/node/lifecycle_hook.go
  1897  - testname: Pod Lifecycle, prestop http hook
  1898    codename: '[sig-node] Container Lifecycle Hook when create a pod with lifecycle
  1899      hook should execute prestop http hook properly [NodeConformance] [Conformance]'
  1900    description: When a pre-stop handler is specified in the container lifecycle using
  1901      an 'HttpGet' action, the handler MUST be invoked before the container is terminated.
  1902      A server pod is created that will serve http requests; a second pod is created on
  1903      the same node with a container lifecycle specifying a pre-stop hook that invokes
  1904      the server pod, to validate that the pre-stop hook is executed.
  1905    release: v1.9
  1906    file: test/e2e/common/node/lifecycle_hook.go
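        # For illustration, the corresponding preStop hook shape using an HttpGet
        # action; the path, port and host here are hypothetical.
        #
        #   lifecycle:
        #     preStop:
        #       httpGet:
        #         path: /prestop
        #         port: 8080
        #         host: 10.0.0.1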
  1907  - testname: Container Runtime, TerminationMessage, from log output of succeeding container
  1908    codename: '[sig-node] Container Runtime blackbox test on terminated container should
  1909      report termination message as empty when pod succeeds and TerminationMessagePolicy
  1910      FallbackToLogsOnError is set [NodeConformance] [Conformance]'
  1911    description: Create a pod with a container. The container's output is recorded in
  1912      the log and the container exits successfully without an error. When the container
  1913      is terminated, the terminationMessage MUST have no content, as the container succeeded.
  1914    release: v1.15
  1915    file: test/e2e/common/node/runtime.go
  1916  - testname: Container Runtime, TerminationMessage, from file of succeeding container
  1917    codename: '[sig-node] Container Runtime blackbox test on terminated container should
  1918      report termination message from file when pod succeeds and TerminationMessagePolicy
  1919      FallbackToLogsOnError is set [NodeConformance] [Conformance]'
  1920    description: Create a pod with a container. The container's output is recorded in a
  1921      file and the container exits successfully without an error. When the container is
  1922      terminated, the terminationMessage MUST match the content of the file.
  1923    release: v1.15
  1924    file: test/e2e/common/node/runtime.go
  1925  - testname: Container Runtime, TerminationMessage, from container's log output of
  1926      failing container
  1927    codename: '[sig-node] Container Runtime blackbox test on terminated container should
  1928      report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError
  1929      is set [NodeConformance] [Conformance]'
  1930    description: Create a pod with a container. The container's output is recorded in
  1931      the log and the container exits with an error. When the container is terminated,
  1932      the termination message MUST match the expected output recorded from the container's log.
  1933    release: v1.15
  1934    file: test/e2e/common/node/runtime.go
  1935  - testname: Container Runtime, TerminationMessagePath, non-root user and non-default
  1936      path
  1937    codename: '[sig-node] Container Runtime blackbox test on terminated container should
  1938      report termination message if TerminationMessagePath is set as non-root user and
  1939      at a non-default path [NodeConformance] [Conformance]'
  1940    description: Create a pod with a container to run it as a non-root user with a custom
  1941      TerminationMessagePath set. Pod redirects the output to the provided path successfully.
  1942      When the container is terminated, the termination message MUST match the expected
  1943      output logged in the provided custom path.
  1944    release: v1.15
  1945    file: test/e2e/common/node/runtime.go
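        # For illustration, the container fields the termination-message tests above
        # exercise; the path shown is hypothetical.
        #
        #   containers:
        #   - name: example
        #     image: registry.k8s.io/busybox
        #     terminationMessagePath: /dev/termination-custom-log
        #     terminationMessagePolicy: FallbackToLogsOnError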
  1946  - testname: Container Runtime, Restart Policy, Pod Phases
  1947    codename: '[sig-node] Container Runtime blackbox test when starting a container
  1948      that exits should run with the expected status [NodeConformance] [Conformance]'
  1949    description: If the restart policy is set to 'Always', the Pod MUST be restarted when
  1950      terminated. If the restart policy is 'OnFailure', the Pod MUST be restarted only if
  1951      it is terminated with a non-zero exit code. If the restart policy is 'Never', the
  1952      Pod MUST never be restarted. All three test cases MUST verify the restart counts
  1953      accordingly.
  1954    release: v1.13
  1955    file: test/e2e/common/node/runtime.go
  1956  - testname: Containers, with arguments
  1957    codename: '[sig-node] Containers should be able to override the image''s default
  1958      arguments (container cmd) [NodeConformance] [Conformance]'
  1959    description: The default command and arguments from the container image entrypoint
  1960      MUST be used when the Pod does not specify the container command, but the arguments
  1961      from the Pod spec MUST override the image arguments when specified.
  1962    release: v1.9
  1963    file: test/e2e/common/node/containers.go
  1964  - testname: Containers, with command
  1965    codename: '[sig-node] Containers should be able to override the image''s default
  1966      command (container entrypoint) [NodeConformance] [Conformance]'
  1967    description: The default command from the container image entrypoint MUST NOT be used
  1968      when the Pod specifies the container command. The command from the Pod spec MUST
  1969      override the command in the image.
  1970    release: v1.9
  1971    file: test/e2e/common/node/containers.go
  1972  - testname: Containers, with command and arguments
  1973    codename: '[sig-node] Containers should be able to override the image''s default
  1974      command and arguments [NodeConformance] [Conformance]'
  1975    description: The default command and arguments from the container image entrypoint
  1976      MUST NOT be used when the Pod specifies the container command and arguments. The
  1977      command and arguments from the Pod spec MUST override those in the image.
  1978    release: v1.9
  1979    file: test/e2e/common/node/containers.go
  1980  - testname: Containers, without command and arguments
  1981    codename: '[sig-node] Containers should use the image defaults if command and args
  1982      are blank [NodeConformance] [Conformance]'
  1983    description: The default command and arguments from the container image entrypoint
  1984      MUST be used when the Pod does not specify the container command.
  1985    release: v1.9
  1986    file: test/e2e/common/node/containers.go
  1987  - testname: DownwardAPI, environment for CPU and memory limits and requests
  1988    codename: '[sig-node] Downward API should provide container''s limits.cpu/memory
  1989      and requests.cpu/memory as env vars [NodeConformance] [Conformance]'
  1990    description: Downward API MUST expose the container's CPU and memory limits and requests
  1991      as environment variables at runtime in the container.
  1992    release: v1.9
  1993    file: test/e2e/common/node/downwardapi.go
  1994  - testname: DownwardAPI, environment for default CPU and memory limits and requests
  1995    codename: '[sig-node] Downward API should provide default limits.cpu/memory from
  1996      node allocatable [NodeConformance] [Conformance]'
  1997    description: Downward API MUST expose the default CPU and memory limits, taken from
  1998      the node allocatable, as environment variables at runtime in the container.
  1999    release: v1.9
  2000    file: test/e2e/common/node/downwardapi.go
  2001  - testname: DownwardAPI, environment for host ip
  2002    codename: '[sig-node] Downward API should provide host IP as an env var [NodeConformance]
  2003      [Conformance]'
  2004    description: Downward API MUST expose Pod and Container fields as environment variables.
  2005      The host IP, specified as an environment variable in the Pod Spec, MUST be visible
  2006      at runtime in the container.
  2007    release: v1.9
  2008    file: test/e2e/common/node/downwardapi.go
  2009  - testname: DownwardAPI, environment for Pod UID
  2010    codename: '[sig-node] Downward API should provide pod UID as env vars [NodeConformance]
  2011      [Conformance]'
  2012    description: Downward API MUST expose the Pod UID, set through an environment variable,
  2013      at runtime in the container.
  2014    release: v1.9
  2015    file: test/e2e/common/node/downwardapi.go
  2016  - testname: DownwardAPI, environment for name, namespace and ip
  2017    codename: '[sig-node] Downward API should provide pod name, namespace and IP address
  2018      as env vars [NodeConformance] [Conformance]'
  2019    description: Downward API MUST expose Pod and Container fields as environment variables.
  2020      The Pod name, namespace and IP, specified as environment variables in the Pod Spec,
  2021      MUST be visible at runtime in the container.
  2022    release: v1.9
  2023    file: test/e2e/common/node/downwardapi.go
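        # For illustration, hypothetical env entries of the kind these Downward API
        # tests check, pulling pod fields via fieldRef and resources via resourceFieldRef.
        #
        #   env:
        #   - name: POD_NAME
        #     valueFrom:
        #       fieldRef:
        #         fieldPath: metadata.name
        #   - name: CPU_LIMIT
        #     valueFrom:
        #       resourceFieldRef:
        #         containerName: example
        #         resource: limits.cpu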
  2024  - testname: Ephemeral Container, update ephemeral containers
  2025    codename: '[sig-node] Ephemeral Containers [NodeConformance] should update the ephemeral
  2026      containers in an existing pod [Conformance]'
  2027    description: Adding an ephemeral container to pod.spec MUST result in the container
  2028      running. There MUST now be only one ephemeral container found. Updating the pod
  2029      with another ephemeral container MUST succeed. There MUST now be two ephemeral
  2030      containers found.
  2031    release: v1.28
  2032    file: test/e2e/common/node/ephemeral_containers.go
  2033  - testname: Ephemeral Container Creation
  2034    codename: '[sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral
  2035      container in an existing pod [Conformance]'
  2036    description: Adding an ephemeral container to pod.spec MUST result in the container
  2037      running.
  2038    release: "1.25"
  2039    file: test/e2e/common/node/ephemeral_containers.go
  2040  - testname: init-container-starts-app-restartalways-pod
  2041    codename: '[sig-node] InitContainer [NodeConformance] should invoke init containers
  2042      on a RestartAlways pod [Conformance]'
  2043    description: Ensure that all InitContainers are started, that all containers in the
  2044      pod started, and that at least one container is still running or is in the process
  2045      of being restarted, when the Pod has a restart policy of RestartAlways.
  2046    release: v1.12
  2047    file: test/e2e/common/node/init_container.go
  2048  - testname: init-container-starts-app-restartnever-pod
  2049    codename: '[sig-node] InitContainer [NodeConformance] should invoke init containers
  2050      on a RestartNever pod [Conformance]'
  2051    description: Ensure that all InitContainers are started and all containers in the pod
  2052      voluntarily terminate with exit status 0, and that the system does not restart any
  2053      of these containers, when the Pod has a restart policy of RestartNever.
  2054    release: v1.12
  2055    file: test/e2e/common/node/init_container.go
  2056  - testname: init-container-fails-stops-app-restartnever-pod
  2057    codename: '[sig-node] InitContainer [NodeConformance] should not start app containers
  2058      and fail the pod if init containers fail on a RestartNever pod [Conformance]'
  2059    description: Ensure that the app container is not started when at least one InitContainer
  2060      fails to start and the Pod has a restart policy of RestartNever.
  2061    release: v1.12
  2062    file: test/e2e/common/node/init_container.go
  2063  - testname: init-container-fails-stops-app-restartalways-pod
  2064    codename: '[sig-node] InitContainer [NodeConformance] should not start app containers
  2065      if init containers fail on a RestartAlways pod [Conformance]'
  2066    description: Ensure that the app container is not started when all InitContainers fail
  2067      to start and the Pod, which has a restart policy of RestartAlways, has been restarted
  2068      a few times.
  2069    release: v1.12
  2070    file: test/e2e/common/node/init_container.go
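        # For illustration, a hypothetical pod.spec fragment of the shape these
        # InitContainer tests use; init containers run to completion, in order, before
        # the app containers start.
        #
        #   spec:
        #     restartPolicy: Never
        #     initContainers:
        #     - name: init-example
        #       image: registry.k8s.io/busybox
        #       command: ["true"]
        #     containers:
        #     - name: app-example
        #       image: registry.k8s.io/busybox
        #       command: ["top"]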
  2071  - testname: Kubelet, log output, default
  2072    codename: '[sig-node] Kubelet when scheduling a busybox command in a pod should
  2073      print the output to logs [NodeConformance] [Conformance]'
  2074    description: By default the stdout and stderr from the process being executed in
  2075      a pod MUST be sent to the pod's logs.
  2076    release: v1.13
  2077    file: test/e2e/common/node/kubelet.go
  2078  - testname: Kubelet, failed pod, delete
  2079    codename: '[sig-node] Kubelet when scheduling a busybox command that always fails
  2080      in a pod should be possible to delete [NodeConformance] [Conformance]'
  2081    description: Create a Pod in a terminated state. This terminated pod MUST be able
  2082      to be deleted.
  2083    release: v1.13
  2084    file: test/e2e/common/node/kubelet.go
  2085  - testname: Kubelet, failed pod, terminated reason
  2086    codename: '[sig-node] Kubelet when scheduling a busybox command that always fails
  2087      in a pod should have an terminated reason [NodeConformance] [Conformance]'
  2088    description: Create a Pod in a terminated state. The Pod MUST have only one container.
  2089      The container MUST be in a terminated state and MUST have a terminated reason.
  2090    release: v1.13
  2091    file: test/e2e/common/node/kubelet.go
  2092  - testname: Kubelet, pod with read only root file system
  2093    codename: '[sig-node] Kubelet when scheduling a read only busybox container should
  2094      not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]'
  2095    description: Create a Pod with a security context that sets ReadOnlyRootFileSystem
  2096      to true. The Pod then tries to write to /file on the root; the write operation to
  2097      the root filesystem MUST fail as expected. This test is marked LinuxOnly since
  2098      Windows does not support creating containers with read-only access.
  2099    release: v1.13
  2100    file: test/e2e/common/node/kubelet.go
  2101  - testname: Kubelet, hostAliases
  2102    codename: '[sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should
  2103      write entries to /etc/hosts [NodeConformance] [Conformance]'
  2104    description: Create a Pod with hostAliases and a container with a command to output
  2105      the /etc/hosts entries. The Pod's logs MUST contain entries matching the specified
  2106      hostAliases in the output of /etc/hosts.
  2107    release: v1.13
  2108    file: test/e2e/common/node/kubelet.go
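        # For illustration, a hypothetical hostAliases stanza like the one this test
        # expects the kubelet to write into /etc/hosts.
        #
        #   spec:
        #     hostAliases:
        #     - ip: "127.0.0.1"
        #       hostnames:
        #       - "example.local"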
  2109  - testname: Kubelet, managed etc hosts
  2110    codename: '[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts
  2111      file [LinuxOnly] [NodeConformance] [Conformance]'
  2112    description: Create a Pod with containers with hostNetwork set to false, where one of
  2113      the containers mounts the /etc/hosts file from the host. Create a second Pod with
  2114      hostNetwork set to true. 1. The Pod with hostNetwork=false MUST have the /etc/hosts
  2115      of its containers managed by the Kubelet. 2. For the Pod with hostNetwork=false whose
  2116      container mounts the /etc/hosts file from the host, the /etc/hosts file MUST not be
  2117      managed by the Kubelet. 3. For the Pod with hostNetwork=true, the /etc/hosts file MUST
  2118      not be managed by the Kubelet. This test is marked LinuxOnly since Windows cannot
  2119      mount individual files in Containers.
  2120    release: v1.9
  2121    file: test/e2e/common/node/kubelet_etc_hosts.go
  2122  - testname: lease API should be available
  2123    codename: '[sig-node] Lease lease API should be available [Conformance]'
  2124    description: "Create Lease object, and get it; create and get MUST be successful
  2125      and Spec of the read Lease MUST match Spec of original Lease. Update the Lease
  2126      and get it; update and get MUST be successful\tand Spec of the read Lease MUST
  2127      match Spec of updated Lease. Patch the Lease and get it; patch and get MUST be
  2128      successful and Spec of the read Lease MUST match Spec of patched Lease. Create
  2129      a second Lease with labels and list Leases; create and list MUST be successful
  2130      and list MUST return both leases. Delete the labels lease via delete collection;
  2131      the delete MUST be successful and MUST delete only the labels lease. List leases;
  2132      list MUST be successful and MUST return just the remaining lease. Delete the lease;
  2133      delete MUST be successful. Get the lease; get MUST return not found error."
  2134    release: v1.17
  2135    file: test/e2e/common/node/lease.go
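        # For illustration, a minimal Lease sketch with hypothetical values, showing the
        # group/version and the Spec fields the API test reads back and compares.
        #
        #   apiVersion: coordination.k8s.io/v1
        #   kind: Lease
        #   metadata:
        #     name: example-lease
        #   spec:
        #     holderIdentity: example-holder
        #     leaseDurationSeconds: 30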
  2136  - testname: Pod Eviction, Toleration limits
  2137    codename: '[sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with
  2138      minTolerationSeconds [Disruptive] [Conformance]'
  2139    description: In a multi-pod scenario with tolerationSeconds, the pods MUST be evicted
  2140      as per the toleration time limit.
  2141    release: v1.16
  2142    file: test/e2e/node/taints.go
  2143  - testname: Taint, Pod Eviction on taint removal
  2144    codename: '[sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels
  2145      eviction [Disruptive] [Conformance]'
  2146    description: The Pod with toleration timeout scheduled on a tainted Node MUST not
  2147      be evicted if the taint is removed before toleration time ends.
  2148    release: v1.16
  2149    file: test/e2e/node/taints.go
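        # For illustration, a hypothetical toleration of the shape these NoExecute taint
        # tests rely on; tolerationSeconds bounds how long the pod may stay on the
        # tainted node before eviction. The key and value here are made up.
        #
        #   tolerations:
        #   - key: example.com/example-taint
        #     operator: Equal
        #     value: example-value
        #     effect: NoExecute
        #     tolerationSeconds: 60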
  2150  - testname: PodTemplate, delete a collection
  2151    codename: '[sig-node] PodTemplates should delete a collection of pod templates [Conformance]'
  2152    description: A set of Pod Templates is created with a label selector which MUST
  2153      be found when listed. The set of Pod Templates is deleted and MUST NOT show up
  2154      when listed by its label selector.
  2155    release: v1.19
  2156    file: test/e2e/common/node/podtemplates.go
  2157  - testname: PodTemplate, replace
  2158    codename: '[sig-node] PodTemplates should replace a pod template [Conformance]'
  2159    description: Attempt to create a PodTemplate which MUST succeed. Attempt to replace
  2160      the PodTemplate to include a new annotation which MUST succeed. The annotation
  2161      MUST be found in the new PodTemplate.
  2162    release: v1.24
  2163    file: test/e2e/common/node/podtemplates.go
  2164  - testname: PodTemplate lifecycle
  2165    codename: '[sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]'
  2166    description: Attempt to create a PodTemplate. Patch the created PodTemplate. Fetching
  2167      the PodTemplate MUST reflect changes. By fetching all the PodTemplates via a Label
  2168      selector it MUST find the PodTemplate by its static label and updated value.
  2169      The PodTemplate must be deleted.
  2170    release: v1.19
  2171    file: test/e2e/common/node/podtemplates.go
  2172  - testname: Pods, QOS
  2173    codename: '[sig-node] Pods Extended Pods Set QOS Class should be set on Pods with
  2174      matching resource requests and limits for memory and cpu [Conformance]'
  2175    description: Create a Pod with matching CPU and memory requests and limits. Pod
  2176      status MUST have QOSClass set to PodQOSGuaranteed.
  2177    release: v1.9
  2178    file: test/e2e/node/pods.go
  2179  - testname: Pods, ActiveDeadlineSeconds
  2180    codename: '[sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance]
  2181      [Conformance]'
  2182    description: Create a Pod with a unique label. A query for the Pod with the label
  2183      as selector MUST be successful. The Pod is updated with ActiveDeadlineSeconds
  2184      set on the Pod spec. The Pod MUST terminate once the specified time elapses.
  2185    release: v1.9
  2186    file: test/e2e/common/node/pods.go
  2187  - testname: Pods, lifecycle
  2188    codename: '[sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]'
  2189    description: A Pod is created with a unique label. The Pod MUST be accessible when
  2190      queried using the label selector upon creation. Add a watch and check that the Pod
  2191      is running. The Pod is then deleted and the pod deletion timestamp is observed. The
  2192      watch MUST return the pod deleted event. A query with the original selector for the
  2193      Pod MUST return an empty list.
  2194    release: v1.9
  2195    file: test/e2e/common/node/pods.go
  2196  - testname: Pods, update
  2197    codename: '[sig-node] Pods should be updated [NodeConformance] [Conformance]'
  2198    description: Create a Pod with a unique label. A query for the Pod with the label
  2199      as selector MUST be successful. Update the pod to change the value of the Label.
  2200      A query for the Pod with the new value for the label MUST be successful.
  2201    release: v1.9
  2202    file: test/e2e/common/node/pods.go
  2203  - testname: Pods, service environment variables
  2204    codename: '[sig-node] Pods should contain environment variables for services [NodeConformance]
  2205      [Conformance]'
  2206    description: Create a server Pod listening on port 9376. A Service called fooservice
  2207      is created for the server Pod listening on port 8765 and targeting port 8080. If a
  2208      new Pod is created in the cluster then the fooservice environment variables MUST be
  2209      available from this new Pod. The newly created Pod MUST have environment variables
  2210      such as FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT, FOOSERVICE_PORT,
  2211      FOOSERVICE_PORT_8765_TCP_PORT, FOOSERVICE_PORT_8765_TCP_PROTO, FOOSERVICE_PORT_8765_TCP
  2212      and FOOSERVICE_PORT_8765_TCP_ADDR that are populated with proper values.
  2213    release: v1.9
  2214    file: test/e2e/common/node/pods.go
  2215  - testname: Pods, delete a collection
  2216    codename: '[sig-node] Pods should delete a collection of pods [Conformance]'
  2217    description: A set of pods is created with a label selector which MUST be found
  2218      when listed. The set of pods is deleted and MUST NOT show up when listed by its
  2219      label selector.
  2220    release: v1.19
  2221    file: test/e2e/common/node/pods.go
  2222  - testname: Pods, assigned hostip
  2223    codename: '[sig-node] Pods should get a host IP [NodeConformance] [Conformance]'
  2224    description: Create a Pod. Pod status MUST return successfully and contain a valid
  2225      IP address.
  2226    release: v1.9
  2227    file: test/e2e/common/node/pods.go
  2228  - testname: Pods, patching status
  2229    codename: '[sig-node] Pods should patch a pod status [Conformance]'
  2230    description: A pod is created which MUST succeed and be found running. The pod status
  2231      when patched MUST succeed. Given the patching of the pod status, the fields MUST
  2232      equal the new values.
  2233    release: v1.25
  2234    file: test/e2e/common/node/pods.go
  2235  - testname: Pods, completes the lifecycle of a Pod and the PodStatus
  2236    codename: '[sig-node] Pods should run through the lifecycle of Pods and PodStatus
  2237      [Conformance]'
  2238    description: A Pod is created with a static label which MUST succeed. It MUST succeed
  2239      when patching the label and the pod data. When checking and replacing the PodStatus
  2240      it MUST succeed. It MUST succeed when deleting the Pod.
  2241    release: v1.20
  2242    file: test/e2e/common/node/pods.go
  2243  - testname: Pods, remote command execution over websocket
  2244    codename: '[sig-node] Pods should support remote command execution over websockets
  2245      [NodeConformance] [Conformance]'
  2246    description: A Pod is created. A Websocket is created to retrieve the exec command
  2247      output from this pod. The message retrieved from the Websocket MUST match the
  2248      expected exec command output.
  2249    release: v1.13
  2250    file: test/e2e/common/node/pods.go
  2251  - testname: Pods, logs from websockets
  2252    codename: '[sig-node] Pods should support retrieving logs from the container over
  2253      websockets [NodeConformance] [Conformance]'
  2254    description: A Pod is created. A Websocket is created to retrieve the log of a container
  2255      from this pod. The message retrieved from the Websocket MUST match the container's output.
  2256    release: v1.13
  2257    file: test/e2e/common/node/pods.go
  2258  - testname: Pods, prestop hook
  2259    codename: '[sig-node] PreStop should call prestop when killing a pod [Conformance]'
  2260    description: Create a server pod with a rest endpoint '/write' that changes the
  2261      state.Received field. Create a Pod with a pre-stop handler that posts to the /write
  2262      endpoint on the server Pod. Verify that the Pod with the pre-stop hook is running.
  2263      Delete the Pod with the pre-stop hook. Before the Pod is deleted, the pre-stop
  2264      handler MUST be called when configured. Verify that the Pod is deleted and that the
  2265      call to the prestop hook is verified by checking the status received on the server Pod.
  2266    release: v1.9
  2267    file: test/e2e/node/pre_stop.go
  2268  - testname: Pod liveness probe, using http endpoint, failure
  2269    codename: '[sig-node] Probing container should *not* be restarted with a /healthz
  2270      http liveness probe [NodeConformance] [Conformance]'
  2271    description: A Pod is created with a liveness probe on the http endpoint '/'. The
  2272      liveness probe on this endpoint will not fail. When the liveness probe does not
  2273      fail, the restart count MUST remain zero.
  2274    release: v1.9
  2275    file: test/e2e/common/node/container_probe.go
  2276  - testname: Pod liveness probe, using grpc call, success
  2277    codename: '[sig-node] Probing container should *not* be restarted with a GRPC liveness
  2278      probe [NodeConformance] [Conformance]'
  2279    description: A Pod is created with a liveness probe on a grpc service. The liveness
  2280      probe on this endpoint will not fail. When the liveness probe does not fail, the
  2281      restart count MUST remain zero.
  2282    release: v1.23
  2283    file: test/e2e/common/node/container_probe.go
  2284  - testname: Pod liveness probe, using local file, no restart
  2285    codename: '[sig-node] Probing container should *not* be restarted with a exec "cat
  2286      /tmp/health" liveness probe [NodeConformance] [Conformance]'
  2287    description: A Pod is created with a liveness probe that uses the 'exec' command to
  2288      cat the /tmp/health file. The liveness probe MUST not fail to check health and the
  2289      restart count should remain 0.
  2290    release: v1.9
  2291    file: test/e2e/common/node/container_probe.go
  2292  - testname: Pod liveness probe, using tcp socket, no restart
  2293    codename: '[sig-node] Probing container should *not* be restarted with a tcp:8080
  2294      liveness probe [NodeConformance] [Conformance]'
  2295    description: A Pod is created with a liveness probe on tcp socket 8080. The http handler
  2296      on port 8080 will return http errors after 10 seconds, but the socket will remain
  2297      open. The liveness probe MUST not fail to check health and the restart count should
  2298      remain 0.
  2299    release: v1.18
  2300    file: test/e2e/common/node/container_probe.go
  2301  - testname: Pod liveness probe, using http endpoint, restart
  2302    codename: '[sig-node] Probing container should be restarted with a /healthz http
  2303      liveness probe [NodeConformance] [Conformance]'
  2304    description: A Pod is created with a liveness probe on the http endpoint /healthz.
  2305      The http handler on /healthz will return an http error 10 seconds after the Pod
  2306      is started. This MUST result in a liveness check failure. The Pod MUST now be
  2307      killed and restarted, incrementing the restart count to 1.
  2308    release: v1.9
  2309    file: test/e2e/common/node/container_probe.go
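        # For illustration, a hypothetical livenessProbe of the shape the /healthz tests
        # use; a failing probe causes the kubelet to kill and restart the container,
        # subject to the pod's restartPolicy.
        #
        #   livenessProbe:
        #     httpGet:
        #       path: /healthz
        #       port: 8080
        #     initialDelaySeconds: 5
        #     periodSeconds: 3
        #     failureThreshold: 1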
  2310  - testname: Pod liveness probe, using grpc call, failure
  2311    codename: '[sig-node] Probing container should be restarted with a GRPC liveness
  2312      probe [NodeConformance] [Conformance]'
  2313    description: A Pod is created with a liveness probe on a grpc service. The liveness
  2314      probe on this endpoint should fail because of a wrong probe port. When the liveness
  2315      probe does fail, the restart count should increment by 1.
  2316    release: v1.23
  2317    file: test/e2e/common/node/container_probe.go
  2318  - testname: Pod liveness probe, using local file, restart
  2319    codename: '[sig-node] Probing container should be restarted with a exec "cat /tmp/health"
  2320      liveness probe [NodeConformance] [Conformance]'
  2321    description: Create a Pod with a liveness probe that uses the ExecAction handler to
  2322      cat the /tmp/health file. The container deletes the file /tmp/health after 10 seconds,
  2323      triggering the liveness probe to fail. The Pod MUST now be killed and restarted,
  2324      incrementing the restart count to 1.
  2325    release: v1.9
  2326    file: test/e2e/common/node/container_probe.go
  2327  - testname: Pod liveness probe, using http endpoint, multiple restarts (slow)
  2328    codename: '[sig-node] Probing container should have monotonically increasing restart
  2329      count [NodeConformance] [Conformance]'
  2330    description: A Pod is created with a liveness probe on the http endpoint /healthz.
  2331      The http handler on /healthz will return an http error 10 seconds after the Pod is
  2332      started. This MUST result in a liveness check failure. The Pod MUST now be killed
  2333      and restarted, incrementing the restart count to 1. The liveness probe must fail
  2334      again after the restart, once the http handler for the /healthz endpoint on the Pod
  2335      returns an http error 10 seconds after the start. Restart counts MUST increment
  2336      every time the health check fails, measured up to 5 restarts.
  2337    release: v1.9
  2338    file: test/e2e/common/node/container_probe.go
  2339  - testname: Pod readiness probe, with initial delay
  2340    codename: '[sig-node] Probing container with readiness probe should not be ready
  2341      before initial delay and never restart [NodeConformance] [Conformance]'
  2342    description: Create a Pod that is configured with an initial delay set on the readiness
  2343      probe. Check the Pod start time against the initial delay. The Pod MUST be ready
  2344      only after the specified initial delay.
  2345    release: v1.9
  2346    file: test/e2e/common/node/container_probe.go
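        # For illustration, a hypothetical readinessProbe with an initial delay, the
        # field this test compares against the observed time to readiness.
        #
        #   readinessProbe:
        #     exec:
        #       command: ["cat", "/tmp/ready"]
        #     initialDelaySeconds: 30
        #     periodSeconds: 5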
  2347  - testname: Pod readiness probe, failure
  2348    codename: '[sig-node] Probing container with readiness probe that fails should never
  2349      be ready and never restart [NodeConformance] [Conformance]'
  2350    description: Create a Pod with a readiness probe that fails consistently. When this
  2351      Pod is created, the Pod MUST never be ready, never be running, and the restart
  2352      count MUST be zero.
  2353    release: v1.9
  2354    file: test/e2e/common/node/container_probe.go
  2355  - testname: Pod with the deleted RuntimeClass is rejected.
  2356    codename: '[sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass
  2357      [NodeConformance] [Conformance]'
  2358    description: A Pod requesting the deleted RuntimeClass must be rejected.
  2359    release: v1.20
  2360    file: test/e2e/common/node/runtimeclass.go
  2361  - testname: Pod with the non-existing RuntimeClass is rejected.
  2362    codename: '[sig-node] RuntimeClass should reject a Pod requesting a non-existent
  2363      RuntimeClass [NodeConformance] [Conformance]'
  2364    description: A Pod requesting a non-existent RuntimeClass must be rejected.
  2365    release: v1.20
  2366    file: test/e2e/common/node/runtimeclass.go
  2367  - testname: RuntimeClass Overhead field must be respected.
  2368    codename: '[sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass
  2369      and initialize its Overhead [NodeConformance] [Conformance]'
  2370    description: The Pod requesting the existing RuntimeClass must be scheduled. This
  2371      test doesn't validate that the Pod will actually start because this functionality
  2372      depends on container runtime and preconfigured handler. Runtime-specific functionality
  2373      is not being tested here.
  2374    release: v1.24
  2375    file: test/e2e/common/node/runtimeclass.go
  2376  - testname: Can schedule a pod requesting existing RuntimeClass.
  2377    codename: '[sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass
  2378      without PodOverhead [NodeConformance] [Conformance]'
  2379    description: The Pod requesting the existing RuntimeClass must be scheduled. This
  2380      test doesn't validate that the Pod will actually start because this functionality
  2381      depends on container runtime and preconfigured handler. Runtime-specific functionality
  2382      is not being tested here.
  2383    release: v1.20
  2384    file: test/e2e/common/node/runtimeclass.go
  2385  - testname: RuntimeClass API
  2386    codename: '[sig-node] RuntimeClass should support RuntimeClasses API operations
  2387      [Conformance]'
  2388    description: ' The node.k8s.io API group MUST exist in the /apis discovery document.
  2389      The node.k8s.io/v1 API group/version MUST exist in the /apis/node.k8s.io discovery
  2390      document. The runtimeclasses resource MUST exist in the /apis/node.k8s.io/v1 discovery
  2391      document. The runtimeclasses resource must support create, get, list, watch, update,
  2392      patch, delete, and deletecollection.'
  2393    release: v1.20
  2394    file: test/e2e/common/node/runtimeclass.go
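        # For illustration, a minimal RuntimeClass sketch with a hypothetical name and
        # handler; overhead, when set, is added to pod resource accounting as the
        # Overhead test above describes.
        #
        #   apiVersion: node.k8s.io/v1
        #   kind: RuntimeClass
        #   metadata:
        #     name: example-runtimeclass
        #   handler: example-handler
        #   overhead:
        #     podFixed:
        #       cpu: 250m
        #       memory: 64Mi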
  2395  - testname: Secrets, pod environment field
  2396    codename: '[sig-node] Secrets should be consumable from pods in env vars [NodeConformance]
  2397      [Conformance]'
  2398    description: Create a secret. Create a Pod with a Container that declares an environment
  2399      variable which references the secret created, to extract a key value from the secret.
  2400      The Pod MUST have the environment variable that contains the proper value for the key
  2401      to the secret.
  2402    release: v1.9
  2403    file: test/e2e/common/node/secrets.go
  2404  - testname: Secrets, pod environment from source
  2405    codename: '[sig-node] Secrets should be consumable via the environment [NodeConformance]
  2406      [Conformance]'
  2407    description: Create a secret. Create a Pod with a Container that declares an environment
  2408      variable using 'EnvFrom' which references the secret created, to extract a key value
  2409      from the secret. The Pod MUST have the environment variable that contains the proper
  2410      value for the key to the secret.
  2411    release: v1.9
  2412    file: test/e2e/common/node/secrets.go
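        # For illustration, hypothetical fragments showing the two Secret consumption
        # styles covered above: a single key via secretKeyRef and all keys via envFrom.
        #
        #   env:
        #   - name: SECRET_VALUE
        #     valueFrom:
        #       secretKeyRef:
        #         name: example-secret
        #         key: example-key
        #   envFrom:
        #   - secretRef:
        #       name: example-secret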
  2413  - testname: Secrets, with empty-key
  2414    codename: '[sig-node] Secrets should fail to create secret due to empty secret key
  2415      [Conformance]'
  2416    description: Attempt to create a Secret with an empty key. The creation MUST fail.
  2417    release: v1.15
  2418    file: test/e2e/common/node/secrets.go
  2419  - testname: Secret patching
  2420    codename: '[sig-node] Secrets should patch a secret [Conformance]'
  2421    description: A Secret is created. Listing all Secrets MUST return an empty list.
  2422      Given the patching and fetching of the Secret, the fields MUST equal the new values.
  2423      The Secret is deleted by its static Label. Secrets are finally listed; the list
  2424      MUST NOT include the originally created Secret.
  2425    release: v1.18
  2426    file: test/e2e/common/node/secrets.go
  2427  - testname: Security Context, runAsUser=65534
  2428    codename: '[sig-node] Security Context When creating a container with runAsUser
  2429      should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]'
  2430    description: 'Container is created with the runAsUser option by passing uid 65534 to
  2431      run as an unprivileged user. Pod MUST be in Succeeded phase. [LinuxOnly]: This test
  2432      is marked as LinuxOnly since Windows does not support running as UID / GID.'
  2433    release: v1.15
  2434    file: test/e2e/common/node/security_context.go
  2435  - testname: Security Context, privileged=false.
  2436    codename: '[sig-node] Security Context When creating a pod with privileged should
  2437      run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]'
  2438    description: 'Create a container to run in unprivileged mode by setting the pod''s
  2439      SecurityContext Privileged option to false. Pod MUST be in Succeeded phase. [LinuxOnly]:
  2440      This test is marked as LinuxOnly since it runs a Linux-specific command.'
  2441    release: v1.15
  2442    file: test/e2e/common/node/security_context.go
  2443  - testname: Security Context, readOnlyRootFilesystem=false.
  2444    codename: '[sig-node] Security Context When creating a pod with readOnlyRootFilesystem
  2445      should run the container with writable rootfs when readOnlyRootFilesystem=false
  2446      [NodeConformance] [Conformance]'
  2447    description: Container is configured to run with readOnlyRootFilesystem set to false.
  2448      Write operations MUST be allowed and the Pod MUST be in Succeeded state.
  2449    release: v1.15
  2450    file: test/e2e/common/node/security_context.go
  2451  - testname: Security Context, test RunAsGroup at container level
  2452    codename: '[sig-node] Security Context should support container.SecurityContext.RunAsUser
  2453      And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]'
  2454    description: 'Container is created with the runAsUser and runAsGroup options by passing
  2455      uid 1001 and gid 2002 at the container level. Pod MUST be in Succeeded phase. [LinuxOnly]:
  2456      This test is marked as LinuxOnly since Windows does not support running as UID
  2457      / GID.'
  2458    release: v1.21
  2459    file: test/e2e/node/security_context.go
  2460  - testname: Security Context, test RunAsGroup at pod level
  2461    codename: '[sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser
  2462      And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]'
  2463    description: 'Container is created with the runAsUser and runAsGroup options by passing
  2464      uid 1001 and gid 2002 at the pod level. Pod MUST be in Succeeded phase. [LinuxOnly]:
  2465      This test is marked as LinuxOnly since Windows does not support running as UID
  2466      / GID.'
  2467    release: v1.21
  2468    file: test/e2e/node/security_context.go
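        # For illustration, hypothetical securityContext fragments at the two levels
        # these tests cover; a container-level setting overrides the pod-level one.
        #
        #   spec:
        #     securityContext:        # pod level
        #       runAsUser: 1001
        #       runAsGroup: 2002
        #     containers:
        #     - name: example
        #       securityContext:      # container level
        #         runAsUser: 1001
        #         runAsGroup: 2002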
  2469  - testname: Security Context, allowPrivilegeEscalation=false.
  2470    codename: '[sig-node] Security Context when creating containers with AllowPrivilegeEscalation
  2471      should not allow privilege escalation when false [LinuxOnly] [NodeConformance]
  2472      [Conformance]'
  2473    description: 'Setting allowPrivilegeEscalation to false MUST prevent the privilege
  2474      escalation operation. A container is configured with allowPrivilegeEscalation=false
  2475      and a given uid (1000) which is not 0. When the container is run, the container''s
  2476      output MUST match the expected output, verifying that the container ran with the
  2477      given uid, i.e. uid=1000. [LinuxOnly]: This test is marked LinuxOnly since Windows
  2478      does not support running as UID / GID, or privilege escalation.'
  2479    release: v1.15
  2480    file: test/e2e/common/node/security_context.go
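        # Illustrative only: a container securityContext of the shape the
        # allowPrivilegeEscalation test above describes (assumed names/image).
        #
        #   apiVersion: v1
        #   kind: Pod
        #   metadata:
        #     name: no-privilege-escalation-demo
        #   spec:
        #     restartPolicy: Never
        #     containers:
        #     - name: demo
        #       image: busybox
        #       command: ["id", "-u"]                # expected output: 1000
        #       securityContext:
        #         runAsUser: 1000                    # a non-zero uid
        #         allowPrivilegeEscalation: false    # setuid binaries cannot gain privileges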
  2481  - testname: Sysctls, reject invalid sysctls
  2482    codename: '[sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid
  2483      sysctls [MinimumKubeletVersion:1.21] [Conformance]'
  2484    description: 'Pod is created with one valid and two invalid sysctls. The Pod MUST
  2485      NOT apply the invalid sysctls. [LinuxOnly]: This test is marked as LinuxOnly since
  2486      Windows does not support sysctls.'
  2487    release: v1.21
  2488    file: test/e2e/common/node/sysctl.go
  2489  - testname: Sysctl, test sysctls
  2490    codename: '[sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls
  2491      [MinimumKubeletVersion:1.21] [Environment:NotInUserNS] [Conformance]'
  2492    description: 'Pod is created with the kernel.shm_rmid_forced sysctl, which MUST
  2493      be set to 1. [LinuxOnly]: This test is marked as LinuxOnly since Windows does
  2494      not support sysctls. [Environment:NotInUserNS]: The test fails in UserNS (as expected):
  2495      `open /proc/sys/kernel/shm_rmid_forced: permission denied`'
  2496    release: v1.21
  2497    file: test/e2e/common/node/sysctl.go
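        # Illustrative only: a Pod setting the safe sysctl named in the entry above
        # via pod.spec.securityContext.sysctls (metadata names are assumptions).
        #
        #   apiVersion: v1
        #   kind: Pod
        #   metadata:
        #     name: sysctl-demo
        #   spec:
        #     restartPolicy: Never
        #     securityContext:
        #       sysctls:
        #       - name: kernel.shm_rmid_forced    # a safe sysctl; needs no kubelet allowlist
        #         value: "1"
        #     containers:
        #     - name: demo
        #       image: busybox
        #       command: ["cat", "/proc/sys/kernel/shm_rmid_forced"]   # prints 1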
  2498  - testname: Environment variables, expansion
  2499    codename: '[sig-node] Variable Expansion should allow composing env vars into new
  2500      env vars [NodeConformance] [Conformance]'
  2501    description: Create a Pod with environment variables. Environment variables defined
  2502      using previously defined environment variables MUST expand to proper values.
  2503    release: v1.9
  2504    file: test/e2e/common/node/expansion.go
  2505  - testname: Environment variables, command argument expansion
  2506    codename: '[sig-node] Variable Expansion should allow substituting values in a container''s
  2507      args [NodeConformance] [Conformance]'
  2508    description: Create a Pod with environment variables and container command arguments
  2509      using them. Container command arguments using the defined environment variables
  2510      MUST expand to proper values.
  2511    release: v1.9
  2512    file: test/e2e/common/node/expansion.go
  2513  - testname: Environment variables, command expansion
  2514    codename: '[sig-node] Variable Expansion should allow substituting values in a container''s
  2515      command [NodeConformance] [Conformance]'
  2516    description: Create a Pod with environment variables and container command using
  2517      them. Container command using the defined environment variables MUST expand to
  2518      proper values.
  2519    release: v1.9
  2520    file: test/e2e/common/node/expansion.go
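        # Illustrative only: the $(VAR) expansion the three Variable Expansion
        # entries above exercise, sketched with assumed names:
        #
        #   apiVersion: v1
        #   kind: Pod
        #   metadata:
        #     name: var-expansion-demo
        #   spec:
        #     restartPolicy: Never
        #     containers:
        #     - name: demo
        #       image: busybox
        #       env:
        #       - name: GREETING
        #         value: "hello"
        #       - name: MESSAGE
        #         value: "$(GREETING) world"   # composed from a previously defined env var
        #       command: ["echo"]
        #       args: ["$(MESSAGE)"]           # expands to "hello world"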
  2521  - testname: VolumeSubpathEnvExpansion, subpath expansion
  2522    codename: '[sig-node] Variable Expansion should allow substituting values in a volume
  2523      subpath [Conformance]'
  2524    description: Make sure a container's subpath can be set using an expansion of environment
  2525      variables.
  2526    release: v1.19
  2527    file: test/e2e/common/node/expansion.go
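        # Illustrative only: subpath expansion as described above; volumeMounts
        # may use subPathExpr to expand $(VAR) from the container's environment
        # (names are assumptions):
        #
        #   apiVersion: v1
        #   kind: Pod
        #   metadata:
        #     name: subpath-expansion-demo
        #   spec:
        #     restartPolicy: Never
        #     containers:
        #     - name: demo
        #       image: busybox
        #       command: ["sh", "-c", "ls /data"]
        #       env:
        #       - name: POD_NAME
        #         valueFrom:
        #           fieldRef:
        #             fieldPath: metadata.name
        #       volumeMounts:
        #       - name: workdir
        #         mountPath: /data
        #         subPathExpr: $(POD_NAME)   # must be a relative path
        #     volumes:
        #     - name: workdir
        #       emptyDir: {}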
  2528  - testname: VolumeSubpathEnvExpansion, subpath with absolute path
  2529    codename: '[sig-node] Variable Expansion should fail substituting values in a volume
  2530      subpath with absolute path [Slow] [Conformance]'
  2531    description: Make sure a container's subpath cannot be set using an expansion of
  2532      environment variables when an absolute path is supplied.
  2533    release: v1.19
  2534    file: test/e2e/common/node/expansion.go
  2535  - testname: VolumeSubpathEnvExpansion, subpath with backticks
  2536    codename: '[sig-node] Variable Expansion should fail substituting values in a volume
  2537      subpath with backticks [Slow] [Conformance]'
  2538    description: Make sure a container's subpath cannot be set using an expansion of
  2539      environment variables when backticks are supplied.
  2540    release: v1.19
  2541    file: test/e2e/common/node/expansion.go
  2542  - testname: VolumeSubpathEnvExpansion, subpath test writes
  2543    codename: '[sig-node] Variable Expansion should succeed in writing subpaths in container
  2544      [Slow] [Conformance]'
  2545    description: "Verify that a subpath expansion can be used to write files into subpaths:
  2546      1. a valid subPathExpr starts a container running; 2. writes to the valid subpath
  2547      succeed; 3. successful expansion of the subPathExpr isn't required for volume cleanup"
  2548    release: v1.19
  2549    file: test/e2e/common/node/expansion.go
  2550  - testname: VolumeSubpathEnvExpansion, subpath ready from failed state
  2551    codename: '[sig-node] Variable Expansion should verify that a failing subpath expansion
  2552      can be modified during the lifecycle of a container [Slow] [Conformance]'
  2553    description: Verify that a failing subpath expansion can be modified during the
  2554      lifecycle of a container.
  2555    release: v1.19
  2556    file: test/e2e/common/node/expansion.go
  2557  - testname: LimitRange, resources
  2558    codename: '[sig-scheduling] LimitRange should create a LimitRange with defaults
  2559      and ensure pod has those defaults applied. [Conformance]'
  2560    description: Create a LimitRange and verify that it is created. Update the LimitRange
  2561      and validate the update. Create Pods with resources and validate that the LimitRange
  2562      defaults are applied to the Pod resources.
  2563    release: v1.18
  2564    file: test/e2e/scheduling/limit_range.go
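        # Illustrative only: a LimitRange with defaults of the kind the entry above
        # creates; the concrete values here are assumptions, not the test's own.
        #
        #   apiVersion: v1
        #   kind: LimitRange
        #   metadata:
        #     name: resource-defaults
        #   spec:
        #     limits:
        #     - type: Container
        #       default:            # applied as limits to containers that omit them
        #         cpu: 500m
        #         memory: 256Mi
        #       defaultRequest:     # applied as requests to containers that omit them
        #         cpu: 250m
        #         memory: 128Mi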
  2565  - testname: LimitRange, list, patch and delete a LimitRange by collection
  2566    codename: '[sig-scheduling] LimitRange should list, patch and delete a LimitRange
  2567      by collection [Conformance]'
  2568    description: When two limitRanges are created in different namespaces, both MUST
  2569      succeed. Listing limitRanges across all namespaces with a labelSelector MUST find
  2570      both limitRanges. When patching the first limitRange it MUST succeed and the fields
  2571      MUST equal the new values. When deleting the limitRange by collection with a labelSelector
  2572      it MUST delete only one limitRange.
  2573    release: v1.26
  2574    file: test/e2e/scheduling/limit_range.go
  2575  - testname: Scheduler, resource limits
  2576    codename: '[sig-scheduling] SchedulerPredicates [Serial] validates resource limits
  2577      of pods that are allowed to run [Conformance]'
  2578    description: Scheduling Pods MUST fail if the resource requests exceed machine capacity.
  2579    release: v1.9
  2580    file: test/e2e/scheduling/predicates.go
  2581  - testname: Scheduler, node selector matching
  2582    codename: '[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector
  2583      is respected if matching [Conformance]'
  2584    description: 'Create a label on the node {k: v}. Then create a Pod with a NodeSelector
  2585      set to {k: v}. Check to see if the Pod is scheduled. When the NodeSelector matches
  2586      then Pod MUST be scheduled on that node.'
  2587    release: v1.9
  2588    file: test/e2e/scheduling/predicates.go
  2589  - testname: Scheduler, node selector not matching
  2590    codename: '[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector
  2591      is respected if not matching [Conformance]'
  2592    description: Create a Pod with a NodeSelector set to a value that does not match
  2593      a node in the cluster. Since there are no nodes matching the criteria, the Pod
  2594      MUST NOT be scheduled.
  2595    release: v1.9
  2596    file: test/e2e/scheduling/predicates.go
  2597  - testname: Scheduling, HostPort and Protocol match, HostIPs different but one is
  2598      default HostIP (0.0.0.0)
  2599    codename: '[sig-scheduling] SchedulerPredicates [Serial] validates that there exists
  2600      conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP
  2601      [Conformance]'
  2602    description: Pods with the same HostPort and Protocol, but different HostIPs, MUST
  2603      NOT schedule to the same node if one of those IPs is the default HostIP of 0.0.0.0,
  2604      which represents all IPs on the host.
  2605    release: v1.16
  2606    file: test/e2e/scheduling/predicates.go
  2607  - testname: Pod preemption verification
  2608    codename: '[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  2609      runs ReplicaSets to verify preemption running path [Conformance]'
  2610    description: Four levels of Pods in ReplicaSets with different levels of Priority,
  2611      restricted by given CPU limits, MUST launch. Priority 1-3 Pods MUST be spawned
  2612      first, followed by the Priority 4 Pod. The ReplicaSets MUST contain the expected
  2613      number of Replicas.
  2614    release: v1.19
  2615    file: test/e2e/scheduling/preemption.go
  2616  - testname: Scheduler, Verify PriorityClass endpoints
  2617    codename: '[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints
  2618      verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]'
  2619    description: Verify that PriorityClass endpoints can be listed. When any mutable
  2620      field is either patched or updated it MUST succeed. When any immutable field is
  2621      either patched or updated it MUST fail.
  2622    release: v1.20
  2623    file: test/e2e/scheduling/preemption.go
  2624  - testname: Scheduler, Basic Preemption
  2625    codename: '[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption
  2626      works [Conformance]'
  2627    description: When a higher priority pod is created and no node with enough resources
  2628      is found, the scheduler MUST preempt a lower priority pod and schedule the high
  2629      priority pod.
  2630    release: v1.19
  2631    file: test/e2e/scheduling/preemption.go
  2632  - testname: Scheduler, Preemption for critical pod
  2633    codename: '[sig-scheduling] SchedulerPreemption [Serial] validates lower priority
  2634      pod preemption by critical pod [Conformance]'
  2635    description: When a critical pod is created and no node with enough resources is
  2636      found, the scheduler MUST preempt a lower priority pod to schedule the critical
  2637      pod.
  2638    release: v1.19
  2639    file: test/e2e/scheduling/preemption.go
  2640  - testname: CSIDriver, lifecycle
  2641    codename: '[sig-storage] CSIInlineVolumes should run through the lifecycle of a
  2642      CSIDriver [Conformance]'
  2643    description: Creating two CSIDrivers MUST succeed. Patching a CSIDriver MUST succeed
  2644      with its new label found. Updating a CSIDriver MUST succeed with its new label
  2645      found. Two CSIDrivers MUST be found when listed. Deleting the first CSIDriver
  2646      MUST succeed. Deleting the second CSIDriver via deleteCollection MUST succeed.
  2647    release: v1.28
  2648    file: test/e2e/storage/csi_inline.go
  2649  - testname: CSIInlineVolumes should support Pods with inline volumes
  2650    codename: '[sig-storage] CSIInlineVolumes should support CSIVolumeSource in Pod
  2651      API [Conformance]'
  2652    description: Pod resources with CSIVolumeSource should support create, get, list,
  2653      patch, and delete operations.
  2654    release: v1.26
  2655    file: test/e2e/storage/csi_inline.go
  2656  - testname: CSIStorageCapacity API
  2657    codename: '[sig-storage] CSIStorageCapacity should support CSIStorageCapacities
  2658      API operations [Conformance]'
  2659    description: ' The storage.k8s.io API group MUST exist in the /apis discovery document.
  2660      The storage.k8s.io/v1 API group/version MUST exist in the /apis/storage.k8s.io discovery
  2661      document. The csistoragecapacities resource MUST exist in the /apis/storage.k8s.io/v1
  2662      discovery document. The csistoragecapacities resource must support create, get,
  2663      list, watch, update, patch, delete, and deletecollection.'
  2664    release: v1.24
  2665    file: test/e2e/storage/csistoragecapacity.go
  2666  - testname: ConfigMap Volume, text data, binary data
  2667    codename: '[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance]
  2668      [Conformance]'
  2669    description: The ConfigMap that is created with text data and binary data MUST be
  2670      accessible to read from the newly created Pod using the volume mount that is mapped
  2671      to a custom path in the Pod. ConfigMap's text data and binary data MUST be verified
  2672      by reading the content from the mounted files in the Pod.
  2673    release: v1.12
  2674    file: test/e2e/common/storage/configmap_volume.go
  2675  - testname: ConfigMap Volume, create, update and delete
  2676    codename: '[sig-storage] ConfigMap optional updates should be reflected in volume
  2677      [NodeConformance] [Conformance]'
  2678    description: The ConfigMap that is created MUST be accessible to read from the newly
  2679      created Pod using the volume mount that is mapped to a custom path in the Pod.
  2680      When the ConfigMap is updated, the change to the ConfigMap MUST be verified by
  2681      reading the content from the mounted file in the Pod. Also, when the item (file)
  2682      is deleted from the map, that MUST result in an error reading that item (file).
  2683    release: v1.9
  2684    file: test/e2e/common/storage/configmap_volume.go
  2685  - testname: ConfigMap Volume, without mapping
  2686    codename: '[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance]
  2687      [Conformance]'
  2688    description: Create a ConfigMap, create a Pod that mounts a volume and populates
  2689      the volume with data stored in the ConfigMap. The ConfigMap that is created MUST
  2690      be accessible to read from the newly created Pod using the volume mount. The data
  2691      content of the file MUST be readable and verified and file modes MUST default
  2692      to 0644.
  2693    release: v1.9
  2694    file: test/e2e/common/storage/configmap_volume.go
  2695  - testname: ConfigMap Volume, without mapping, non-root user
  2696    codename: '[sig-storage] ConfigMap should be consumable from pods in volume as non-root
  2697      [NodeConformance] [Conformance]'
  2698    description: Create a ConfigMap, create a Pod that mounts a volume and populates
  2699      the volume with data stored in the ConfigMap. Pod is run as a non-root user with
  2700      uid=1000. The ConfigMap that is created MUST be accessible to read from the newly
  2701      created Pod using the volume mount. The file on the volume MUST have file mode
  2702      set to the default value of 0644.
  2703    release: v1.9
  2704    file: test/e2e/common/storage/configmap_volume.go
  2705  - testname: ConfigMap Volume, without mapping, volume mode set
  2706    codename: '[sig-storage] ConfigMap should be consumable from pods in volume with
  2707      defaultMode set [LinuxOnly] [NodeConformance] [Conformance]'
  2708    description: Create a ConfigMap, create a Pod that mounts a volume and populates
  2709      the volume with data stored in the ConfigMap. File mode is changed to a custom
  2710      value of '0400'. The ConfigMap that is created MUST be accessible to read from
  2711      the newly created Pod using the volume mount. The data content of the file MUST
  2712      be readable and verified and file modes MUST be set to the custom value of '0400'.
  2713      This test is marked LinuxOnly since Windows does not support setting specific
  2714      file permissions.
  2715    release: v1.9
  2716    file: test/e2e/common/storage/configmap_volume.go
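        # Illustrative only: mounting a ConfigMap volume with a custom defaultMode,
        # as in the entry above; names are assumptions and the mode is octal 0400,
        # i.e. -r--------.
        #
        #   apiVersion: v1
        #   kind: Pod
        #   metadata:
        #     name: configmap-mode-demo
        #   spec:
        #     restartPolicy: Never
        #     containers:
        #     - name: demo
        #       image: busybox
        #       command: ["sh", "-c", "ls -l /etc/config"]
        #       volumeMounts:
        #       - name: config
        #         mountPath: /etc/config
        #     volumes:
        #     - name: config
        #       configMap:
        #         name: demo-config
        #         defaultMode: 0400    # owner read-only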
  2717  - testname: ConfigMap Volume, with mapping
  2718    codename: '[sig-storage] ConfigMap should be consumable from pods in volume with
  2719      mappings [NodeConformance] [Conformance]'
  2720    description: Create a ConfigMap, create a Pod that mounts a volume and populates
  2721      the volume with data stored in the ConfigMap. Files are mapped to a path in the
  2722      volume. The ConfigMap that is created MUST be accessible to read from the newly
  2723      created Pod using the volume mount. The data content of the file MUST be readable
  2724      and verified and file modes MUST default to 0644.
  2725    release: v1.9
  2726    file: test/e2e/common/storage/configmap_volume.go
  2727  - testname: ConfigMap Volume, with mapping, volume mode set
  2728    codename: '[sig-storage] ConfigMap should be consumable from pods in volume with
  2729      mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]'
  2730    description: Create a ConfigMap, create a Pod that mounts a volume and populates
  2731      the volume with data stored in the ConfigMap. Files are mapped to a path in the
  2732      volume. File mode is changed to a custom value of '0400'. The ConfigMap that
  2733      is created MUST be accessible to read from the newly created Pod using the volume
  2734      mount. The data content of the file MUST be readable and verified and file modes
  2735      MUST be set to the custom value of '0400'. This test is marked LinuxOnly since
  2736      Windows does not support setting specific file permissions.
  2737    release: v1.9
  2738    file: test/e2e/common/storage/configmap_volume.go
  2739  - testname: ConfigMap Volume, with mapping, non-root user
  2740    codename: '[sig-storage] ConfigMap should be consumable from pods in volume with
  2741      mappings as non-root [NodeConformance] [Conformance]'
  2742    description: Create a ConfigMap, create a Pod that mounts a volume and populates
  2743      the volume with data stored in the ConfigMap. Files are mapped to a path in the
  2744      volume. Pod is run as a non-root user with uid=1000. The ConfigMap that is created
  2745      MUST be accessible to read from the newly created Pod using the volume mount.
  2746      The file on the volume MUST have file mode set to the default value of 0644.
  2747    release: v1.9
  2748    file: test/e2e/common/storage/configmap_volume.go
  2749  - testname: ConfigMap Volume, multiple volume maps
  2750    codename: '[sig-storage] ConfigMap should be consumable in multiple volumes in the
  2751      same pod [NodeConformance] [Conformance]'
  2752    description: The ConfigMap that is created MUST be accessible to read from the newly
  2753      created Pod using the volume mount that is mapped to multiple paths in the Pod.
  2754      The content MUST be accessible from all the mapped volume mounts.
  2755    release: v1.9
  2756    file: test/e2e/common/storage/configmap_volume.go
  2757  - testname: ConfigMap Volume, immutability
  2758    codename: '[sig-storage] ConfigMap should be immutable if `immutable` field is set
  2759      [Conformance]'
  2760    description: Create a ConfigMap. Update its data field, the update MUST succeed.
  2761      Mark the ConfigMap as immutable, the update MUST succeed. Try to update its data,
  2762      the update MUST fail. Try to mark the ConfigMap back as not immutable, the update
  2763      MUST fail. Try to update the ConfigMap's metadata (labels), the update MUST succeed.
  2764      Try to delete the ConfigMap, the deletion MUST succeed.
  2765    release: v1.21
  2766    file: test/e2e/common/storage/configmap_volume.go
  2767  - testname: ConfigMap Volume, update
  2768    codename: '[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance]
  2769      [Conformance]'
  2770    description: The ConfigMap that is created MUST be accessible to read from the newly
  2771      created Pod using the volume mount that is mapped to a custom path in the Pod.
  2772      When the ConfigMap is updated, the change to the ConfigMap MUST be verified by reading
  2773      the content from the mounted file in the Pod.
  2774    release: v1.9
  2775    file: test/e2e/common/storage/configmap_volume.go
  2776  - testname: DownwardAPI volume, CPU limits
  2777    codename: '[sig-storage] Downward API volume should provide container''s cpu limit
  2778      [NodeConformance] [Conformance]'
  2779    description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
  2780      contains an item for the CPU limits. The container runtime MUST be able to access
  2781      CPU limits from the specified path on the mounted volume.
  2782    release: v1.9
  2783    file: test/e2e/common/storage/downwardapi_volume.go
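        # Illustrative only: the DownwardAPIVolumeFiles shape that the Downward API
        # volume entries here repeatedly reference, sketched with assumed names:
        #
        #   apiVersion: v1
        #   kind: Pod
        #   metadata:
        #     name: downward-api-demo
        #   spec:
        #     restartPolicy: Never
        #     containers:
        #     - name: demo
        #       image: busybox
        #       command: ["cat", "/etc/podinfo/cpu_limit"]
        #       resources:
        #         limits:
        #           cpu: 500m
        #       volumeMounts:
        #       - name: podinfo
        #         mountPath: /etc/podinfo
        #     volumes:
        #     - name: podinfo
        #       downwardAPI:
        #         items:
        #         - path: cpu_limit
        #           resourceFieldRef:
        #             containerName: demo
        #             resource: limits.cpu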
  2784  - testname: DownwardAPI volume, CPU request
  2785    codename: '[sig-storage] Downward API volume should provide container''s cpu request
  2786      [NodeConformance] [Conformance]'
  2787    description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
  2788      contains an item for the CPU request. The container runtime MUST be able to access
  2789      CPU request from the specified path on the mounted volume.
  2790    release: v1.9
  2791    file: test/e2e/common/storage/downwardapi_volume.go
  2792  - testname: DownwardAPI volume, memory limits
  2793    codename: '[sig-storage] Downward API volume should provide container''s memory
  2794      limit [NodeConformance] [Conformance]'
  2795    description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
  2796      contains an item for the memory limits. The container runtime MUST be able to access
  2797      memory limits from the specified path on the mounted volume.
  2798    release: v1.9
  2799    file: test/e2e/common/storage/downwardapi_volume.go
  2800  - testname: DownwardAPI volume, memory request
  2801    codename: '[sig-storage] Downward API volume should provide container''s memory
  2802      request [NodeConformance] [Conformance]'
  2803    description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
  2804      contains an item for the memory request. The container runtime MUST be able to
  2805      access memory request from the specified path on the mounted volume.
  2806    release: v1.9
  2807    file: test/e2e/common/storage/downwardapi_volume.go
  2808  - testname: DownwardAPI volume, CPU limit, default node allocatable
  2809    codename: '[sig-storage] Downward API volume should provide node allocatable (cpu)
  2810      as default cpu limit if the limit is not set [NodeConformance] [Conformance]'
  2811    description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
  2812      contains an item for the CPU limits. CPU limits are not specified for the container.
  2813      The container runtime MUST be able to access CPU limits from the specified path
  2814      on the mounted volume and the value MUST be default node allocatable.
  2815    release: v1.9
  2816    file: test/e2e/common/storage/downwardapi_volume.go
  2817  - testname: DownwardAPI volume, memory limit, default node allocatable
  2818    codename: '[sig-storage] Downward API volume should provide node allocatable (memory)
  2819      as default memory limit if the limit is not set [NodeConformance] [Conformance]'
  2820    description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
  2821      contains an item for the memory limits. Memory limits are not specified for the
  2822      container. The container runtime MUST be able to access memory limits from the
  2823      specified path on the mounted volume and the value MUST be default node allocatable.
  2824    release: v1.9
  2825    file: test/e2e/common/storage/downwardapi_volume.go
  2826  - testname: DownwardAPI volume, pod name
  2827    codename: '[sig-storage] Downward API volume should provide podname only [NodeConformance]
  2828      [Conformance]'
  2829    description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
  2830      contains an item for the Pod name. The container runtime MUST be able to access
  2831      Pod name from the specified path on the mounted volume.
  2832    release: v1.9
  2833    file: test/e2e/common/storage/downwardapi_volume.go
  2834  - testname: DownwardAPI volume, volume mode 0400
  2835    codename: '[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly]
  2836      [NodeConformance] [Conformance]'
  2837    description: A Pod is configured with DownwardAPIVolumeSource with the volume source
  2838      mode set to -r-------- and DownwardAPIVolumeFiles contains an item for the Pod
  2839      name. The container runtime MUST be able to access Pod name from the specified
  2840      path on the mounted volume. This test is marked LinuxOnly since Windows does not
  2841      support setting specific file permissions.
  2842    release: v1.9
  2843    file: test/e2e/common/storage/downwardapi_volume.go
  2844  - testname: DownwardAPI volume, file mode 0400
  2845    codename: '[sig-storage] Downward API volume should set mode on item file [LinuxOnly]
  2846      [NodeConformance] [Conformance]'
  2847    description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
  2848      contains an item for the Pod name with the file mode set to -r--------. The container
  2849      runtime MUST be able to access Pod name from the specified path on the mounted
  2850      volume. This test is marked LinuxOnly since Windows does not support setting specific
  2851      file permissions.
  2852    release: v1.9
  2853    file: test/e2e/common/storage/downwardapi_volume.go
  2854  - testname: DownwardAPI volume, update annotations
  2855    codename: '[sig-storage] Downward API volume should update annotations on modification
  2856      [NodeConformance] [Conformance]'
  2857    description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
  2858      contains list of items for each of the Pod annotations. The container runtime
  2859      MUST be able to access Pod annotations from the specified path on the mounted
  2860      volume. Update the annotations by adding a new annotation to the running Pod.
  2861      The new annotation MUST be available from the mounted volume.
  2862    release: v1.9
  2863    file: test/e2e/common/storage/downwardapi_volume.go
  2864  - testname: DownwardAPI volume, update label
  2865    codename: '[sig-storage] Downward API volume should update labels on modification
  2866      [NodeConformance] [Conformance]'
  2867    description: A Pod is configured with DownwardAPIVolumeSource and DownwardAPIVolumeFiles
  2868      contains list of items for each of the Pod labels. The container runtime MUST
  2869      be able to access Pod labels from the specified path on the mounted volume. Update
  2870      the labels by adding a new label to the running Pod. The new label MUST be available
  2871      from the mounted volume.
  2872    release: v1.9
  2873    file: test/e2e/common/storage/downwardapi_volume.go
  2874  - testname: EmptyDir, Shared volumes between containers
  2875    codename: '[sig-storage] EmptyDir volumes pod should support shared volumes between
  2876      containers [Conformance]'
  2877    description: A Pod created with an 'emptyDir' Volume should share volumes between
  2878      the containers in the pod. The two busybox image containers should share the
  2879      volumes mounted to the pod. The main container should wait until the sub container
  2880      drops a file, and the main container then accesses the shared data.
  2881    release: v1.15
  2882    file: test/e2e/common/storage/empty_dir.go
  2883  - testname: EmptyDir, medium default, volume mode 0644, non-root user
  2884    codename: '[sig-storage] EmptyDir volumes should support (non-root,0644,default)
  2885      [LinuxOnly] [NodeConformance] [Conformance]'
  2886    description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0644.
  2887      Volume is mounted into the container, which is run as a non-root user.
  2888      The volume MUST have mode -rw-r--r-- and mount type set to tmpfs and the contents
  2889      MUST be readable. This test is marked LinuxOnly since Windows does not support
  2890      setting specific file permissions, or running as UID / GID.
  2891    release: v1.9
  2892    file: test/e2e/common/storage/empty_dir.go
  2893  - testname: EmptyDir, medium memory, volume mode 0644, non-root user
  2894    codename: '[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly]
  2895      [NodeConformance] [Conformance]'
  2896    description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
  2897      volume mode set to 0644. Volume is mounted into the container, which is run
  2898      as a non-root user. The volume MUST have mode -rw-r--r-- and mount type
  2899      set to tmpfs and the contents MUST be readable. This test is marked LinuxOnly
  2900      since Windows does not support setting specific file permissions, or running as
  2901      UID / GID, or the medium = 'Memory'.
  2902    release: v1.9
  2903    file: test/e2e/common/storage/empty_dir.go
  2904  - testname: EmptyDir, medium default, volume mode 0666, non-root user
  2905    codename: '[sig-storage] EmptyDir volumes should support (non-root,0666,default)
  2906      [LinuxOnly] [NodeConformance] [Conformance]'
  2907    description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0666.
  2908      Volume is mounted into the container, which is run as a non-root user.
  2909      The volume MUST have mode -rw-rw-rw- and mount type set to tmpfs and the contents
  2910      MUST be readable. This test is marked LinuxOnly since Windows does not support
  2911      setting specific file permissions, or running as UID / GID.
  2912    release: v1.9
  2913    file: test/e2e/common/storage/empty_dir.go
  2914  - testname: EmptyDir, medium memory, volume mode 0666, non-root user
  2915    codename: '[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly]
  2916      [NodeConformance] [Conformance]'
  2917    description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
  2918      volume mode set to 0666. Volume is mounted into the container, which is run
  2919      as a non-root user. The volume MUST have mode -rw-rw-rw- and mount type
  2920      set to tmpfs and the contents MUST be readable. This test is marked LinuxOnly
  2921      since Windows does not support setting specific file permissions, or running as
  2922      UID / GID, or the medium = 'Memory'.
  2923    release: v1.9
  2924    file: test/e2e/common/storage/empty_dir.go
  2925  - testname: EmptyDir, medium default, volume mode 0777, non-root user
  2926    codename: '[sig-storage] EmptyDir volumes should support (non-root,0777,default)
  2927      [LinuxOnly] [NodeConformance] [Conformance]'
  2928    description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0777.
  2929      Volume is mounted into the container, which is run as a non-root user.
  2930      The volume MUST have mode -rwxrwxrwx and mount type set to tmpfs and the contents
  2931      MUST be readable. This test is marked LinuxOnly since Windows does not support
  2932      setting specific file permissions, or running as UID / GID.
  2933    release: v1.9
  2934    file: test/e2e/common/storage/empty_dir.go
  2935  - testname: EmptyDir, medium memory, volume mode 0777, non-root user
  2936    codename: '[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly]
  2937      [NodeConformance] [Conformance]'
  2938    description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
  2939      volume mode set to 0777. Volume is mounted into the container, which is run
  2940      as a non-root user. The volume MUST have mode -rwxrwxrwx and mount type
  2941      set to tmpfs and the contents MUST be readable. This test is marked LinuxOnly
  2942      since Windows does not support setting specific file permissions, or running as
  2943      UID / GID, or the medium = 'Memory'.
  2944    release: v1.9
  2945    file: test/e2e/common/storage/empty_dir.go
  2946  - testname: EmptyDir, medium default, volume mode 0644
  2947    codename: '[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly]
  2948      [NodeConformance] [Conformance]'
  2949    description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0644.
  2950      The volume MUST have mode -rw-r--r-- and mount type set to tmpfs and the contents
  2951      MUST be readable. This test is marked LinuxOnly since Windows does not support
  2952      setting specific file permissions, or running as UID / GID.
  2953    release: v1.9
  2954    file: test/e2e/common/storage/empty_dir.go
  2955  - testname: EmptyDir, medium memory, volume mode 0644
  2956    codename: '[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly]
  2957      [NodeConformance] [Conformance]'
  2958    description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
  2959      volume mode set to 0644. The volume MUST have mode -rw-r--r-- and mount type set
  2960      to tmpfs and the contents MUST be readable. This test is marked LinuxOnly since
  2961      Windows does not support setting specific file permissions, or running as UID
  2962      / GID, or the medium = 'Memory'.
  2963    release: v1.9
  2964    file: test/e2e/common/storage/empty_dir.go
  2965  - testname: EmptyDir, medium default, volume mode 0666
  2966    codename: '[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly]
  2967      [NodeConformance] [Conformance]'
  2968    description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0666.
  2969      The volume MUST have mode -rw-rw-rw- and mount type set to tmpfs and the contents
  2970      MUST be readable. This test is marked LinuxOnly since Windows does not support
  2971      setting specific file permissions, or running as UID / GID.
  2972    release: v1.9
  2973    file: test/e2e/common/storage/empty_dir.go
  2974  - testname: EmptyDir, medium memory, volume mode 0666
  2975    codename: '[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly]
  2976      [NodeConformance] [Conformance]'
  2977    description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
  2978      volume mode set to 0666. The volume MUST have mode -rw-rw-rw- and mount type set
  2979      to tmpfs and the contents MUST be readable. This test is marked LinuxOnly since
  2980      Windows does not support setting specific file permissions, or running as UID
  2981      / GID, or the medium = 'Memory'.
  2982    release: v1.9
  2983    file: test/e2e/common/storage/empty_dir.go
  2984  - testname: EmptyDir, medium default, volume mode 0777
  2985    codename: '[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly]
  2986      [NodeConformance] [Conformance]'
  2987    description: A Pod created with an 'emptyDir' Volume, the volume mode set to 0777. The
  2988      volume MUST have mode set as -rwxrwxrwx and mount type set to tmpfs and the contents
  2989      MUST be readable. This test is marked LinuxOnly since Windows does not support
  2990      setting specific file permissions, or running as UID / GID.
  2991    release: v1.9
  2992    file: test/e2e/common/storage/empty_dir.go
  2993  - testname: EmptyDir, medium memory, volume mode 0777
  2994    codename: '[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly]
  2995      [NodeConformance] [Conformance]'
  2996    description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
  2997      volume mode set to 0777. The volume MUST have mode set as -rwxrwxrwx and mount
  2998      type set to tmpfs and the contents MUST be readable. This test is marked LinuxOnly
  2999      since Windows does not support setting specific file permissions, or running as
  3000      UID / GID, or the medium = 'Memory'.
  3001    release: v1.9
  3002    file: test/e2e/common/storage/empty_dir.go
  3003  - testname: EmptyDir, medium default, volume mode default
  3004    codename: '[sig-storage] EmptyDir volumes volume on default medium should have the
  3005      correct mode [LinuxOnly] [NodeConformance] [Conformance]'
  3006    description: A Pod created with an 'emptyDir' Volume, the volume MUST have mode
  3007      set as -rwxrwxrwx and mount type set to tmpfs. This test is marked LinuxOnly since
  3008      Windows does not support setting specific file permissions.
  3009    release: v1.9
  3010    file: test/e2e/common/storage/empty_dir.go
  3011  - testname: EmptyDir, medium memory, volume mode default
  3012    codename: '[sig-storage] EmptyDir volumes volume on tmpfs should have the correct
  3013      mode [LinuxOnly] [NodeConformance] [Conformance]'
  3014    description: A Pod created with an 'emptyDir' Volume and 'medium' as 'Memory', the
  3015      volume MUST have mode set as -rwxrwxrwx and mount type set to tmpfs. This test
  3016      is marked LinuxOnly since Windows does not support setting specific file permissions,
  3017      or the medium = 'Memory'.
  3018    release: v1.9
  3019    file: test/e2e/common/storage/empty_dir.go
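        # Illustrative only: an emptyDir volume with medium 'Memory' (tmpfs-backed),
        # the variant the EmptyDir entries above exercise (assumed names/image):
        #
        #   apiVersion: v1
        #   kind: Pod
        #   metadata:
        #     name: emptydir-memory-demo
        #   spec:
        #     restartPolicy: Never
        #     containers:
        #     - name: demo
        #       image: busybox
        #       command: ["sh", "-c", "mount | grep /cache"]   # shows a tmpfs mount type
        #       volumeMounts:
        #       - name: cache
        #         mountPath: /cache
        #     volumes:
        #     - name: cache
        #       emptyDir:
        #         medium: Memory    # omit 'medium' to use the node's default storage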
  3020  - testname: EmptyDir Wrapper Volume, ConfigMap volumes, no race
  3021    codename: '[sig-storage] EmptyDir wrapper volumes should not cause race condition
  3022      when used for configmaps [Serial] [Conformance]'
  3023    description: Create 50 ConfigMap volumes and 5 replicas of a pod with these ConfigMap
  3024      volumes mounted. Pod MUST NOT fail waiting for Volumes.
  3025    release: v1.13
  3026    file: test/e2e/storage/empty_dir_wrapper.go
  3027  - testname: EmptyDir Wrapper Volume, Secret and ConfigMap volumes, no conflict
  3028    codename: '[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]'
  3029    description: A Secret volume and a ConfigMap volume are created with data. Pod MUST be
  3030      able to start with Secret and ConfigMap volumes mounted into the container.
  3031    release: v1.13
  3032    file: test/e2e/storage/empty_dir_wrapper.go
  3033  - testname: PersistentVolumes(Claims), apply changes to a pv/pvc status
  3034    codename: '[sig-storage] PersistentVolumes CSI Conformance should apply changes
  3035      to a pv/pvc status [Conformance]'
  3036    description: Creating PV and PVC MUST succeed. Listing PVs with a labelSelector
  3037      MUST succeed. Listing PVCs in a namespace MUST succeed. Reading PVC status MUST
  3038      succeed with a valid phase found. Reading PV status MUST succeed with a valid
  3039      phase found. Patching the PVC status MUST succeed with its new condition found.
  3040      Patching the PV status MUST succeed with the new reason/message found. Updating
  3041      the PVC status MUST succeed with its new condition found. Updating the PV status
  3042      MUST succeed with the new reason/message found.
  3043    release: v1.29
  3044    file: test/e2e/storage/persistent_volumes.go
  3045  - testname: PersistentVolumes(Claims), lifecycle
  3046    codename: '[sig-storage] PersistentVolumes CSI Conformance should run through the
  3047      lifecycle of a PV and a PVC [Conformance]'
  3048    description: Creating PV and PVC MUST succeed. Listing PVs with a labelSelector
  3049      MUST succeed. Listing PVCs in a namespace MUST succeed. Patching a PV MUST succeed
  3050      with its new label found. Patching a PVC MUST succeed with its new label found.
  3051      Reading a PV and PVC MUST succeed with required UID retrieved. Deleting a PVC
  3052      and PV MUST succeed and it MUST be confirmed. Replacement PV and PVC MUST be created.
  3053      Updating a PV MUST succeed with its new label found. Updating a PVC MUST succeed
  3054      with its new label found. Deleting the PVC and PV via deleteCollection MUST succeed
  3055      and it MUST be confirmed.
  3056    release: v1.29
  3057    file: test/e2e/storage/persistent_volumes.go
  3058  - testname: Projected Volume, multiple projections
  3059    codename: '[sig-storage] Projected combined should project all components that make
  3060      up the projection API [Projection] [NodeConformance] [Conformance]'
  3061    description: A Pod is created with a projected volume source for secrets, configMap
  3062      and downwardAPI with pod name, cpu and memory limits and cpu and memory requests.
  3063      Pod MUST be able to read the secrets, configMap values and the cpu and memory
  3064      limits as well as cpu and memory requests from the mounted DownwardAPIVolumeFiles.
  3065    release: v1.9
  3066    file: test/e2e/common/storage/projected_combined.go
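        # Illustrative only: a projected volume combining the three sources named in
        # the entry above (secret, configMap, downwardAPI); resource names are
        # assumptions.
        #
        #   volumes:
        #   - name: all-in-one
        #     projected:
        #       sources:
        #       - secret:
        #           name: demo-secret
        #       - configMap:
        #           name: demo-config
        #       - downwardAPI:
        #           items:
        #           - path: podname
        #             fieldRef:
        #               fieldPath: metadata.name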
  3067  - testname: Projected Volume, ConfigMap, create, update and delete
  3068    codename: '[sig-storage] Projected configMap optional updates should be reflected
  3069      in volume [NodeConformance] [Conformance]'
  3070    description: Create a Pod with three containers with ConfigMaps, namely a create,
  3071      an update and a delete container. The create container, when started, MUST NOT
  3072      have the configMap; the update and delete containers MUST be created with a ConfigMap
  3073      value of 'value-1'. Create a configMap in the create container; the Pod MUST be
  3074      able to read it from the create container. Update the configMap in the update container;
  3075      the Pod MUST be able to read the updated value. Delete the configMap in the delete
  3076      container. Pod MUST fail to read the configMap from the delete container.
  3077    release: v1.9
  3078    file: test/e2e/common/storage/projected_configmap.go
  3079  - testname: Projected Volume, ConfigMap, volume mode default
  3080    codename: '[sig-storage] Projected configMap should be consumable from pods in volume
  3081      [NodeConformance] [Conformance]'
  3082    description: A Pod is created with projected volume source 'ConfigMap' to store
  3083      a configMap with default permission mode. Pod MUST be able to read the content
  3084      of the ConfigMap successfully and the mode on the volume MUST be -rw-r--r--.
  3085    release: v1.9
  3086    file: test/e2e/common/storage/projected_configmap.go
  3087  - testname: Projected Volume, ConfigMap, non-root user
  3088    codename: '[sig-storage] Projected configMap should be consumable from pods in volume
  3089      as non-root [NodeConformance] [Conformance]'
  3090    description: A Pod is created with projected volume source 'ConfigMap' to store
  3091      a configMap as non-root user with uid 1000. Pod MUST be able to read the content
  3092      of the ConfigMap successfully and the mode on the volume MUST be -rw-r--r--.
  3093    release: v1.9
  3094    file: test/e2e/common/storage/projected_configmap.go
  3095  - testname: Projected Volume, ConfigMap, volume mode 0400
  3096    codename: '[sig-storage] Projected configMap should be consumable from pods in volume
  3097      with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]'
  3098    description: A Pod is created with projected volume source 'ConfigMap' to store
  3099      a configMap with permission mode set to 0400. Pod MUST be able to read the content
  3100      of the ConfigMap successfully and the mode on the volume MUST be -r--------. This
  3101      test is marked LinuxOnly since Windows does not support setting specific file
  3102      permissions.
  3103    release: v1.9
  3104    file: test/e2e/common/storage/projected_configmap.go
  3105  - testname: Projected Volume, ConfigMap, mapped
  3106    codename: '[sig-storage] Projected configMap should be consumable from pods in volume
  3107      with mappings [NodeConformance] [Conformance]'
  3108    description: A Pod is created with projected volume source 'ConfigMap' to store
  3109      a configMap with default permission mode. The ConfigMap is also mapped to a custom
  3110      path. Pod MUST be able to read the content of the ConfigMap from the custom location
  3111      successfully and the mode on the volume MUST be -rw-r--r--.
  3112    release: v1.9
  3113    file: test/e2e/common/storage/projected_configmap.go
  3114  - testname: Projected Volume, ConfigMap, mapped, volume mode 0400
  3115    codename: '[sig-storage] Projected configMap should be consumable from pods in volume
  3116      with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]'
  3117    description: A Pod is created with projected volume source 'ConfigMap' to store
  3118      a configMap with permission mode set to 0400. The ConfigMap is also mapped to
  3119      a custom path. Pod MUST be able to read the content of the ConfigMap from the
  3120      custom location successfully and the mode on the volume MUST be -r--r--r--. This
  3121      test is marked LinuxOnly since Windows does not support setting specific file
  3122      permissions.
  3123    release: v1.9
  3124    file: test/e2e/common/storage/projected_configmap.go
  3125  - testname: Projected Volume, ConfigMap, mapped, non-root user
  3126    codename: '[sig-storage] Projected configMap should be consumable from pods in volume
  3127      with mappings as non-root [NodeConformance] [Conformance]'
  3128    description: A Pod is created with projected volume source 'ConfigMap' to store
  3129      a configMap as non-root user with uid 1000. The ConfigMap is also mapped to a
  3130      custom path. Pod MUST be able to read the content of the ConfigMap from the custom
  3131      location successfully and the mode on the volume MUST be -r--r--r--.
  3132    release: v1.9
  3133    file: test/e2e/common/storage/projected_configmap.go
  3134  - testname: Projected Volume, ConfigMap, multiple volume paths
  3135    codename: '[sig-storage] Projected configMap should be consumable in multiple volumes
  3136      in the same pod [NodeConformance] [Conformance]'
  3137    description: A Pod is created with a projected volume source 'ConfigMap' to store
  3138      a configMap. The configMap is mapped to two different volume mounts. Pod MUST
  3139      be able to read the content of the configMap successfully from the two volume
  3140      mounts.
  3141    release: v1.9
  3142    file: test/e2e/common/storage/projected_configmap.go
  3143  - testname: Projected Volume, ConfigMap, update
  3144    codename: '[sig-storage] Projected configMap updates should be reflected in volume
  3145      [NodeConformance] [Conformance]'
  3146    description: A Pod is created with projected volume source 'ConfigMap' to store
  3147      a configMap and performs a create and an update to a new value. Pod MUST be able
  3148      to create the configMap with value-1. Pod MUST be able to update the value in
  3149      the configMap to value-2.
  3150    release: v1.9
  3151    file: test/e2e/common/storage/projected_configmap.go
  3152  - testname: Projected Volume, DownwardAPI, CPU limits
  3153    codename: '[sig-storage] Projected downwardAPI should provide container''s cpu limit
  3154      [NodeConformance] [Conformance]'
  3155    description: A Pod is created with a projected volume source for downwardAPI with
  3156      pod name, cpu and memory limits and cpu and memory requests. Pod MUST be able
  3157      to read the cpu limits from the mounted DownwardAPIVolumeFiles.
  3158    release: v1.9
  3159    file: test/e2e/common/storage/projected_downwardapi.go
  3160  - testname: Projected Volume, DownwardAPI, CPU request
  3161    codename: '[sig-storage] Projected downwardAPI should provide container''s cpu request
  3162      [NodeConformance] [Conformance]'
  3163    description: A Pod is created with a projected volume source for downwardAPI with
  3164      pod name, cpu and memory limits and cpu and memory requests. Pod MUST be able
  3165      to read the cpu request from the mounted DownwardAPIVolumeFiles.
  3166    release: v1.9
  3167    file: test/e2e/common/storage/projected_downwardapi.go
  3168  - testname: Projected Volume, DownwardAPI, memory limits
  3169    codename: '[sig-storage] Projected downwardAPI should provide container''s memory
  3170      limit [NodeConformance] [Conformance]'
  3171    description: A Pod is created with a projected volume source for downwardAPI with
  3172      pod name, cpu and memory limits and cpu and memory requests. Pod MUST be able
  3173      to read the memory limits from the mounted DownwardAPIVolumeFiles.
  3174    release: v1.9
  3175    file: test/e2e/common/storage/projected_downwardapi.go
  3176  - testname: Projected Volume, DownwardAPI, memory request
  3177    codename: '[sig-storage] Projected downwardAPI should provide container''s memory
  3178      request [NodeConformance] [Conformance]'
  3179    description: A Pod is created with a projected volume source for downwardAPI with
  3180      pod name, cpu and memory limits and cpu and memory requests. Pod MUST be able
  3181      to read the memory request from the mounted DownwardAPIVolumeFiles.
  3182    release: v1.9
  3183    file: test/e2e/common/storage/projected_downwardapi.go
  3184  - testname: Projected Volume, DownwardAPI, CPU limit, node allocatable
  3185    codename: '[sig-storage] Projected downwardAPI should provide node allocatable (cpu)
  3186      as default cpu limit if the limit is not set [NodeConformance] [Conformance]'
  3187    description: A Pod is created with a projected volume source for downwardAPI with
  3188      pod name, cpu and memory limits and cpu and memory requests. The CPU and memory
  3189      resources for requests and limits are NOT specified for the container. Pod MUST
  3190      be able to read the default cpu limits from the mounted DownwardAPIVolumeFiles.
  3191    release: v1.9
  3192    file: test/e2e/common/storage/projected_downwardapi.go
  3193  - testname: Projected Volume, DownwardAPI, memory limit, node allocatable
  3194    codename: '[sig-storage] Projected downwardAPI should provide node allocatable (memory)
  3195      as default memory limit if the limit is not set [NodeConformance] [Conformance]'
  3196    description: A Pod is created with a projected volume source for downwardAPI with
  3197      pod name, cpu and memory limits and cpu and memory requests. The CPU and memory
  3198      resources for requests and limits are NOT specified for the container. Pod MUST
  3199      be able to read the default memory limits from the mounted DownwardAPIVolumeFiles.
  3200    release: v1.9
  3201    file: test/e2e/common/storage/projected_downwardapi.go
  3202  - testname: Projected Volume, DownwardAPI, pod name
  3203    codename: '[sig-storage] Projected downwardAPI should provide podname only [NodeConformance]
  3204      [Conformance]'
  3205    description: A Pod is created with a projected volume source for downwardAPI with
  3206      pod name, cpu and memory limits and cpu and memory requests. Pod MUST be able
  3207      to read the pod name from the mounted DownwardAPIVolumeFiles.
  3208    release: v1.9
  3209    file: test/e2e/common/storage/projected_downwardapi.go
  3210  - testname: Projected Volume, DownwardAPI, volume mode 0400
  3211    codename: '[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly]
  3212      [NodeConformance] [Conformance]'
  3213    description: A Pod is created with a projected volume source for downwardAPI with
  3214      pod name, cpu and memory limits and cpu and memory requests. The default mode
  3215      for the volume mount is set to 0400. Pod MUST be able to read the pod name from
  3216      the mounted DownwardAPIVolumeFiles and the volume mode must be -r--------. This
  3217      test is marked LinuxOnly since Windows does not support setting specific file
  3218      permissions.
  3219    release: v1.9
  3220    file: test/e2e/common/storage/projected_downwardapi.go
  3221  - testname: Projected Volume, DownwardAPI, file mode 0400
  3222    codename: '[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly]
  3223      [NodeConformance] [Conformance]'
  3224    description: A Pod is created with a projected volume source for downwardAPI with
  3225      pod name, cpu and memory limits and cpu and memory requests. The mode on the
  3226      item file is set to 0400. Pod MUST be able to read the pod name from
  3227      the mounted DownwardAPIVolumeFiles and the file mode must be -r--------. This
  3228      test is marked LinuxOnly since Windows does not support setting specific file
  3229      permissions.
  3230    release: v1.9
  3231    file: test/e2e/common/storage/projected_downwardapi.go
  3232  - testname: Projected Volume, DownwardAPI, update annotation
  3233    codename: '[sig-storage] Projected downwardAPI should update annotations on modification
  3234      [NodeConformance] [Conformance]'
  3235    description: A Pod is created with a projected volume source for downwardAPI with
  3236      pod name, cpu and memory limits and cpu and memory requests and annotation items.
  3237      Pod MUST be able to read the annotations from the mounted DownwardAPIVolumeFiles.
  3238      Annotations are then updated. Pod MUST be able to read the updated values for
  3239      the Annotations.
  3240    release: v1.9
  3241    file: test/e2e/common/storage/projected_downwardapi.go
  3242  - testname: Projected Volume, DownwardAPI, update labels
  3243    codename: '[sig-storage] Projected downwardAPI should update labels on modification
  3244      [NodeConformance] [Conformance]'
  3245    description: A Pod is created with a projected volume source for downwardAPI with
  3246      pod name, cpu and memory limits and cpu and memory requests and label items. Pod
  3247      MUST be able to read the labels from the mounted DownwardAPIVolumeFiles. Labels
  3248      are then updated. Pod MUST be able to read the updated values for the Labels.
  3249    release: v1.9
  3250    file: test/e2e/common/storage/projected_downwardapi.go
  3251  - testname: Projected Volume, Secrets, create, update and delete
  3252    codename: '[sig-storage] Projected secret optional updates should be reflected in
  3253      volume [NodeConformance] [Conformance]'
  3254    description: Create a Pod with three containers with secrets, namely a create,
  3255      an update and a delete container. The create container, when started, MUST NOT
  3256      have a secret; the update and delete containers MUST be created with a secret
  3257      value. Create a secret in the create container; the Pod MUST be able to read
  3258      the secret from the create container. Update the secret in the update container;
  3259      the Pod MUST be able to read the updated secret value. Delete the secret in the
  3260      delete container. Pod MUST fail to read the secret from the delete container.
  3261    release: v1.9
  3262    file: test/e2e/common/storage/projected_secret.go
  3263  - testname: Projected Volume, Secrets, volume mode default
  3264    codename: '[sig-storage] Projected secret should be consumable from pods in volume
  3265      [NodeConformance] [Conformance]'
  3266    description: A Pod is created with a projected volume source 'secret' to store a
  3267      secret with a specified key with default permission mode. Pod MUST be able to
  3268      read the content of the key successfully and the mode MUST be -rw-r--r-- by default.
  3269    release: v1.9
  3270    file: test/e2e/common/storage/projected_secret.go
  3271  - testname: Projected Volume, Secrets, non-root, custom fsGroup
  3272    codename: '[sig-storage] Projected secret should be consumable from pods in volume
  3273      as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]'
  3274    description: A Pod is created with a projected volume source 'secret' to store a
  3275      secret with a specified key. The volume has permission mode set to 0440, fsGroup
  3276      set to 1001, and user set to a non-root UID of 1000. Pod MUST be able to read the
  3277      content of the key successfully and the mode MUST be -r--r-----. This test is
  3278      marked LinuxOnly since Windows does not support setting specific file permissions,
  3279      or running as UID / GID.
  3280    release: v1.9
  3281    file: test/e2e/common/storage/projected_secret.go
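        # A minimal sketch, with assumed names, of the securityContext/defaultMode
        # combination described above; fsGroup 1001 makes the group read bit of
        # 0440 effective because the container process gets 1001 as a supplemental
        # group:
        #
        #   spec:
        #     securityContext:
        #       runAsUser: 1000
        #       fsGroup: 1001
        #     volumes:
        #     - name: secret-volume
        #       projected:
        #         defaultMode: 0440          # files appear as -r--r-----
        #         sources:
        #         - secret:
        #             name: projected-secret-test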
  3282  - testname: Projected Volume, Secrets, volume mode 0400
  3283    codename: '[sig-storage] Projected secret should be consumable from pods in volume
  3284      with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]'
  3285    description: A Pod is created with a projected volume source 'secret' to store a
  3286      secret with a specified key with permission mode set to 0400 on the Pod. Pod
  3287      MUST be able to read the content of the key successfully and the mode MUST be
  3288      -r--------. This test is marked LinuxOnly since Windows does not support setting
  3289      specific file permissions.
  3290    release: v1.9
  3291    file: test/e2e/common/storage/projected_secret.go
  3292  - testname: Projected Volume, Secrets, mapped
  3293    codename: '[sig-storage] Projected secret should be consumable from pods in volume
  3294      with mappings [NodeConformance] [Conformance]'
  3295    description: A Pod is created with a projected volume source 'secret' to store a
  3296      secret with a specified key with default permission mode. The secret is also mapped
  3297      to a custom path. Pod MUST be able to read the content of the key successfully
  3298      and the mode MUST be -rw-r--r-- on the mapped volume.
  3299    release: v1.9
  3300    file: test/e2e/common/storage/projected_secret.go
  3301  - testname: Projected Volume, Secrets, mapped, volume mode 0400
  3302    codename: '[sig-storage] Projected secret should be consumable from pods in volume
  3303      with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]'
  3304    description: A Pod is created with a projected volume source 'secret' to store a
  3305      secret with a specified key with permission mode set to 0400. The secret is also
  3306      mapped to a specific name. Pod MUST be able to read the content of the key successfully
  3307      and the mode MUST be -r-------- on the mapped volume. This test is marked LinuxOnly
  3308      since Windows does not support setting specific file permissions.
  3309    release: v1.9
  3310    file: test/e2e/common/storage/projected_secret.go
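        # A hedged sketch of the key-to-path mapping used by the two mapping tests
        # above; `path` renames the projected file, and the per-item `mode` (second
        # test only) overrides defaultMode for that file. Key and path names are
        # assumptions:
        #
        #   volumes:
        #   - name: secret-volume
        #     projected:
        #       sources:
        #       - secret:
        #           name: projected-secret-test
        #           items:
        #           - key: data-1
        #             path: new-path-data-1
        #             mode: 0400             # -r-------- on the mapped file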
  3311  - testname: Projected Volume, Secrets, mapped, multiple paths
  3312    codename: '[sig-storage] Projected secret should be consumable in multiple volumes
  3313      in a pod [NodeConformance] [Conformance]'
  3314    description: A Pod is created with a projected volume source 'secret' to store a
  3315      secret with a specified key. The secret is mapped to two different volume mounts.
  3316      Pod MUST be able to read the content of the key successfully from the two volume
  3317      mounts and the mode MUST be -r-------- on the mapped volumes.
  3318    release: v1.9
  3319    file: test/e2e/common/storage/projected_secret.go
  3320  - testname: Secrets Volume, create, update and delete
  3321    codename: '[sig-storage] Secrets optional updates should be reflected in volume
  3322      [NodeConformance] [Conformance]'
  3323    description: Create a Pod with three containers with secret volume sources, namely
  3324      a create, an update and a delete container. The create container, when started,
  3325      MUST not have a secret; the update and delete containers MUST be created with a
  3326      secret value. Create a secret in the create container; the Pod MUST be able to
  3327      read the secret from the create container. Update the secret in the update container;
  3328      the Pod MUST be able to read the updated secret value. Delete the secret in the
  3329      delete container. Pod MUST fail to read the secret from the delete container.
  3330    release: v1.9
  3331    file: test/e2e/common/storage/secrets_volume.go
  3332  - testname: Secrets Volume, volume mode default, secret with same name in different
  3333      namespace
  3334    codename: '[sig-storage] Secrets should be able to mount in a volume regardless
  3335      of a different secret existing with same name in different namespace [NodeConformance]
  3336      [Conformance]'
  3337    description: Create a secret with the same name in two namespaces. Create a Pod
  3338      with a secret volume source configured into the container. Pod MUST be able to
  3339      read, from the mounted volume via the container runtime, only the secret which
  3340      is associated with the namespace where the Pod is created. The file mode of the
  3341      secret MUST be -rw-r--r-- by default.
  3342    release: v1.12
  3343    file: test/e2e/common/storage/secrets_volume.go
  3344  - testname: Secrets Volume, default
  3345    codename: '[sig-storage] Secrets should be consumable from pods in volume [NodeConformance]
  3346      [Conformance]'
  3347    description: Create a secret. Create a Pod with secret volume source configured
  3348      into the container. Pod MUST be able to read the secret from the mounted volume
  3349      from the container runtime and the file mode of the secret MUST be -rw-r--r--
  3350      by default.
  3351    release: v1.9
  3352    file: test/e2e/common/storage/secrets_volume.go
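        # The plain (non-projected) secret volume shape the Secrets Volume tests in
        # this block use; a minimal sketch with an assumed secret name:
        #
        #   volumes:
        #   - name: secret-volume
        #     secret:
        #       secretName: secret-test     # default mode 0644, i.e. -rw-r--r--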
  3353  - testname: Secrets Volume, volume mode 0440, fsGroup 1001 and uid 1000
  3354    codename: '[sig-storage] Secrets should be consumable from pods in volume as non-root
  3355      with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]'
  3356    description: Create a secret. Create a Pod with secret volume source configured
  3357      into the container with file mode set to 0440 as a non-root user with UID 1000
  3358      and fsGroup id 1001. Pod MUST be able to read the secret from the mounted volume
  3359      from the container runtime and the file mode of the secret MUST be -r--r----- by
  3360      default. This test is marked LinuxOnly since Windows does not support setting
  3361      specific file permissions, or running as UID / GID.
  3362    release: v1.9
  3363    file: test/e2e/common/storage/secrets_volume.go
  3364  - testname: Secrets Volume, volume mode 0400
  3365    codename: '[sig-storage] Secrets should be consumable from pods in volume with defaultMode
  3366      set [LinuxOnly] [NodeConformance] [Conformance]'
  3367    description: Create a secret. Create a Pod with secret volume source configured
  3368      into the container with file mode set to 0400. Pod MUST be able to read the secret
  3369      from the mounted volume from the container runtime and the file mode of the secret
  3370      MUST be -r-------- by default. This test is marked LinuxOnly since Windows does
  3371      not support setting specific file permissions.
  3372    release: v1.9
  3373    file: test/e2e/common/storage/secrets_volume.go
  3374  - testname: Secrets Volume, mapping
  3375    codename: '[sig-storage] Secrets should be consumable from pods in volume with mappings
  3376      [NodeConformance] [Conformance]'
  3377    description: Create a secret. Create a Pod with secret volume source configured
  3378      into the container with a custom path. Pod MUST be able to read the secret from
  3379      the mounted volume from the specified custom path. The file mode of the secret
  3380      MUST be -rw-r--r-- by default.
  3381    release: v1.9
  3382    file: test/e2e/common/storage/secrets_volume.go
  3383  - testname: Secrets Volume, mapping, volume mode 0400
  3384    codename: '[sig-storage] Secrets should be consumable from pods in volume with mappings
  3385      and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]'
  3386    description: Create a secret. Create a Pod with secret volume source configured
  3387      into the container with a custom path and file mode set to 0x400. Pod MUST be
  3388      able to read the secret from the mounted volume from the specified custom path.
  3389      The file mode of the secret MUST be -r--r--r--. This test is marked LinuxOnly
  3390      since Windows does not support setting specific file permissions.
  3391    release: v1.9
  3392    file: test/e2e/common/storage/secrets_volume.go
  3393  - testname: Secrets Volume, mapping multiple volume paths
  3394    codename: '[sig-storage] Secrets should be consumable in multiple volumes in a pod
  3395      [NodeConformance] [Conformance]'
  3396    description: Create a secret. Create a Pod with two secret volume sources configured
  3397      into the container at two different custom paths. Pod MUST be able to read
  3398      the secret from both the mounted volumes at the two specified custom paths.
  3399    release: v1.9
  3400    file: test/e2e/common/storage/secrets_volume.go
  3401  - testname: Secrets Volume, immutability
  3402    codename: '[sig-storage] Secrets should be immutable if `immutable` field is set
  3403      [Conformance]'
  3404    description: Create a secret. Update its data field; the update MUST succeed. Mark
  3405      the secret as immutable; the update MUST succeed. Try to update its data; the
  3406      update MUST fail. Try to mark the secret back as not immutable; the update MUST
  3407      fail. Try to update the secret's metadata (labels); the update MUST succeed. Try
  3408      to delete the secret; the deletion MUST succeed.
  3409    release: v1.21
  3410    file: test/e2e/common/storage/secrets_volume.go
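        # A sketch of the `immutable` field exercised above: once set to true it
        # cannot be unset and data/binaryData become read-only, while metadata such
        # as labels stays mutable. The name and data below are hypothetical:
        #
        #   apiVersion: v1
        #   kind: Secret
        #   metadata:
        #     name: immutable-secret
        #   immutable: true
        #   data:
        #     key: dmFsdWU=                 # base64 for "value"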
  3411  - testname: StorageClass, lifecycle
  3412    codename: '[sig-storage] StorageClasses CSI Conformance should run through the lifecycle
  3413      of a StorageClass [Conformance]'
  3414    description: Creating a StorageClass MUST succeed. Reading the StorageClass MUST
  3415      succeed. Patching the StorageClass MUST succeed with its new label found. Deleting
  3416      the StorageClass MUST succeed and it MUST be confirmed. A replacement StorageClass
  3417      MUST be created. Updating the StorageClass MUST succeed with its new label found.
  3418      Deleting the StorageClass via deleteCollection MUST succeed and it MUST be confirmed.
  3419    release: v1.29
  3420    file: test/e2e/storage/storageclass.go
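        # A minimal StorageClass manifest of the shape the lifecycle test above
        # runs through; the name, label and provisioner are hypothetical:
        #
        #   apiVersion: storage.k8s.io/v1
        #   kind: StorageClass
        #   metadata:
        #     name: example-storageclass
        #     labels:
        #       storageclass-test: example
        #   provisioner: example.com/no-provisioner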
  3421  - testname: 'SubPath: Reading content from a configmap volume.'
  3422    codename: '[sig-storage] Subpath Atomic writer volumes should support subpaths with
  3423      configmap pod [Conformance]'
  3424    description: Containers in a pod can read content from a configmap mounted volume
  3425      which was configured with a subpath.
  3426    release: v1.12
  3427    file: test/e2e/storage/subpath.go
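        # A hedged sketch of the subPath mount pattern shared by this and the
        # following Subpath tests; only the file for the named key appears at the
        # mountPath (volume, key and ConfigMap names are assumed):
        #
        #   containers:
        #   - name: test-container
        #     volumeMounts:
        #     - name: config-volume
        #       mountPath: /etc/config/title
        #       subPath: title              # mounts a single file from the volume
        #   volumes:
        #   - name: config-volume
        #     configMap:
        #       name: subpath-configmap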
  3428  - testname: 'SubPath: Reading content from a configmap volume.'
  3429    codename: '[sig-storage] Subpath Atomic writer volumes should support subpaths with
  3430      configmap pod with mountPath of existing file [Conformance]'
  3431    description: Containers in a pod can read content from a configmap mounted volume
  3432      which was configured with a subpath and with a mountPath that is an existing
  3433      file.
  3434    release: v1.12
  3435    file: test/e2e/storage/subpath.go
  3436  - testname: 'SubPath: Reading content from a downwardAPI volume.'
  3437    codename: '[sig-storage] Subpath Atomic writer volumes should support subpaths with
  3438      downward pod [Conformance]'
  3439    description: Containers in a pod can read content from a downwardAPI mounted volume
  3440      which was configured with a subpath.
  3441    release: v1.12
  3442    file: test/e2e/storage/subpath.go
  3443  - testname: 'SubPath: Reading content from a projected volume.'
  3444    codename: '[sig-storage] Subpath Atomic writer volumes should support subpaths with
  3445      projected pod [Conformance]'
  3446    description: Containers in a pod can read content from a projected mounted volume
  3447      which was configured with a subpath.
  3448    release: v1.12
  3449    file: test/e2e/storage/subpath.go
  3450  - testname: 'SubPath: Reading content from a secret volume.'
  3451    codename: '[sig-storage] Subpath Atomic writer volumes should support subpaths with
  3452      secret pod [Conformance]'
  3453    description: Containers in a pod can read content from a secret mounted volume which
  3454      was configured with a subpath.
  3455    release: v1.12
  3456    file: test/e2e/storage/subpath.go
  3457