github.com/IBM-Blockchain/fabric-operator@v1.0.4/docs/DEVELOPING.md

# Fabric Operator Development

## Prerequisites

- Golang 1.17+
- A good IDE; any will do.  VSCode and GoLand are great tools - use them!
- A healthy mix of patience, curiosity, and ingenuity.
- A strong desire to make Fabric ... _right_.
- Check your ego at the door.


## Build the Operator

```shell
# Let's Go!
make
```

```shell
# Build ghcr.io/ibm-blockchain/fabric-operator:latest-amd64
make image
```

```shell
# Build Fabric CRDs
make manifests
```

## Unit Tests

```shell
# Just like it says:
make test
```


## Integration Tests

Integration tests run the operator binary _locally_ as a native process, connecting to a "remote" Kube API
server.

```shell
# Point the operator at a Kubernetes cluster
export KUBECONFIG_PATH=$HOME/.kube/config

make integration-tests
```


Or focus on a targeted suite:
```shell
INT_TEST_NAME=<folder under /integration> make integration-tests
```


## Debug the Operator

Launch main.go with the following environment:
```shell
export KUBECONFIG=$HOME/.kube/config
export WATCH_NAMESPACE=test-network
export CLUSTERTYPE=k8s
export OPERATOR_LOCAL_MODE=true

go run .
```
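
If you'd rather attach a debugger, the same environment can be supplied through an IDE launch configuration. A hypothetical VSCode `launch.json` entry (the values simply mirror the shell snippet above; adjust paths for your machine):

```json
{
  "name": "Debug fabric-operator",
  "type": "go",
  "request": "launch",
  "mode": "debug",
  "program": "${workspaceFolder}",
  "env": {
    "KUBECONFIG": "${env:HOME}/.kube/config",
    "WATCH_NAMESPACE": "test-network",
    "CLUSTERTYPE": "k8s",
    "OPERATOR_LOCAL_MODE": "true"
  }
}
```

GoLand users can set the same variables in the Environment field of a Go run configuration.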


## Local Kube Options

### Rancher / k3s

[Rancher Desktop](https://rancherdesktop.io) is a _fantastic_ alternative for running a local Kubernetes on
_either_ containerd _or_ mobyd / Docker.

It's great.

Use it.

Learn to love typing `nerdctl --namespace k8s.io`: images built into containerd's `k8s.io` namespace give k3s a
direct line of sight to the local image cache.


### KIND
```shell
# Create a KIND cluster - suitable for integration testing.
make kind

# Why?  (Tear the cluster back down.)
make unkind
```

OR ... create a KIND cluster pre-configured with Nginx ingress and Fabric CRDs:
```shell
sample-network/network kind
sample-network/network cluster init
```

Note that KIND does not have [visibility to images](https://iximiuz.com/en/posts/kubernetes-kind-load-docker-image/)
in the local Docker cache.  If you build an image, make sure to directly load it into the KIND image plane
(`kind load docker-image ...`) AND set `imagePullPolicy: IfNotPresent` in any Kube spec referencing the container.
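
For example, a pod template fragment referencing a locally loaded image might look like this (the container name is illustrative; the image tag is whatever you passed to `kind load docker-image`):

```yaml
containers:
  - name: fabric-operator          # illustrative name
    image: ghcr.io/ibm-blockchain/fabric-operator:latest-amd64
    # IfNotPresent keeps the kubelet from trying to pull the image from a
    # remote registry when it is already present in KIND's image plane.
    imagePullPolicy: IfNotPresent
```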

Running `network kind` will deploy a companion, insecure Docker registry at `localhost:5000`.  This can be
_extremely useful_ for relaying custom images into the cluster when the imagePullPolicy can not be overridden.
If for some reason you can't seem to mangle an image into KIND: build, tag, and push the custom image over to
the `localhost:5000` container registry.  (Or use Rancher/k3s.)



## What's up with Ingress, vcap.me, and nip.io domains?

Fabric Operator uses Kube Ingress to route traffic through a common, wildcard DNS domain (e.g. `*.my-network.example.com`).
In cloud-native environments, where wildcard DNS resolvers are readily available, it is possible to
map a top-level A record to a single IP address bound to the cluster ingress.

Unfortunately it is _exceedingly annoying_ to emulate a top-level wildcard DNS domain in a way that is visible
both to pods running in a Docker network (e.g. KIND) AND to the host OS using the same domain alias and IP.

Two available solutions are:

- Use the `*.vcap.me` domain alias for your Fabric network, mapping to 127.0.0.1 in all cases.  This is convenient for
  scenarios where pods in the cluster will have no need to traverse the ingress (e.g. in integration testing).

- Use the [Dead simple wildcard DNS for any IP Address](https://nip.io) `*.nip.io` domain for the cluster, providing
  full flexibility for the IP address of the ingress port.

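Both schemes work because the hostname itself encodes the target IP, so no DNS records need to be managed. A quick sketch (the `test-network-org1-ca` host prefix and the ingress IP are illustrative assumptions):

```shell
# *.vcap.me always resolves to 127.0.0.1 -- convenient when everything is local.
VCAP_HOST="test-network-org1-ca.vcap.me"

# *.nip.io embeds an arbitrary ingress IP directly in the host name.
INGRESS_IP="192.168.1.10"
NIP_HOST="test-network-org1-ca.${INGRESS_IP}.nip.io"

echo "${VCAP_HOST}"
echo "${NIP_HOST}"
```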

## Commit Practices

- There is no "Q/A" team, other than the "A Team": you.
- When you write a new feature, develop BOTH unit tests and a functional / integration test.
- When you find a bug, write a regression/unit test to illuminate it, and step on it AFTER it's in the spotlight.
- Submit PRs in tandem with GitHub Issues describing the feature, fix, or enhancement.
- Don't allow PRs to linger.
- Ask your peers and maintainers to review PRs.  Be efficient by including solid test cases.
- Have fun, and learn something new from your peers.


## Pitfalls / Hacks / Tips / Tricks

- On OSX, there is a bug in the Golang DNS resolver, causing the Fabric binaries to stall out when resolving DNS.
  See [Fabric #3372](https://github.com/hyperledger/fabric/issues/3372) and [Golang #43398](https://github.com/golang/go/issues/43398).
  Fix this by running a local build of [fabric](https://github.com/hyperledger/fabric) binaries and copying the build outputs
  from `fabric/build/bin/*` --> `sample-network/bin`.
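
  The copy itself is just a plain file copy; a minimal sketch, simulated here with placeholder files so it runs anywhere (the directory layout is taken from the note above):

  ```shell
  # Simulate a fabric build tree with placeholder binaries (stand-ins for the
  # real outputs of running `make` in a hyperledger/fabric clone).
  mkdir -p fabric/build/bin sample-network/bin
  touch fabric/build/bin/peer fabric/build/bin/orderer fabric/build/bin/osnadmin

  # Relay the locally built binaries to where the sample network expects them.
  cp fabric/build/bin/* sample-network/bin/
  ls sample-network/bin
  ```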


- ???