## Node.js and MongoDB on Kubernetes

The following document describes the deployment of a basic Node.js and MongoDB web stack on Kubernetes.  Currently this example does not use replica sets for MongoDB.

For a more in-depth explanation of this example, please [read this post](https://medium.com/google-cloud-platform-developer-advocates/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d).

### Prerequisites

This example assumes that you have a basic understanding of Kubernetes concepts (Pods, Services, Replication Controllers), a Kubernetes cluster up and running, and that you have installed the ```kubectl``` command line tool somewhere in your path.  Please see the [getting started guides](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.

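As a quick check (optional), you can confirm that ```kubectl``` is installed and can reach your cluster:

```sh
kubectl version
kubectl cluster-info
```
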
Note: This example was tested on [Google Container Engine](https://cloud.google.com/container-engine/docs/). Some optional commands require the [Google Cloud SDK](https://cloud.google.com/sdk/).

### Creating the MongoDB Service

The first thing to do is create the MongoDB Service.  This service is used by the other Pods in the cluster to find and connect to the MongoDB instance.

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo
```

[Download file](mongo-service.yaml)

This Service selects all Pods with the "mongo" tag and exposes port 27017, targeting port 27017 on the MongoDB Pods. Port 27017 is the standard MongoDB port.

To start the service, run:

```sh
kubectl create -f examples/nodesjs-mongodb/mongo-service.yaml
```

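To confirm the Service was created, you can list it (the output should show a cluster IP and port 27017):

```sh
kubectl get services mongo
```
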
### Creating the MongoDB Controller

Next, create the MongoDB instance that runs the Database.  Databases also need persistent storage, which will be different for each platform.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mongo
  name: mongo-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          gcePersistentDisk:
            pdName: mongo-disk
            fsType: ext4
```

[Download file](mongo-controller.yaml)

Looking at this file from the bottom up:

First, it creates a volume called "mongo-persistent-storage."

In the above example, it is using a "gcePersistentDisk" to back the storage. This is only applicable if you are running your Kubernetes cluster on Google Cloud Platform.

If you don't already have a [Google Persistent Disk](https://cloud.google.com/compute/docs/disks) in the same Google Compute Engine / Container Engine zone as your cluster, create one with this command:

```sh
gcloud compute disks create --size=200GB --zone=$ZONE mongo-disk
```

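You can confirm the disk exists with:

```sh
gcloud compute disks list
```
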
If you are using AWS, replace the "volumes" section with this (untested):

```yaml
      volumes:
        - name: mongo-persistent-storage
          awsElasticBlockStore:
            volumeID: aws://{region}/{volume ID}
            fsType: ext4
```

If you don't have an EBS volume in the same region as your cluster, create a new EBS volume in the same region with this command (untested):

```sh
ec2-create-volume --size 200 --region $REGION --availability-zone $ZONE
```

This command will return a volume ID to use.

For other storage options (iSCSI, NFS, OpenStack), please follow the documentation.

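For example, a minimal sketch of an NFS-backed "volumes" section might look like this (untested; the server address and export path are placeholders you would replace with your own):

```yaml
      volumes:
        - name: mongo-persistent-storage
          nfs:
            # Hypothetical NFS server and export path; replace with your own values.
            server: nfs.example.com
            path: /exports/mongo
```
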
Now that the volume is created and usable by Kubernetes, the next step is to create the Pod.

Looking at the container section: It uses the official MongoDB container, names itself "mongo", opens up port 27017, and mounts the disk to "/data/db" (where the mongo container expects the data to be).

Now looking at the rest of the file, it is creating a Replication Controller with one replica, called mongo-controller. It is important to use a Replication Controller and not just a Pod, as a Replication Controller will restart the instance in case it crashes.

Create this controller with this command:

```sh
kubectl create -f examples/nodesjs-mongodb/mongo-controller.yaml
```

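You can check that the MongoDB Pod is up with:

```sh
kubectl get pods -l name=mongo
```
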
At this point, MongoDB is up and running.

Note: There is no password protection or auth running on the database by default. Please keep this in mind!

### Creating the Node.js Service

The next step is to create the Node.js service. This service is what will be the endpoint for the web site, and will load balance requests to the Node.js instances.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    name: web
```

[Download file](web-service.yaml)

This service is called "web," and it uses a [LoadBalancer](https://kubernetes.io/docs/user-guide/services.md#type-loadbalancer) to distribute traffic arriving on port 80 to port 3000 on the Pods with the "web" tag. Port 80 is the standard HTTP port, and port 3000 is the standard Node.js port.

On Google Container Engine, a [network load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) and [firewall rule](https://cloud.google.com/compute/docs/networking#addingafirewall) to allow traffic are automatically created.

To start the service, run:

```sh
kubectl create -f examples/nodesjs-mongodb/web-service.yaml
```

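It may take a minute or two for the cloud provider to provision the load balancer. You can watch for the external IP with the following command (the external IP column may be empty or show "pending" until the load balancer is ready):

```sh
kubectl get services web
```
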
If you are running on a platform that does not support LoadBalancer (e.g. bare metal), you need to use a [NodePort](https://kubernetes.io/docs/user-guide/services.md#type-nodeport) with your own load balancer.

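For example, a minimal sketch of that change (untested): only the service type differs, and Kubernetes will then allocate a port on every node (by default in the 30000-32767 range) that forwards to port 3000 on the web Pods.

```yaml
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    name: web
```
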
You may also need to open appropriate Firewall ports to allow traffic.

### Creating the Node.js Controller

The final step is deploying the Node.js container that will run the application code. This container can easily be replaced by any other web serving frontend, such as Rails, LAMP, Java, Go, etc.

The most important thing to keep in mind is how to access the MongoDB service.

If you were running MongoDB and Node.js on the same server, you would access MongoDB like so:

```javascript
MongoClient.connect('mongodb://localhost:27017/database-name', function(err, db) { console.log(db); });
```

With this Kubernetes setup, that line of code would become:

```javascript
MongoClient.connect('mongodb://mongo:27017/database-name', function(err, db) { console.log(db); });
```

The MongoDB Service previously created tells Kubernetes to configure the cluster so 'mongo' points to the MongoDB instance created earlier.

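Kubernetes also injects service environment variables into Pods that start after the Service exists, so an alternative sketch (assuming the standard MONGO_SERVICE_HOST / MONGO_SERVICE_PORT variables for a Service named "mongo") would be:

```javascript
// Build the connection string from the environment variables Kubernetes
// injects for the "mongo" Service, falling back to the DNS name.
var mongoHost = process.env.MONGO_SERVICE_HOST || 'mongo';
var mongoPort = process.env.MONGO_SERVICE_PORT || '27017';
MongoClient.connect('mongodb://' + mongoHost + ':' + mongoPort + '/database-name',
    function(err, db) { console.log(db); });
```
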
#### Custom Container

You should have your own container that runs your Node.js code hosted in a container registry.

See [this example](https://medium.com/google-cloud-platform-developer-advocates/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d#8edc) for how to make your own Node.js container.

Once you have created your container, create the web controller.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: <YOUR-CONTAINER>
        name: web
        ports:
        - containerPort: 3000
          name: http-server
```

[Download file](web-controller.yaml)

Replace `<YOUR-CONTAINER>` with the URL of your container image.

This Controller will create two replicas of the Node.js container, and each Node.js container will have the tag "web" and expose port 3000. The Service LoadBalancer will forward port 80 traffic to port 3000 automatically, along with load balancing traffic between the two instances.

To start the Controller, run:

```sh
kubectl create -f examples/nodesjs-mongodb/web-controller.yaml
```

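You can list the two web Pods, and later change the replica count without editing the file, for example:

```sh
kubectl get pods -l name=web
kubectl scale rc web-controller --replicas=3
```
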
#### Demo Container

If you DON'T want to create a custom container, you can use the following YAML file:

Note: You cannot run both Controllers at the same time, as they both try to control the same Pods.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: node:0.10.40
        command: ['/bin/sh', '-c']
        args: ['cd /home && git clone https://github.com/ijason/NodeJS-Sample-App.git demo && cd demo/EmployeeDB/ && npm install && sed -i -- ''s/localhost/mongo/g'' app.js && node app.js']
        name: web
        ports:
        - containerPort: 3000
          name: http-server
```

[Download file](web-controller-demo.yaml)

This will use the default Node.js container, and will pull and execute code at run time. This is not recommended; typically, your code should be part of the container.

To start the Controller, run:

```sh
kubectl create -f examples/nodesjs-mongodb/web-controller-demo.yaml
```

### Testing it out

Now that all the components are running, visit the IP address of the load balancer to access the website.

With Google Cloud Platform, get the IP address of all load balancers with the following command:

```sh
gcloud compute forwarding-rules list
```

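On other platforms, the external IP is also shown by `kubectl get services web`. Once you have it, you can check the site from the command line (the address below is a placeholder for whatever IP your load balancer was assigned):

```sh
# Replace <EXTERNAL-IP> with the load balancer IP found above.
curl http://<EXTERNAL-IP>/
```
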