---
layout: "guides"
page_title: "Stateful Workloads with Portworx"
sidebar_current: "guides-stateful-workloads"
description: |-
  There are multiple approaches to deploying stateful applications in Nomad.
  This guide uses Portworx to deploy a MySQL database.
---

# Stateful Workloads with Portworx

Portworx integrates with Nomad and can manage storage for stateful workloads
running inside your Nomad cluster. This guide walks you through deploying an HA
MySQL workload.

## Reference Material

- [Portworx on Nomad][portworx-nomad]

## Estimated Time to Complete

20 minutes

## Challenge

Deploy an HA MySQL database with a replication factor of 3, ensuring the data
will be replicated on 3 different client nodes.

## Solution

Configure Portworx on each Nomad client node in order to create a storage pool
that the MySQL task can use for storage and replication.

## Prerequisites

To perform the tasks described in this guide, you need to have a Nomad
environment with Consul installed. You can use this [repo][repo] to easily
provision a sandbox environment. This guide will assume a cluster with one
server node and three client nodes.

~> **Please Note:** This guide is for demo purposes and uses only a single
server node. In a production cluster, 3 or 5 server nodes are recommended.

## Steps

### Step 1: Ensure Block Device Requirements

* Portworx needs an unformatted and unmounted block device that it can fully
  manage. If you have provisioned a Nomad cluster in AWS using the environment
  provided in this guide, you already have an external block device ready to use
  (`/dev/xvdd`) with a capacity of 50 GB. A quick way to verify the device is
  shown after this list.

* Ensure your root volume's size is at least 20 GB. If you are using the
  environment provided in this guide, add the following line to your
  `terraform.tfvars` file:

  ```
  root_block_device_size = 20
  ```
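
The check below is a minimal sketch for the AWS environment described above; it
assumes the extra volume is attached as `/dev/xvdd` and uses `lsblk` to confirm
the device carries no filesystem and is not mounted:

```
# An empty FSTYPE and MOUNTPOINT column indicates the device is
# unformatted and unmounted, which is what Portworx expects
$ lsblk -f /dev/xvdd
```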

### Step 2: Install the MySQL Client

We will use the MySQL client to connect to our MySQL database and verify our data.
Ensure it is installed on each client node:

```
$ sudo apt install mysql-client
```
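
To confirm the client is available before moving on, you can check its version
(the exact version string will vary by distribution):

```
$ mysql --version
```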

### Step 3: Set up the PX-OCI Bundle

Run the following command on each client node to set up the [PX-OCI][px-oci]
bundle:

```
$ sudo docker run --entrypoint /runc-entry-point.sh \
    --rm -i --privileged=true \
    -v /opt/pwx:/opt/pwx -v /etc/pwx:/etc/pwx \
    portworx/px-enterprise:2.0.2.3
```

If the command is successful, you will see output similar to the output shown
below (the output has been abbreviated):

```
Unable to find image 'portworx/px-enterprise:2.0.2.3' locally
2.0.2.3: Pulling from portworx/px-enterprise
...
Status: Downloaded newer image for portworx/px-enterprise:2.0.2.3
Executing with arguments:
INFO: Copying binaries...
INFO: Copying rootfs...
Total bytes written: 2303375360 (2.2GiB, 48MiB/s)
INFO: Done copying OCI content.
You can now run the Portworx OCI bundle by executing one of the following:

    # sudo /opt/pwx/bin/px-runc run [options]
    # sudo /opt/pwx/bin/px-runc install [options]
...
```

### Step 4: Configure the Portworx OCI Bundle

Configure the Portworx OCI bundle on each client node by running the following
command (the values provided to the options will be different for your
environment):

```
$ sudo /opt/pwx/bin/px-runc install -k consul://172.31.49.111:8500 \
    -c my_test_cluster -s /dev/xvdd
```

* You can use the address of the client node you are on with the `-k` option
  since Consul is installed alongside Nomad.

* Be sure to provide the `-s` option with your external block device path.
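
If you would rather not hard-code the Consul address, the sketch below derives
it from the node's own private IP, since each client runs a local Consul agent
on the default port 8500. The `hostname -I | awk` pipeline is just one assumed
way to obtain that IP:

```
# Point Portworx at the Consul agent reachable on this node's private IP
# (adjust the address lookup and port if your environment differs)
CONSUL_IP=$(hostname -I | awk '{print $1}')
sudo /opt/pwx/bin/px-runc install -k "consul://${CONSUL_IP}:8500" \
    -c my_test_cluster -s /dev/xvdd
```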

If the configuration is successful, you will see the following output
(abbreviated):

```
INFO[0000] Rootfs found at /opt/pwx/oci/rootfs
INFO[0000] PX binaries found at /opt/pwx/bin/px-runc
INFO[0000] Initializing as version 2.0.2.3-c186a87 (OCI)
...
INFO[0000] Successfully written /etc/systemd/system/portworx.socket
INFO[0000] Successfully written /etc/systemd/system/portworx-output.service
INFO[0000] Successfully written /etc/systemd/system/portworx.service
```

Since new unit files have been created, run the following command to reload the
systemd manager configuration:

```
$ sudo systemctl daemon-reload
```
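
Optionally, you can also enable the unit so that Portworx starts automatically
after a reboot; this is not required for the demo:

```
$ sudo systemctl enable portworx
```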

### Step 5: Start Portworx and Check Status

Run the following command to start Portworx:

```
$ sudo systemctl start portworx
```

Verify the service:

```
$ sudo systemctl status portworx
● portworx.service - Portworx OCI Container
   Loaded: loaded (/etc/systemd/system/portworx.service; disabled; vendor preset
   Active: active (running) since Wed 2019-03-06 15:16:51 UTC; 1h 47min ago
     Docs: https://docs.portworx.com/runc
  Process: 28230 ExecStartPre=/bin/sh -c /opt/pwx/bin/runc delete -f portworx ||
 Main PID: 28238 (runc)
...
```

Wait a few moments (Portworx may still be initializing) and then check the
status of Portworx using the `pxctl` command:

```
$ pxctl status
```

If everything is working properly, you should see the following output:

```
Status: PX is operational
License: Trial (expires in 31 days)
Node ID: 07113eef-0533-4de8-b1cf-4471c18a7cda
	IP: 172.31.53.231
	Local Storage Pool: 1 pool
	POOL	IO_PRIORITY	RAID_LEVEL	USABLE	USED	STATUS	ZONE	REGION
	0	LOW		raid0		50 GiB	4.4 GiB	Online	us-east-1c	us-east-1
	Local Storage Devices: 1 device
```
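
Because initialization can take a little while, you may prefer to poll until
Portworx reports itself operational; the loop below is a simple sketch built on
the same `pxctl status` command:

```
# Poll pxctl until the "PX is operational" line appears in its status output
until pxctl status | grep -q "PX is operational"; do
  echo "Waiting for Portworx to finish initializing..."
  sleep 5
done
```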

Once all nodes are configured, you should see a cluster summary with the total
capacity of the storage pool (if you're using the environment provided in this
guide, the total capacity will be 150 GB since the external block device
attached to each client node has a capacity of 50 GB):

```
Cluster Summary
	Cluster ID: my_test_cluster
	Cluster UUID: 705a1cbd-4d58-4a0e-a970-1e6b28375590
	Scheduler: none
	Nodes: 3 node(s) with storage (3 online)
...
Global Storage Pool
	Total Used    	:  13 GiB
	Total Capacity	:  150 GiB
```

### Step 6: Create a Portworx Volume

Run the following command to create a Portworx volume that our job will be able
to use:

```
$ pxctl volume create -s 10 -r 3 mysql
```

You should see output similar to what is shown below:

```
Volume successfully created: 693373920899724151
```

* Please note from the options provided that the name of the volume we created
  is `mysql` and the size is 10 GB.

* We have configured a replication factor of 3, which ensures our data is
  available on all 3 client nodes.

Run `pxctl volume inspect mysql` to verify the status of the volume:

```
$ pxctl volume inspect mysql
Volume	:  693373920899724151
	Name            	 :  mysql
	Size            	 :  10 GiB
	Format          	 :  ext4
	HA              	 :  3
...
	Replica sets on nodes:
		Set 0
		  Node 		 : 172.31.58.210 (Pool 0)
		  Node 		 : 172.31.51.110 (Pool 0)
		  Node 		 : 172.31.48.98 (Pool 0)
	Replication Status	 :  Up
```
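
If you only want to confirm the replication factor and replication status at a
glance, you can filter the same inspect output; this is simply a convenience on
top of the command above:

```
# Show only the HA (replication factor) and replication status lines
$ pxctl volume inspect mysql | grep -E "HA|Replication Status"
```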

### Step 7: Create the `mysql.nomad` Job File

We are now ready to deploy a MySQL database that can use Portworx for storage.
Create a file called `mysql.nomad` with the following contents:

```
job "mysql-server" {
  datacenters = ["dc1"]
  type        = "service"

  group "mysql-server" {
    count = 1

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    task "mysql-server" {
      driver = "docker"

      env = {
        "MYSQL_ROOT_PASSWORD" = "password"
      }

      config {
        image = "hashicorp/mysql-portworx-demo:latest"

        port_map {
          db = 3306
        }

        # Mount the Portworx volume "mysql" (created in Step 6) at MySQL's
        # data directory using the Portworx Docker volume driver
        volumes = [
          "mysql:/var/lib/mysql"
        ]

        volume_driver = "pxd"
      }

      resources {
        cpu    = 500
        memory = 1024

        network {
          port "db" {
            static = 3306
          }
        }
      }

      # Register the task with Consul so it can be reached by service name
      service {
        name = "mysql-server"
        port = "db"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

* Please note from the job file that we are using the `pxd` volume driver,
  which was configured in the previous steps.

* The service name is `mysql-server`, which we will use later to connect to the
  database.
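
Before registering the job, you can optionally have Nomad check the file for
syntax and structural errors; this step is not required:

```
$ nomad job validate mysql.nomad
```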

### Step 8: Deploy the MySQL Database

Register the job file you created in the previous step with the following
command:

```
$ nomad run mysql.nomad
==> Monitoring evaluation "aa478d82"
    Evaluation triggered by job "mysql-server"
    Allocation "6c3b3703" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "aa478d82" finished with status "complete"
```

Check the status of the allocation and ensure the task is running:

```
$ nomad status mysql-server
ID            = mysql-server
...
Summary
Task Group    Queued  Starting  Running  Failed  Complete  Lost
mysql-server  0       0         1        0       0         0
```
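
For more detail on where the task was placed and how it is doing, you can
inspect the allocation itself; substitute the allocation ID from your own
`nomad run` output:

```
# Show detailed status for the allocation created above
$ nomad alloc status 6c3b3703
```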

### Step 9: Connect to MySQL

Using the mysql client (installed in [Step
2](#step-2-install-the-mysql-client)), connect to the database and access the
information:

```
$ mysql -h mysql-server.service.consul -u web -p -D itemcollection
```

The password for this demo database is `password`.

~> **Please Note:** This guide is for demo purposes and does not follow best
practices for securing database passwords. See [Keeping Passwords
Secure][password-security] for more information.

Consul is installed alongside Nomad in this cluster, so we were able to
connect using the `mysql-server` service name we registered with our task in
our job file.
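
If you are curious how that name resolves, you can query the local Consul
agent's DNS interface directly; this assumes the agent is listening on the
default DNS port 8600:

```
# Look up the SRV record Consul serves for the registered service
$ dig @127.0.0.1 -p 8600 mysql-server.service.consul SRV
```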

### Step 10: Add Data to MySQL

Once you are connected to the database, verify the table `items` exists:

```
mysql> show tables;
+--------------------------+
| Tables_in_itemcollection |
+--------------------------+
| items                    |
+--------------------------+
1 row in set (0.00 sec)
```

Display the contents of this table with the following command:

```
mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
+----+----------+
3 rows in set (0.00 sec)
```

Now add some data to this table (after we terminate our database in Nomad and
bring it back up, this data should still be intact):

```
mysql> INSERT INTO items (name) VALUES ('glove');
```

Run the `INSERT INTO` command as many times as you like with different values.

```
mysql> INSERT INTO items (name) VALUES ('hat');
mysql> INSERT INTO items (name) VALUES ('keyboard');
```

Once you are done, type `exit` to return to the Nomad client command line:

```
mysql> exit
Bye
```

### Step 11: Stop and Purge the Database Job

Run the following command to stop and purge the MySQL job from the cluster:

```
$ nomad stop -purge mysql-server
==> Monitoring evaluation "6b784149"
    Evaluation triggered by job "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "6b784149" finished with status "complete"
```

Verify no jobs are running in the cluster:

```
$ nomad status
No running jobs
```

You can optionally stop the Nomad service on whichever node you are on and move
to another node to simulate a node failure.
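
In the sandbox environment Nomad runs under systemd, so simulating the failure
can look like the following sketch (adjust the unit name if your cluster is set
up differently):

```
# Stop the Nomad client on this node, then SSH to another client node
# before continuing with the next step
$ sudo systemctl stop nomad
```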

### Step 12: Re-deploy the Database

Using the `mysql.nomad` job file from [Step
7](#step-7-create-the-mysql-nomad-job-file), re-deploy the database to the Nomad
cluster.

```
$ nomad run mysql.nomad
==> Monitoring evaluation "61b4f648"
    Evaluation triggered by job "mysql-server"
    Allocation "8e1324d2" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "61b4f648" finished with status "complete"
```

### Step 13: Verify Data

Once you re-connect to MySQL, you should be able to see that the information you
added prior to destroying the database is still present:

```
mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
|  4 | glove    |
|  5 | hat      |
|  6 | keyboard |
+----+----------+
6 rows in set (0.00 sec)
```
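
Alternatively, you can run the same check non-interactively from a client node;
the `-e` flag executes a single statement and exits (you will still be prompted
for the password):

```
$ mysql -h mysql-server.service.consul -u web -p -D itemcollection \
    -e 'select * from items;'
```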

[password-security]: https://dev.mysql.com/doc/refman/8.0/en/password-security.html
[portworx-nomad]: https://docs.portworx.com/install-with-other/nomad
[px-oci]: https://docs.portworx.com/install-with-other/docker/standalone/#why-oci
[repo]: https://github.com/hashicorp/nomad/tree/master/terraform#provision-a-nomad-cluster-in-the-cloud