
:title: Upgrading Deis
:description: Guide to upgrading Deis to a new release.


.. _upgrading-deis:

Upgrading Deis
==============

There are currently two strategies for upgrading a Deis cluster:

* In-place Upgrade (recommended)
* Migration Upgrade

Before attempting an upgrade, it is strongly recommended to :ref:`backup your data <backing_up_data>`.

In-place Upgrade
----------------

An in-place upgrade swaps out platform containers for newer versions on the same set of hosts,
leaving your applications and platform data intact.  This is the easiest and least disruptive upgrade strategy.
The general approach is to use ``deisctl`` to uninstall all platform components, update the platform version
and then reinstall platform components.

.. important::

    Always use a version of ``deisctl`` that matches the Deis release.
    Verify this with ``deisctl --version``.

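As a quick sanity check, the version match described above can be scripted. This is a hypothetical sketch with hard-coded example values; in a real check they would come from ``deisctl --version`` and ``deisctl config platform get version``:

.. code-block:: shell

    # Hypothetical pre-flight check: refuse to proceed unless deisctl
    # matches the platform version (values hard-coded for illustration).
    DEISCTL_VERSION="1.13.4"       # in practice: $(deisctl --version)
    PLATFORM_VERSION="v1.13.4"     # in practice: $(deisctl config platform get version)
    if [ "v${DEISCTL_VERSION}" = "${PLATFORM_VERSION}" ]; then
        echo "versions match"
    else
        echo "version mismatch: ${DEISCTL_VERSION} vs ${PLATFORM_VERSION}" >&2
        exit 1
    fi
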
Use the following steps to perform an in-place upgrade of your Deis cluster.

First, use the current ``deisctl`` to stop and uninstall the Deis platform.

.. code-block:: console

    $ deisctl --version  # should match the installed platform
    1.0.2
    $ deisctl stop platform && deisctl uninstall platform

Finally, update ``deisctl`` to the new version and reinstall:

.. code-block:: console

    $ curl -sSL http://deis.io/deisctl/install.sh | sh -s 1.13.4
    $ deisctl --version  # should match the desired platform
    1.13.4
    $ deisctl config platform set version=v1.13.4
    $ deisctl install platform
    $ deisctl start platform

.. attention::

    In-place upgrades incur approximately 10-30 minutes of downtime for deployed applications, the router mesh,
    and the platform control plane.  Please plan your maintenance windows accordingly.

.. note::

    When upgrading an AWS cluster older than Deis v1.6, a :ref:`migration_upgrade` is
    preferable.

    On AWS, Deis v1.6 and above enables the :ref:`PROXY protocol <proxy_protocol>` by default.
    If an in-place upgrade is required on a cluster running a version older than v1.6,
    run ``deisctl config router set proxyProtocol=1``, enable PROXY protocol for ports 80 and
    443 on the ELB, and add a ``TCP 443:443`` listener.

    The Elastic Load Balancer performs health checks to make sure your instances are alive.
    When you take your cluster down, there will be a brief period during which your instances are
    marked as ``OutOfService``. If the ``deis`` client can't connect to your cluster, check your EC2 load
    balancer's health check status in the AWS web console and wait for the instances to return to
    ``InService`` status.
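
Waiting for instances to return to ``InService`` can be automated with a simple retry loop. The sketch below uses a placeholder ``check`` function; substitute your real health probe (for example, an ``aws elb describe-instance-health`` query, or a ``curl`` against the router):

.. code-block:: shell

    # Generic "wait until healthy" retry loop. `check` is a placeholder;
    # replace its body with your load balancer's actual health probe.
    check() { true; }   # stand-in, e.g.: curl -fs http://<elb-dns-name>/
    tries=0
    until check; do
        tries=$((tries + 1))
        if [ "$tries" -ge 30 ]; then
            echo "instances never returned to service" >&2
            exit 1
        fi
        sleep 10
    done
    echo "instances in service after ${tries} retries"
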

Upgrade Deis clients
^^^^^^^^^^^^^^^^^^^^
As well as upgrading ``deisctl``, make sure to upgrade the :ref:`deis client <install-client>` to
match the new version of Deis.

Graceful Upgrade
----------------

Alternatively, an experimental feature exists to perform a graceful upgrade. This process is
available from version 1.9.0 onward and is intended to facilitate upgrades within a major version (for example,
from 1.9.0 to 1.9.1 or 1.11.2). Upgrading between major versions is not supported (for example, from 1.9.0 to a
future 2.0.0). Unlike the in-place process above, this process keeps the platform's routers and publishers up during
the upgrade. This means there should be at most a second or two of downtime while the
routers boot up; often there will be no downtime at all.

.. note::

    Your load balancer configuration is the determining factor for how much downtime will occur during a successful upgrade.
    If your load balancer is configured to quickly reactivate failed hosts in its pool of active hosts, it's quite possible to
    achieve zero-downtime upgrades. If your load balancer is configured to be more pessimistic, such as requiring multiple
    successful health checks before reactivating a node, then the chance of downtime increases. You should review your
    load balancer's configuration to determine what to expect during the upgrade process.

The process involves two ``deisctl`` subcommands, ``upgrade-prep`` and ``upgrade-takeover``, in coordination with a few other important commands.

.. note::

    If you are using Deis in :ref:`stateless mode <running-deis-without-ceph>`, add the ``--stateless``
    option to the ``upgrade-prep`` and ``upgrade-takeover`` subcommands to start only the necessary components.

First, install the new ``deisctl`` version to a temporary location, reflecting the desired version to upgrade
to. Take care not to overwrite the existing ``deisctl`` binary.

.. code-block:: console

    $ mkdir /tmp/upgrade
    $ curl -sSL http://deis.io/deisctl/install.sh | sh -s 1.13.4 /tmp/upgrade
    $ /tmp/upgrade/deisctl --version  # should match the desired platform
    1.13.4
    $ /tmp/upgrade/deisctl refresh-units
    $ /tmp/upgrade/deisctl config platform set version=v1.13.4

Now it is possible to prepare the cluster for the upgrade using the old ``deisctl`` binary. This command will shut down
and uninstall all components of the cluster except the router and publisher. This means your services should still be
serving traffic afterwards, but nothing else in the cluster will be functional.

.. code-block:: console

    $ /opt/bin/deisctl upgrade-prep

Finally, the rest of the components are brought up by the new binary. First, a rolling restart is done on the routers,
replacing them one by one. Then the rest of the components are brought up. The end result should be an upgraded cluster.

.. code-block:: console

    $ /tmp/upgrade/deisctl upgrade-takeover

It is recommended to move the newer ``deisctl`` into ``/opt/bin`` once the procedure is complete.

If the process fails, the old version can be restored manually by setting the platform version back to the
previous release, then reinstalling and starting the old components.

.. code-block:: console

    $ /tmp/upgrade/deisctl stop platform
    $ /tmp/upgrade/deisctl uninstall platform
    $ /opt/bin/deisctl config platform set version=<previous version>
    $ /opt/bin/deisctl refresh-units
    $ /opt/bin/deisctl install platform
    $ /opt/bin/deisctl start platform
Upgrade Deis clients
^^^^^^^^^^^^^^^^^^^^
As well as upgrading ``deisctl``, make sure to upgrade the :ref:`deis client <install-client>` to
match the new version of Deis.


.. _migration_upgrade:

Migration Upgrade
-----------------

This upgrade method provisions a new cluster running in parallel to the old one. Applications are
migrated to this new cluster one-by-one, and DNS records are updated to cut over traffic on a
per-application basis. This results in a no-downtime controlled upgrade, but has the caveat that no
data from the old cluster (users, releases, etc.) is retained. Future ``deisctl`` tooling will have
facilities to export and import this platform data.

.. note::

    Migration upgrades are useful for moving Deis to a new set of hosts,
    but should otherwise be avoided due to the amount of manual work involved.

.. important::

    In order to migrate applications, your new cluster must have network access
    to the registry component on the old cluster.

Enumerate Existing Applications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Each application will need to be deployed to the new cluster manually.
Log in to the existing cluster as an admin user and use the ``deis`` client to
gather information about your deployed applications.

List all applications with:

.. code-block:: console

    $ deis apps:list

Gather each application's version with:

.. code-block:: console

    $ deis apps:info -a <app-name>

Provision servers
^^^^^^^^^^^^^^^^^
Follow the Deis documentation to provision a new cluster using your desired target release.
Be sure to use a new etcd discovery URL so that the new cluster doesn't interfere with the running one.

Upgrade Deis clients
^^^^^^^^^^^^^^^^^^^^
If changing versions, make sure you upgrade your ``deis`` and ``deisctl`` clients
to match the cluster's release.

Register and login to the new controller
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Register an account on the new controller and log in.

.. code-block:: console

    $ deis register http://deis.newcluster.example.org
    $ deis login http://deis.newcluster.example.org

Migrate applications
^^^^^^^^^^^^^^^^^^^^
The ``deis pull`` command makes it easy to migrate existing applications from
one cluster to another.  However, you must have network access to the existing
cluster's registry component.

Migrate a single application with:

.. code-block:: console

    $ deis create <app-name>
    $ deis pull registry.oldcluster.example.org:5000/<app-name>:<version>

This will move the application's Docker image across clusters, ensuring the application
is migrated bit-for-bit with an identical build and configuration.

Each application is now running on the new cluster, but is still running (and serving traffic)
on the old cluster.  Use ``deis domains:add`` to tell Deis that the application can be accessed
by its old name:

.. code-block:: console

    $ deis domains:add oldappname.oldcluster.example.org

Repeat for each application.
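
Since the create-and-pull step is identical for every application, the per-app commands can be generated mechanically. The sketch below is hypothetical: it only prints the commands for review, reading from a hand-maintained ``app version`` list (built from ``deis apps:list`` and ``deis apps:info``); it does not run anything against either cluster:

.. code-block:: shell

    # Hypothetical dry run: emit the migration commands for each app.
    # Replace the here-document contents with your real app list.
    REGISTRY="registry.oldcluster.example.org:5000"
    while read -r app version; do
        echo "deis create ${app}"
        echo "deis pull ${REGISTRY}/${app}:${version}"
    done <<'EOF'
    happy-bandit v12
    EOF

Review the printed commands, then run them one application at a time.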

Test applications
^^^^^^^^^^^^^^^^^
Test to make sure applications work as expected on the new Deis cluster.

Update DNS records
^^^^^^^^^^^^^^^^^^
For each application, create CNAME records to point the old application names to the new ones. Note that
once these records propagate, the new cluster is serving live traffic. You can perform the cutover on a
per-application basis and slowly retire the old cluster.

If an application is named ``happy-bandit`` on the old Deis cluster and ``jumping-cuddlefish`` on the
new cluster, you would create a DNS record that looks like the following:

.. code-block:: console

    happy-bandit.oldcluster.example.org.        CNAME       jumping-cuddlefish.newcluster.example.org

Retire the old cluster
^^^^^^^^^^^^^^^^^^^^^^
Once all applications have been validated, the old cluster can be retired.


.. _upgrading-coreos:

Upgrading CoreOS
----------------

By default, Deis disables CoreOS automatic updates. This is partially because, in the case of a
machine reboot, Deis components will be scheduled to a new host and will need a few minutes to start
and restore to a running state. This results in a short downtime of the Deis control plane,
which can be disruptive if unplanned.

Additionally, because Deis customizes the CoreOS cloud-config file, upgrading the CoreOS host to
a new version without accounting for changes in the cloud-config file could cause Deis to stop
functioning properly.

.. important::

  Enabling updates for CoreOS will result in the machine upgrading to the latest CoreOS release
  available in a particular channel. Sometimes, new CoreOS releases make changes that will break
  Deis. It is always recommended to provision a Deis release with the CoreOS version specified
  in that release's provision scripts or documentation.

.. important::

  Upgrading a cluster can result in simultaneously running different etcd versions,
  which may introduce incompatibilities that result in a broken etcd cluster. It is
  always recommended to first test upgrades in a non-production cluster whenever possible.

While typically not recommended, it is possible to trigger an update of a CoreOS machine. Some
Deis releases may recommend a CoreOS upgrade; in these cases, the release notes for a Deis release
will point to this documentation.

Checking the CoreOS version
^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can check the CoreOS version by running the following command on the CoreOS machine:

.. code-block:: console

    $ cat /etc/os-release

Or from your local machine:

.. code-block:: console

    $ ssh core@<server ip> 'cat /etc/os-release'
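
The version string itself can be pulled out of the file with a little shell. The sketch below runs against a local sample file (with made-up contents) so it can be tried anywhere; on a CoreOS host, point the ``awk`` command at ``/etc/os-release`` itself:

.. code-block:: shell

    # Extract the VERSION field from an os-release style file.
    # A sample file with made-up contents stands in for /etc/os-release.
    cat > os-release.sample <<'EOF'
    NAME=CoreOS
    VERSION=835.9.0
    EOF
    awk -F= '$1 == "VERSION" {print $2}' os-release.sample
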


Triggering an upgrade
^^^^^^^^^^^^^^^^^^^^^

To upgrade CoreOS, run the following commands:

.. code-block:: console

    $ ssh core@<server ip>
    $ sudo su
    $ echo GROUP=stable > /etc/coreos/update.conf
    $ systemctl unmask update-engine.service
    $ systemctl start update-engine.service
    $ update_engine_client -update
    $ systemctl stop update-engine.service
    $ systemctl mask update-engine.service
    $ reboot

.. warning::

  You should only upgrade one host at a time. Removing multiple hosts from the cluster
  simultaneously can result in failure of the etcd cluster. Ensure the recently-rebooted host
  has returned to the cluster with ``fleetctl list-machines`` before moving on to the next host.

After the host reboots, ``update-engine.service`` should be unmasked and started once again:

.. code-block:: console

    $ systemctl unmask update-engine.service
    $ systemctl start update-engine.service

It may take a few minutes for CoreOS to recognize that the update has been applied successfully, and
only then will it update the boot flags to use the new image on subsequent reboots. This can be confirmed
by watching the ``update-engine`` journal:

.. code-block:: console

    $ journalctl -fu update-engine

Seeing a message like ``Updating boot flags...`` means that the update has finished, and the service
should be stopped and masked once again:

.. code-block:: console

    $ systemctl stop update-engine.service
    $ systemctl mask update-engine.service

The update is now complete.

.. note::

    Users have reported that some cloud providers do not allow the boot partition to be updated,
    resulting in CoreOS reverting to the originally installed version on a reboot.