
:title: Upgrading Deis
:description: Guide to upgrading Deis to a new release.


.. _upgrading-deis:

Upgrading Deis
==============

There are currently two strategies for upgrading a Deis cluster:

* In-place Upgrade (recommended)
* Migration Upgrade

Before attempting an upgrade, it is strongly recommended to :ref:`back up your data <backing_up_data>`.

In-place Upgrade
----------------

An in-place upgrade swaps out platform containers for newer versions on the same set of hosts,
leaving your applications and platform data intact.  This is the easiest and least disruptive upgrade strategy.
The general approach is to use ``deisctl`` to uninstall all platform components, update the platform version
and then reinstall platform components.

.. important::

    Always use a version of ``deisctl`` that matches the Deis release.
    Verify this with ``deisctl --version``.

Use the following steps to perform an in-place upgrade of your Deis cluster.

First, use the current ``deisctl`` to stop and uninstall the Deis platform.

.. code-block:: console

    $ deisctl --version  # should match the installed platform
    1.0.2
    $ deisctl stop platform && deisctl uninstall platform

Finally, update ``deisctl`` to the new version and reinstall:

.. code-block:: console

    $ curl -sSL http://deis.io/deisctl/install.sh | sh -s 1.9.1
    $ deisctl --version  # should match the desired platform
    1.9.1
    $ deisctl config platform set version=v1.9.1
    $ deisctl install platform
    $ deisctl start platform

.. attention::

    In-place upgrades incur approximately 10-30 minutes of downtime for deployed applications, the router mesh
    and the platform control plane.  Please plan your maintenance windows accordingly.

.. note::

    When upgrading an AWS cluster older than Deis v1.6, a :ref:`migration_upgrade` is
    preferable.

    On AWS, Deis enables the :ref:`PROXY protocol <proxy_protocol>` by default.
    If an in-place upgrade is required, run ``deisctl config router set proxyProtocol=1``,
    enable PROXY protocol for ports 80 and 443 on the ELB, add a ``TCP 443:443`` listener, and
    change existing targets and health checks from HTTP to TCP.

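
For reference, the ELB changes described in the note can also be made with the AWS CLI. The commands
below are only a sketch: the load balancer name (``deis-elb``) and policy name are placeholders, the
health check values should be adapted to your environment, and because Classic ELB listeners cannot
be changed in place, the existing HTTP listener is deleted and recreated as TCP.

.. code-block:: console

    $ aws elb create-load-balancer-policy --load-balancer-name deis-elb \
        --policy-name EnableProxyProtocol --policy-type-name ProxyProtocolPolicyType \
        --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
    $ aws elb set-load-balancer-policies-for-backend-server --load-balancer-name deis-elb \
        --instance-port 80 --policy-names EnableProxyProtocol
    $ aws elb set-load-balancer-policies-for-backend-server --load-balancer-name deis-elb \
        --instance-port 443 --policy-names EnableProxyProtocol
    $ aws elb delete-load-balancer-listeners --load-balancer-name deis-elb --load-balancer-ports 80
    $ aws elb create-load-balancer-listeners --load-balancer-name deis-elb \
        --listeners Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=80 \
                    Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443
    $ aws elb configure-health-check --load-balancer-name deis-elb \
        --health-check Target=TCP:80,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
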
Upgrade Deis clients
^^^^^^^^^^^^^^^^^^^^
As well as upgrading ``deisctl``, make sure to upgrade the :ref:`deis client <install-client>` to
match the new version of Deis.

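
For example, a minimal sketch of upgrading the client, assuming the standard install script and that
the ``deis`` binary lives in ``/usr/local/bin``:

.. code-block:: console

    $ curl -sSL http://deis.io/deis-cli/install.sh | sh -s 1.9.1
    $ sudo mv ./deis /usr/local/bin/deis
    $ deis --version  # should match the platform version
    1.9.1
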
Graceful Upgrade
----------------

Alternatively, an experimental feature exists to perform a graceful upgrade. This process is
available from version 1.9.1 onward and is intended to facilitate upgrades within a major version (for example,
from 1.9.1 to 1.9.2 or 1.10.0). Upgrading between major versions is not supported (for example, from 1.9.1 to a
future 2.0.0). Unlike the in-place process above, this approach keeps the platform's routers and publishers up during
the upgrade. This means that there should be at most around 1-2 seconds of downtime while the
routers boot up. In many cases, there will be no downtime at all.

.. note::

    Your load balancer configuration is the determining factor for how much downtime will occur during a successful upgrade.
    If your load balancer is configured to quickly reactivate failed hosts into its pool of active hosts, it's quite possible to
    achieve zero-downtime upgrades. If your load balancer is configured to be more pessimistic, such as requiring multiple
    successful health checks before reactivating a node, then the chance of downtime increases. You should review your
    load balancer's configuration to determine what to expect during the upgrade process.

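
As an illustration only, on an AWS Elastic Load Balancer a relatively aggressive health check along
the following lines re-adds a recovered router within seconds. The load balancer name and the exact
values are assumptions to adapt to your own setup.

.. code-block:: console

    $ aws elb configure-health-check --load-balancer-name deis-elb \
        --health-check Target=TCP:80,Interval=5,Timeout=4,UnhealthyThreshold=2,HealthyThreshold=2
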
The process involves two ``deisctl`` subcommands, ``upgrade-prep`` and ``upgrade-takeover``, in coordination with a few other important commands.

First, a new ``deisctl`` version should be installed to a temporary location, reflecting the desired version to upgrade
to. Care should be taken not to overwrite the existing ``deisctl`` version.

.. code-block:: console

    $ mkdir /tmp/upgrade
    $ curl -sSL http://deis.io/deisctl/install.sh | sh -s 1.10.0 /tmp/upgrade
    $ /tmp/upgrade/deisctl --version  # should match the desired platform
    1.10.0
    $ /tmp/upgrade/deisctl refresh-units
    $ /tmp/upgrade/deisctl config platform set version=v1.10.0

.. note::

    Deis version 1.10.0 does not exist at the time of this writing, but since
    the upgrade feature is only available for upgrading from Deis version
    1.9.1 and higher, the snippet above is a realistic portrayal of how
    this feature can be used in the future.

Now it is possible to prepare the cluster for the upgrade using the old ``deisctl`` binary. This command will shut down
and uninstall all components of the cluster except the router and publisher. This means your services should still be
serving traffic afterwards, but nothing else in the cluster will be functional.

.. code-block:: console

    $ /opt/bin/deisctl upgrade-prep

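
Before continuing, it can be reassuring to confirm that only the routers and publishers are still
running. Either of the following should show ``deis-router`` and ``deis-publisher`` units as active
and nothing else (the old ``deisctl`` is fine for this):

.. code-block:: console

    $ /opt/bin/deisctl list
    $ fleetctl list-units | grep deis-    # from any host in the cluster
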
Finally, the remaining components are brought back up by the new binary. First, a rolling restart is performed on the
routers, replacing them one by one. Then the rest of the components are started. The end result should be an upgraded cluster.

.. code-block:: console

    $ /tmp/upgrade/deisctl upgrade-takeover

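
Once the takeover completes, a quick sanity check with the new binary should show all platform units
running again and the expected platform version (the version shown follows the example above):

.. code-block:: console

    $ /tmp/upgrade/deisctl list
    $ /tmp/upgrade/deisctl config platform get version
    v1.10.0
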
It is recommended to move the newer ``deisctl`` into ``/opt/bin`` once the procedure is complete.

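
For example, assuming the paths used above:

.. code-block:: console

    $ sudo cp /tmp/upgrade/deisctl /opt/bin/deisctl
    $ deisctl --version
    1.10.0
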
If the process fails, the old version can be restored manually by reinstalling and starting the old components.

.. code-block:: console

    $ /tmp/upgrade/deisctl stop platform
    $ /tmp/upgrade/deisctl uninstall platform
    $ /tmp/upgrade/deisctl config platform set version=v1.9.1
    $ /opt/bin/deisctl refresh-units
    $ /opt/bin/deisctl install platform
    $ /opt/bin/deisctl start platform

Upgrade Deis clients
^^^^^^^^^^^^^^^^^^^^
As well as upgrading ``deisctl``, make sure to upgrade the :ref:`deis client <install-client>` to
match the new version of Deis.


.. _migration_upgrade:

Migration Upgrade
-----------------

This upgrade method provisions a new cluster running in parallel to the old one. Applications are
migrated to this new cluster one-by-one, and DNS records are updated to cut over traffic on a
per-application basis. This results in a no-downtime controlled upgrade, but has the caveat that no
data from the old cluster (users, releases, etc.) is retained. Future ``deisctl`` tooling will have
facilities to export and import this platform data.

.. note::

    Migration upgrades are useful for moving Deis to a new set of hosts,
    but should otherwise be avoided due to the amount of manual work involved.

.. important::

    In order to migrate applications, your new cluster must have network access
    to the registry component on the old cluster.

Enumerate Existing Applications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Each application will need to be deployed to the new cluster manually.
Log in to the existing cluster as an admin user and use the ``deis`` client to
gather information about your deployed applications.

List all applications with:

.. code-block:: console

    $ deis apps:list

Gather each application's version with:

.. code-block:: console

    $ deis apps:info -a <app-name>

Provision servers
^^^^^^^^^^^^^^^^^
Follow the Deis documentation to provision a new cluster using your desired target release.
Be sure to use a new etcd discovery URL so that the new cluster doesn't interfere with the running one.

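
For example, a fresh discovery URL can be generated from the public etcd discovery service and placed
in the new cluster's user-data before provisioning. The cluster size of 3 and the token shown are only
illustrative; a checkout of the deis repository also provides a ``make discovery-url`` helper for this.

.. code-block:: console

    $ curl -sSL 'https://discovery.etcd.io/new?size=3'
    https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3
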
Upgrade Deis clients
^^^^^^^^^^^^^^^^^^^^
If changing versions, make sure you upgrade your ``deis`` and ``deisctl`` clients
to match the cluster's release.

Register and login to the new controller
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Register an account on the new controller and log in.

.. code-block:: console

    $ deis register http://deis.newcluster.example.org
    $ deis login http://deis.newcluster.example.org

Migrate applications
^^^^^^^^^^^^^^^^^^^^
The ``deis pull`` command makes it easy to migrate existing applications from
one cluster to another.  However, you must have network access to the existing
cluster's registry component.

Migrate a single application with:

.. code-block:: console

    $ deis create <app-name>
    $ deis pull registry.oldcluster.example.org:5000/<app-name>:<version>

This will move the application's Docker image across clusters, ensuring the application
is migrated bit-for-bit with an identical build and configuration.

Each application is now running on the new cluster, but the old cluster is still serving
live traffic.  Use ``deis domains:add`` to tell Deis that this application can also be accessed
by its old name:

.. code-block:: console

    $ deis domains:add oldappname.oldcluster.example.org

Repeat for each application.

Test applications
^^^^^^^^^^^^^^^^^
Test to make sure applications work as expected on the new Deis cluster.

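
A basic smoke test might look like the following, using the example hostname from the next section;
substitute your own application URLs and any health check endpoints your applications expose:

.. code-block:: console

    $ curl -sI http://jumping-cuddlefish.newcluster.example.org
    HTTP/1.1 200 OK
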
Update DNS records
^^^^^^^^^^^^^^^^^^
For each application, create CNAME records to point the old application names to the new ones. Note
that once these records propagate, the new cluster is serving live traffic. You can perform cutover
on a per-application basis and slowly retire the old cluster.

If an application is named 'happy-bandit' on the old Deis cluster and 'jumping-cuddlefish' on the
new cluster, you would create a DNS record that looks like the following:

.. code-block:: console

    happy-bandit.oldcluster.example.org.        CNAME       jumping-cuddlefish.newcluster.example.org

Retire the old cluster
^^^^^^^^^^^^^^^^^^^^^^
Once all applications have been validated, the old cluster can be retired.

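
When you are ready, the platform on the old hosts can be stopped before the machines are
decommissioned with your provider's tooling. A sketch, pointing ``deisctl`` at one of the old hosts
via ``DEISCTL_TUNNEL`` (the hostname is illustrative):

.. code-block:: console

    $ DEISCTL_TUNNEL=old-host.oldcluster.example.org deisctl stop platform
    $ DEISCTL_TUNNEL=old-host.oldcluster.example.org deisctl uninstall platform
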

.. _upgrading-coreos:

Upgrading CoreOS
----------------

By default, Deis disables CoreOS automatic updates. This is partly because, in the case of a
machine reboot, Deis components will be rescheduled to another host and will need a few minutes to start
and return to a running state. This results in a short downtime of the Deis control plane,
which can be disruptive if unplanned.

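
You can confirm that automatic updates are disabled on a Deis-provisioned host; ``update-engine``
should report as masked:

.. code-block:: console

    $ ssh core@<server ip> 'systemctl is-enabled update-engine.service'
    masked
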
Additionally, because Deis customizes the CoreOS cloud-config file, upgrading the CoreOS host to
a new version without accounting for changes in the cloud-config file could cause Deis to stop
functioning properly.

.. important::

  Enabling updates for CoreOS will result in the machine upgrading to the latest CoreOS release
  available in a particular channel. Sometimes, new CoreOS releases make changes that will break
  Deis. It is always recommended to provision a Deis release with the CoreOS version specified
  in that release's provision scripts or documentation.

While typically not recommended, it is possible to trigger an update of a CoreOS machine. Some
Deis releases may recommend a CoreOS upgrade - in these cases, the release notes for that release
will point to this documentation.

Checking the CoreOS version
^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can check the CoreOS version by running the following command on the CoreOS machine:

.. code-block:: console

    $ cat /etc/os-release

Or from your local machine:

.. code-block:: console

    $ ssh core@<server ip> 'cat /etc/os-release'


Triggering an upgrade
^^^^^^^^^^^^^^^^^^^^^

To upgrade CoreOS, run the following commands:

.. code-block:: console

    $ ssh core@<server ip>
    $ sudo su
    $ echo GROUP=stable > /etc/coreos/update.conf
    $ systemctl unmask update-engine.service
    $ systemctl start update-engine.service
    $ update_engine_client -update
    $ systemctl stop update-engine.service
    $ systemctl mask update-engine.service
    $ reboot

.. warning::

  You should only upgrade one host at a time. Removing multiple hosts from the cluster
  simultaneously can result in failure of the etcd cluster. Ensure the recently-rebooted host
  has returned to the cluster with ``fleetctl list-machines`` before moving on to the next host.

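
For example, from any other host in the cluster (the machine IDs and addresses shown are illustrative):

.. code-block:: console

    $ fleetctl list-machines
    MACHINE         IP              METADATA
    148a18ff...     10.21.1.103     -
    491586a6...     10.21.1.104     -
    c9de9451...     10.21.1.105     -
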
After the host reboots, ``update-engine.service`` should be unmasked and started once again:

.. code-block:: console

    $ systemctl unmask update-engine.service
    $ systemctl start update-engine.service

It may take a few minutes for CoreOS to recognize that the update has been applied successfully, and
only then will it update the boot flags to use the new image on subsequent reboots. This can be confirmed
by watching the ``update-engine`` journal:

.. code-block:: console

    $ journalctl -fu update-engine

Seeing a message like ``Updating boot flags...`` means that the update has finished, and the service
should be stopped and masked once again:

.. code-block:: console

    $ systemctl stop update-engine.service
    $ systemctl mask update-engine.service

The update is now complete.

.. note::

    Users have reported that some cloud providers do not allow the boot partition to be updated,
    resulting in CoreOS reverting to the originally installed version on a reboot.