:title: Backing Up and Restoring Data
:description: Backing up stateful data on Deis.

.. _backing_up_data:

Backing Up and Restoring Data
=============================

While applications deployed on Deis follow the Twelve-Factor methodology and are thus stateless,
Deis maintains platform state in the :ref:`Store` component.

The store component runs `Ceph`_, and is used by the :ref:`Database`, :ref:`Registry`,
:ref:`Controller`, and :ref:`Logger` components as a data store. The database and registry
use store-gateway, while the controller and logger use store-volume. Because their state lives
in the store component, these components can move freely around the cluster.

The store component is configured to continue operating in a degraded state, and will automatically
recover should a host fail and then rejoin the cluster. Total data loss of Ceph is only possible
if all of the store containers are removed. However, backing up Ceph is fairly straightforward, and
is recommended before :ref:`Upgrading Deis <upgrading-deis>`.

Data stored in Ceph is accessible in two places: on the CoreOS filesystem at ``/var/lib/deis/store``,
and through the store-gateway component. Backing up this data is straightforward: we can simply tarball
the filesystem data, and use any S3-compatible blob store tool to download all files from the
store-gateway component.

Setup
-----

The ``deis-store-gateway`` component exposes an S3-compatible API, so we can use a tool like `s3cmd`_
to work with the object store. First, `download s3cmd`_ and install it (you'll need at least version
1.5.0 for Ceph support).

We'll need the generated access key and secret key for use with the gateway. We can get these using
``deisctl``, either on one of the cluster machines or on a remote machine with ``DEISCTL_TUNNEL`` set:

.. code-block:: console

    $ deisctl config store get gateway/accessKey
    $ deisctl config store get gateway/secretKey

Back on the local machine, run ``s3cmd --configure`` and enter your access key and secret key.

When prompted with the ``Use HTTPS protocol`` option, answer ``No``. Other settings can be left at
the defaults. If the configure script prompts to test the credentials, skip that step - it will
try to authenticate against Amazon S3 and fail.

You'll need to change two configuration settings: edit ``~/.s3cfg`` and change
``host_base`` and ``host_bucket`` to match ``deis-store.<your domain>``. For example, for my local
Vagrant setup, I've changed the lines to:

.. code-block:: console

    host_base = deis-store.local3.deisapp.com
    host_bucket = deis-store.local3.deisapp.com

We can now use ``s3cmd`` to back up and restore data from the store-gateway.
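
As a quick sanity check (assuming the router is resolving ``deis-store.<your domain>`` to the
gateway as configured above), listing the buckets should now succeed:

.. code-block:: console

    $ s3cmd ls

On a working cluster this returns the gateway's buckets (such as ``db_wal`` and ``registry``)
without any connection or authentication errors.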
Backing up
----------

Database backups and registry data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The store-gateway component stores database backups and is used to store data for the registry.
On our local machine, we can use ``s3cmd sync`` to copy the objects locally:

.. code-block:: console

    $ s3cmd sync s3://db_wal .
    $ s3cmd sync s3://registry .

Log data
~~~~~~~~

The store-volume service mounts a filesystem which is used by the controller and logger components
to store and retrieve application and component logs.

Since this is just a POSIX filesystem, you can simply tarball the contents of this directory
and rsync it to a local machine:

.. code-block:: console

    $ ssh core@<hostname> 'cd /var/lib/deis/store && sudo tar cpzf ~/store_file_backup.tar.gz .'
    tar: /var/lib/deis/store/logs/deis-registry.log: file changed as we read it
    $ rsync -avhe ssh core@<hostname>:~/store_file_backup.tar.gz .

Note that you'll need to specify the SSH port when using Vagrant:

.. code-block:: console

    $ rsync -avhe 'ssh -p 2222' core@127.0.0.1:~/store_file_backup.tar.gz .

Note the warning - in a running cluster the log files are constantly being written to, so this
tarball captures the logs at a specific moment in time.

Database data
~~~~~~~~~~~~~

While backing up the Ceph data is sufficient (as the database ships base backups and WAL logs
to the store), we can also back up the PostgreSQL data using ``pg_dumpall`` so we have a text
dump of the database.

We can identify the machine running the database with ``deisctl list``, and from that machine:

.. code-block:: console

    core@deis-1 ~ $ docker exec deis-database sudo -u postgres pg_dumpall > dump_all.sql
    core@deis-1 ~ $ docker cp deis-database:/app/dump_all.sql .

Restoring
---------

.. note::

    Restoring data is only necessary when deploying a new cluster. Most users will use the normal
    in-place upgrade workflow, which does not require a restore.

We want to restore the data on a new cluster before the rest of the Deis components come up and
initialize. So, we will install the whole platform, but only start the store components:

.. code-block:: console

    $ deisctl install platform
    $ deisctl start store-monitor
    $ deisctl start store-daemon
    $ deisctl start store-metadata
    $ deisctl start store-gateway
    $ deisctl start store-volume

We'll also need to start a router so we can access the gateway:

.. code-block:: console

    $ deisctl start router@1

The default maximum body size on the router is too small to support large uploads to the gateway,
so we need to increase it:

.. code-block:: console

    $ deisctl config router set bodySize=100m

The new cluster will have generated a new access key and secret key, so we'll need to get those again:

.. code-block:: console

    $ deisctl config store get gateway/accessKey
    $ deisctl config store get gateway/secretKey

Edit ``~/.s3cfg`` and update the keys.

Now we can restore the data!

Database backups and registry data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Because neither the database nor the registry has started, the buckets we need to restore to will
not yet exist, so we'll need to create them:

.. code-block:: console

    $ s3cmd mb s3://db_wal
    $ s3cmd mb s3://registry

Now we can restore the data:

.. code-block:: console

    $ s3cmd sync basebackups_005 s3://db_wal
    $ s3cmd sync wal_005 s3://db_wal
    $ s3cmd sync registry s3://registry
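
Before moving on, it's worth confirming the uploads landed where the database and registry will
look for them (the exact listing will vary with the contents of your backup):

.. code-block:: console

    $ s3cmd ls s3://db_wal
    $ s3cmd ls s3://registry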
Log data
~~~~~~~~

Once we copy the tarball back to one of the CoreOS machines, we can extract it:

.. code-block:: console

    $ rsync -avhe ssh store_file_backup.tar.gz core@<hostname>:~/store_file_backup.tar.gz
    $ ssh core@<hostname> 'cd /var/lib/deis/store && sudo tar -xzpf ~/store_file_backup.tar.gz --same-owner'

Note that you'll need to specify the SSH port when using Vagrant:

.. code-block:: console

    $ rsync -avhe 'ssh -p 2222' store_file_backup.tar.gz core@127.0.0.1:~/store_file_backup.tar.gz

Finishing up
~~~~~~~~~~~~

Now that the data is restored, the rest of the cluster should come up normally with a ``deisctl start platform``.

The last task is to instruct the controller to re-write user keys, application data, and domains to etcd.
Log into the machine which runs deis-controller and run the following. Note that the IP address to
use in the ``export`` command should correspond to the IP of the host machine which runs this container.

.. code-block:: console

    $ nse deis-controller
    $ cd /app
    $ export ETCD=172.17.8.100:4001
    $ ./manage.py shell <<EOF
    from api.models import *
    [k.save() for k in Key.objects.all()]
    [a.save() for a in App.objects.all()]
    [d.save() for d in Domain.objects.all()]
    EOF
    $ exit

.. note::

    The database keeps track of running application containers. Since this is a fresh cluster, it is
    advisable to ``deis scale <proctype>=0`` and then ``deis scale`` back up to the desired number of
    containers for an application. This ensures the database has an accurate view of the cluster.

That's it! The cluster should be fully restored.

.. _`Ceph`: http://ceph.com
.. _`download s3cmd`: http://s3tools.org/download
.. _`s3cmd`: http://s3tools.org/
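
For example, the scale cycle described in the note would look like this for a hypothetical app
named ``myapp`` running five ``web`` containers (substitute your own app names, process types,
and counts):

.. code-block:: console

    $ deis scale web=0 -a myapp
    $ deis scale web=5 -a myapp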