title: Running Backups on Ponzu systems

Both databases (`system.db` & `analytics.db`), the `/uploads` directory, and the search indexes can be backed up over HTTP using `wget`, `curl`, etc. All backups are served from the `/admin/backup` route and require HTTP Basic Auth. To enable backups, add a user/password pair in the CMS Configuration at `/admin/configure`, near the bottom of the page.

All backups are made using a `GET` request to the `/admin/backup` path with a query parameter of `?source={system,analytics,uploads,search}` (only one source can be included per request).

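Since the endpoint is a plain `GET` behind HTTP Basic Auth, any HTTP client works. As a sketch, the same kind of request made with `wget` instead of `curl` (the credentials and domain are placeholders, as in the `curl` examples that follow):
```bash
$ wget --user=user --password=pass \
    "https://example.com/admin/backup?source=system" -O system.db.bak
```
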
Here are some full backup scripts to use or modify to fit your needs:
[https://github.com/rpdict/backup-scripts](https://github.com/rpdict/backup-scripts)

## System & Analytics
The `system.db` & `analytics.db` data files are sent uncompressed, in their original form as they exist on your server. No temporary copy is stored on the origin server, and it is possible that a backup could fail, so verifying that each backup completed successfully is recommended. See [BoltDB's notes on database backups](https://github.com/boltdb/bolt#database-backups) for more information about how BoltDB handles HTTP backups.

An example backup request for the `system.db` data file would look like:
```bash
$ curl --user user:pass "https://example.com/admin/backup?source=system" > system.db.bak
```
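
Because a backup can fail mid-transfer, it helps to wrap the request in a small check. A minimal sketch (the credentials, domain, and output path are placeholders) that treats a non-2xx response or an empty file as a failed backup:
```bash
#!/usr/bin/env bash
# Hypothetical verification wrapper; user:pass and example.com are placeholders.
set -euo pipefail

OUT="system.db.bak"

# --fail makes curl exit non-zero on HTTP errors (e.g. 401 or 500),
# so an unsuccessful backup does not silently produce a partial file.
curl --fail --silent --show-error --user user:pass \
  "https://example.com/admin/backup?source=system" > "$OUT"

# Treat an empty file as a failure as well.
if [ ! -s "$OUT" ]; then
  echo "backup failed: $OUT is empty" >&2
  exit 1
fi

echo "backup written to $OUT"
```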

## Uploads
The `/uploads` directory is archived as a gzip-compressed tar file, stored in the temporary directory (typically `/tmp` on Linux) on your origin server with a timestamp in the file name. The archive is removed after the HTTP response for the backup has been written.

An example backup request for the `/uploads` directory would look like:
```bash
$ curl --user user:pass "https://example.com/admin/backup?source=uploads" > uploads.tar.gz
# extract the gzip-compressed tarball
$ tar xzf uploads.tar.gz
```
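
Before restoring, the archive can be inspected to confirm it downloaded intact. A short sketch, where `/path/to/ponzu-project` is a placeholder for wherever the `/uploads` directory should be restored:
```bash
# List the archive contents without extracting (this also verifies the gzip stream).
$ tar tzf uploads.tar.gz

# Extract into the target directory instead of the current one.
$ tar xzf uploads.tar.gz -C /path/to/ponzu-project
```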

## Search Indexes
The `search` directory, which is created to store the search indexes for content types that implement `search.Searchable`, is backed up in the same fashion as [Uploads](/Running-Backups/Backups/#uploads).

An example backup request for the `/search` directory would look like:
```bash
$ curl --user user:pass "https://example.com/admin/backup?source=search" > search.tar.gz
# extract the gzip-compressed tarball
$ tar xzf search.tar.gz
```
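
The individual requests above can also be combined into one script, in the spirit of the full backup scripts linked earlier. A rough sketch (credentials, domain, and destination directory are placeholders) that backs up every source into a dated directory, suitable for running from cron:
```bash
#!/usr/bin/env bash
# Hypothetical combined backup script; user:pass, example.com, and the
# destination directory are all placeholders.
set -euo pipefail

DEST="/var/backups/ponzu/$(date +%Y-%m-%d)"
mkdir -p "$DEST"

for source in system analytics uploads search; do
  # Databases arrive as raw files; uploads and search arrive as tarballs.
  case "$source" in
    system|analytics) ext="db.bak" ;;
    *)                ext="tar.gz" ;;
  esac

  curl --fail --silent --show-error --user user:pass \
    "https://example.com/admin/backup?source=$source" \
    > "$DEST/$source.$ext"
done

echo "backups written to $DEST"
```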