
---
description: |
    Up to this point in the guide, you have been running Packer on your local
    machine to build and provision images on AWS and DigitalOcean. However, you can
    use Atlas by HashiCorp to both run Packer builds remotely and store the output
    of builds.
layout: intro
next_title: Next Steps
next_url: '/intro/getting-started/next.html'
page_title: Remote Builds and Storage
prev_url: '/intro/getting-started/vagrant.html'
...

# Remote Builds and Storage

Up to this point in the guide, you have been running Packer on your local
machine to build and provision images on AWS and DigitalOcean. However, you can
use [Atlas by HashiCorp](https://atlas.hashicorp.com) to run Packer builds
remotely and store the output of builds.

## Why Build Remotely?

By building remotely, you can move access credentials off of developer machines,
free local machines from long-running Packer processes, and automatically
start Packer builds from trigger sources such as `vagrant push`, a version
control system, or a CI tool.

## Run Packer Builds Remotely

To run Packer remotely, two changes must be made to the Packer template. The
first is the addition of the `push`
[configuration](https://www.packer.io/docs/templates/push.html), which sends the
Packer template to Atlas so it can run Packer remotely. The second is updating
the variables section to read variables from the Atlas environment rather than
the local environment. Remove the `post-processors` section for now if it is
still in your template.

``` {.javascript}
{
  "variables": {
    "aws_access_key": "{{env `aws_access_key`}}",
    "aws_secret_key": "{{env `aws_secret_key`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-9eaa1cf6",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sleep 30",
      "sudo apt-get update",
      "sudo apt-get install -y redis-server"
    ]
  }],
  "push": {
    "name": "ATLAS_USERNAME/packer-tutorial"
  }
}
```

To get an Atlas username, [create an account
here](https://atlas.hashicorp.com/account/new?utm_source=oss&utm_medium=getting-started&utm_campaign=packer).
Replace "ATLAS\_USERNAME" with your username, then run
`packer push -create example.json` to send the configuration to Atlas, which
automatically starts the build.

This build will fail since neither `aws_access_key` nor `aws_secret_key` is set
in the Atlas environment. To set environment variables in Atlas, navigate to
the [Builds tab](https://atlas.hashicorp.com/builds), click the
"packer-tutorial" build configuration that was just created, and then click
'variables' in the left navigation. Set `aws_access_key` and `aws_secret_key`
to their respective values. Then restart the Packer build by either clicking
'rebuild' in the Atlas UI or by running `packer push example.json` again. Now
when you click on the active build, you can view its logs in real time.

-> **Note:** Whenever a change is made to the Packer template, you must run
`packer push` to update the configuration in Atlas.

## Store Packer Outputs

Now we have Atlas building an AMI with Redis pre-configured. This is great, but
it's even better to store and version the AMI output so it can be easily
deployed by a tool like [Terraform](https://www.terraform.io). The `atlas`
[post-processor](/docs/post-processors/atlas.html) makes this process simple:

``` {.javascript}
{
  "variables": ["..."],
  "builders": ["..."],
  "provisioners": ["..."],
  "push": ["..."],
  "post-processors": [{
    "type": "atlas",
    "artifact": "ATLAS_USERNAME/packer-tutorial",
    "artifact_type": "amazon.image"
  }]
}
```

Update the `post-processors` block with your Atlas username, then run
`packer push example.json` and watch the build kick off in Atlas! When the
build completes, the resulting artifact is saved and versioned in Atlas.
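
As a sketch of how the stored artifact might then be consumed, here is a
hypothetical Terraform configuration using its `atlas_artifact` resource to
look up the AMI. The resource name `web`, the `ATLAS_USERNAME` placeholder, and
the `metadata_full` attribute path are assumptions for illustration, not part
of this guide:

``` {.javascript}
# Hypothetical: look up the AMI artifact stored in Atlas by this tutorial.
resource "atlas_artifact" "web" {
  name = "ATLAS_USERNAME/packer-tutorial"
  type = "amazon.image"
}

# Launch an instance from that AMI (the attribute path is an assumption).
resource "aws_instance" "web" {
  ami           = "${atlas_artifact.web.metadata_full.region-us-east-1}"
  instance_type = "t2.micro"
}
```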