# cluster/image/player deployment

## what does all this do?
* create a kubernetes cluster
* create a docker image and push it to amazon ecr
* create a job on the cluster using the image from ecr
* copy random data to s3!

## requirements/tested with:

* aws access key/secret

* docker 18.03.1-ce
* kops 1.9.0
* kubectl 1.10.1
* awscli 1.15.10

## set these vars/run these commands:
export vars:
```
export AWS_DEFAULT_REGION=ap-southeast-2
export AWS_ACCESS_KEY_ID="<your key here>"
export AWS_SECRET_ACCESS_KEY="<your secret here>"
```
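optionally, sanity-check that the credentials actually work before going further (not part of the scripts here, just a quick check):
```
aws sts get-caller-identity
```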
get container repository login:
```
aws ecr get-login --no-include-email
```
copy/paste the output; it should look like the following:
```
docker login -u AWS -p xxxxx https://954347443578.dkr.ecr.ap-southeast-2.amazonaws.com
```
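alternatively, skip the copy/paste and eval the login command directly:
```
$(aws ecr get-login --no-include-email)
```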
edit vars.sh and set the values:
```
vi vars.sh :)
```
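the exact variables depend on the scripts in this directory, but a kops setup like this typically wants a cluster name and an s3 state store. a minimal sketch (placeholder names, check cluster-up.sh for what it actually reads):
```
# hypothetical values -- adjust to your setup
export NAME=agogo.k8s.local
export KOPS_STATE_STORE=s3://your-kops-state-bucket
```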
load env vars and deploy kubernetes cluster:
```
source vars.sh
./cluster-up.sh
```
wait for the cluster to come up:
```
kops validate cluster
```
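validation fails until the nodes are ready, so you may prefer to poll in a loop rather than re-running it by hand (a convenience one-liner, not part of the repo scripts):
```
until kops validate cluster; do sleep 30; done
```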
replace user:pass in this line in selfplay/Dockerfile (while the repo is private):
```
RUN git clone https://user:pass@github.com/chewxy/agogo.git
```
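note that credentials written into the Dockerfile end up baked into the image layers. a slightly safer variation (hypothetical, GITHUB_TOKEN is just an example arg name) is to pass a token as a build arg, though you'd also need to thread it through the Makefile:
```
# in selfplay/Dockerfile:
ARG GITHUB_TOKEN
RUN git clone https://${GITHUB_TOKEN}@github.com/chewxy/agogo.git
```
and build with `docker build --build-arg GITHUB_TOKEN=<your token> .`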
build/push the docker image:
```
make cpu push
```
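to confirm the image landed in ecr (repository name is a placeholder, use whatever the Makefile pushes to):
```
aws ecr describe-images --repository-name <repo-name>
```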
deploy the cpu player/s3 random data generator!
```
cd selfplay
./deploy-cpu-player.sh
```
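the job runs for a while; you can watch it from another terminal (pod/bucket names below are placeholders):
```
kubectl get jobs                # job status
kubectl get pods                # pods spawned by the job
kubectl logs -f <pod-name>      # follow a pod's output
aws s3 ls s3://<your-bucket>/   # watch the random data arrive
```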
afterwards, kill the cluster:
```
cd ..
./cluster-down.sh
```
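to confirm nothing is left running (and billing you):
```
kops get clusters
```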
unset the vars:
```
./unset-vars.sh
```
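one caveat: running ./unset-vars.sh executes in a subshell, which can't modify the environment of your current shell. if the vars are still set afterwards, source it instead:
```
source ./unset-vars.sh
```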