Commit 513ea281 authored by Benjamin Beyret's avatar Benjamin Beyret

add AWS training + docker_training false dopamine

parent cefa3b06
# Training on AWS
Training an agent requires rendering the environment to a screen, which prevents training on out-of-the-box cloud compute
instances. You will need to follow one of the methods below in order to train in the cloud.
Both methods were tested on an AWS `p2.xlarge` instance using a standard Deep Learning Base AMI. We leave to participants the task of adapting
the information found here to other cloud providers and/or instance types.

**WARNING: using cloud services will incur costs, carefully read your provider's terms of service**
## Pre-requisite: set up an AWS p2.xlarge instance
Start by creating an account on AWS and opening the console.
Compute engines on AWS are called `EC2` instances and offer a vast range of configurations in terms of number and type of CPUs, GPUs,
memory and storage; you can find details of the different instance types and their prices on the EC2 pricing page.
In our case we will use a `p2.xlarge` instance. In the console, select `EC2`.
By default you will have a limit restriction on the number of instances you can create. Check your limits by selecting `Limits` in the top-left
menu, and request an increase for `p2.xlarge` if needed. Once your limit is at least 1, go back to the EC2 console and select `Launch Instance`.
You can then select from various images; type `Deep learning` to see what is on offer. For now we recommend selecting `AWS Marketplace` in the left panel,
then choosing either `Deep Learning Base AMI (Ubuntu)` if you want a basic Ubuntu install with CUDA capabilities, or `Deep Learning AMI (Ubuntu)` if you
also require deep learning libraries pre-installed (TensorFlow, PyTorch...). On the next page select `p2.xlarge`.
Click `Review and Launch`, then `Launch`. You will be asked to create a key pair or select an existing one; it will be used to ssh into your instance.
Once your instance has started, it will appear in the EC2 console. To ssh into it, right-click its line, select `Connect` and follow the instructions.
We can now configure our instance for training. **Don't forget to shut down your instance once you're done using it, as you are charged for as long as it runs.**
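The connect and shutdown steps above can also be driven from a terminal. A minimal sketch, assuming a configured AWS CLI; the key path, hostname and instance id below are placeholders, not values from this guide:

```shell
# SSH into the instance using the key pair selected at launch
ssh -i ~/.ssh/my-aws-key.pem ubuntu@ec2-XX-XX-XX-XX.compute-1.amazonaws.com

# Stop the instance from your local machine when you are done,
# so you are no longer charged while it sits idle
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
```

Stopped instances can be restarted later from the console; terminated ones cannot.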
## The easy way: docker
Basic Deep Learning Ubuntu images come with NVIDIA Docker
pre-installed, which allows CUDA to be used inside a container. SSH into your AWS instance, clone this repository and follow the instructions below.
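Before building anything, you can check that NVIDIA Docker is working with a quick smoke test; the CUDA image tag below is an assumption and may need to match your host's CUDA version:

```shell
# Runs nvidia-smi inside a CUDA base container; it should print the
# same GPU table as running nvidia-smi directly on the host
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
```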
In the submission guide we describe how to build a docker container for submission. The same process
can be used to build a docker image for training an agent: the [Dockerfile provided](../examples/submission/Dockerfile) can
be adapted to include all the libraries and code needed for training.
For example, should you wish to train the standard Dopamine agent provided in `animalai-train` out of the box, using GPU compute, add the following
lines to your Dockerfile in the `YOUR COMMANDS GO HERE` section, below the line that installs `animalai-train`:
```dockerfile
RUN git clone --single-branch --branch submission
RUN pip uninstall --yes tensorflow
RUN pip install tensorflow-gpu==1.12.2
RUN apt-get install -y unzip
RUN wget
RUN mv AnimalAI-Olympics/env/
RUN unzip AnimalAI-Olympics/env/ -d AnimalAI-Olympics/env/
WORKDIR /aaio/AnimalAI-Olympics/examples
RUN sed -i 's/docker_training=False/docker_training=True/g'
```
Build your docker image; from the `examples/submission` folder run:

```shell
docker build --tag=test-training .
```
Once built, you can start training straight away by running:

```shell
docker run test-training python /aaio/AnimalAI-Olympics/examples/
```
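If the container does not see the GPU, your docker setup may require the NVIDIA runtime to be selected explicitly; with nvidia-docker2 this is done with the `--runtime=nvidia` flag (a sketch, reusing the training command above):

```shell
docker run --runtime=nvidia test-training python /aaio/AnimalAI-Olympics/examples/
```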
You should see a TensorFlow line in the output confirming that training is running on the GPU.