Commit 9d5d3f6d authored by Benjamin Beyret's avatar Benjamin Beyret
parents f83c81a6 0aeb2861
@@ -78,6 +78,11 @@ We offer two packages for this competition:
Or you can install it from source: head to the `animalai/` folder and run `pip install -e .`
In case you wish to create a conda environment, you can do so by running the command below from the `animalai` folder:

```
conda env create -f conda_isntall.yaml
```
- We also provide a package that can be used as a starting point for training, and which is required to run most of the
example scripts found in the `examples/` folder. At the moment **we only support Linux and Mac** for the training examples. It contains an extension of
[ml-agents' training environment]( that relies on
@@ -102,6 +107,10 @@ You can now unzip the content of the archive to the `env` folder and you're ready
`AnimalAI.*` is in `env/`. On Linux you may have to make the file executable by running `chmod +x env/AnimalAI.x86_64`.
Head over to [Quick Start Guide](documentation/ for a quick overview of how the environment works.
The Unity source files for the environment can be found on the [AnimalAI-Environment repository](
Due to a lack of resources we cannot provide support on this part of the project at the moment. We recommend reading the documentation on the
[ML-Agents repo]( too.
## Manual Control
If you launch the environment directly from the executable or through the VisualizeArena script it will launch in player
@@ -146,9 +155,6 @@ Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvijit Chattopadhyay, Taranjeet
In play mode, pressing `R` or `C` sometimes does nothing. This is because these features are synchronized with the agent's frames, so that frames stay in line with configuration-file elements such as blackouts. **Solution**: press the key again, several times if needed.
~~When a lot of objects are spawned randomly, extremely rarely, the agent will fall through the floor.~~ (fixed in v0.6.1)
- [x] Add custom resolutions
@@ -161,6 +167,11 @@
## Version History
- v1.0.4
    - Adds customisable resolution during evaluation
    - Updates `animalai-train` to TensorFlow 1.14 to fix the broken `gin` dependency
    - Releases the source code for the environment (no support provided on this for now)
- v1.0.3
    - Adds inference mode to the Gym environment
    - Adds a seed to the Gym environment
```yaml
name: animalai
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.6.*
  - pip
  - pillow>=4.2.1,<=5.4.1
  - numpy>=1.13.3,<=1.14.5
  - protobuf>=3.6,<3.7
  - grpcio>=1.11.0,<1.12.0
  - pyyaml>=5.1
  - jsonpickle>=1.2
  - pyglet>=1.2.0
  - scipy
  - cloudpickle=1.2.*
  - future
  - pip:
      - gym
      - animalai
```
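With the file above, the environment can be created and activated as follows (a setup sketch, assuming conda is installed and you run the commands from the `animalai` folder; the filename is reproduced as it appears in the repository):

```shell
conda env create -f conda_isntall.yaml   # filename as spelled in the repository
conda activate animalai
python -c "import animalai"              # quick sanity check that the package installed
```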
@@ -11,7 +11,7 @@ We leave participants the task of adapting the information found here to different
Start by creating an account on [AWS](, and then open the [console](
Compute engines on AWS are called `EC2` and offer a vast range of configurations in terms of number and type of CPUs, GPUs,
memory and storage. You can find more details about the different types and prices [here](
In our case, we will use a `p2.xlarge` instance. In the console, select `EC2`:
@@ -34,7 +34,7 @@ and select either `Deep Learning Base AMI (Ubuntu)` if you want a basic Ubuntu image
Click `Next` twice (first `Next: Configure Instance Details`, then `Next: Add Storage`) and add at least 15 GB of storage to the current size (so at least 65 GB total, with a default of 50). Click `Review and Launch`, then `Launch`. You will then be asked to create or select an existing key pair, which will be used to ssh into your instance.
Once your instance is started, it will appear on the EC2 console. To ssh into your instance, right-click its line, select `Connect`, and follow the instructions.
We can now configure our instance for training. **Don't forget to shut down your instance once you're done using it, as you are charged for as long as it runs**.
## Simulating a screen
@@ -58,7 +58,7 @@ lines to your docker in the `YOUR COMMANDS GO HERE` part, below the line install
```dockerfile
RUN git clone
RUN pip uninstall --yes tensorflow
RUN pip install tensorflow-gpu==1.14
RUN apt-get install unzip wget
RUN wget
RUN mv AnimalAI-Olympics/env/
```
@@ -16,6 +16,12 @@ experiments we will run, therefore we provide the agent with the length of the episode
are the ones returned by the Gym environment `AnimalAIEnv` from `animalai.envs.environment`. If you wish to work directly with the ML-Agents `BrainInfo`, you can access it via `info['brain_info']`.
**NEW (v1.0.4)**: you can now select the resolution of the observations your agent takes as input; this argument is passed directly to the environment (it must be between 4 and 256). To do so, add the line below to the `__init__` constructor of your agent:

```python
self.resolution = 84  # can be between 4 and 256
```
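As a concrete illustration, an agent exposing this attribute might look like the minimal sketch below (the `reset`/`step` signatures follow the submission interface described in this document; everything else is a placeholder):

```python
class Agent:
    def __init__(self):
        # Resolution of the visual observations fed to the agent.
        # Must be between 4 and 256; 84 is used if the attribute is omitted.
        self.resolution = 84

    def reset(self, t=250):
        # Called at the start of each episode; t is the episode length.
        pass

    def step(self, obs, reward, done, info):
        # Return an action for the environment; [0, 0] is "do nothing".
        return [0, 0]
```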
Make sure any data loaded in the docker is referred to using **absolute paths** in the container, of the form `/aaio/data/...` (see below). An example that you can modify is provided [here](
## Create an EvalAI account and add submission details
@@ -2,7 +2,7 @@ from setuptools import setup
description='Animal AI competition training library',
author='Benjamin Beyret',
@@ -22,7 +22,7 @@ setup(
@@ -9,13 +9,17 @@ class Agent(object):
Load your agent here and initialize anything needed
```python
# You can specify the resolution your agent takes as input, for example set resolution=128 to
# have visual inputs of size 128*128*3 (if this attribute is omitted it defaults to 84)
self.resolution = 84
```
```python
# Load the configuration and model using ABSOLUTE PATHS
self.configuration_file = '/aaio/data/trainer_config.yaml'
self.model_path = '/aaio/data/1-Food/Learner'
self.brain = BrainParameters(brain_name='Learner',
                             camera_resolutions=[
                                 {'height': self.resolution, 'width': self.resolution, 'blackAndWhite': False}],
                             vector_action_descriptions=['', ''],
                             vector_action_space_size=[3, 3],
```
@@ -26,6 +26,15 @@ def main():
```python
        print('Your agent could not be reset:')
        raise e

    try:
        resolution = submitted_agent.resolution
        assert 4 <= resolution <= 256
    except AttributeError:
        resolution = 84
    except AssertionError:
        print('Resolution must be between 4 and 256')
```
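The fallback logic above can be captured in a small standalone helper (a sketch for illustration; `resolve_resolution` is a hypothetical name, not part of the competition API):

```python
def resolve_resolution(agent, default=84):
    """Return the agent's requested resolution, falling back to the default.

    Mirrors the evaluation-time behaviour: a missing `resolution` attribute
    falls back to `default`, while an out-of-range value is rejected.
    """
    resolution = getattr(agent, 'resolution', default)
    if not 4 <= resolution <= 256:
        raise ValueError('Resolution must be between 4 and 256')
    return resolution
```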
env = AnimalAIEnv(
@@ -33,6 +42,7 @@ def main():
print('Running 5 episodes')
@@ -45,7 +55,7 @@ def main():
```python
obs, reward, done, info = env.step([0, 0])
for i in range(arena_config_in.arenas[0].t):
    action = submitted_agent.step(obs, reward, done, info)
    obs, reward, done, info = env.step(action)
    cumulated_reward += reward
```
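To see how this loop accumulates reward, here is a self-contained sketch with stubs standing in for `AnimalAIEnv` and the submitted agent (the stub classes and the fixed per-step reward are assumptions for illustration only):

```python
class StubEnv:
    """Minimal stand-in for the Gym-style environment used above."""
    def step(self, action):
        # Returns (obs, reward, done, info) with a fixed reward of 1.0 per step.
        return None, 1.0, False, {}

class StubAgent:
    """Minimal stand-in for the submitted agent."""
    def step(self, obs, reward, done, info):
        return [0, 0]  # "do nothing" action

env = StubEnv()
submitted_agent = StubAgent()
episode_length = 10  # plays the role of arena_config_in.arenas[0].t

cumulated_reward = 0.0
obs, reward, done, info = env.step([0, 0])
for i in range(episode_length):
    action = submitted_agent.step(obs, reward, done, info)
    obs, reward, done, info = env.step(action)
    cumulated_reward += reward

print(cumulated_reward)  # 10.0: one unit of reward per step over 10 steps
```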