# Animal-AI Olympics

## The competition is now over

Congratulations to our winners!

- 1st: Trrrrr
- 2nd: ironbar
- 3rd: sirius

Results [here](http://animalaiolympics.com)

Full paper, release of testing configurations, results analysis and write-up, and WBA prize announcement to come.

We will also reopen submissions in the coming days so that you can keep working on the Animal-AI Olympics until we disclose the test sets. These new submissions will not count as part of the competition.

<p align="center">
  <img height="300" src="documentation/PrefabsPictures/steampunkFOURcrop.png">
</p>

**July 1st - November 1st**

The Animal-AI Olympics is an AI competition with tests inspired by animal cognition. Participants are given a small environment with just seven different classes of objects that can be placed inside. In each test, the agent needs to retrieve the food in the environment, but to do so there are obstacles to overcome, ramps to climb, boxes to push, and areas that must be avoided. The real challenge is that we don't provide the tests in advance. It's up to you to explore the possibilities of the environment and build interesting configurations that can help create an agent that understands how the environment's physics work and the affordances it offers. The final submission should be an agent capable of robust food retrieval behaviour similar to that of many kinds of animals. We know that animals can pass these tests; it's time to see if AI can too.

## Prizes $32,000 (equivalent value)

* Overall Prizes
  * 1st place overall: **$7,500 total value** - $6,500 with up to $1,000 travel to speak at NeurIPS 2019.
  * 2nd place overall: **$6,000 total value** - $5,000 with up to $1,000 travel to speak at NeurIPS 2019.
  * 3rd place overall: **$1,500**.
* WBA-Prize: **$5,000 total value** - $4,000 with up to $1,000 travel to speak at NeurIPS 2019
* Category Prizes: **$200** for best performance in each category (cannot be combined with other prizes - max 1 per team).
* ~~**Mid-way AWS Research Credits**: The top 20 entries as of **September 1st** will be awarded **$500** of AWS credits.~~ (already awarded)

See [competition launch page](https://mdcrosby.com/blog/animalailaunch.html) and official rules for further details.

**Important** Please check the competition rules [here](http://animalaiolympics.com/rules.html). **To submit to the competition and be considered for prizes you must also fill in [this form](https://forms.gle/PKCgp2JAWvjf4c9i6)**. Entry to the competition ([via EvalAI](https://evalai.cloudcv.org/web/challenges/challenge-page/396/overview)) constitutes agreement with all competition rules. 

## Overview

Here you will find all the code needed to compete in this new challenge. This repo contains **the training environment** (v1.0) that will be used for the competition. Information for entering can be found in the [submission documentation](documentation/submission.md). Please check back during the competition for minor bug-fixes and updates, but as of v1.0 the major features and contents are set in place.

For more information on the competition itself and to stay updated with any developments, head to the 
[Competition Website](http://www.animalaiolympics.com/) and follow [@MacroPhilosophy](https://twitter.com/MacroPhilosophy) and [@BenBeyret](https://twitter.com/BenBeyret) on Twitter.

The environment contains an agent enclosed in a fixed-size arena. Objects can spawn in this arena, including positive
and negative rewards (green, yellow and red spheres) that the agent must obtain (or avoid). All of the hidden tests that will appear in the competition are made using the objects in the training environment. We have provided some sample environment configurations that should be useful for training (see examples/configs), but part of the challenge is to experiment and design new configurations.
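
As a rough illustration, a sample configuration might be loaded and passed to the environment as in the minimal sketch below. The module paths, constructor arguments and the reset keyword are assumptions based on the API described later in this README (and they have changed between versions), so treat the [Quick Start Guide](documentation/quickstart.md) and the scripts in `examples/` as the authoritative reference.

```
# Minimal sketch -- module paths and argument names are assumptions based on
# the animalai API described in this README; check examples/ for exact usage.
from animalai.envs import UnityEnvironment
from animalai.envs.arena_config import ArenaConfig

# Load one of the provided sample configurations (paths relative to examples/).
arena_config = ArenaConfig('configs/lightsOff.yaml')

env = UnityEnvironment(file_name='env/AnimalAI',  # the environment executable downloaded below
                       worker_id=0,
                       n_arenas=1)

# The keyword used to pass configurations at reset has been renamed between
# versions (see the version history at the end of this README).
env.reset(arenas_configurations=arena_config)
env.close()
```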

To get started, install the requirements below, and then follow the [Quick Start Guide](documentation/quickstart.md).
More in-depth documentation can be found on the [Documentation Page](documentation/README.md).

## Evaluation

The competition has 300 tests, split over ten categories. The categories range from the very simple (e.g. **food retrieval**, **preferences**, and **basic obstacles**) to the more complex (e.g. **spatial reasoning**, **internal models**, **object permanence**, and **causal reasoning**). We have included example config files for the first seven categories. Note that the example config files are just simple examples to be used as a guide. An agent that solves all of these perfectly may still not be able to solve all the tests in a category, but it would be off to a good start.

The submission website allows you to submit an agent that will be run on all 300 tests; it returns the overall score (number of tests passed) and the score per category. We cannot offer infinite compute, so instances will be timed out after ~90 minutes and only the tests completed up to that point will be counted (all others will be considered failed). See the [submission documentation](documentation/submission.md) for more information.

For the mid-way and final evaluation we will (resources permitting) run more extensive testing with 3 variations per test (so 900 tests total). The variations will include minor perturbations to the configurations. The agent will have to pass all 3 variations to pass each individual test, giving a total score out of 300. This means that **your final test score might be lower than the score achieved during the competition** and that **the competition leaderboard on EvalAI may not exactly match the final results**. 

## Development Blog

You can read the launch posts, with information about the prizes and the competition categories, here:

[Animal-AI: AWS Prizes and Evaluation: Aug 12th](https://www.mdcrosby.com/blog/animalaiprizes1.html) - with updated submission and test information.

[Animal-AI Evaluation: July 8th](https://mdcrosby.com/blog/animalaieval.html) - with collated information about the evaluation.

[Animal-AI Launch: July 1st](https://mdcrosby.com/blog/animalailaunch.html) - with information about the prizes and an introduction to all 10 categories.

You can read the development blog [here](https://mdcrosby.com/blog). It covers further details about the competition as 
well as part of the development process.

1. [Why Animal-AI?](https://mdcrosby.com/blog/animalai1.html)

2. [The Syllabus (Part 1)](https://mdcrosby.com/blog/animalai2.html)

3. [The Syllabus (Part 2): Lights Out](https://mdcrosby.com/blog/animalai3.html)

## Requirements

The Animal-AI package works on Linux, Mac and Windows, as well as most cloud providers. Note that for submission to the competition we only support Linux-based Docker files.
<!--, for cloud engines check out [this cloud documentation](documentation/cloud.md).-->

We recommend using a virtual environment specifically for the competition. You will need `python3.6` installed (we currently only support **python3.6**). Clone this repository to run the examples we provide.

We offer two packages for this competition:

- The main package is an API for interfacing with the Unity environment. It contains both a [gym environment](https://github.com/openai/gym) and an extension of Unity's [ml-agents environments](https://github.com/Unity-Technologies/ml-agents/tree/master/ml-agents-envs); see the sketch after this list for a minimal usage example. You can install it via pip:
    ```
    pip install animalai
    ```
    Or you can install it from source: head to the `animalai/` folder and run `pip install -e .`

    If you wish to create a conda environment, you can do so by running the command below from the `animalai` folder:
    ```
    conda env create -f conda_isntall.yaml
    ```

- We also provide a package that can be used as a starting point for training, and which is required to run most of the example scripts found in the `examples/` folder. At the moment **we only support Linux and Mac** for the training examples. It contains an extension of
[ml-agents' training environment](https://github.com/Unity-Technologies/ml-agents/tree/master/ml-agents) that relies on 
[OpenAI's PPO](https://openai.com/blog/openai-baselines-ppo/), as well as 
[Google's dopamine](https://github.com/google/dopamine) which implements 
[Rainbow](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/17204/16680) (among others). You can also install 
this package using pip:
    ```
    pip install animalai-train
    ```
    Or you can install it from source: head to `examples/animalai_train` and run `pip install -e .`
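
For reference, here is a minimal sketch of the gym interface provided by the main package. The class name, module path and constructor arguments are assumptions on our part; the scripts in `examples/` show the exact API.

```
# Hedged sketch of the gym wrapper -- the import path and constructor
# arguments below are assumptions; see examples/ for working scripts.
from animalai.envs.gym.environment import AnimalAIEnv

env = AnimalAIEnv(environment_filename='env/AnimalAI',  # downloaded executable
                  worker_id=0,
                  n_arenas=1)

obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()          # placeholder random agent
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```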

Finally, download the environment for your system:

| OS | Environment link |
| --- | --- |
| Linux |  [download v1.0.0](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_linux_v1.0.0.zip) |
| MacOS |  [download v1.0.0](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_mac_v1.0.0.zip) |
| Windows | [download v1.0.0](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_windows_v1.0.0.zip)  |

You can now unzip the content of the archive into the `env` folder and you're ready to go! Make sure the executable 
`AnimalAI.*` is in `env/`. On Linux you may have to make the file executable by running `chmod +x env/AnimalAI.x86_64`. 
Head over to the [Quick Start Guide](documentation/quickstart.md) for a quick overview of how the environment works.

The Unity source files for the environment can be found on the [AnimalAI-Environment repository](https://github.com/beyretb/AnimalAI-Environment). 
Due to a lack of resources we cannot provide support on this part of the project at the moment. We recommend reading the documentation on the 
[ML-Agents repo](https://github.com/Unity-Technologies/ml-agents) too.

## Manual Control

If you launch the environment directly from the executable or through the VisualizeArena script, it will launch in player 
mode. Here you can control the agent with the following keys:

| Keyboard Key  | Action    |
| --- | --- |
| W   | move agent forwards |
| S   | move agent backwards|
| A   | turn agent left     |
| D   | turn agent right    |
| C   | switch camera       |
| R   | reset environment   |

## Citing
If you use the Animal-AI environment in your work, you can cite the environment paper:

Beyret, B., Hernández-Orallo, J., Cheke, L., Halina, M., Shanahan, M., Crosby, M. [The Animal-AI Environment: Training and Testing Animal-Like Artificial Cognition](https://arxiv.org/abs/1909.07483), arXiv preprint arXiv:1909.07483

```
@inproceedings{Beyret2019TheAE,
  title={The Animal-AI Environment: Training and Testing Animal-Like Artificial Cognition},
  author={Benjamin Beyret and Jos\'e Hern\'andez-Orallo and Lucy Cheke and Marta Halina and Murray Shanahan and Matthew Crosby},
  year={2019}
}
```

A paper with all the details of the test battery will be released after the competition has finished.

## Unity ML-Agents

The Animal-AI Olympics was built using [Unity's ML-Agents Toolkit.](https://github.com/Unity-Technologies/ml-agents)

The Python library located in [animalai](animalai) is almost identical to 
[ml-agents v0.7](https://github.com/Unity-Technologies/ml-agents/tree/master/ml-agents-envs). We only added the 
possibility to change the configuration of arenas between episodes. The documentation for ML-Agents can be found 
[here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Python-API.md).
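
As a hedged sketch of that one addition, resetting with a different arena configuration between episodes might look like the following. The module paths, file names and the reset keyword are assumptions (the version history below notes the keyword was renamed at v0.3), so check the API of your installed version.

```
# Illustrative only -- names and keywords are assumptions, not the confirmed API.
from animalai.envs import UnityEnvironment
from animalai.envs.arena_config import ArenaConfig

env = UnityEnvironment(file_name='env/AnimalAI', worker_id=0)

# Hypothetical list of configuration files; swap in your own.
for config_path in ['configs/lightsOff.yaml', 'configs/myArena.yaml']:
    config = ArenaConfig(config_path)
    env.reset(arenas_configurations=config)  # a fresh arena layout for this episode
    # ... run the episode here ...
env.close()
```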

Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D. (2018). [Unity: A General Platform for 
Intelligent Agents.](https://arxiv.org/abs/1809.02627) *arXiv preprint arXiv:1809.02627*

## EvalAI

The competition is kindly hosted on [EvalAI](https://github.com/Cloud-CV/EvalAI), an open source web application for AI competitions. Special thanks to [Rishabh Jain](https://rishabhjain.xyz/) for his help in setting this up.

Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvijit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee and Dhruv Batra (2019) [EvalAI: Towards Better Evaluation Systems for AI Agents](https://arxiv.org/abs/1902.03570)

## Known Issues

In play mode, pressing `R` or `C` sometimes does nothing. This is because we have synchronized these 
features with the agent's frames in order to keep frames in line with the configuration files for elements such as blackouts. **Solution**: press the key again, several times if needed.

## TODO

- [x] Add custom resolutions
- [x] Add inference viewer to the environment
- [x] Offer a gym wrapper for training
- [x] Improve the way the agent spawns
- [x] Add lights out configurations
- [x] Improve environment framerates
- [x] Add moving food

## Version History

- v1.1.1
    - Hotfix: curriculum loading in the wrong order
    
- v1.1.0
    - Add curriculum learning to `animalai-train`, using yaml configurations

- v1.0.5
    - ~~Adds customisable resolution during evaluation~~ (removed, evaluation is only `84x84`)
    - Update `animalai-train` to tf 1.14 to fix the broken `gin` dependency
    - Release source code for the environment (no support to be provided on this for now)
    - Fixes some legacy dependencies and typos in both libraries
    
- v1.0.3
    - Adds inference mode to Gym environment
    - Adds seed to Gym Environment
    - Submission example folder containing a trained agent
    - Provide submission details for the competition
    - Documentation for training on AWS

- v1.0.2
    - Adds custom resolution for docker training as well
    - Fix version checker

- v1.0.0
    - Adds custom resolution to both Unity and Gym environments
    - Adds inference mode to the environment to visualize trained agents
    - Prizes announced
    - More details about the competition

- v0.6.1 (Environment only) 
    - Fix rare events of agent falling through the floor or objects flying in the air when resetting an arena

- v0.6.0 
    - Adds score in playmode (current and previous scores)
    - Playmode now incorporates lights off directly (in `examples` try: `python visualizeArena.py configs/lightsOff.yaml`)
    - To simplify the environment several unnecessary objects have been removed [see here](documentation/definitionsOfObjects.md)
    - **Several object properties have been changed** [also here](documentation/definitionsOfObjects.md)
    - Frames per action reduced from 5 to 3 (*i.e.*: for each action you send we repeat it for a certain number of frames 
    to ensure smooth physics)
    - Add versions compatibility check between the environment and API
    - Remove `step_number` argument from `animalai.environment.step`

- v0.5 - Package `animalai`, gym compatible, dopamine example, bug fixes
    - Separate environment API and training API in Python
    - Release both as `animalai` and `animalai-train` PyPI packages (for `pip` installs)
    - Agent speed in play-mode constant across various platforms
    - Provide Gym environment
    - Add `trainBaselines.py` to train using `dopamine` and the Gym wrapper
    - Create the `agent.py` interface for agents submission
    - Add the `HotZone` object (equivalent to the red zone but without death)

- v0.4 - Lights off moved to Unity, color configurations, proportional goals, bug fixes
    - The light is now directly switched on/off within Unity, configuration files stay the same
    - Blackouts now work with infinite episodes (`t=0`)
    - The `rand_colors` configurations have been removed and the user can now pass `RGB` values, see 
    [here](documentation/configFile.md#objects)
    - Rewards for goals are now proportional to their size (except for the `DeathZone`), see 
    [here](documentation/definitionsOfObjects.md#rewards)
    - The agent is now a ball rather than a cube
    - Increased safety for spawning the agent to avoid infinite loops
    - Bug fixes
    
- v0.3 - Lights off, remove Beams and add cylinder
    - We added the possibility to switch the lights off at given intervals, see 
    [here](documentation/configFile.md#blackouts)
    - `visualizeLightsOff.py` displays an example of lights off, from the agent's point of view
    - Beams objects have been removed
    - A `Cylinder` object has been added (similar behaviour to the `Woodlog`)
    - The immovable `Cylinder` tunnel has been renamed `CylinderTunnel`
    - `UnityEnvironment.reset()` parameter `config` renamed to `arenas_configurations_input`
    
- v0.2 - New moving food rewards, improved Unity performance and bug fixes 
    - Moving rewards have been added, two for each type of reward, see 
    [the details here](documentation/definitionsOfObjects.md#rewards).
    - Added details for the maze generator.
    - Environment performance improved.
    - [Issue #7](../../issues/7) (`-inf` rewards for `t: 0` configuration) is fixed.

- v0.1 - Initial Release