# Animal-AI Olympics

## Overview

Welcome to the repository for the Animal-AI Olympics competition, where you will find all the code needed to compete in this new challenge. Note that this is **only the training environment** (v0.1) that will be used for the competition and **does not contain any competition tests or information for entering**. If everything goes well the competition will go live on June 30th. Until then we will be continually updating with bug fixes and small changes to the environment. However, the general structure will stay the same, so it's not too early to start using the environment. For more information on the competition itself and to stay updated with any developments, head to the [Competition Website](http://www.animalaiolympics.com/) and follow [@MacroPhilosophy](https://twitter.com/MacroPhilosophy) and [@BenBeyret](https://twitter.com/BenBeyret) on Twitter.

The environment contains an agent enclosed in a fixed-size arena. Objects can spawn in this arena, including positive and negative rewards (green, yellow and red spheres). All of the hidden tests that will appear in the competition are made using the objects available in the training environment. We have provided some sample environment configurations that should be useful for training, but part of the challenge will be experimenting with and designing new configurations.

The goal of this first release is to **seek feedback from the community** as well as to provide the environment for research prior to the launch of the competition itself. The competition version of the environment will be similar to this one, however we are open to suggestions (for minor changes) and especially to bug reports! Head over to the [issues page](https://github.com/beyretb/AnimalAI/issues) and open a ticket using the `suggestion` or `bug` labels respectively.

To get started, install the requirements below and then follow the [Quick Start Guide](documentation/quickstart.md). 
More in-depth documentation <!--, including a primer on animal cognition,--> can be found on the 
[Documentation Page](documentation/documentation.md).

## Requirements

The Animal-AI package works on most platforms. Below are the basic installation steps for Ubuntu. Descriptions for 
other platforms are coming soon. <!--, for cloud engines check out [this cloud documentation](documentation/cloud.md).-->

First of all, you will need `python3.6` installed. You will find a list of requirements in the `requirements*.txt` files. 
Using `pip` you can run:

On Linux and Mac:
```
pip install -r requirementsOthers.txt
```

On Windows:
```
pip install -r requirementsWindows.txt
```

You will need to download the environment for your system:

| OS | Environment link |
| --- | --- |
| Linux |  [download here](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_linux.zip) |
| MacOS |  [download here](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_mac.zip) |
| Windows | [download here](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_windows.zip)  |

You can now unzip the content of the archive into the `env` folder and you're ready to go! Make sure the executable 
`AnimalAI.*` is in `env/`. On Linux you may have to make the file executable by running `chmod +x env/AnimalAI.x86_64`. 
Head over to the [Quick Start Guide](documentation/quickstart.md) for a quick overview of how the environment works.
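
If you want a quick sanity check from Python before diving into the guide, the sketch below launches the environment programmatically. It is a minimal sketch assuming the `animalai` package exposes an ml-agents v0.7-style `UnityEnvironment`; the exact import path and constructor arguments may differ, so treat the Quick Start Guide as the authoritative reference.

```python
# Minimal sketch: launch the environment from Python and reset it once.
# Assumes an ml-agents v0.7-style API exposed by the animalai package;
# import path and argument names are assumptions and may differ.
from animalai.envs import UnityEnvironment

env = UnityEnvironment(
    file_name='env/AnimalAI',  # path to the executable unzipped into env/
    worker_id=0,               # bump this to run several environments in parallel
)

info = env.reset(train_mode=True)  # returns per-brain info, as in ml-agents v0.7
print(info)
env.close()
```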

## Manual Control

If you launch the environment directly from the executable, or through the VisualizeArena script, it will start in player 
mode. You can then control the agent with the following keys:

| Keyboard Key  | Action    |
| --- | --- |
| W   | move agent forwards |
| S   | move agent backwards|
| A   | turn agent left     |
| D   | turn agent right    |
| C   | switch camera       |
| R   | reset environment   |

## Unity ML-Agents

The Animal-AI Olympics was built using [Unity's ML-Agents Toolkit.](https://github.com/Unity-Technologies/ml-agents)

The Python library located in [animalai](animalai) is almost identical to 
[ml-agents v0.7](https://github.com/Unity-Technologies/ml-agents/tree/master/ml-agents-envs). We only added the possibility to change the configuration of arenas between episodes. The documentation for ML-Agents can be found [here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Python-API.md).
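
For illustration, a configuration swap between episodes might look like the sketch below. The `ArenaConfig` class, the YAML file paths and the `arenas_configurations_input` keyword are assumptions based on the ml-agents v0.7 reset signature and early animalai releases; see the [Quick Start Guide](documentation/quickstart.md) for the exact call.

```python
# Sketch of the one addition over ml-agents v0.7: changing the arena
# configuration between episodes. Class names, keyword arguments and the
# YAML paths below are assumptions, not the confirmed API.
from animalai.envs import UnityEnvironment
from animalai.envs.arena_config import ArenaConfig  # assumed module path

env = UnityEnvironment(file_name='env/AnimalAI')

first_arena = ArenaConfig('configs/firstArena.yaml')    # hypothetical sample config
second_arena = ArenaConfig('configs/secondArena.yaml')  # hypothetical sample config

env.reset(arenas_configurations_input=first_arena, train_mode=True)   # episode 1 layout
# ... run an episode ...
env.reset(arenas_configurations_input=second_arena, train_mode=True)  # new layout, same process
env.close()
```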

Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D. (2018). [Unity: A General Platform for 
Intelligent Agents.](https://arxiv.org/abs/1809.02627) *arXiv preprint arXiv:1809.02627*

## Known Bugs

The environment occasionally spawns an empty arena in play mode. Temporary fix: press R to reset and respawn the arena.

Occasional slow frame rates in play mode. Temporary fix: reduce the screen size. 

## TODO

Improve framerates for training and visualisation.

## Version History

v0.1 - Initial Release