Commit fbabffe3 authored by Benjamin

reorganize readme

parent 8dc5cd29
<img height="300" src="documentation/PrefabsPictures/steampunkFOURcrop.png">
</p>
| ![](examples/notebook_data/animal-cyl-fail.gif) | ![](examples/notebook_data/agent-cyl-fail.gif) |
|---|---|
| ![](examples/notebook_data/animal-cyl-pass.gif) | ![](examples/notebook_data/agent-cyl-pass.gif) |
## Overview
The [Animal-AI](http://animalaiolympics.com/AAI) introduces the study of animal cognition to the world of AI.
It provides an environment for testing agents on tasks taken from or inspired by the animal cognition literature.
Decades of research in this field allow us to train and test for cognitive skills in Artificial Intelligence agents.
This repo contains the [training environment](animalai), a [training library](animalai_train), and [900 tasks](competition_configurations) for testing and/or training agents.
We ran a competition using this environment and the associated tests.
The environment is built using [Unity ml-agents](https://github.com/Unity-Technologies/ml-agents/tree/master/docs) and contains an agent enclosed in a fixed-size arena. Objects can spawn in this arena, including positive
and negative rewards (green, yellow and red spheres) that the agent must obtain (or avoid). All of the hidden tests that will appear in the competition are made using the objects in the training environment.
To get started, install the requirements below, then follow the Jupyter notebook tutorials in the [examples folder](examples).
More in-depth documentation can be found on the [Documentation Page](documentation/README.md).
## TL;DR
Just want to get started? Then:
- Clone this repo
- In the `examples` folder run `pip install -r requirements.txt`
- Get the [environment executable](#requirements) for your platform
- In the `examples` folder, start `jupyter notebook` and go through the [environment](examples/environment_tutorial.ipynb)
and the [training](examples/training_tutorial.ipynb) notebooks!
## Development Blog
## Requirements
The Animal-AI package works on Linux, Mac and Windows, as well as most Cloud providers, and requires python 3.
First **download the environment** for your system:
| OS | Environment link |
| --- | --- |
| Linux | [download v2.0.0](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_linux_v2.0.0.zip) |
| MacOS | [download v2.0.0](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_mac_v2.0.0.zip) |
| Windows | [download v2.0.0](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_windows_v2.0.0.zip) |
Unzip the **entire content** of the archive into the `examples/env` folder. On Linux you may have to make the file executable by running `chmod +x env/AnimalAI.x86_64`.
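The download-and-unzip step can also be scripted. Below is a minimal Python sketch using the links from the table above; the `env_url` and `fetch_env` helper names and the `examples/env` destination default are ours for illustration, not part of the package:

```python
import io
import os
import platform
import stat
import urllib.request
import zipfile

BASE_URL = "https://www.doc.ic.ac.uk/~bb1010/animalAI/"
ARCHIVES = {
    "Linux": "env_linux_v2.0.0.zip",
    "Darwin": "env_mac_v2.0.0.zip",    # platform.system() reports MacOS as "Darwin"
    "Windows": "env_windows_v2.0.0.zip",
}

def env_url(system=None):
    """Return the download link from the table above for the given (or current) OS."""
    return BASE_URL + ARCHIVES[system or platform.system()]

def fetch_env(dest="examples/env"):
    """Download the archive, unzip its entire content into `dest`,
    and on Linux mark the binary executable (the chmod +x step above)."""
    data = urllib.request.urlopen(env_url()).read()
    zipfile.ZipFile(io.BytesIO(data)).extractall(dest)
    exe = os.path.join(dest, "AnimalAI.x86_64")
    if platform.system() == "Linux" and os.path.exists(exe):
        os.chmod(exe, os.stat(exe).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```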
We offer two packages:
- The main package is an API for interfacing with the Unity environment. It contains both a
[gym environment](https://github.com/openai/gym) and an extension of Unity's ml-agents environment.
To install it, run:
```
pip install animalai
```
Or you can install it from source by running `pip install -e animalai` from the repo folder.
<!--In case you wish to create a conda environment you can do so by running the below command from the `animalai` folder:
```
conda env create -f conda_isntall.yaml
```-->
- We also provide a package that can be used as a starting point for training, and which is required to run most of the
example scripts found in the `examples/` folder. It contains an extension of
[ml-agents' training environment](https://github.com/Unity-Technologies/ml-agents/tree/master/ml-agents) that relies on
To install it, run:
```
pip install animalai-train
```
Or you can install it from source by running `pip install -e animalai_train` from the repo folder.
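After installing both packages, a quick sanity check is to confirm they import. A small sketch (the module names are assumed to match the repo folder names `animalai` and `animalai_train`; the `installed` helper is ours):

```python
import importlib

def installed(module_name):
    """Return True if `module_name` can be imported in the current interpreter."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False

# Hypothetical usage after running the pip commands above:
# installed("animalai") and installed("animalai_train")
```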
Head over to [Quick Start Guide](documentation/quickstart.md) for a quick overview of how the environment works.
**The Unity source files** for the environment can be found on our [ml-agents fork](https://github.com/beyretb/ml-agents).
## Manual Control
If you launch the environment directly from the executable or through the `load_config_and_play.py` script, it will launch
in player mode. Here you can control the agent with the following:
| Keyboard Key | Action |
| --- | --- |
......