Commit 3725cb7a authored by Benjamin

update quickstart.md with new structure

parent fbb6a704
@@ -37,7 +37,7 @@ well as part of the development process.
## Requirements

The Animal-AI package works on Linux, Mac and Windows, as well as most cloud providers.
<!--, for cloud engines check out [this cloud documentation](documentation/cloud.md).-->
First of all, you will need `python3.6` installed; we recommend using virtual environments. We provide two packages for
@@ -47,10 +47,10 @@ this competition:
[gym environment](https://github.com/openai/gym) as well as an extension of Unity's
[ml-agents environments](https://github.com/Unity-Technologies/ml-agents/tree/master/ml-agents-envs). You can install it
via pip:
```
pip install animalai
```
Or you can install it from source: head to the `animalai/` folder and run `pip install -e .`
- We also provide a package that can be used as a starting point for training, and which is required to run most of the
example scripts found in the `examples/` folder. It contains an extension of
@@ -59,10 +59,10 @@ example scripts found in the `examples/` folder. It contains an extension of
[Google's dopamine](https://github.com/google/dopamine) which implements
[Rainbow](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/17204/16680) (among others). You can also install
this package using pip:
```
pip install animalai-train
```
Or you can install it from source: head to `examples/animalai_train` and run `pip install -e .`
Finally, download the environment for your system:
# Quick Start Guide
The format of this competition is rather different from what you might be used to. We do not provide a single training
set that you can train on out of the box; instead, you are invited to include the design of a training environment as
part of the whole training process. To make this new step as smooth as possible, we created tools you can use to easily
set up your training environment and visualize what these configurations look like.
## Running the standalone arena
The basic environment is made of a single agent in an enclosed arena that resembles the environments we would use for
experimenting with animals. In this environment you can add objects the agent can interact with, as well as goals or
rewards the agent must collect or avoid. To see what this looks like, run the executable environment you downloaded;
you will spawn in an arena with lots of randomly placed objects.
You can toggle the camera between First Person and Bird's Eye view using the `C` key; the agent can be controlled using
`W,A,S,D` on your keyboard. Hitting `R` or collecting rewards will reset the arena.
**Note**: on some platforms, running the standalone arena in full screen makes the environment slow; keep the
environment in windowed mode for better performance.
## Running a specific configuration file
Once you are familiar with the environment and its physics, you can start building and visualizing your own
configurations. Assuming you followed the [installation instructions](../README.md#requirements), go to the `examples/`
folder and run `python visualizeArena.py configs/exampleConfig.yaml`. This loads the `configs/exampleConfig.yaml`
configuration for the arena and lets you play as the agent.

Have a look at the [configuration file](configs/exampleConfig.yaml) for a first look behind the scenes. You can select
objects, their size, location, rotation and color, randomizing any of these parameters as you like. We provide
documentation sections that we recommend you read thoroughly:
- The [configuration file documentation page](configFile.md) which explains how to write these configuration files.
- The [definitions of objects page](definitionsOfObjects.md) which contains a detailed list of all the objects and their
characteristics.
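If you prefer to inspect a configuration programmatically rather than by eye, here is a minimal sketch. The `ArenaConfig` class and its import path are assumptions based on how the `animalai` package lays things out; the two documentation pages above are the authority here:
```
# Hedged sketch: load an arena configuration and look at its contents.
# Assumption: the `animalai` package exposes an ArenaConfig loader at this
# import path; if your version differs, check the configuration docs above.
from animalai.envs.arena_config import ArenaConfig

config = ArenaConfig('configs/exampleConfig.yaml')  # parse the YAML file
print(config.arenas)  # assumption: one entry per arena defined in the file
```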
## Start training your agent
Once you're happy with your arena configurations you can start training your agent. The `animalai` package presents
several features that we think will improve training speed and performance:

- Participants can **change the environment configuration between episodes**, allowing for techniques such as
curriculum learning (see the sketch after this list)
- You can choose the length of each episode as part of the configuration files, even having infinite episodes
- You can have several arenas in a single environment instance, each with an agent you control independently from the
others, and each with its own configuration, allowing you to collect observations faster
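As a sketch of the first feature, the loop below swaps configurations at reset time. The import paths follow the `animalai` package, but the constructor and `reset` arguments are assumptions, and the two YAML file names are hypothetical; the [training documentation](training.md) has the real API:
```
# Hedged sketch of curriculum-style training: change the arena configuration
# between episodes. The reset signature is an assumption; see training.md.
from animalai.envs import UnityEnvironment
from animalai.envs.arena_config import ArenaConfig

env = UnityEnvironment(file_name='envs/AnimalAI')  # assumption: path to the downloaded executable

# Hypothetical file names: substitute your own configurations here.
curriculum = [ArenaConfig('configs/easy.yaml'), ArenaConfig('configs/harder.yaml')]

for lesson in curriculum:
    # Assumption: reset() accepts a configuration for the upcoming episodes.
    env.reset(arenas_configurations=lesson, train_mode=True)
    # ... run your training loop on this lesson ...

env.close()
```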
We provide examples of training using the `animalai-train` package; you can of course start from scratch and submit
agents that do not rely on this library. To show how to train in an `animalai` environment, we provide scripts in the
`examples/` folder:

- `trainDopamine.py` uses the `dopamine` implementation of Rainbow to train a single agent using the gym interface (a
minimal loop is sketched below). This is a good starting point if you want to try another training algorithm that works
as plug-and-play with Gym. **Note that as such it only allows for training on environments with a single agent.** We do
offer the option to train with several agents in a gym environment, but this will require modifying your code to accept
more than one observation at a time.
- `trainMLAgents.py` uses the `ml-agents` implementation of PPO to train one or more agents at a time, using the
`UnityEnvironment`. This is a great starting point if you don't mind reading some code, as it directly allows you to use
the functionalities described above, out of the box.

You can find more details about this in the [training documentation](training.md).
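As a complement to those scripts, here is a hedged sketch of driving the environment through the gym interface with random actions. The `AnimalAIEnv` import path and constructor arguments are assumptions modelled on `trainDopamine.py`; defer to that script if they differ:
```
# Hedged sketch: a random-action loop through the gym interface, in the
# spirit of trainDopamine.py. Import path and constructor arguments are
# assumptions; check examples/trainDopamine.py for the real usage.
from animalai.envs.gym.environment import AnimalAIEnv

env = AnimalAIEnv(environment_filename='envs/AnimalAI',  # assumption: downloaded executable
                  worker_id=0)                           # offset the port if running several envs

obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()          # replace with your agent's policy
    obs, reward, done, info = env.step(action)  # standard gym step
    if done:
        obs = env.reset()
env.close()
```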