Unverified Commit 469c166b authored by Benjamin Beyret, committed by GitHub

Merge pull request #86 from beyretb/dev-v2.0.0b0

Dev v2.0.0b0
parents aa371042 6d973af4
env/*
!env/README.md
examples/submission/test_submission/env/*
!examples/submission/test_submission/env/README.md
examples/env/*
!examples/env/README.md
# Tensorflow Model Info
models/
summaries/
/.idea
__pycache__/
UnitySDK.log
venv/
*/venv
/dev
build/
dist/
logs/
# Environment logfile
*Project.log
# Builds
*.apk
*.unitypackage
*.app
*.exe
*.x86_64
*.x86
# Mac hidden files
*.DS_Store
*/.ipynb_checkpoints
*/.idea
*.pyc
*.idea/misc.xml
*.idea/modules.xml
*.idea/
*.iml
*.cache
*/build/
*/dist/
*.egg-info*
*.eggs*
*.gitignore.swp
# VSCode hidden files
*.vscode/
.DS_Store
.ipynb_checkpoints
# pytest cache
*.pytest_cache/
# Ignore PyPI build files.
dist/
build/
# Python virtual environment
venv/
.mypy_cache/
# Animal-AI Olympics
## The competition is now over
Congratulations to our winners!
1st: Trrrrr
2nd: ironbar
3rd: sirius
Results [here](http://animalaiolympics.com)
Full paper, release of testing configurations, results analysis and write-up, and WBA prize announcement to come.
We will also reopen submissions in the coming days so that you can keep working on the Animal-AI Olympics until we disclose the test sets; these new submissions will not count as part of the competition.
# Animal-AI 2.0.0 (beta)
<p align="center">
<img height="300" src="documentation/PrefabsPictures/steampunkFOURcrop.png">
</p>
**July 1st - November 1st**
The Animal-AI Olympics is an AI competition with tests inspired by animal cognition. Participants are given a small environment with just seven different classes of objects that can be placed inside. In each test, the agent needs to retrieve the food in the environment, but to do so there are obstacles to overcome, ramps to climb, boxes to push, and areas that must be avoided. The real challenge is that we don't provide the tests in advance. It's up to you to explore the environment's possibilities and build interesting configurations that help create an agent that understands how the environment's physics work and the affordances it offers. The final submission should be an agent capable of robust food retrieval behaviour similar to that of many kinds of animals. We know that animals can pass these tests; it's time to see if AI can too.
## Prizes $32,000 (equivalent value)
* Overall Prizes
* 1st place overall: **$7,500 total value** - $6,500 with up to $1,000 travel to speak at NeurIPS 2019.
* 2nd place overall: **$6,000 total value** - $5,000 with up to $1,000 travel to speak at NeurIPS 2019.
* 3rd place overall: **$1,500**.
* WBA-Prize: **$5,000 total value** - $4,000 with up to $1,000 travel to speak at NeurIPS 2019
* Category Prizes: **$200** for best performance in each category (cannot be combined with other prizes - max 1 per team).
* ~~**Mid-way AWS Research Credits**: The top 20 entries as of **September 1st** will be awarded **$500** of AWS credits.~~ (already awarded)
See [competition launch page](https://mdcrosby.com/blog/animalailaunch.html) and official rules for further details.
**Important** Please check the competition rules [here](http://animalaiolympics.com/rules.html). **To submit to the competition and be considered for prizes you must also fill in [this form](https://forms.gle/PKCgp2JAWvjf4c9i6)**. Entry to the competition ([via EvalAI](https://evalai.cloudcv.org/web/challenges/challenge-page/396/overview)) constitutes agreement with all competition rules.
## Overview
Here you will find all the code needed to compete in this new challenge. This repo contains **the training environment** (v1.0) that will be used for the competition. Information for entering can be found in the [submission documentation](documentation/submission.md). Please check back during the competition for minor bug-fixes and updates, but as of v1.0 the major features and contents are set in place.
The [Animal-AI](http://animalaiolympics.com/AAI) project introduces the study of animal cognition to the world of AI.
The aim is to provide an environment for testing agents on tasks taken from or inspired by the animal cognition literature.
Decades of research in this field allow us to train and test for cognitive skills in Artificial Intelligence agents.
For more information on the competition itself and to stay updated with any developments, head to the
[Competition Website](http://www.animalaiolympics.com/) and follow [@MacroPhilosophy](https://twitter.com/MacroPhilosophy) and [@BenBeyret](https://twitter.com/BenBeyret) on twitter.
This repo contains the [training environment](animalai), a [training library](animalai_train), as well as [900 tasks](competition_configurations) for testing and/or training agents.
The experiments are divided into categories meant to reflect various cognitive skills; details can be found on the [website](http://animalaiolympics.com/AAI/testbed).
The environment is built using [Unity ml-agents](https://github.com/Unity-Technologies/ml-agents/tree/master/docs) and contains an agent enclosed in a fixed-size arena. Objects can spawn in this arena, including positive
and negative rewards (green, yellow and red spheres) that the agent must obtain (or avoid). All of the hidden tests that will appear in the competition are made using the objects in the training environment. We have provided some sample environment configurations that should be useful for training (see examples/configs), but part of the challenge is to experiment and design new configurations.
We ran a competition using this environment and the associated tests; more details about the results can be found [here](http://animalaiolympics.com/AAI/2019).
To get started, install the requirements below and then follow the [Quick Start Guide](documentation/quickstart.md).
More in-depth documentation can be found on the [Documentation Page](documentation/README.md).
## Evaluation
The competition has 300 tests, split over ten categories. The categories range from the very simple (e.g. **food retrieval**, **preferences**, and **basic obstacles**) to the more complex (e.g. **spatial reasoning**, **internal models**, **object permanence**, and **causal reasoning**). We have included example config files for the first seven categories. Note that the example config files are just simple examples to be used as a guide. An agent that solves all of these perfectly may still not be able to solve all the tests in a category, but it would be off to a good start.
The submission website allows you to submit an agent that will be run on all 300 tests; it returns the overall score (number of tests passed) and the score per category. We cannot offer infinite compute, so instances will be timed out after ~90 minutes and only the tests completed up to that point will be counted (all others will be considered failed). See the [submission documentation](documentation/submission.md) for more information.
For the mid-way and final evaluation we will (resources permitting) run more extensive testing with 3 variations per test (so 900 tests total). The variations will include minor perturbations to the configurations. The agent will have to pass all 3 variations to pass each individual test, giving a total score out of 300. This means that **your final test score might be lower than the score achieved during the competition** and that **the competition leaderboard on EvalAI may not exactly match the final results**.
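To make the aggregation concrete, here is a small sketch (not the official scoring code; the test names are made up) of how a final score out of 300 can be computed when each test has 3 variations and all 3 must pass:

```python
# Sketch of the aggregation described above: a test counts as passed only if
# all 3 of its variations are passed (not the official scoring code).
from typing import Dict, List

def final_score(results: Dict[str, List[bool]]) -> int:
    """results maps a test id to the pass/fail outcomes of its 3 variations."""
    return sum(1 for outcomes in results.values() if len(outcomes) == 3 and all(outcomes))

# Example: two tests pass all variations, one misses a single variation -> score 2.
example = {
    "food_retrieval_01": [True, True, True],
    "preferences_04": [True, False, True],  # one failed variation fails the whole test
    "obstacles_11": [True, True, True],
}
print(final_score(example))  # 2
```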
## Development Blog
You can read the launch posts, with information about prizes and the competition categories, here:
......@@ -78,12 +44,9 @@ well as part of the development process.
## Requirements
The Animal-AI package works on Linux, Mac and Windows, as well as most cloud providers, and requires Python 3. Note that for submission to the competition we only support Linux-based Docker files.
<!--, for cloud engines check out [this cloud documentation](documentation/cloud.md).-->
We recommend using a virtual environment specifically for the competition. You will need `python3.6` installed (we currently only support **python3.6**). Clone this repository to run the examples we provide.
We offer two packages:
- The main package is an API for interfacing with the Unity environment. It contains both a
[gym environment](https://github.com/openai/gym) as well as an extension of Unity's
......@@ -92,7 +55,7 @@ We offer two packages for this competition:
```
pip install animalai
```
Or you can install it from source by running `pip install -e animalai` from the repo folder.
If you wish to create a conda environment, you can do so by running the command below from the `animalai` folder:
```
......@@ -100,32 +63,27 @@ We offer two packages for this competition:
```
- We also provide a package that can be used as a starting point for training, and which is required to run most of the
example scripts found in the `examples/` folder. It contains an extension of
[ml-agents' training environment](https://github.com/Unity-Technologies/ml-agents/tree/master/ml-agents) that relies on
[OpenAI's PPO](https://openai.com/blog/openai-baselines-ppo/) and [BAIR's SAC](https://bair.berkeley.edu/blog/2018/12/14/sac/). You can also install this package using pip:
```
pip install animalai-train
```
Or you can install it from source by running `pip install -e animalai_train` from the repo folder.
Finally **download the environment** for your system:
| OS | Environment link |
| --- | --- |
| Linux | [download v2.0.0b0](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_linux_v2.0.0b0.zip) |
| MacOS | [download v2.0.0b0](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_mac_v2.0.0b0.zip) |
| Windows | [download v2.0.0b0](https://www.doc.ic.ac.uk/~bb1010/animalAI/env_windows_v2.0.0b0.zip) |
You can now unzip the content of the archive to the `env` folder and you're ready to go! Make sure the executable
`AnimalAI.*` is in `env/`. On Linux you may have to make the file executable by running `chmod +x env/AnimalAI.x86_64`.
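If you prefer scripting the setup, here is a small sketch using only the Python standard library (the archive name assumes the Linux download from the table above; adjust it for your system):

```python
# Sketch: extract the downloaded environment archive into env/ and make the
# Linux binary executable (equivalent to `chmod +x env/AnimalAI.x86_64`).
import os
import stat
import zipfile

with zipfile.ZipFile("env_linux_v2.0.0b0.zip") as archive:
    archive.extractall("env")

binary = "env/AnimalAI.x86_64"
os.chmod(binary, os.stat(binary).st_mode | stat.S_IEXEC)
```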
Head over to [Quick Start Guide](documentation/quickstart.md) for a quick overview of how the environment works.
The Unity source files for the environment can be found on our [ml-agents fork](https://github.com/beyretb/ml-agents).
Due to a lack of resources we cannot provide support on this part of the project at the moment; we recommend reading the documentation on the
[ML-Agents repo](https://github.com/Unity-Technologies/ml-agents) too.
## Manual Control
......@@ -160,37 +118,33 @@ Paper with all the details of the test battery will be released after the compet
The Animal-AI Olympics was built using [Unity's ML-Agents Toolkit.](https://github.com/Unity-Technologies/ml-agents)
The Python library located in [animalai](animalai) extends [ml-agents v0.15.0](https://github.com/Unity-Technologies/ml-agents/tree/0.15.0). Mainly, we add the
possibility to change the configuration of arenas between episodes. The documentation for ML-Agents can be found
[here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Python-API.md).
Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D. (2018). [Unity: A General Platform for
Intelligent Agents.](https://arxiv.org/abs/1809.02627) *arXiv preprint arXiv:1809.02627*
## EvalAI
The competition was kindly hosted on [EvalAI](https://github.com/Cloud-CV/EvalAI), an open source web application for AI competitions. Special thanks to [Rishabh Jain](https://rishabhjain.xyz/) for his help in setting this up.
We will aim to reopen submissions with new hidden files in order to keep some form of competition going.
Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvijit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee and Dhruv Batra (2019) [EvalAI: Towards Better Evaluation Systems for AI Agents](https://arxiv.org/abs/1902.03570)
## Known Issues
In play mode, pressing `R` or `C` sometimes does nothing. This is because these
features are synchronized with the agent's frames, so that frames stay in line with configuration elements such as blackouts. **Solution**: press the key again, several times if needed.
## TODO
- [x] Add custom resolutions
- [x] Add inference viewer to the environment
- [x] Offer a gym wrapper for training
- [x] Improve the way the agent spawns
- [x] Add lights out configurations.
- [x] Improve environment framerates
- [x] Add moving food
## Version History
- v2.0.0b0 (beta)
- Bump ml-agents from 0.7 to 0.15 which:
- allows multiple parallel environments for training
- adds Soft actor critic (SAC) trainer
- has a new kind of actions/observations loop (on demand decisions)
- removes brains and some protobufs
- adds side-channels to replace some protobufs
- refactoring of the codebase
- GoodGoalMulti objects are now yellow, with the same light-emitting texture as GoodGoal and BadGoal
- The whole project including the Unity source is now available on [our ml-agents fork](https://github.com/beyretb/ml-agents)
- v1.1.1
- Hotfix curriculum loading in the wrong order
......
class Agent(object):
    def __init__(self):
        """
        Load your agent here and initialize anything needed
        WARNING: any path to files you wish to access on the docker should be ABSOLUTE PATHS
        """
        pass

    def reset(self, t=250):
        """
        Reset is called before each episode begins
        Leave blank if nothing needs to happen there
        :param t: the number of timesteps in the episode
        """

    def step(self, obs, reward, done, info):
        """
        A single step the agent should take based on the current state of the environment
        We will run the Gym environment (AnimalAIEnv) and pass the arguments returned by env.step() to
        the agent.
        Note that if you prefer using the BrainInfo object that is usually returned by the Unity
        environment, it can be accessed from info['brain_info'].
        :param obs: agent's observation of the current environment
        :param reward: amount of reward returned after previous action
        :param done: whether the episode has ended
        :param info: contains auxiliary diagnostic information, including BrainInfo
        :return: the action to take, a list of size 2
        """
        action = [0, 0]
        return action
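# Illustrative only: a minimal sketch of how a gym-style evaluation loop could
# drive the Agent defined above. DummyEnv is a stand-in for the real
# AnimalAIEnv gym environment (its construction is competition-specific and
# omitted here); the actual submission harness performs these calls for you.
class DummyEnv(object):
    def __init__(self, episode_length=250):
        self.episode_length = episode_length
        self.t = 0

    def reset(self):
        self.t = 0
        return None  # observation placeholder

    def step(self, action):
        self.t += 1
        done = self.t >= self.episode_length
        return None, 0.0, done, {}  # obs, reward, done, info


env = DummyEnv()
agent = Agent()
agent.reset(t=env.episode_length)
obs, reward, done, info = env.reset(), 0.0, False, {}
while not done:
    action = agent.step(obs, reward, done, info)
    obs, reward, done, info = env.step(action)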
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017 Unity Technologies
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
\ No newline at end of file
# AnimalAI Python API
This package provides the Python API used for training agents for the Animal AI Olympics competition. It is mostly an
extension of [Unity's MLAgents env](https://github.com/Unity-Technologies/ml-agents/tree/master/ml-agents-envs).
It contains two ways of interfacing with the Unity environments:
- `animalai.envs.environment` contains the `UnityEnvironment`, which is similar to the one found in `mlagents` but with
a few adaptations to allow for more custom communication between Python and Unity.
- `animalai.envs.gym.environment` contains the `AnimalAIEnv`, which provides a gym environment to use directly with
baselines (a usage sketch follows below).
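As an illustration of the gym route, here is a minimal sketch; the constructor arguments shown (`environment_filename`, `worker_id`) are assumptions based on earlier releases and may differ from the current `AnimalAIEnv` signature, so check the documentation below before relying on them:

```python
# Minimal sketch of the gym interface. The constructor parameters are
# assumptions and may not match the actual AnimalAIEnv signature.
from animalai.envs.gym.environment import AnimalAIEnv

env = AnimalAIEnv(
    environment_filename="env/AnimalAI",  # path to the downloaded executable
    worker_id=0,
)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random policy, for illustration only
    obs, reward, done, info = env.step(action)
env.close()
```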
For more details and documentation, have a look at the [AnimalAI documentation](../documentation)
\ No newline at end of file
from .agent_action_proto_pb2 import *
from .agent_info_proto_pb2 import *
from .arena_parameters_proto_pb2 import *
from .brain_parameters_proto_pb2 import *
from .command_proto_pb2 import *
from .demonstration_meta_proto_pb2 import *
from .engine_configuration_proto_pb2 import *
from .header_pb2 import *
from .arenas_configurations_proto_pb2 import *
from .arena_configuration_proto_pb2 import *
from .items_to_spawn_proto_pb2 import *
from .vector_proto_pb2 import *
from .__init__ import *
from .resolution_proto_pb2 import *
from .space_type_proto_pb2 import *
from .unity_input_pb2 import *
from .unity_message_pb2 import *
from .unity_output_pb2 import *
from .unity_rl_initialization_input_pb2 import *
from .unity_rl_initialization_output_pb2 import *
from .unity_rl_input_pb2 import *
from .unity_rl_output_pb2 import *
from .unity_rl_reset_input_pb2 import *
from .unity_rl_reset_output_pb2 import *
from .unity_to_external_pb2_grpc import *
from .unity_to_external_pb2 import *
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: animalai/communicator_objects/agent_action_proto.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='animalai/communicator_objects/agent_action_proto.proto',
package='communicator_objects',
syntax='proto3',
serialized_options=_b('\252\002\034MLAgents.CommunicatorObjects'),
serialized_pb=_b('\n6animalai/communicator_objects/agent_action_proto.proto\x12\x14\x63ommunicator_objects\"a\n\x10\x41gentActionProto\x12\x16\n\x0evector_actions\x18\x01 \x03(\x02\x12\x14\n\x0ctext_actions\x18\x02 \x01(\t\x12\x10\n\x08memories\x18\x03 \x03(\x02\x12\r\n\x05value\x18\x04 \x01(\x02\x42\x1f\xaa\x02\x1cMLAgents.CommunicatorObjectsb\x06proto3')
)
_AGENTACTIONPROTO = _descriptor.Descriptor(
name='AgentActionProto',
full_name='communicator_objects.AgentActionProto',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='vector_actions', full_name='communicator_objects.AgentActionProto.vector_actions', index=0,
number=1, type=2, cpp_type=6, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='text_actions', full_name='communicator_objects.AgentActionProto.text_actions', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='memories', full_name='communicator_objects.AgentActionProto.memories', index=2,
number=3, type=2, cpp_type=6, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='value', full_name='communicator_objects.AgentActionProto.value', index=3,
number=4, type=2, cpp_type=6, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=80,
serialized_end=177,
)
DESCRIPTOR.message_types_by_name['AgentActionProto'] = _AGENTACTIONPROTO
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
AgentActionProto = _reflection.GeneratedProtocolMessageType('AgentActionProto', (_message.Message,), {
'DESCRIPTOR' : _AGENTACTIONPROTO,
'__module__' : 'animalai.communicator_objects.agent_action_proto_pb2'
# @@protoc_insertion_point(class_scope:communicator_objects.AgentActionProto)