A customisable 3D platform for agent-based AI research

Overview

DeepMind Lab

DeepMind Lab is a 3D learning environment based on id Software's Quake III Arena via ioquake3 and other open source software.

DeepMind Lab provides a suite of challenging 3D navigation and puzzle-solving tasks for learning agents. Its primary purpose is to act as a testbed for research in artificial intelligence, especially deep reinforcement learning.

About

Disclaimer: This is not an official Google product.

If you use DeepMind Lab in your research and would like to cite the DeepMind Lab environment, we suggest you cite the DeepMind Lab paper.

You can reach us at [email protected].

Getting started on Linux

$ git clone https://github.com/deepmind/lab
$ cd lab

For a live example of a random agent, run

lab$ bazel run :python_random_agent --define graphics=sdl -- \
               --length=10000 --width=640 --height=480

Here is some more detailed build documentation, including how to install dependencies if you don't have them.

To enable compiler optimizations, pass the flag --compilation_mode=opt, or -c opt for short, to each bazel build, bazel test and bazel run command. The flag is omitted from the examples here for brevity, but it should be used for real training and evaluation where performance matters.
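
For example, the random agent invocation above becomes:

lab$ bazel run -c opt :python_random_agent --define graphics=sdl -- \
               --length=10000 --width=640 --height=480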

Play as a human

To test the game using human input controls, run

lab$ bazel run :game -- --level_script=tests/empty_room_test --level_setting=logToStdErr=true
# or:
lab$ bazel run :game -- -l tests/empty_room_test -s logToStdErr=true

Leave the logToStdErr setting off to disable most log output.

To print the values of the observations that the environment exposes at every step, add a --observation OBSERVATION_NAME flag for each observation of interest.

lab$ bazel run :game -- --level_script=lt_chasm --observation VEL.TRANS --observation VEL.ROT

Train an agent

DeepMind Lab ships with an example random agent in python/random_agent.py which can be used as a starting point for implementing a learning agent. To let this agent interact with DeepMind Lab for training, run

lab$ bazel run :python_random_agent

The Python API is used for agent-environment interactions. We also provide bindings to DeepMind's "dm_env" general API for reinforcement learning, as well as a way to build a self-contained PIP package; see the separate documentation for details.
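
For illustration, here is a minimal sketch of that interaction loop, assuming the deepmind_lab Python module has been built and is importable (e.g. via the PIP package); the level name and config values are merely examples:

import numpy as np
import deepmind_lab

# Create an environment exposing interleaved RGB pixel observations.
env = deepmind_lab.Lab('tests/empty_room_test', ['RGB_INTERLEAVED'],
                       config={'width': '96', 'height': '72'})
env.reset()

# One value per entry of the action spec; all zeros is a no-op action.
action = np.zeros(len(env.action_spec()), dtype=np.intc)

total_reward = 0
for _ in range(100):
  if not env.is_running():
    env.reset()  # The episode has ended; start a new one.
  obs = env.observations()  # Dict mapping observation names to NumPy arrays.
  total_reward += env.step(action, num_steps=1)
print('Total reward:', total_reward)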

DeepMind Lab ships with different levels implementing different tasks. These tasks can be configured using Lua scripts, as described in the Lua API.


Upstream sources

DeepMind Lab is built from the ioquake3 game engine, and it uses the tools q3map2 and bspc for map creation. Bug fixes and cleanups that originate with those projects are best fixed upstream and then merged into DeepMind Lab.

  • bspc is taken from github.com/TTimo/bspc, revision d9a372db3fb6163bc49ead41c76c801a3d14cf80. There are virtually no local modifications, although we integrate this code with the main ioq3 code and do not use their copy in the deps directory. We expect this code to be stable.

  • q3map2 is taken from github.com/TTimo/GtkRadiant, revision d3d00345c542c8d7cc74e2e8a577bdf76f79c701. A few minor local modifications add synchronization. We also expect this code to be stable.

  • ioquake3 is taken from github.com/ioquake/ioq3, revision 29db64070aa0bae49953bddbedbed5e317af48ba. The code contains extensive modifications and additions. We aim to merge upstream changes occasionally.

We are very grateful to the maintainers of these repositories for all their hard work on maintaining high-quality code bases.

External dependencies, prerequisites and porting notes

DeepMind Lab currently ships as source code only. It depends on a few external software libraries, which we ship in several different ways:

  • The zlib, glib, libxml2, jpeg and png libraries are referenced as external Bazel sources, and Bazel BUILD files are provided. The dependent code itself should be fairly portable, but the BUILD rules we ship are specific to Linux on x86. To build on a different platform you will most likely have to edit those BUILD files.

  • Message digest algorithms are included in this package (in //third_party/md), taken from the reference implementations of their respective RFCs. A "generic reinforcement learning API" is included in //third_party/rl_api, which has also been created by the DeepMind Lab authors. This code is portable.

  • EGL headers are included in this package (in //third_party/GL/{EGL,KHR}), taken from the Khronos OpenGL/OpenGL ES XML API Registry at www.khronos.org/registry/EGL. The headers have been modified slightly to remove the dependency of EGL on X.

  • Several additional libraries are required but are not shipped in any form; they must be present on your system:

    • SDL 2
    • gettext (required by glib)
    • OpenGL: a hardware driver and library are needed for hardware-accelerated human play. The headless library that machine-learning agents will want to use can render either with hardware acceleration via EGL or GLX, or in software via OSMesa, depending on the --define headless=... build setting (see the example after this list).
    • Python 2.7 with NumPy and PIL, or Python 3 (at least 3.5) with NumPy and Pillow; other versions might work, too. A few tests require a NumPy version of at least 1.8.
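
For example, to build the headless Python module against a particular renderer, pass the corresponding headless value (both of these invocations appear elsewhere in this document):

lab$ bazel build -c opt //:deepmind_lab.so --define headless=osmesa   # software rendering via OSMesa
lab$ bazel build -c opt //:deepmind_lab.so --define headless=glx      # hardware rendering via GLX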

The build rules use a few compiler settings that are specific to GCC. If your compiler does not recognize some flags (typically specific warning suppressions), you may have to edit those flags. The warnings should be noisy but harmless.

Comments
  • Unable to build: Python include path missing

    Hi

    I'm on Ubuntu 16.04 and following the guidelines from https://github.com/deepmind/lab/blob/master/docs/build.md. I'm unable to run any of the examples.

    Here are the major library versions:

    • Bazel version: 0.4.3
    • Lua: 5.1.5
    • Python: 2.7
    • OpenGL version: 4.5.0
    • GCC: 5.4.0

    Error while trying to run a random agent

    lab$ bazel run :game -- --level_script tests/demo_map --verbose_failures
    WARNING: Output base '/home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b' is on NFS. This may lead to surprising failures and undetermined behavior.
    INFO: Found 1 target...
    ERROR: /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/external/jpeg_archive/BUILD:74:1: Executing genrule @jpeg_archive//:configure failed: linux-sandbox failed: error executing command /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/execroot/lab/_bin/linux-sandbox ... (remaining 5 argument(s) skipped).
    src/main/tools/linux-sandbox-pid1.cc:398: "remount(NULL, /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/bazel-sandbox/905e0eef-0788-42e6-8852-7b444149d38c-13/tmp/home/ndg/projects, NULL, 2101281, NULL)": No such file or directory
    Target //:game failed to build
    Use --verbose_failures to see the command lines of failed build steps.
    INFO: Elapsed time: 2.569s, Critical Path: 1.20s
    ERROR: Build failed. Not running target.
    

    On trying to build the Python interface to DeepMind Lab with OpenGL

    lab$ bazel build :deepmind_lab.so --define headless=glx --verbose_failures
    WARNING: Output base '/home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b' is on NFS. This may lead to surprising failures and undetermined behavior.
    INFO: Found 1 target...
    ERROR: /home/ml/hsatij/code/libs/lab/BUILD:972:1: C++ compilation of rule '//:dmlablib' failed: linux-sandbox failed: error executing command 
      (cd /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/bazel-sandbox/ff30fcd2-9759-4ca4-8fa3-b82956431988-1/execroot/lab && \
      exec env - \
      /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/execroot/lab/_bin/linux-sandbox @/home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/bazel-sandbox/ff30fcd2-9759-4ca4-8fa3-b82956431988-1/linux-sandbox.params -- /usr/bin/gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -Wl,-z,-relro,-z,now -B/usr/bin -B/usr/bin -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-canonical-system-headers -fno-omit-frame-pointer '-std=c++0x' -MD -MF bazel-out/local-fastbuild/bin/_objs/dmlablib/public/dmlab_so_loader.pic.d '-frandom-seed=bazel-out/local-fastbuild/bin/_objs/dmlablib/public/dmlab_so_loader.pic.o' -fPIC -iquote . -iquote bazel-out/local-fastbuild/genfiles -iquote external/bazel_tools -iquote bazel-out/local-fastbuild/genfiles/external/bazel_tools -isystem external/bazel_tools/tools/cpp/gcc3 '-DDMLAB_SO_LOCATION="libdmlab.so"' -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c public/dmlab_so_loader.cc -o bazel-out/local-fastbuild/bin/_objs/dmlablib/public/dmlab_so_loader.pic.o).
    src/main/tools/linux-sandbox-pid1.cc:398: "remount(NULL, /home/ml/hsatij/.cache/bazel/_bazel_hsatij/2a466b12af1ad7b6dcfd974c8d06585b/bazel-sandbox/ff30fcd2-9759-4ca4-8fa3-b82956431988-1/tmp/home/ndg/projects, NULL, 2101281, NULL)": No such file or directory
    Target //:deepmind_lab.so failed to build
    INFO: Elapsed time: 2.294s, Critical Path: 1.11s
    

    I'm new to Bazel, and unfortunately the error logs are too cryptic for me. Any help to resolve this would be appreciated!

    solved 
    opened by hercky 30
  • How to use lab as a python module

    Hi,

    I built lab on Ubuntu 14.04. The Python module tests all pass. I'm a little confused about importing deepmind_lab and running experiments with it in Python. I apologize if this is due to my lack of general Python/Bazel knowledge. My understanding is that the process of experimentation with deepmind_lab is:

    1. Create a Python file experiment.py; import deepmind_lab and use it in the experiment.
    2. Add a py_binary entry named "experiment" in the BUILD file.
    3. Run bazel run :experiment.

    Is this correct? And is there any way to instead run experiments directly with python experiment.py?
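
    For reference, a sketch of what the BUILD entry in step 2 might look like, modelled on the repository's existing py_binary rules (the "experiment" target and file names are the hypothetical ones proposed above, not names that exist in the repository):

    py_binary(
        name = "experiment",
        srcs = ["experiment.py"],
        data = [":deepmind_lab.so"],
        main = "experiment.py",
    )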

    fixed-in-next-version
    opened by johnholl 22
  • Lab + python multiprocessing

    I have implemented A3C with multiprocessing (+ PyTorch) as opposed to using threads; however, bazel run seems to break silently and cleanly, without any visible trace. This is what I do:

    $ bazel run :struct_runner --define headless=false
    [...]
    $ echo $?  # this is the error code of the previous process
    8
    

    struct_runner.py initialises a lab environment, then creates a bunch of processes in which more envs are created. In particular, the silent crash happens when I create a process p and call p.start(); it also appears to be non-deterministic with respect to the number of processes I manage to spawn before bazel kills them and quits.

    I know that @miyosuda has implemented A3C using threads here; however, multiprocessing is supported very well by PyTorch, and it would be a shame to have to deal with thread management.

    opened by edran 15
  • Random agent compiling error

    Hey, on Ubuntu 18.04 with Bazel 0.20.0 and TensorFlow 1.2 (GPU), I am trying to run the random agent and am getting the following error. Can you please help me solve it? Thanks.

    bazel run :python_random_agent --define graphics=sdl -- \
               --length=10000 --width=640 --height=480
    

    Starting local Bazel server and connecting to it...
    INFO: Invocation ID: 69a115a9-45b9-4642-87c8-8fc2170651ac
    INFO: SHA256 (https://github.com/abseil/abseil-cpp/archive/master.zip) = d3bb4e5578f06ddf3e0e21def6aabf3b4ae81d68b909f06cfcb727734584cab3
    INFO: Analysed target //:python_random_agent (53 packages loaded, 3598 targets configured).
    INFO: Found 1 target...
    ERROR: /home/neuwong/lab/BUILD:790:1: C++ compilation of rule '//:game_lib_sdl' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -MD -MF bazel-out/k8-fastbuild/bin/_objs/game_lib_sdl/sdl_input.pic.d ... (remaining 103 argument(s) skipped)
    Use --sandbox_debug to see verbose messages from the sandbox
    engine/code/sdl/sdl_input.c:26:11: fatal error: SDL.h: No such file or directory
     #include <SDL.h>
              ^~~~~~~
    compilation terminated.
    Target //:python_random_agent failed to build
    Use --verbose_failures to see the command lines of failed build steps.
    INFO: Elapsed time: 33.266s, Critical Path: 7.03s
    INFO: 69 processes: 69 linux-sandbox.
    FAILED: Build did NOT complete successfully
    FAILED: Build did NOT complete successfully

    opened by behroozmrd47 13
  • Pip and Python3

    Informed by #32, #92 and #52, I tried to configure Bazel in a conda environment with Python 3. bazel build //:deepmind_lab.so appears to run successfully; however, bazel run //:python_random_agent is not able to import deepmind_lab:

    INFO: Running command line: bazel-out/k8-py3-fastbuild/bin/python_random_agent
    ImportError: numpy.core.multiarray failed to import
    Traceback (most recent call last):
      File "/home/florin/.cache/bazel/_bazel_florin/329274937dbc60c7c6c49b959689f873/execroot/org_deepmind_lab/bazel-out/k8-py3-fastbuild/bin/python_random_agent.runfiles/org_deepmind_lab/python/random_agent.py", line 26, in <module>
        import deepmind_lab
    ImportError: numpy.core.multiarray failed to import
    ERROR: Non-zero return code '1' from command: Process exited with status 1
    

    You can find the changes below. @tkoeppe, does this seem right? Also, are there any log files I can provide you with in order to help with the debugging?

    diff --git a/BUILD b/BUILD
    index 9e274c5..3972e07 100644
    --- a/BUILD
    +++ b/BUILD
    @@ -984,6 +984,7 @@ py_binary(
         data = [":deepmind_lab.so"],
         main = "python/random_agent.py",
         visibility = ["//python/tests:__subpackages__"],
    +    default_python_version = "PY3"
     )
     
     LOAD_TEST_SCRIPTS = [
    diff --git a/WORKSPACE b/WORKSPACE
    index fa8da47..967c712 100644
    --- a/WORKSPACE
    +++ b/WORKSPACE
    @@ -91,5 +91,5 @@ new_local_repository(
     new_local_repository(
         name = "python_system",
         build_file = "python.BUILD",
    -    path = "/usr",
    +    path = "/home/florin/Tools/miniconda3/envs/torch30",
     )
    diff --git a/python.BUILD b/python.BUILD
    index f0b3f9a..b6a6fa8 100644
    --- a/python.BUILD
    +++ b/python.BUILD
    @@ -5,7 +5,11 @@
     
     cc_library(
         name = "python",
    -    hdrs = glob(["include/python2.7/*.h"]),
    -    includes = ["include/python2.7"],
    +    hdrs = glob(["include/python3.6m/*.h",
    +                 "lib/python3.6/site-packages/numpy/core/include/**/*.h"
    +    ]),
    +    includes = ["include/python3.6m",
    +                "lib/python3.6/site-packages/numpy/core/include"
    +    ],
         visibility = ["//visibility:public"],
     )
    
    opened by floringogianu 13
  • Pip package not working with python 3

    Hi,

    I am having some problems when trying to build/install DeepMind Lab as a pip package with Python 3. Currently I am capable of generating the package .whl file (DeepMind_Lab-1.0-py3-none-any.whl) and installing it in a conda environment, but I am getting the following error when importing the deepmind_lab module:

    deepmind_lab.so: undefined symbol: PyCObject_Type
    

    Any suggestions about how to fix this?

    Thanks,

    opened by camigord 13
  • I meet the error when bazel build -c opt //:deepmind_lab.so

    bazel build -c opt //:deepmind_lab.so
    ERROR: /home/ubuntu/zz/projects/lab/WORKSPACE:146:1: //external:tree_archive: no such attribute 'repo_mapping' in 'http_archive' rule
    ERROR: Skipping '//:deepmind_lab.so': error loading package 'external': Package 'external' contains errors
    WARNING: Target pattern parsing failed.
    ERROR: error loading package 'external': Package 'external' contains errors

    opened by XqWang3 12
  • Sorry i don't know what i'm doing wrong

    [email protected]:~$ cd lab
    [email protected]:~/lab$ sudo bazel run :game -- --level_script=tests/empty_room_test --level_setting=logToStdErr=true --sandbox_debug
    Extracting Bazel installation...
    Starting local Bazel server and connecting to it...
    INFO: SHA256 (https://github.com/abseil/abseil-cpp/archive/master.zip) = f63fa171a79bfd38f995b899989e74144255fcea57ad74079792385841db64dd
    DEBUG: Rule 'com_google_absl' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "f63fa171a79bfd38f995b899989e74144255fcea57ad74079792385841db64dd"
    INFO: Analysed target //:game (51 packages loaded, 3679 targets configured).
    INFO: Found 1 target...
    ERROR: /home/user/.cache/bazel/_bazel_root/851aceb1df3dfff730804dc58954a97b/external/glib_archive/BUILD.bazel:1:1: Executing genrule @glib_archive//:gen_configure failed (Exit 1) bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped)
    Use --sandbox_debug to see verbose messages from the sandbox
    configure: error: *** You must have either have gettext support in your C library, or use the
    *** GNU gettext library. (http://www.gnu.org/software/gettext/gettext.html)
    Target //:game failed to build
    Use --verbose_failures to see the command lines of failed build steps.
    INFO: Elapsed time: 254.198s, Critical Path: 46.81s
    INFO: 154 processes: 154 processwrapper-sandbox.
    FAILED: Build did NOT complete successfully
    FAILED: Build did NOT complete successfully

    opened by ZombiePm 10
  • dm-lab in Keras

    Heya,

    I'm currently re-implementing the vector-based navigation architecture from DeepMind's Nature paper for TensorFlow 2. However, there seems to be an issue with the Python API's environment when used with the Keras model API. In particular, I do something like:

    ...
    observations = ['RGB', 'DEBUG.POS.ROT', 'DEBUG.POS.TRANS']
    env = deepmind_lab.Lab(level, observations,
                           config=lab_config, renderer='hardware')
    env.reset()
    ...

    def generate_batch(env, batch_size=10):
        while True:
            ...
            obs = env.observations()
            ...
            x, y = ...  # some computation
            env.reset()
            ...
            yield x, y

    model.fit_generator(
        generate_batch(env, batch_size=10),
        steps_per_epoch=1000,
        verbose=1,
        use_multiprocessing=False,
    )
    
    

    Depending on whether I use env.reset() or not, it sometimes raises an error (related to: https://github.com/deepmind/lab/issues/134). Even when it does not explicitly complain, it always stops within the first batch. Since TensorFlow 2 seems to convert the generate_batch function into some eager-execution/autograph construct, I am quite confused about whether the env object gets copied or whether something else nasty is going on under the hood. I hope someone has more of a clue than I do :)

    The agent I am using performs random actions (so it should not be the same state sequence every time, especially because the random seed is not statically set).

    opened by uahic 9
  • Failed to find function dmlab_connect in library!

    Hi,

    After running bazel build -c opt //:deepmind_lab.so, I run bazel run :python_random_agent --define headless=osmesa and get the following output:

    Failed to find function dmlab_connect in library!
    Traceback (most recent call last):
      File "/network/home/alversaf/.cache/bazel/_bazel_alversaf/c78c672c9c47c1e9c2d9a08f489f56c4/execroot/org_deepmind_lab/bazel-out/k8-fastbuild/bin/python_random_agent.runfiles/org_deepmind_lab/python/random_agent.py", line 206, in <module>
        args.record, args.demo, args.demofiles, args.video)
      File "/network/home/alversaf/.cache/bazel/_bazel_alversaf/c78c672c9c47c1e9c2d9a08f489f56c4/execroot/org_deepmind_lab/bazel-out/k8-fastbuild/bin/python_random_agent.runfiles/org_deepmind_lab/python/random_agent.py", line 155, in run
        env = deepmind_lab.Lab(level, ['RGB_INTERLEAVED'], config=config)
    RuntimeError: Failed to connect RL API
    

    What should I do? Thanks! (By the way, I am running bazel through a Singularity container.)

    solved 
    opened by alversafa 9
  • Can't build to simple gym-like interface

    Walking through the examples, I haven't understood how I can build the code and, after the build, have an easy, gym-like environment I could instantiate / reset / step / etc. Everything I managed was building and starting things directly through bazel, similar to python_random_agent, but I don't see how to call step/reset outside of bazel. Am I missing something, or is there no description of how to build and run such an environment?

    opened by bjg2 9
  • Complete installation script for Ubuntu 20.04

    Not an issue, just thought I'd share a full installation script for the DMLab Python package, because it took a while to put all the pieces together and it might be helpful for others. Tested on Ubuntu 20.04; it might also work on newer versions:

    #!/bin/sh
    set -eu
    
    # Dependencies
    apt-get update && apt-get install -y \
        build-essential curl freeglut3 gettext git libffi-dev libglu1-mesa \
        libglu1-mesa-dev libjpeg-dev liblua5.1-0-dev libosmesa6-dev \
        libsdl2-dev lua5.1 pkg-config python-setuptools python3-dev \
        software-properties-common unzip zip zlib1g-dev g++
    pip3 install numpy
    
    # Bazel
    apt-get install -y apt-transport-https curl gnupg
    curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor > bazel.gpg
    mv bazel.gpg /etc/apt/trusted.gpg.d/
    echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list
    apt-get update && apt-get install -y bazel
    
    # Build
    git clone https://github.com/deepmind/lab.git
    cd lab
    echo 'build --cxxopt=-std=c++17' > .bazelrc
    bazel build -c opt //python/pip_package:build_pip_package
    ./bazel-bin/python/pip_package/build_pip_package /tmp/dmlab_pkg
    pip3 install --force-reinstall /tmp/dmlab_pkg/deepmind_lab-*.whl
    cd ..
    rm -rf lab
    
    # Stimuli
    mkdir dmlab_data
    cd dmlab_data
    pip3 install Pillow
    curl https://bradylab.ucsd.edu/stimuli/ObjectsAll.zip -o ObjectsAll.zip
    unzip ObjectsAll.zip
    cd OBJECTSALL
    python3 << EOM
    import os
    from PIL import Image
    files = [f for f in os.listdir('.') if f.lower().endswith('jpg')]
    for i, file in enumerate(sorted(files)):
      print(file)
      im = Image.open(file)
      im.save('../%04d.png' % (i+1))
    EOM
    cd ..
    rm -rf __MACOSX OBJECTSALL ObjectsAll.zip
    
    apt-get clean
    
    opened by danijar 3
  • Change the speed of the agent dynamically

    Hi, I'm wondering if it is possible to modify the speed at which the agent walks/runs dynamically/programmatically? For example, speeding it up or down according to a function while the agent is interacting with the environment.

    Thank you very much for your support,

    opened by kevinNejad 0
  • Disable Velocity

    Is it possible to disable the velocity kinematics/dynamics of the agent? In other words, if a "forward" action is taken, can the agent move forward by a fixed amount and stop immediately, with no residual velocity carrying it forward?

    opened by Alvinosaur 1
  • Access camera params (extrinsics, intrinsics, near/far plane) for pointcloud calculation

    Hello, is there a way to get access either to a point-cloud observation or to the camera params (extrinsics, intrinsics, near/far plane) so that I can calculate the point cloud from the z-buffer observation? Thanks!

    opened by stepjam 1
Releases
  • release-2020-12-07 (Dec 7, 2020)

    New Levels:

    1. Psychlab.

      1. contributed/psychlab/memory_suite_01/explore_goal_locations_extrapolate
      2. contributed/psychlab/memory_suite_01/explore_goal_locations_holdout_extrapolate
      3. contributed/psychlab/memory_suite_01/explore_goal_locations_holdout_interpolate
      4. contributed/psychlab/memory_suite_01/explore_goal_locations_holdout_large
      5. contributed/psychlab/memory_suite_01/explore_goal_locations_holdout_small
      6. contributed/psychlab/memory_suite_01/explore_goal_locations_interpolate
      7. contributed/psychlab/memory_suite_01/explore_goal_locations_train_large
      8. contributed/psychlab/memory_suite_01/explore_goal_locations_train_small
    2. Language binding tasks.

      1. contributed/fast_mapping/fast_mapping
      2. contributed/fast_mapping/slow_mapping

    New Features:

    1. A property system has been added that allows dynamic querying and modifying of environment state. Level scripts can register and consume custom properties.
    2. A new Python module, dmenv_module, is provided that exposes the DeepMind dm_env API.

    Minor Improvements:

    1. Quake console commands can now be issued via a write-only property.
    2. New numeric "accumulate" operations for TensorView and the Lua Tensor types: sum, product, sum-of-squares, and dot product of two tensors.

    EnvCApi Changes:

    1. "Properties" have been added to the EnvCApi. Properties may be queried, set, and enumerated.
    2. The new API version is 1.4 (up from 1.3).
    3. The EnvCApi function fps is now deprecated; environments should instead use the new property system to communicate this information.

    Bug Fixes:

    1. Fix observation 'VEL.ROT' to allow non-zero values when combined with pixel observations. Previously, the presence of pixel observations caused the angular velocity information to be lost due to a logic error.
  • release-2019-10-07 (Oct 7, 2019)

    New Levels:

    1. Psychlab.

      1. contributed/psychlab/cued_temporal_production
      2. contributed/psychlab/memory_suite_01/arbitrary_visuomotor_mapping_train
      3. contributed/psychlab/memory_suite_01/arbitrary_visuomotor_mapping_holdout_interpolate
      4. contributed/psychlab/memory_suite_01/arbitrary_visuomotor_mapping_holdout_extrapolate
      5. contributed/psychlab/memory_suite_01/change_detection_train
      6. contributed/psychlab/memory_suite_01/change_detection_holdout_interpolate
      7. contributed/psychlab/memory_suite_01/change_detection_holdout_extrapolate
      8. contributed/psychlab/memory_suite_01/continuous_recognition_train
      9. contributed/psychlab/memory_suite_01/continuous_recognition_holdout_interpolate
      10. contributed/psychlab/memory_suite_01/continuous_recognition_holdout_extrapolate
      11. contributed/psychlab/memory_suite_01/what_then_where_train
      12. contributed/psychlab/memory_suite_01/what_then_where_holdout_interpolate
      13. contributed/psychlab/ready_set_go
      14. contributed/psychlab/temporal_bisection
      15. contributed/psychlab/temporal_discrimination
      16. contributed/psychlab/visuospatial_suite/memory_guided_saccade
      17. contributed/psychlab/visuospatial_suite/odd_one_out
      18. contributed/psychlab/visuospatial_suite/pathfinder
      19. contributed/psychlab/visuospatial_suite/pursuit
      20. contributed/psychlab/visuospatial_suite/visual_match
      21. contributed/psychlab/visuospatial_suite/visually_guided_antisaccade
      22. contributed/psychlab/visuospatial_suite/visually_guided_prosaccade

    Minor Improvements:

    1. The game demo executable can now print observations at each step.

    EnvCApi Changes:

    1. The meaning of major and minor versions and the resulting notions of stability are clarified. The new API version is 1.3 (up from 1.2).
    2. The EnvCApi act function is now deprecated in favour of two finer-grained functions: a call to act should be replaced by a call to act_discrete to set discrete actions, followed by an optional call to act_continuous to set continuous actions. (DeepMind Lab does not use continuous actions.)
    3. New support for "text actions", which can be set with the new act_text API function. (DeepMind Lab does not use text actions.)

    Bug Fixes:

    1. Observation 'DEBUG.CAMERA_INTERLEAVED.TOP_DOWN' is now correct for levels dmlab30/explore_object_rewards_{few,many}.

      An error is now raised if there is not enough space to place every possible room (regardless of whether the random generation actually produces a room of excessive size) and if a non-zero number of rooms was requested.

      The affected levels have been updated and will generate layouts similar to before, but the whole maze is offset by 100 units, and object placements will change.

    2. Fix top-down camera for language levels.

    3. Correct typo in bot Leonis, skill level 1, based on OpenArena's bot code gargoyle_c.c.

    4. Tensor scalar operations using arrays now work similarly to the way they do with single values.

  • release-2019-02-04 (Feb 4, 2019)

    New Levels:

    1. Psychlab.

      1. contributed/psychlab/harlow

    Minor Improvements:

    1. Improve documentation of how to configure non-hermetic dependencies (Lua, Python, NumPy).
    2. Add 'allowHoldOutLevels' setting to allow running of levels that should not be trained on, but held out for evaluation.
    3. Add logging library 'common.log', which provides the ability to control which log messages are emitted via the setting 'logLevel'.
    4. Update the ioq3 upstream code to the latest state.
    5. Lua 5.1 is now downloaded and built from source, and is thus no longer a required local dependency.
    6. A minimal version of the "realpath" utility is now bundled with the code, and thus "realpath" is no longer a required local dependency.

    Bug Fixes:

    1. Prevent missing sounds from causing clients to disconnect.
    2. Fix a bug in the call of the theme callback 'placeFloorModels', which had caused an "'index' is missing" error during compilation of text levels with texture sets that use floor models, such as MINESWEEPER, GO, and PACMAN.
    3. Fix bug where levels 'keys_doors_medium', 'keys_doors_random' and 'rooms_keys_doors_puzzle' would not accept the common 'logLevel' setting.
    4. Expose a 'demofiles' command line flag for the Python random agent, without which the agent was not able to record or play back demos.
    5. Fix a memory deallocation order error introduced by an earlier commit.
  • release-2018-06-20 (Jun 20, 2018)

    New Levels:

    1. Psychlab.

      1. contributed/psychlab/glass_pattern_detection
      2. contributed/psychlab/landoltC_identification
      3. contributed/psychlab/motion_discrimination{,_easy}
      4. contributed/psychlab/multiple_object_tracking{,_easy}
      5. contributed/psychlab/odd_one_out

    Bug Fixes:

    1. Setting the Python level cache to None now means the same as not setting it at all.
    2. Change Python module initialization in Python-3 mode to make PIP packages work in Python 3.

    Minor Improvements:

    1. Add support for absl::variant to lua::Push and lua::Read.
    2. The demo :game has a new flag --start_index to start at an episode index other than 0.
    3. Add a console command dm_pickup to pick up an item identified by its id.
    4. More Python demos and tests now work with Python 3.
    5. Add a shader for rendering decals with transparency.
  • release-2018-05-15 (May 15, 2018)

    New Levels:

    1. DMLab-30.

      1. contributed/dmlab30/psychlab_arbitrary_visuomotor_mapping
      2. contributed/dmlab30/psychlab_continuous_recognition
    2. Psychlab.

      1. contributed/psychlab/arbitrary_visuomotor_mapping
      2. contributed/psychlab/continuous_recognition

    New Features:

    1. Support for level caching for improved performance in the Python module.
    2. Add the ability to spawn pickups dynamically at arbitrary locations.
    3. Add implementations to read datasets including Cifar10 and Stimuli.
    4. Add the ability to specify custom actions via 'customDiscreteActionSpec' and 'customDiscreteAction' callbacks.

    Bug Fixes:

    1. Fix playerId and otherPlayerId out by one errors in 'game_rewards.lua'.
    2. Require playerId passed to game:addScore to be one indexed instead of zero indexed and allow game:addScore to be used without a playerId.
    3. game:renderCustomView now renders the view with top-left as the origin. The previous behaviour can be achieved by calling reverse(1) on the returned tensor.
    4. Fix a bug in image.scale whereby the offset into the data was erroneously ignored.
    5. Fix a typo in a require statement in visual_search_factory.lua.
    6. Fix a few erroneous dependencies on Lua dictionary iteration order.
    7. game:AddScore now works even on the final frame of an episode.

    Minor Improvements:

    1. Moved .map files into assets/maps/src and .bsp files into assets/maps/built. Added further pre-built maps, which removes the need for the expensive :map_assets build step.
    2. Allow game to be rendered with top-left as origin instead of bottom-left.
    3. Add 'mixerSeed' setting to change behaviour of all random number generators.
    4. Support for BGR_INTERLEAVED and BGRD_INTERLEAVED observation formats.
    5. Add a Lua API to load PNGs from file contents.
    6. Add 'eyePos' to playerInfo() for a more accurate eye position of player. Used in place of player pos + height.
    7. Add support for absl::string_view to lua::Push and lua::Read.
    8. Allow player model to be overridden via 'playerModel' callback.
    9. Add game:console command to issue Quake 3 console commands directly.
    10. Add clamp to tensor operations.
    11. Add new callback api:newClientInfo, allowing each client to intercept when players are loading.
    12. Skymaze level generation is now restricted to produce only 100000 distinct levels. This allows for caching to avoid expensive recompilations.
    13. Add cvars 'cg_drawScriptRectanglesAlways' and 'cg_drawScriptTextAlways' to enable script rendering when reducedUI or minimalUI is enabled.
    14. All pickup types can now choose their movement type separately, and in particular, all pickup types can be made static. Two separate table entries are now specified for an item, 'typeTag' and 'moveType'.

    Deprecated Features:

    1. Observation format names RGB_INTERLEAVED and RGBD_INTERLEAVED replace RGB_INTERLACED and RGBD_INTERLACED, respectively. The old format names are deprecated and will be removed in a future release.
    2. The pickup item's tag member is now called moveType. The old name is deprecated and will be removed in a future release.
  • release-2018-02-07 (Feb 7, 2018)

    New Levels:

    1. DMLab-30.

      1. contributed/dmlab30/rooms_collect_good_objects_{test,train}
      2. contributed/dmlab30/rooms_exploit_deferred_effects_{test,train}
      3. contributed/dmlab30/rooms_select_nonmatching_object
      4. contributed/dmlab30/rooms_watermaze
      5. contributed/dmlab30/rooms_keys_doors_puzzle
      6. contributed/dmlab30/language_select_described_object
      7. contributed/dmlab30/language_select_located_object
      8. contributed/dmlab30/language_execute_random_task
      9. contributed/dmlab30/language_answer_quantitative_question
      10. contributed/dmlab30/lasertag_one_opponent_small
      11. contributed/dmlab30/lasertag_three_opponents_small
      12. contributed/dmlab30/lasertag_one_opponent_large
      13. contributed/dmlab30/lasertag_three_opponents_large
      14. contributed/dmlab30/natlab_fixed_large_map
      15. contributed/dmlab30/natlab_varying_map_regrowth
      16. contributed/dmlab30/natlab_varying_map_randomized
      17. contributed/dmlab30/skymaze_irreversible_path_hard
      18. contributed/dmlab30/skymaze_irreversible_path_varied
      19. contributed/dmlab30/psychlab_sequential_comparison
      20. contributed/dmlab30/psychlab_visual_search
      21. contributed/dmlab30/explore_object_locations_small
      22. contributed/dmlab30/explore_object_locations_large
      23. contributed/dmlab30/explore_obstructed_goals_small
      24. contributed/dmlab30/explore_obstructed_goals_large
      25. contributed/dmlab30/explore_goal_locations_small
      26. contributed/dmlab30/explore_goal_locations_large
      27. contributed/dmlab30/explore_object_rewards_few
      28. contributed/dmlab30/explore_object_rewards_many

    New Features:

    1. Basic support for demo recording and playback.

    Minor Improvements:

    1. Add a mechanism to build DeepMind Lab as a PIP package.
    2. Extend basic testing to all levels under game_scripts/levels.
    3. Add settings minimalUI and reducedUI to avoid rendering parts of the HUD.
    4. Add teleported flag to game:playerInfo() to tell whether a player has teleported that frame.
    5. Add Lua functions countEntities and countVariations to the maze generation API to count the number of occurrences of a specific entity or variation, respectively.

    Bug Fixes:

    1. Fix out-of-bounds access in Lua 'image' library.
    2. Fix off-by-one error in renderergl1 grid mesh rendering.
  • release-2018-01-26 (Jan 26, 2018)

    New Levels:

    1. Psychlab, a platform for implementing classical experimental paradigms from cognitive psychology.

      1. contributed/psychlab/sequential_comparison
      2. contributed/psychlab/visual_search

    New Features:

    1. Extend functionality of the built-in tensor Lua library.
    2. Add built-in image Lua library for loading and scaling PNGs.
    3. Add error handling to the env_c_api (version 1.1).
    4. Add ability to create events from Lua scripts.
    5. Add ability to retrieve game entity from Lua scripts.
    6. Add ability to create pickup models during level load.
    7. Add ability to update textures from script after the level has loaded.
    8. Add Lua customisable themes. Note: This change renames helpers in maze_generation to be in lowerCamelCase (e.g. MazeGeneration -> mazeGeneration).
    9. The directory game_scripts has moved out of the assets directory, and level scripts now live separately from the library code in the levels subdirectory.

    Minor Improvements:

    1. Remove unnecessary dependency of map assets on Lua scripts, preventing time-consuming rebuilding of maps when scripts are modified.
    2. Add ability to disable bobbing of reward and goal pickups.
    3. The setting controls (with values internal, external) has been renamed to nativeApp (with values true, false, respectively). When set to true, programs linked against game_lib_sdl will use the native SDL input devices.
    4. Change LuaSnippetEmitter methods to use table call conventions.
    5. Add config variable for monochromatic lightmaps ('r_monolightmaps'). Enabled by default.
    6. Add config variable to limit texture size ('r_textureMaxSize').
    7. api:modifyTexture must now return whether the texture was modified.
    8. Add ability to adjust rewards.
    9. Add ability to raycast between different points on the map.
    10. Add ability to test whether a view vector is within an angle range within an oriented view frame.

    Bug Fixes:

    1. Increase current score storage from short to long.
    2. Fix ramp jump velocity in level lt_space_bounce_hard.
    3. Fix Lua function 'addScore' from module 'dmlab.system.game' to allow negative scores to be added to a player.
    4. Remove some undefined behaviour in the engine.
    5. Reduce inaccuracies related to angle conversion and normalization.
    6. Behavior of team spawn points now matches that of player spawn points. 'randomAngleRange' spawnVar must be set to 0 to match previous behavior.
  • release-2016-12-06 (Dec 8, 2016)
