MetaDrive: Composing Diverse Scenarios for Generalizable Reinforcement Learning

Overview


MetaDrive is a driving simulator with the following key features:

  • Compositional: It supports generating infinite scenes with various road maps and traffic settings for research on generalizable RL.
  • Lightweight: It is easy to install and run, reaching up to 300 FPS on a standard PC.
  • Realistic: Accurate physics simulation and multiple sensory inputs, including Lidar, RGB images, top-down semantic maps, and first-person view images.

🛠 Quick Start

Install MetaDrive via:

git clone https://github.com/decisionforce/metadrive.git
cd metadrive
pip install -e .

or

pip install metadrive-simulator

Note that the program is tested on both Linux and Windows. Some control and display issues on macOS remain to be solved.

You can verify the installation of MetaDrive via running the testing script:

# Run this in a folder that does not contain a sub-folder named metadrive
python -m metadrive.examples.profile_metadrive

Please do not run the above command in a folder that contains a sub-folder named ./metadrive.

🚕 Examples

Run the following command to launch a simple driving scenario with auto-drive mode on. Press W, A, S, D to drive the vehicle manually.

python -m metadrive.examples.drive_in_single_agent_env

Run the following command to launch a safe driving scenario, which includes more complex obstacles and yields a cost signal in addition to the reward.

python -m metadrive.examples.drive_in_safe_metadrive_env

You can also launch an instance of a multi-agent scenario as follows:

python -m metadrive.examples.drive_in_multi_agent_env --env roundabout

or launch it and render in the pygame frontend:

python -m metadrive.examples.drive_in_multi_agent_env --pygame_render --env roundabout

The --env argument can be one of:

  • roundabout (default)
  • intersection
  • tollgate
  • bottleneck
  • parkinglot
  • pgmap

Run the procedural generation example to generate a new map:

python -m metadrive.examples.procedural_generation

Note that the above scripts cannot be run on a headless machine. Please refer to the installation guide in the documentation for more information on running MetaDrive on a headless machine.

Run the following command to draw the generated maps from procedural generation:

python -m metadrive.examples.draw_maps

To build the RL environment in a Python script, you can simply follow the OpenAI Gym format:

import metadrive  # Import this package to register the environment!
import gym

env = gym.make("MetaDrive-v0", config=dict(use_render=True))
# env = metadrive.MetaDriveEnv(config=dict(environment_num=100))  # Or build environment from class
env.reset()
for i in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())  # Use random policy
    env.render()
    if done:
        env.reset()
env.close()
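
The environment can also be configured when built directly from the class. Below is a minimal sketch: environment_num and use_render appear in the snippet above, while start_seed and traffic_density are assumed config keys, so check the documentation for the options supported by your MetaDrive version.

from metadrive import MetaDriveEnv

env = MetaDriveEnv(config=dict(
    environment_num=100,  # number of procedurally generated maps
    start_seed=0,         # assumed key: first seed of the generated-map range
    traffic_density=0.1,  # assumed key: density of traffic vehicles
    use_render=False,     # headless, e.g. for training
))
obs = env.reset()
print(env.observation_space, env.action_space)
env.close()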

🏫 Documentation

Find more details in the MetaDrive documentation.

📎 References

Work in progress!


Comments
  • Reproducibility Problem

    Reproducibility Problem

    Hello,

I am trying to create custom scenarios. For that, I created a custom map similar to vis_a_small_town.py, and I am using the drive_in_multi_agent_env.py example. The environment is defined as follows:

    env = envs[env_cls_name]({
        "use_render": True,  # if not args.pygame_render else False,
        "manual_control": True,
        "crash_done": False,
        # "agent_policy": ManualControllableIDMPolicy,
        "num_agents": total_agent_number,
        # "prefer_track_agent": "agent3",
        "show_fps": True,
        "vehicle_config": {
            "lidar": {"num_others": total_agent_number},
            "show_lidar": False,
        },
        "target_vehicle_configs": {
            "agent{}".format(i): {
                # "spawn_lateral": i * 2,
                "spawn_longitude": i * 10,
                # "spawn_lane_index": 0,
                "vehicle_model": vehicle_model_list[i],
                # "max_engine_force": 1,
                "max_speed": 100,
            }
            for i in range(5)
        },
    })
    

I have a list of steering and throttle_brake values. The scenario consists of almost 1600 steps. I assign these values according to the step number in env.step:

    o, r, d, info = env.step({
        'agent0': [agent0.steering, agent0.pedal],
        'agent1': [agent1.steering, agent1.pedal],
        'agent2': [agent2.steering, agent2.pedal],
        'agent3': [agent3.steering, agent3.pedal],
        'agent4': [agent4.steering, agent4.pedal],
        'agent5': [agent5.steering, agent5.pedal],
        'agent6': [agent6.steering, agent6.pedal],
        'agent7': [agent7.steering, agent7.pedal],
        'agent8': [agent8.steering, agent8.pedal],
        'agent9': [agent9.steering, agent9.pedal],
    })
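
    The same action dictionary can also be built programmatically; here is a sketch, assuming the ten controller objects above are collected in a hypothetical list agents so that agents[i] drives "agent{i}":

    actions = {
        "agent{}".format(i): [agents[i].steering, agents[i].pedal]
        for i in range(10)
    }
    o, r, d, info = env.step(actions)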
    

At the end of the command list, the loop counter is reset to zero and the commands are reused. Also, the vehicles' positions, speeds, and headings are reset to the initial values stored in a dictionary:

    def initilize_vehicles(env):
        global total_agent_number

        for i in range(total_agent_number):
            agent_str = "agent" + str(i)
            env.vehicles[agent_str].set_heading_theta(vehicles_initial_values[agent_str]['initial_heading_theta'])
            env.vehicles[agent_str].set_position([
                vehicles_initial_values[agent_str]['initial_position_x'],
                vehicles_initial_values[agent_str]['initial_position_y'],
            ])  # x, y relative to the first block of the map
            env.vehicles[agent_str].set_velocity(
                env.vehicles[agent_str].velocity_direction,
                vehicles_initial_values[agent_str]['initial_velocity'],
            )
    

I want to reproduce the scenario and test my main algorithm. However, the problem is that the vehicles do not act the same way in every run of the scenario. I checked my vehicle commands using:

    env.vehicles["agent0"].steering,env.vehicles["agent0"].throttle_brake

The vehicle commands are the same in every repetition of the scenario.

When I don't use a loop and start MetaDrive from the terminal, I mostly see the same actions from the cars; I tested almost 10 times. But in the loop case, the cars start to act differently after the first loop.

Reproducibility is a huge concern for me. Is it something about the physics engine? Are there any configuration parameters for the engine?

    Thanks!!
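
    For reference, here is a minimal determinism-check sketch (single-agent for simplicity). The environment_num and start_seed keys pin the generated map; the force_seed argument to reset() is an assumption about this MetaDrive version's API, so please check the docs for your release:

    import numpy as np
    from metadrive import MetaDriveEnv

    env = MetaDriveEnv(config=dict(environment_num=1, start_seed=0, use_render=False))
    runs = []
    for _ in range(2):
        env.reset(force_seed=0)  # assumed API: pin the same map/seed every run
        positions = []
        for _ in range(100):
            obs, r, d, info = env.step([0.0, 0.5])  # fixed action sequence
            positions.append(np.array(env.vehicle.position))
        runs.append(positions)
    env.close()
    # With deterministic physics, both runs should match step for step.
    print(all(np.allclose(a, b) for a, b in zip(*runs)))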

    opened by BedirhanKeskin 9
  • Add more description for Waymo dataset

    Add more description for Waymo dataset

    What changes do you make in this PR?

    • Please describe why you create this PR

    Checklist

    • [ ] I have merged the latest main branch into current branch.
    • [ ] I have run bash scripts/format.sh before merging.
    • Please use "squash and merge" mode.
    opened by pengzhenghao 6
  • Constant FPS mode

    Constant FPS mode

    Is there a way to set constant fps mode? I tried env.engine.force_fps.toggle(): then env.engine.force_fps.fps is showing 50 but visualisation is showing 10-16 fps in the top right corner. Is there any other way? Thanks in advance!

    opened by bbenja 6
  • What is neighbours_distance ?

    What is neighbours_distance ?

Hello, what is neighbours_distance, and how does it differ from the distance defined inside the Lidar config? Both are in MULTI_AGENT_METADRIVE_DEFAULT_CONFIG. I guess the unit is meters?

    opened by BedirhanKeskin 5
  • I encountered an error at an unknown location during runtime. Hello

    I encountered an error at an unknown location during runtime. Hello

    Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0']. Known pipe types: wglGraphicsPipe (all display modules loaded.)

    opened by shushushulian 4
  • about panda3d

    about panda3d

When I run python -m metadrive.examples.drive_in_safe_metadrive_env with use_render=True, the output is: Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0']. Known pipe types: glxGraphicsPipe (1 aux display modules not yet loaded.)

    opened by benicioolee 4
  • RGB Camera returns time-buffered grayscale images

    RGB Camera returns time-buffered grayscale images

    Hi, I am running a vanilla MetaDriveEnv with the rgb camera sensor.

    veh_config = dict(
        image_source="rgb_camera",
        rgb_camera=(IMG_DIM, IMG_DIM))
    

I wanted to see the images the sensor was producing, so I saved a few of them with PIL:

    import numpy as np
    from PIL import Image

    action = np.array([0, 0])
    obs, reward, done, info = env.step(action)
    img = Image.fromarray(np.array(obs['image'] * 255, np.uint8))  # scale [0, 1] floats to uint8
    img.save("test.jpeg")
    

I noticed that the images all looked grayscale, and upon further inspection I found the following behavior. Suppose we want (N, N) images, which should be represented as arrays of shape (N, N, 3):

    Step 0: image[:,:,0] = zeros(N,N); image[:,:,1] = zeros(N,N); image[:,:,2] = zeros(N,N)
    Step 1: image[:,:,0] = zeros(N,N); image[:,:,1] = zeros(N,N); image[:,:,2] = m1
    Step 2: image[:,:,0] = zeros(N,N); image[:,:,1] = m1; image[:,:,2] = m2
    Step 3: image[:,:,0] = m1; image[:,:,1] = m2; image[:,:,2] = m3

    where m1, m2, m3 are (N,N) matrices.

So, the images are in reality displaying 3 different timesteps, with the color channels carrying the time info (R = t-2, G = t-1, B = t). That is why the images look mostly gray, since the values are identical almost everywhere, except where there is some movement (contours), where the lines look colorful and strange.
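
    Given the behavior described above (R = t-2, G = t-1, B = t), the most recent frame can be pulled out of the stack as a single grayscale image. A sketch, assuming obs['image'] is a float array in [0, 1] with shape (N, N, 3):

    import numpy as np
    from PIL import Image

    frame_t = obs['image'][:, :, -1]  # latest timestep only (the B channel)
    img = Image.fromarray((frame_t * 255).astype(np.uint8), mode="L")
    img.save("latest_frame.png")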

    Apologies if this is expected behavior, and I just had some configuration incorrect.


    opened by EdAlexAguilar 4
  • Fix close and reset issue

    Fix close and reset issue

    What changes do you make in this PR?

    • Please describe why you create this PR

    close #191

    Checklist

    • [x] I have merged the latest main branch into current branch.
    • [x] I have run bash scripts/format.sh before merging.
    • Please use "squash and merge" mode.
    opened by pengzhenghao 4
  • Selection of parameter in Rllib training for SAC agent in MetaDriveEnv and SafeMetaDriveEnv

    Selection of parameter in Rllib training for SAC agent in MetaDriveEnv and SafeMetaDriveEnv

What is the proper buffer size / batch size / entropy coefficient for SAC to reproduce the results? I find it hard to reproduce the results in SafeMetaDriveEnv. In https://arxiv.org/pdf/2109.12674.pdf, does the reported success rate of SAC in Table 1 refer to the training success rate (and no collision, i.e. safe_rl_env=True)?

    opened by HenryLHH 4
  • Suggestion to run multiple instance in parallel?

    Suggestion to run multiple instance in parallel?

    First of all I would like to express my gratitude on this great project. I really like the feature-rich and lightweight nature of MetaDrive as a driving simulator for reinforcement learning.

I am wondering what the recommended way is to run multiple MetaDrive instances in parallel (each with a single ego-car agent)? This seems to be a common use case for reinforcement learning training. I am currently running a batch of MetaDrive simulators, each wrapped in its own process, which does seem to carry overhead in extra resources and communication/synchronization.

Another problem I encountered when running multiple instances (say, 60 instances on a single machine), each in its own process, is that I get a lot of warnings like this:

    ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Connection terminated
    (the same warning repeated many times)

I guess this has something to do with audio. It happens even though I am running the TopDown environment, which should not involve sound. Do you see these warnings when running multiple instances as well?

    Also, is there a plan to have a vectorized batch version?

    Thanks!
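
    One standard pattern is gym's vector API, which wraps each instance in its own subprocess, essentially formalizing the setup described above. A sketch (whether MetaDrive pickles cleanly under AsyncVectorEnv is an assumption to verify for your version):

    import gym
    from gym.vector import AsyncVectorEnv
    import metadrive  # noqa: F401 -- registers the MetaDrive-v0 environments

    def make_env():
        return gym.make("MetaDrive-v0", config=dict(use_render=False))

    if __name__ == "__main__":
        vec_env = AsyncVectorEnv([make_env for _ in range(4)])
        obs = vec_env.reset()
        # Batched step: one action per sub-environment
        obs, rewards, dones, infos = vec_env.step(vec_env.action_space.sample())
        vec_env.close()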

    opened by breakds 4
  • Rendering FPS of example script is too low

    Rendering FPS of example script is too low

When I run python -m metadrive.examples.drive_in_single_agent_env, the rendering is only about 4 FPS. I checked with the nvidia-smi command and found that my 2060 GPU was not being used.

I can also see some warnings: WARNING:root: It seems you don't install our cython utilities yet! Please reinstall MetaDrive via .........

    opened by feidieufo 4
  • Errors when running metadrive.tests.scripts.generate_video_for_image_obs

    Errors when running metadrive.tests.scripts.generate_video_for_image_obs

    In the metadrive directory, I ran python -m metadrive.tests.scripts.generate_video_for_image_obs and it reported the error below:

    Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0'].
    :display(warning): Unable to load libpandagles2.so: No error.
    Known pipe types: (all display modules loaded.)
    Traceback (most recent call last):
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/queenie/Documents/metadrive/metadrive/tests/scripts/generate_video_for_image_obs.py", line 157, in <module>
        env.reset()
      File "/Users/queenie/Documents/metadrive/metadrive/envs/base_env.py", line 333, in reset
        self.lazy_init()  # it only works the first time when reset() is called to avoid the error when render
      File "/Users/queenie/Documents/metadrive/metadrive/envs/base_env.py", line 234, in lazy_init
        self.engine = initialize_engine(self.config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/engine_utils.py", line 11, in initialize_engine
        cls.singleton = cls(env_global_config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/base_engine.py", line 28, in __init__
        EngineCore.__init__(self, global_config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/core/engine_core.py", line 135, in __init__
        super(EngineCore, self).__init__(windowType=self.mode)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 339, in __init__
        self.openDefaultWindow(startDirect=False, props=props)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 1021, in openDefaultWindow
        self.openMainWindow(*args, **kw)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 1056, in openMainWindow
        self.openWindow(*args, **kw)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 766, in openWindow
        win = func()
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 752, in <lambda>
        callbackWindowDict=callbackWindowDict)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 818, in _doOpenWindow
        self.makeDefaultPipe()
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 648, in makeDefaultPipe
        "No graphics pipe is available!\n"
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/directnotify/Notifier.py", line 130, in error
        raise exception(errorString)
    Exception: No graphics pipe is available!
    Your Config.prc file must name at least one valid panda display library via load-display or aux-display.

    opened by YouSonicAI 2
  • opencv-python-headless in requirements seems to create conflict?

    opencv-python-headless in requirements seems to create conflict?

Sometimes it has a different version from opencv-python, which can cause issues. It is only used in top-down rendering. Can we change this dependency to opencv-python?

    opened by pengzhenghao 0