A minimalist environment for decision-making in autonomous driving

Overview

highway-env


A collection of environments for autonomous driving and tactical decision-making tasks


An episode of one of the environments available in highway-env.

Try it on Google Colab!

The environments

Highway

env = gym.make("highway-v0")

In this task, the ego-vehicle is driving on a multilane highway populated with other vehicles. The agent's objective is to reach a high speed while avoiding collisions with neighbouring vehicles. Driving on the right side of the road is also rewarded.


The highway-v0 environment.

A faster variant, highway-fast-v0, is also available, with degraded simulation accuracy to speed up large-scale training.
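
As a quick sketch of how a task can be tuned before training (the option names below are assumptions for illustration; the documentation lists the authoritative keys), the environment exposes a configuration dictionary that can be modified before reset:

import gym
import highway_env

env = gym.make("highway-v0")
# Assumed option names -- check the documentation before relying on them.
env.configure({
    "lanes_count": 4,       # number of highway lanes
    "vehicles_count": 50,   # number of surrounding vehicles
    "duration": 40,         # episode duration, in steps
})
obs = env.reset()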

Merge

env = gym.make("merge-v0")

In this task, the ego-vehicle starts on a main highway but soon approaches a road junction with incoming vehicles on the access ramp. The agent's objective is now to maintain a high speed while making room for the vehicles so that they can safely merge into the traffic.


The merge-v0 environment.

Roundabout

env = gym.make("roundabout-v0")

In this task, the ego-vehicle is approaching a roundabout with flowing traffic. It will follow its planned route automatically, but has to handle lane changes and longitudinal control to pass the roundabout as fast as possible while avoiding collisions.


The roundabout-v0 environment.

Parking

env = gym.make("parking-v0")

A goal-conditioned continuous control task in which the ego-vehicle must park in a given space with the appropriate heading.


The parking-v0 environment.
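
Since the task is goal-conditioned, the observation follows the dictionary layout of gym's GoalEnv interface; the key names and the compute_reward signature below are assumptions based on that convention, shown only as a sketch:

import gym
import highway_env

env = gym.make("parking-v0")
obs = env.reset()

# Assumed GoalEnv-style dictionary observation:
ego_features = obs["observation"]   # current state of the ego-vehicle
achieved = obs["achieved_goal"]     # pose reached so far
desired = obs["desired_goal"]       # target parking pose

# The reward can be recomputed for arbitrary goals, which is what
# Hindsight Experience Replay relies on (signature assumed from GoalEnv).
reward = env.compute_reward(achieved, desired, {})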

Intersection

env = gym.make("intersection-v0")

An intersection negotiation task with dense traffic.


The intersection-v0 environment.

Racetrack

env = gym.make("racetrack-v0")

A continuous control task involving lane-keeping and obstacle avoidance.


The racetrack-v0 environment.

Examples of agents

Agents solving the highway-env environments are available in the eleurent/rl-agents and DLR-RM/stable-baselines3 repositories.

See the documentation for some examples and notebooks.

Deep Q-Network


The DQN agent solving highway-v0.

This model-free value-based reinforcement learning agent performs Q-learning with function approximation, using a neural network to represent the state-action value function Q.
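
A minimal training sketch with stable-baselines3 (the hyperparameters are illustrative assumptions, not the exact ones used to produce the video above):

import gym
import highway_env
from stable_baselines3 import DQN

env = gym.make("highway-fast-v0")
model = DQN(
    "MlpPolicy", env,
    policy_kwargs=dict(net_arch=[256, 256]),  # illustrative network size
    learning_rate=5e-4,
    buffer_size=15_000,
    batch_size=32,
    gamma=0.8,
    verbose=1,
)
model.learn(total_timesteps=20_000)
model.save("highway_dqn_model")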

Deep Deterministic Policy Gradient


The DDPG agent solving parking-v0.

This model-free policy-based reinforcement learning agent is optimized directly by gradient ascent. It uses Hindsight Experience Replay to efficiently learn how to solve a goal-conditioned task.
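
A corresponding sketch with stable-baselines3, combining DDPG with the HER replay buffer (hyperparameters and buffer arguments are assumptions chosen to illustrate the setup):

import gym
import highway_env
from stable_baselines3 import DDPG, HerReplayBuffer

env = gym.make("parking-v0")
model = DDPG(
    "MultiInputPolicy", env,              # dict observations require a multi-input policy
    replay_buffer_class=HerReplayBuffer,  # relabel transitions with achieved goals
    replay_buffer_kwargs=dict(
        n_sampled_goal=4,
        goal_selection_strategy="future",
    ),
    verbose=1,
)
model.learn(total_timesteps=50_000)  # illustrative training budget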

Value Iteration


The Value Iteration agent solving highway-v0.

Value Iteration is only compatible with finite discrete MDPs, so the environment is first approximated by a finite-mdp environment using env.to_finite_mdp(). This simplified state representation describes the nearby traffic in terms of predicted Time-To-Collision (TTC) on each lane of the road. The transition model is simplistic and assumes that each vehicle will keep driving at a constant speed without changing lanes. This model bias can be a source of mistakes.

The agent then performs a Value Iteration to compute the corresponding optimal state-value function.
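
A sketch of the idea, assuming the finite MDP returned by env.to_finite_mdp() exposes deterministic transition and reward arrays indexed by (state, action); these attribute names are assumptions for illustration and may differ from the actual finite-mdp interface:

import numpy as np
import gym
import highway_env

env = gym.make("highway-v0")
env.reset()
mdp = env.to_finite_mdp()  # TTC-based finite approximation of the scene

gamma = 0.9
value = np.zeros(mdp.transition.shape[0])
for _ in range(100):
    # Q(s, a) = R(s, a) + gamma * V(s'), with s' = transition[s, a]
    q = mdp.reward + gamma * value[mdp.transition]
    value = q.max(axis=-1)
policy = q.argmax(axis=-1)  # greedy action in each abstract state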

Monte-Carlo Tree Search

This agent leverages transition and reward models to perform a stochastic tree search (Coulom, 2006) for the optimal trajectory. No particular assumption is required on the state representation or transition model.


The MCTS agent solving highway-v0.
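
A hedged usage sketch with the eleurent/rl-agents package; the configuration keys and class path are modelled on that repository's example notebooks and should be treated as assumptions:

import gym
import highway_env
from rl_agents.agents.common.factory import agent_factory

env = gym.make("highway-v0")
obs = env.reset()

# Assumed configuration, modelled on the rl-agents examples.
agent_config = {
    "__class__": "<class 'rl_agents.agents.tree_search.mcts.MCTSAgent'>",
    "budget": 50,   # number of tree-search iterations per decision
    "gamma": 0.7,   # planning discount factor
}
agent = agent_factory(env, agent_config)

done = False
while not done:
    action = agent.act(obs)
    obs, reward, done, info = env.step(action)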

Installation

pip install highway-env

Usage

import gym
import highway_env

env = gym.make("highway-v0")
obs = env.reset()

done = False
while not done:
    action = ... # Your agent code here
    obs, reward, done, info = env.step(action)
    env.render()

Documentation

Read the documentation online.

Citing

If you use the project in your work, please consider citing it with:

@misc{highway-env,
  author = {Leurent, Edouard},
  title = {An Environment for Autonomous Driving Decision-Making},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/eleurent/highway-env}},
}

List of publications & preprints using highway-env (please open a pull request to add missing entries):

PhD theses

Master theses

Comments
  • Scaling to multiple agents

    Scaling to multiple agents

    Thanks for creating this easy-to-use environment for urban scenarios. I want to use it for multi-agent learning, but currently only single-agent learning is supported. Are there any plans to scale it up to multiple agents?

    enhancement 
    opened by Achin17 42
  • Hello!

    Hello!

    Hi, I have a question: if I want to use DQN and DDQN to train the agent in highway-env, which JSON file should I choose? I notice there is a file named "no_dueling.json" in /scripts/configs/HighwayEnv/agents/DQNAgent, but in model.py there is no model type matching "no_dueling.json". What should I do? Thanks for your help!

    question 
    opened by zhangxinchen123 23
  • Keep agent in lane in Continuous Action space

    Keep agent in lane in Continuous Action space

    I am playing around with the intersection environment with continuous actions, and I am having a hard time getting the agent to stay in its lane. It would be great to have its angle/offset from the middle of the lane as part of the observation space, so the behaviour can be learned from there rather than purely from the reward function. I am using the Kinematics observation, in a multi-agent setting.

    Sorry if it was in the documentation already and I did not notice.

    opened by TibiGG 20
  • Questions about parallelization, increasing timescale velocity, custom observations.

    Questions about parallelization, increasing timescale velocity, custom observations.

    Hi @eleurent,

    I'm here again and want to ask you some questions. As I wrote you two or three weeks ago, I'm building a custom parking environment. I had worked on this project with Unity (ML-Agents) before starting with gym/highway-env, due to some aspects that made it unsuitable for the continuation of my work. In ML-Agents you can speed up the training phase by parallelizing the environments and by increasing the simulation timescale as you wish. I was wondering whether you can do that here as well; I didn't find much about this topic.

    The last question is about observations: basically I want to port the reward function (which is not the problem) and the observations, since I have achieved some good results. I want to use multiple observations, for example lidar or greyscale, and add to them some extra features to observe from the environment. Have you tried that yet?

    Thanks in advance. Regards, Andrea

    opened by Andrealberti 20
  • Training with DQN

    Training with DQN

    Hello, thank you for sharing this great work. I am trying to replicate the behaviour shown in the examples (Deep Q-Network). Did you train with the network provided in rl-agents? I have tried it with 1000 episodes and, when I test it, the agent only moves to the right. Maybe more episodes are needed.

    Thank you in advance.

    opened by rodrigogutierrezm 17
  • Errors while setting up

    Errors while setting up

    Hi Eleurent,

    Thanks for the amazing repository.

    When I was trying to build the project, I ran into the errors below. Could you please help me fix them?

    1. Install with Python 3: when I try pip3 install --user git+https://github.com/eleurent/highway-env, I get the error below and the installation is not successful.
    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-sir6pe48/matplotlib/
    
    2. When I try pip install --user git+https://github.com/eleurent/highway-env (which installs under Python 2.7), the installation is successful. However, I am not able to import highway_env:
    Python 2.7.12 (default, Nov 12 2018, 14:36:49) 
    [GCC 5.4.0 20160609] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import highway_env
    pygame 1.9.4
    Hello from the pygame community. https://www.pygame.org/contribute.html
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/kishor/.local/lib/python2.7/site-packages/highway_env/__init__.py", line 2, in <module>
        import highway_env.envs
      File "/home/kishor/.local/lib/python2.7/site-packages/highway_env/envs/__init__.py", line 1, in <module>
        from highway_env.envs.highway_env import *
    ImportError: No module named envs.highway_env
    >>> 
    

    Thank you.

    opened by kk2491 15
  • Some questions about changing policies and observations

    Some questions about changing policies and observations

    Hi, I tried to run and make some changes to the "highway-v0" environment (i.e. no right overtake, safety distance and more...). I now have a question about training. At the moment the model structure is as follows:

    model = DQN('MlpPolicy', env, gamma=0.8, learning_rate=5e-4, buffer_size=50000, exploration_fraction=0.1,
                exploration_final_eps=0.5, exploration_initial_eps=1.0, batch_size=32, double_q=True,
                target_network_update_freq=50, prioritized_replay=True, verbose=1, tensorboard_log="./dqn_two_lane_tensorboard/")
    

    and observation type is Kinematics. Results, after training sessions of 300000 steps are fluctuating, also adding layers to Mlp (64, 64, 64, 32, 20) which seems not to add anything to the standard Mlp. So I tried to use Grayscale observation and CnnPolicy, to see if there would be a performance improvement. Here is the code:

    model = DQN('CnnPolicy', env, gamma=0.8, learning_rate=5e-4, buffer_size=50000, exploration_fraction=0.1,
                exploration_final_eps=0.5, exploration_initial_eps=1.0, batch_size=32, double_q=True,
                target_network_update_freq=50, prioritized_replay=True, verbose=1, tensorboard_log="./dqn_two_lane_tensorboard/")
    
    "offscreen_rendering": True,
    "observation": {
        "type": "GrayscaleObservation",
        "weights": [0.2989, 0.5870, 0.1140],  # weights for RGB conversion
        "stack_size": 4,
        "observation_shape": (screen_width, screen_height)
    },
    "screen_width": screen_width,
    "screen_height": screen_height,
    "scaling": 1.75,
    "policy_frequency": 2,
    

    The training starts with no errors, but after some steps (around 4000) it crashes due to occupation of all the RAM memory. I tried to reduce batch size (up to 16) and screen width and height (up to 84x84 which is really small) but it doesn't change anything.

    My PC specs are the following: GPU model: NVIDIA Quadro RTX 4000 CUDA version: 10.1 RAM: 32 GB

    My question is if there is something I'm missing that causes the RAM saturation and, mostly, if using Cnn + Grayscale observation would actually result in a performance improvement or if it's a waste of time. Thanks in advance for your help

    opened by lucalazzaroni 13
  • Preventing vehicle from going off-road?

    Preventing vehicle from going off-road?

    I think having a signal indicating the vehicle has gone off the road is useful, and one can terminate the episode with the use of this and shorten the training time for continuous action environments.

    Of course, an alternative is to use a time step limit, but even with that limit put on, you can have a much more informative episode on the road, rather than off-road, in limited time steps.

    opened by mhtb32 13
  • The install of the project

    The install of the project

    Hello! When I run the install command in the terminal, it reports success, but I can't find the project on my computer. What should I do next? Thanks for your reply!

    opened by zhangxinchen123 12
  • Training highway-v0 with DQN, the trained result is not as good as the one presented in the example video

    Training highway-v0 with DQN, the trained result is not as good as the one presented in the example video

    1. example code

    import gym
    import highway_env
    from stable_baselines import DQN

    model = DQN('MlpPolicy', "highway-fast-v0",
                policy_kwargs=dict(net_arch=[256, 256]),
                learning_rate=5e-4, buffer_size=15000, learning_starts=200,
                batch_size=32, gamma=0.8, train_freq=1, gradient_steps=1,
                target_update_interval=50, verbose=1,
                tensorboard_log="highway_dqn/")
    model.learn(int(2e4))
    model.save("highway_dqn/model")

    Questions: 1. highway-fast-v0 does not seem to exist; 2. the result is not as good as in the example; 3. parameters such as gradient_steps and target_update_interval do not exist in stable_baselines.DQN.

    opened by limeng-1234 11
  • Separates collision with obstacle and collision with vehicle

    Separates collision with obstacle and collision with vehicle

    For some applications (like what I have in mind), it may be useful to separate vehicle-to-vehicle collisions from vehicle-to-obstacle collisions.

    Please let me know if you think there is something wrong with these changes.

    opened by mhtb32 11
  • How do I get the real-time location of the created vehicle in a custom environment

    How do I get the real-time location of the created vehicle in a custom environment

    Dear author, I want to produce a landmark that changes with the state of the vehicle, but I don't know how to get the current location of the vehicle when defining the environment. In the image below, I want the landmark to always appear between car 3 and car 4, but I don't know how to get their real-time locations. Could you tell me what to do? Looking forward to your reply.

    image

    opened by lostboy233333 0
  • Save manual control actions

    Save manual control actions

    Hello, author! I have a question: when I set manual control to true, I would like to record the discrete actions. How can I do this? I look forward to your reply!

    opened by tingtingLiuLiu 0
  • Some angles of the arc road cannot be drawn

    Some angles of the arc road cannot be drawn

    Dear author, thank you very much for your wonderful work! I ran into some problems while customizing the environment and hope to get some advice from you. I found that in some cases an arc road segment cannot be drawn. For example, in the following picture, the arc segment spanning an angle of 180 to 90 degrees cannot be drawn, so my road cannot be closed. Do you have any good solutions? Thank you very much for taking the time to answer my questions.

    opened by lostboy233333 4
  • Training Parking_her does not work.

    Training Parking_her does not work.

    Good afternoon. I was trying to train a policy for parking-env to test against safety validation methods. When I ran the Colab code as-is, I got an error when creating the environment: AttributeError: 'ParkingEnv' object has no attribute 'np_random'. This could be solved by reinstalling highway-env or by initially installing an older version of gym and highway-env. After doing this, another error occurs when creating the model before training: TypeError: __init__() got an unexpected keyword argument 'create_eval_env'.

    It would be much appreciated if you have any insight on how to solve this problem. My research focuses more on the verification side than training or developing a controller to test. I don't have as much experience in training controllers with RL.

    Training_Code_Error_Parking

    opened by JoshYank 7
  • [Feature] Allow `config` to be set in `reset(options=...)`

    [Feature] Allow `config` to be set in `reset(options=...)`

    Thanks for the project; it is very high quality. Would it be possible for the configuration, in particular .config, to be set via the options parameter of reset, as this is the spirit of the API?

    I'm happy to add the PR if you are interested.

    opened by pseudo-rnd-thoughts 1
Releases(v1.7.1)
  • v1.7.1(Dec 19, 2022)

  • v1.7(Nov 6, 2022)

  • v1.6(Aug 14, 2022)

    • fix a bug in generating discrete actions from continuous actions
    • fix more bugs related to changes in gym's latest versions
    • new intersection-env variant with continuous actions
    • add longitudinal/lateral/angular offsets to the lane as part of the kinematics observation's features
    • add more configurable options for reward function and termination conditions
    • add configurable min/max speed for continuous actions
    • bug fix for reward computation in the multi-agent setting
    • add get_available_actions for MultiAgentAction
    • fix various deprecation warnings
    • add a multi-objective version of HighwayEnv

    Huge thanks to contributors @zerongxi, @TibiGG, @KexianShen, @lorandcheng

    Source code(tar.gz)
    Source code(zip)
  • v1.5(Mar 19, 2022)

    • Add documentation on continuous actions
    • Fix various bugs or imprecision in collision checks and obstacles rendering
    • Image observations are now centered on the observer vehicle
    • Fix the lane change behaviour in some situations
    • Add TupleObservation, which is a union of several observation types
    • Improve the accuracy of the LidarObservation
    • Add support for PolyLane, and methods to save/load road networks from a config
    • Fix steering wheel / angle conversion
    • Change of the velocity term projection in the reward function
    • Add support for latest gym versions (>=0.22) which dropped the Monitor wrapper
    • Add a copy of the GoalEnv interface which was removed from gym
    Source code(tar.gz)
    Source code(zip)
  • v1.4(Sep 21, 2021)

    This release introduces additional content:

    • a new continuous control environment, racetrack-v0, where the agent must learn to steer and follow the tracks, while avoiding other vehicles
    • a new "on_road" layer in the OccupancyGrid observation type, which enables the observer to see the drivable space
    • a new "align_to_vehicle_axes" option in the OccupancyGrid observation type, which renders the observation in the local vehicle frame
    • a new DiscreteAction action type, which discretizes the original ContinuousAction type. This allows low-level control with a small discrete action space (e.g. for DQN). Note that this is different from the DiscreteMetaAction type, which implements its own low-level sub-policies.
    • new example scripts and notebooks for training agents, such as a PPO continuous control policy for racetrack-v0.
    • updated documentation
    Source code(tar.gz)
    Source code(zip)
  • v1.3(Aug 30, 2021)

    This release contains

    • A few fixes for compatibility with SB3
    • Some changes for video rendering and framerate
    • highway-fast-v0: a faster variant of highway-v0 to train/debug models more quickly
    Source code(tar.gz)
    Source code(zip)
  • v1.2(Apr 29, 2021)

  • v1.1(Mar 12, 2021)

Owner
Edouard Leurent
Research Scientist @DeepMind