Deep Sea Treasure Environment for Multi-Objective Optimization Research

Overview

DeepSeaTreasure Environment

(Animation of the submarine searching for treasure.)

Installation

In order to get started with this environment, you can install it using the following command:

python3 -m pip install deep_sea_treasure --upgrade

Data

If you are only interested in the Pareto-front data, you can find it in the data/ folder.

Example

After installing the environment, you can get started using it like this:

import pygame
import numpy as np
import time

import deep_sea_treasure
from deep_sea_treasure import DeepSeaTreasureV0

# Make sure experiments are reproducible, so people can use the exact same versions
print(f"Using DST {deep_sea_treasure.__version__.VERSION} ({deep_sea_treasure.__version__.COMMIT_HASH})")

dst: DeepSeaTreasureV0 = DeepSeaTreasureV0.new(
	max_steps=1000,
	render_treasure_values=True
)

dst.render()

stop: bool = False
time_reward: int = 0

while not stop:
	events = pygame.event.get()

	# Default action: no acceleration in either dimension (index 3 of the 7-element one-hot is the no-op).
	action = (np.asarray([0, 0, 0, 1, 0, 0, 0]), np.asarray([0, 0, 0, 1, 0, 0, 0]))

	for event in events:
		if event.type == pygame.KEYDOWN:
			if event.key == pygame.K_LEFT:
				action = (np.asarray([0, 0, 1, 0, 0, 0, 0]), np.asarray([0, 0, 0, 1, 0, 0, 0]))
			elif event.key == pygame.K_RIGHT:
				action = (np.asarray([0, 0, 0, 0, 1, 0, 0]), np.asarray([0, 0, 0, 1, 0, 0, 0]))
			if event.key == pygame.K_UP:
				action = (np.asarray([0, 0, 0, 1, 0, 0, 0]), np.asarray([0, 0, 1, 0, 0, 0, 0]))
			elif event.key == pygame.K_DOWN:
				action = (np.asarray([0, 0, 0, 1, 0, 0, 0]), np.asarray([0, 0, 0, 0, 1, 0, 0]))

			if event.key in {pygame.K_ESCAPE}:
				stop = True

		if event.type == pygame.QUIT:
			stop = True

	_, reward, done, debug_info = dst.step(action)
	time_reward += int(reward[1])

	if done:
		print(f"Found treasure worth {float(reward[0]):5.2f} after {abs(time_reward)} timesteps!")
		time_reward = 0

	if not stop:
		dst.render()
		time.sleep(0.25)

	if done:
		dst.reset()

This lets you play around in the environment, moving the submarine around to find treasures!

API

This section provides a detailed description of the API we offer and how to use it. In general, our API matches that of OpenAI's gym package. This library contains one environment (DeepSeaTreasure-v0) and two wrappers (FuelWrapper and VamplewWrapper). To aid reproducibility, we include the package version and the source-code commit hash in the Python code (deep_sea_treasure.__version__.VERSION and deep_sea_treasure.__version__.COMMIT_HASH).

The environment we created is a modified version of the environment originally showcased in the paper by Vamplew et al. In our version, the agent moves around by providing an acceleration in both the x- and y-directions. By default, accelerations are discrete, ranging from -3 to +3. The agent's goal is to find the largest possible treasure in the shortest amount of time. This immediately creates a tricky situation for any agent attempting to learn in this environment, since it has to deal with two conflicting objectives: time and treasure. This is also reflected in the reward returned by the environment, which is a 2-element vector (one element per objective).

Wrappers

When wrapping this environment, users should take care to ensure that their debug_dict always contains at least an env value identifying the wrapper that created the dict, and an inner value containing the debugging information from the wrapped environment. This is important if you want to use the renderer to render debug information, since the renderer is only capable of rendering debug information from the core environment itself.

Example

from typing import Any, Dict, Tuple

class ExampleWrapper:
	def __init__(self, env):
		# Keep a reference to the wrapped environment.
		self.env = env

	def step(self, action) -> Tuple[Any, Any, bool, Dict[str, Any]]:
		obs, rew, done, debug_dict = self.env.step(action)

		new_debug_dict = {
			"env": self.__class__.__name__,	# = "ExampleWrapper"
			"inner": debug_dict
		}

		return obs, rew, done, new_debug_dict

General

All environments and wrappers in this repository provide a reward_space attribute, similar to the observation_space and action_space attributes normally present in gym environments. The main purpose behind this is to allow an external observer to determine the number of objectives.
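
A minimal sketch of how an external observer might read the number of objectives (assuming reward_space exposes a shape attribute, like other gym Box spaces):

from deep_sea_treasure import DeepSeaTreasureV0

dst = DeepSeaTreasureV0.new()

# The number of objectives is the first dimension of the reward space,
# analogous to observation_space and action_space in regular gym environments.
num_objectives = dst.reward_space.shape[0]
print(num_objectives)	# 2 for the core environment: treasure and time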

DeepSeaTreasureV0

from deep_sea_treasure import DeepSeaTreasureV0

This is the core environment; creation is handled by the new() static method. This is the recommended way of creating environments, since it provides meaningful arguments for changing the various settings of the environment. Beyond this, there is also a constructor that accepts a dictionary of settings; this was mainly done to retain compatibility with RL frameworks like RLLib. Internally, the new() method simply fills a dictionary and delegates to the constructor. The constructor uses JSON schemas, along with a couple of extra asserts, to verify the correctness of a given configuration. The exact schema can be found in deep_sea_treasure.schema.json, and is also returned by DeepSeaTreasureV0.schema(). A copy of the default configuration can be obtained through DeepSeaTreasureV0.default_config(). The table below describes the various options offered by the new() method.

Option (Type): Meaning
treasure_values (List[List[Union[List[int], float]]]): This option serves two purposes: it shapes the seabed and determines the value of each treasure. It is an array of ((x, y), treasure) tuples.
acceleration_levels (List[int]): A list of discrete acceleration levels. Each level should be a positive integer. Upon creation, the same numbers with inverted sign are used for accelerating left and up, while the numbers themselves signify acceleration to the right (x) or down (y).
implicit_collision_constraint (bool): When this option is enabled, the reward is reduced to below its minimum whenever the submarine causes a collision. This is also reflected in reward_space.low: the values in this array will be 1 below the minimum attainable through normal actions in the environment.
max_steps (int): The environment has two ending conditions: (1) the submarine finds a treasure, or (2) the maximum number of steps is reached. This option sets the maximum allowed number of steps.
max_velocity (float): The environment limits the absolute value of the submarine's velocity to this number, to prevent overflow-related physics shenanigans from occurring.
render_grid (bool): If this option is enabled, a grid is shown when rendering the environment, marking the discrete squares where the submarine can reside.
render_treasure_values (bool): If this option is enabled, the value of each treasure is rendered on top of the treasure, making the rendering slightly clearer.
theme (Theme): Rendering can be customized through the Theme class, in conjunction with the DeepSeaTreasureV0Renderer. This option lets the user specify the Theme to use when rendering; the default Theme is returned by Theme.default().

It is possible to pass the debug_info dict that the step() method returns back to the render() method; this will display some useful debugging information on screen.
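
As a minimal sketch (the configuration values below are arbitrary, and we assume render() accepts the debug_info dict as a positional argument), the two creation paths and the debug rendering look roughly like this:

import numpy as np
from deep_sea_treasure import DeepSeaTreasureV0

# Recommended: create the environment through new() with keyword arguments.
dst = DeepSeaTreasureV0.new(max_steps=500, render_treasure_values=True)

# Alternative: start from the default configuration dictionary and
# pass it to the constructor (useful for RLLib-style config handling).
config = DeepSeaTreasureV0.default_config()
config["max_steps"] = 500
dst = DeepSeaTreasureV0(config)

# Feed the debug_info returned by step() back into render() to display
# debugging information on screen.
no_op = (np.asarray([0, 0, 0, 1, 0, 0, 0]), np.asarray([0, 0, 0, 1, 0, 0, 0]))
obs, reward, done, debug_info = dst.step(no_op)
dst.render(debug_info)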

Action Space

A tuple of two 1-hot encoded actions.

Tuple[Discrete, Discrete]

The first action is the acceleration in the x-dimension, the second action is the acceleration in the y-dimension. The number of available actions for each dimension is (2 * len(acceleration_levels)) + 1; since len(acceleration_levels) defaults to 3, the default number of available actions per dimension is 7. The middle action always indicates a no-op (no acceleration; the existing velocity will continue to move the submarine). Actions with indices lower than the middle move towards the top-left of the world, while actions with indices greater than the middle move towards the bottom-right. When an action would cause a collision, the submarine's velocity is set to 0 in both dimensions and no movement occurs.
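
As an illustration, a single action could be built like the sketch below (using the dst environment from the earlier examples; the one_hot helper is purely illustrative, and the exact mapping of indices to acceleration magnitudes is an assumption):

import numpy as np

def one_hot(index: int, n: int = 7) -> np.ndarray:
	# Build a 1-hot vector of length n with a 1 at the given index.
	vec = np.zeros(n)
	vec[index] = 1.0
	return vec

# Index 3 is the middle (no-op) action; lower indices accelerate towards the
# top-left, higher indices towards the bottom-right.
action = (one_hot(5), one_hot(3))	# positive x-acceleration, no-op in y
obs, reward, done, debug_info = dst.step(action)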

Observation Space

Box

The observation in this environment is a 2 x N matrix, with N equal to len(treasure_values) + 1. The first column of the matrix (obs[:, 0]) contains the submarine's current velocity. The remaining len(treasure_values) columns contain the submarine's position relative to each of the treasures.
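
For example, the observation could be split like this (a minimal sketch, assuming reset() returns the observation as in standard gym environments):

obs = dst.reset()

velocity = obs[:, 0]	# (vx, vy): the submarine's current velocity
relative_positions = obs[:, 1:]	# one (dx, dy) column per treasure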

Reward Space

Box

The reward for this environment is a 2-element vector. The first element (reward[0]) contains the treasure reward; this will always be 0, unless the submarine is on a treasure square. The second element (reward[1]) contains the time reward; this reward is always -1, unless the submarine is on a treasure square.

FuelWrapper

from deep_sea_treasure import FuelWrapper

The FuelWrapper wraps the DeepSeaTreasureV0 environment and adds a third objective: fuel consumption. Fuel consumption is similar to time, in the sense that its reward is negative at each timestep. It differs from time in that the fuel consumed depends on the action taken: accelerating by 1 in either dimension will usually cost 1 fuel, while accelerating by 3 usually costs 3 fuel. This mechanic makes high accelerations followed by coasting an attractive strategy. Creation of this wrapper is handled by the new() static method. This is the recommended way of creating wrappers, since it provides meaningful arguments for changing the various settings of the wrapper. Beyond this, there is also a constructor that accepts a gym.Env and a dictionary of settings; this was mainly done to retain compatibility with RL frameworks like RLLib. Internally, the new() method simply fills a dictionary and delegates to the constructor. The constructor uses JSON schemas, along with a couple of extra asserts, to verify the correctness of a given configuration. The exact schema can be found in fuel_wrapper.schema.json.

Option (Type): Meaning
fuel_cost (List[int]): The cost of each acceleration_level in fuel units. The no-op action is always assumed to consume no fuel. The length of this list should always match the length of the acceleration_levels list in the core environment.
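
A minimal sketch of wrapping the core environment (assuming new() takes the wrapped environment as its first argument; the fuel_cost values are illustrative, not necessarily the package defaults):

from deep_sea_treasure import DeepSeaTreasureV0, FuelWrapper

env = FuelWrapper.new(
	DeepSeaTreasureV0.new(),
	fuel_cost=[1, 2, 3]	# one cost per acceleration level
)

print(env.reward_space.shape[0])	# 3 objectives: treasure, time, fuel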

Action Space

Same as DeepSeaTreasureV0.

Observation Space

Same as DeepSeaTreasureV0.

Reward Space

Box

The reward for this environment is a 3-element vector. The first two elements are identical to those in the DeepSeaTreasureV0 environment: the first element (reward[0]) contains the treasure reward, which is always 0 unless the submarine is on a treasure square, and the second element (reward[1]) contains the time reward, which is always -1 unless the submarine is on a treasure square. The third element (reward[2]) contains the fuel reward, which reflects the fuel consumed by the last action the agent took.

VamplewWrapper

from deep_sea_treasure import VamplewWrapper

The VamplewWrapper is a wrapper intended to undo the modifications we made to the core DeepSeaTreasureV0 environment. It wraps both the action and observation space, so that the environment matches the original setup by Vamplew et al. exactly. This means that the VamplewWrapper has a different action and observation space from the DeepSeaTreasureV0 environment. The VamplewWrapper can wrap the FuelWrapper, but not the other way around, due to action-space incompatibility. Creation of this wrapper is handled by the new() static method. This is the recommended way of creating wrappers, since it provides meaningful arguments for changing the various settings of the wrapper. Beyond this, there is also a constructor that accepts a gym.Env and a dictionary of settings; this was mainly done to retain compatibility with RL frameworks like RLLib. Internally, the new() method simply fills a dictionary and delegates to the constructor. The constructor uses JSON schemas, along with a couple of extra asserts, to verify the correctness of a given configuration. The exact schema can be found in vamplew_wrapper.schema.json.

Option (Type): Meaning
enable_idle (bool): When true, this option enables a 5th action in this environment, idle. This allows the submarine to sit still and do nothing.
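
A minimal sketch of wrapping the core environment (assuming new() takes the wrapped environment as its first argument):

from deep_sea_treasure import DeepSeaTreasureV0, VamplewWrapper

env = VamplewWrapper.new(
	DeepSeaTreasureV0.new(),
	enable_idle=True	# expose the optional 5th (idle) action
)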

Action Space

The action space for the Vamplew wrapper consists of a single 1-hot encoded action. There are 4 or 5 possible actions to take, depending on how the wrapper was configured:

  • Up
  • Right
  • Down
  • Left
  • (Idle)

Actions are specified in this order. Each action cancels out all velocity from the previous action and sets the velocity in the desired direction to 1.
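
Continuing the sketch above, a single step with the Right action might look like this (assuming the action is passed as a 1-hot numpy vector in the order listed):

import numpy as np

# Order: Up, Right, Down, Left, (Idle). With enable_idle=True there are 5 entries.
right = np.asarray([0, 1, 0, 0, 0])
obs, reward, done, debug_info = env.step(right)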

Observation Space

Box

The observation in this environment is a 2-element vector, containing the submarine's current row and column.

Reward Space

Same as DeepSeaTreasureV0.
