Source code for deep symbolic optimization.

Overview

Update July 10, 2021: This repository now supports an additional symbolic optimization task: learning symbolic policies for reinforcement learning. The repository itself has also been renamed; however, GitHub automatically redirects all web and git requests to the new URL.

Deep symbolic optimization

Deep symbolic optimization (DSO) is a deep learning framework for symbolic optimization tasks. The package dso includes the core symbolic optimization algorithms, as well as support for two particular symbolic optimization tasks: (1) symbolic regression (recovering tractable mathematical expressions from an input dataset) and (2) discovering symbolic policies for reinforcement learning environments. In the code, these tasks are referred to as regression and control, respectively. We also include a simple interface for defining new tasks.

This repository contains code supporting the following publications:

  1. Petersen et al. 2021 Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. ICLR 2021. Oral Paper
  2. Landajuela et al. 2021 Discovering symbolic policies with deep reinforcement learning. ICML 2021. Paper
  3. Landajuela et al. 2021 Improving exploration in policy gradient search: Application to symbolic optimization. Math-AI @ ICLR 2021. Paper
  4. Petersen et al. 2021 Incorporating domain knowledge into neural-guided search via in situ priors and constraints. AutoML @ ICML 2021. Paper
  5. Kim et al. 2021 Distilling Wikipedia mathematical knowledge into neural network models. Math-AI @ ICLR 2021. Paper
  6. Kim et al. 2020 An interactive visualization platform for deep symbolic regression. IJCAI 2020. Paper

Installation

Installation - core package

The core package has been tested with Python 3.6+ on Unix and macOS. To install the core package (and the default regression task), we highly recommend first creating a Python 3 virtual environment, e.g.

python3 -m venv venv3 # Create a Python 3 virtual environment
source venv3/bin/activate # Activate the virtual environment

Then, from the repository root:

pip install --upgrade pip
pip install -r requirements.txt # Install Python dependencies
export CFLAGS="-I $(python -c "import numpy; print(numpy.get_include())") $CFLAGS" # Needed on Mac to prevent fatal error: 'numpy/arrayobject.h' file not found
pip install -e ./dso # Install DSO package
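As a quick sanity check that the core package installed correctly, the following should run without errors inside the activated virtual environment (a minimal sketch; DeepSymbolicOptimizer is the core model class described under "Getting started" below):

# Verify that the dso package and its core model class import cleanly
from dso import DeepSymbolicOptimizer
print("dso installed:", DeepSymbolicOptimizer.__name__)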

Installation - regression task

There are no additional dependencies to run the regression task.

Installation - control task

There are a few additional dependencies to run the control task. Install them using:

pip install -r requirements_control.txt

Getting started

DSO relies on configuring runs via a JSON file, then launching them via a simple command-line call or a few lines of Python.

Method 1: Running DSO via command-line interface

After creating your config file, simply run:

python -m dso.run path/to/config.json

After training, results are saved to a timestamped directory in the path given in the "logdir" parameter (default ./log).

Method 2: Running DSO via Python interface

The Python interface lets users instantiate and customize DSO models via Python scripts, an interactive Python shell, or a Jupyter/IPython notebook. The core DSO model is dso.core.DeepSymbolicOptimizer. After creating your config file, you can use:

from dso import DeepSymbolicOptimizer

# Create and train the model
model = DeepSymbolicOptimizer("path/to/config.json")
model.train()

After training, results are saved to a timestamped directory in the path given in the "logdir" parameter (default ./log).

Configuring runs

A single JSON file is used to configure each run. This file specifies the symbolic optimization task and all hyperparameters.

Each configuration JSON file has a number of top-level keys that control various parts of the DSO framework. The important top-level keys are:

  • "experiment" configures the experiment, namely the log directory and random number seed.
  • "task" configures the task, e.g. the dataset for symbolic regression, or the Gym environment for the control task. See below for task-specific configuration.
  • "training" configures training hyperparameters like "n_samples" (the total number of samples to generate) and "epsilon" (the risk factor used by the risk-seeking policy gradient).
  • "controller" configures RNN hyperparameters like "learning_rate" and "num_layers".
  • "prior" configures the priors and constraints on the search space.

Any parameters not included in your config file assume default values found in config/config_common.json, config/config_regression.json (for regression runs), and config/config_control.json (for control runs).
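For illustration, here is a minimal Python sketch that assembles a config with the top-level keys described above and writes it to a JSON file. The nesting and values below are placeholders chosen for readability, not recommended settings; consult the default config files listed above for the authoritative structure.

import json

# Illustrative config using the top-level keys described above.
# Values are placeholders, not tuned hyperparameters.
config = {
    "experiment": {"logdir": "./log", "seed": 0},
    "task": {
        "task_type": "regression",
        "dataset": "path/to/my_dataset.csv",
        "function_set": ["add", "sub", "mul", "div", "sin", "cos", "exp", "log"],
    },
    "training": {"n_samples": 2000000, "epsilon": 0.05},
    "controller": {"learning_rate": 0.0005, "num_layers": 1},
    "prior": {"length": {"max_": 30}},
}

with open("my_config.json", "w") as f:
    json.dump(config, f, indent=2)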

Configuring runs for symbolic regression

Here are simple example contents of a JSON file for the regression task:

{
  "task" : {
    "task_type" : "regression",
    "dataset" : "path/to/my_dataset.csv",
    "function_set" : ["add", "sub", "mul", "div", "sin", "cos", "exp", "log"]
  }
}

This configures DSO to learn symbolic expressions to fit your custom dataset, using the tokens specified in function_set (see dso/functions.py for a list of supported tokens).

If you want to include optimized floating-point constants in the search space, simply add "const" to the function_set list. Note that constant optimization uses an inner-optimization loop, which leads to much longer runtimes (~hours instead of ~minutes).
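If you are preparing your own dataset file, the sketch below writes a toy CSV with numpy. It assumes the layout used by the bundled benchmark datasets (e.g. under task/regression/data): one row per sample, input variables first, target value in the last column, and no header row. Treat this layout as an assumption and check it against the benchmark data files if in doubt.

import numpy as np

# Toy dataset: y = sin(x1) + x2**2, saved as comma-separated rows
# with inputs first and the target in the last column (assumed layout).
rng = np.random.RandomState(0)
X = rng.uniform(-1.0, 1.0, size=(100, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
np.savetxt("my_dataset.csv", np.column_stack([X, y]), delimiter=",")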

Configuring runs for learning symbolic control policies

Here's a simple example for the control task:

{
  "task" : {
    "task_type" : "control",
    "env" : "MountainCarContinuous-v0",
    "function_set" : ["add", "sub", "mul", "div", "sin", "cos", "exp", "log", 1.0, 5.0, 10.0]
  }
}

This configures DSO to learn a symbolic policy for MountainCarContinuous-v0, using the tokens specified in function_set (see dso/functions.py for a list of supported tokens).

For environments with multi-dimensional action spaces, DSO requires a pre-trained "anchor" policy. DSO is run once per action dimension, and the "action_spec" parameter is updated each run. For an environment with N action dimensions, "action_spec" is a list of length N. Exactly one element should be null, meaning that is the symbolic action to be learned. Any number of elements can be "anchor", meaning the anchor policy will determine those actions. Any number of elements can be expression traversals (e.g. ["add", "x1", "x2"]), meaning that a fixed symbolic policy will determine those actions.

Here's an example workflow for HopperBulletEnv-v0, which has three action dimensions. First, learn a symbolic policy for the first action by running DSO with a config like:

{
  "task" : {
    "task_type" : "control",
    "env" : "HopperBulletEnv-v0",
    "function_set" : ["add", "sub", "mul", "div", "sin", "cos", "exp", "log", 1.0, 5.0, 10.0],
    "action_spec" : [null, "anchor", "anchor"],
    "anchor" : "path/to/anchor.pkl"
  }
}

where "path/to/anchor.pkl" is a path to a stable_baselines model. (The environments used in the ICML paper have default values for anchor, so you do not have to specify one.) After running, let's say the best expression has traversal ["add", "x1", "x2"]. To launch the second round of DSO, update the config's action_spec to use the fixed symbolic policy for the first action, learn a symbolic policy for the second action, and use the anchor again for the third action:

"action_spec" : [["add", "x1", "x2"], null, "anchor"]

After running DSO, say the second action's traversal is ["div", "x3", "x4"]. Finally, update the action_spec to:

"action_spec" : [["add", "x1", "x2"], ["div", "x3", "x4"], null]

and rerun DSO. The final result is a fully symbolic policy.
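If you prefer to script this round-by-round process, here is a rough sketch that writes a config per round and launches each one through the command-line interface. The traversals are the placeholder expressions from the example above; in a real workflow you would replace them with the best traversal reported by the previous round before launching the next.

import json
import subprocess

# action_spec for each round, mirroring the Hopper example above.
# Replace the hard-coded traversals with the best traversals found
# in your own previous rounds.
rounds = [
    [None, "anchor", "anchor"],
    [["add", "x1", "x2"], None, "anchor"],
    [["add", "x1", "x2"], ["div", "x3", "x4"], None],
]

base_task = {
    "task_type": "control",
    "env": "HopperBulletEnv-v0",
    "function_set": ["add", "sub", "mul", "div", "sin", "cos", "exp", "log", 1.0, 5.0, 10.0],
}

for i, action_spec in enumerate(rounds, start=1):
    config = {"task": dict(base_task, action_spec=action_spec)}
    path = "hopper_round_{}.json".format(i)
    with open(path, "w") as f:
        json.dump(config, f, indent=2)  # None serializes to JSON null
    subprocess.run(["python", "-m", "dso.run", path], check=True)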

Sklearn interface

The regression task supports an additional sklearn-like regressor interface to make it easy to try out deep symbolic regression on your own data:

import numpy as np

from dso import DeepSymbolicRegressor

# Generate some data
np.random.seed(0)
X = np.random.random((10, 2))
y = np.sin(X[:,0]) + X[:,1] ** 2

# Create the model
model = DeepSymbolicRegressor() # Alternatively, you can pass in your own config JSON path

# Fit the model
model.fit(X, y) # Should solve in ~10 seconds

# View the best expression
print(model.program_.pretty())

# Make predictions
model.predict(2 * X)
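Since DeepSymbolicRegressor follows the familiar fit/predict regressor pattern, standard scikit-learn metrics can be applied to its predictions. A quick check of the training fit might look like this (a small sketch, assuming scikit-learn is installed):

from sklearn.metrics import r2_score

# Score the recovered expression against the training targets
print("Training R^2:", r2_score(y, model.predict(X)))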

Analyzing results

Each run of DSO saves a timestamped log directory under the configured "logdir" path (default ./log). Inside this directory are the following files (a short Python sketch for inspecting them follows the list):

  • dso_ExperimentName_0.csv: This file contains batch-wise summary statistics for each epoch. The suffix _0 means the random number seed was 0. (See "Advanced usage" for batch runs with multiple seeds.)
  • dso_ExperimentName_0_summary.csv: This file contains summary statistics for the entire training run.
  • dso_ExperimentName_0_hof.csv: This file contains statistics of the "hall of fame" (best sequences discovered during training). Edit config["training"]["hof"] to set the number of hall-of-famers to record.
  • dso_ExperimentName_0_pf.csv: This file contains statistics of the Pareto front (reward vs. complexity) of sequences discovered during training.
  • config.json: This is a "dense" version of the configuration used for your run. It explicitly includes all parameters.
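Here is a minimal sketch for peeking at these files with pandas. It assumes the default ./log location, picks the most recently modified run directory, and prints the head of every CSV found there (column contents vary by task, so treat this purely as a starting point):

import glob
import os
import pandas as pd

# Locate the most recent timestamped run directory under the default log path
run_dirs = sorted(glob.glob(os.path.join("./log", "*")), key=os.path.getmtime)
latest = run_dirs[-1]

# Preview every results CSV the run produced (summary, hall of fame, Pareto front, ...)
for csv_path in sorted(glob.glob(os.path.join(latest, "*.csv"))):
    print("==>", os.path.basename(csv_path))
    print(pd.read_csv(csv_path).head())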

Advanced usage

Batch runs

DSO's command-line interface supports a multiprocessing-parallelized batch mode to run multiple tasks in parallel. This is recommended for large runs. Batch-mode DSO is launched with:

python -m dso.run path/to/config.json [--runs] [--n_cores_task] [--b] [--seed]

The option --runs (default 1) defines how many independent tasks (with different random number seeds) to perform. The regression task is computationally cheap enough that multiple tasks can reasonably be run in parallel. For the control task, we recommend running with the default --runs=1.

The option --n_cores_task (default 1) defines how many parallel processes to use across the --runs tasks. Each task is assigned a single core, so --n_cores_task should be less than or equal to --runs. (To use multiple cores within a single task, i.e. to parallelize reward computation, see the n_cores_batch configuration parameter.)

The option --seed, if provided, will override the parameter "seed" in your config.

By default, DSO uses the task specification found in the configuration JSON. The option --b (default None) specifies the named task(s) on the command line. For example, --b=path/to/mydata.csv runs DSO on the given dataset (regression task), and --b=MountainCarContinuous-v0 runs the environment MountainCarContinuous-v0 (control task). This is useful for running benchmark problems.

For example, to perform 100 independent runs of DSO on the Nguyen-1 benchmark using 12 cores and seeds 500 through 599:

python -m dso.run --b=Nguyen-1 --runs=100 --n_cores_task=12 --seed=500
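To queue several benchmarks back to back, the same flags can be driven from a short script. This is just a sketch; the benchmark names follow the Nguyen-1 style used above and should be replaced with whichever named tasks you actually want to run.

import subprocess

# Launch a batch run per benchmark using the CLI flags described above
for benchmark in ["Nguyen-1", "Nguyen-2", "Nguyen-3"]:
    subprocess.run(
        ["python", "-m", "dso.run",
         "--b={}".format(benchmark),
         "--runs=10", "--n_cores_task=4", "--seed=0"],
        check=True,
    )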

Citing this work

To cite this work, please cite according to the most relevant task.

To cite the regression task, use:

@inproceedings{petersen2021deep,
  title={Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients},
  author={Petersen, Brenden K and Landajuela, Mikel and Mundhenk, T Nathan and Santiago, Claudio P and Kim, Soo K and Kim, Joanne T},
  booktitle={Proc. of the International Conference on Learning Representations},
  year={2021}
}

To cite the control task, use:

@inproceedings{landajuela2021discovering,
  title={Discovering symbolic policies with deep reinforcement learning},
  author={Landajuela, Mikel and Petersen, Brenden K and Kim, Sookyung and Santiago, Claudio P and Glatt, Ruben and Mundhenk, Nathan and Pettit, Jacob F and Faissol, Daniel},
  booktitle={International Conference on Machine Learning},
  pages={5979--5989},
  year={2021},
  organization={PMLR}
}

Release

LLNL-CODE-647188

Comments
  • avoid overtraining


    Hello, I have a simple 1D function to be fitted, but all of the best solutions are very overtrained. Example:

    [Screenshot from 2022-04-28 comparing the two fits]

    The red curve is the one proposed by deep-symbolic-optimization; the green curve is just a 9th-degree polynomial.

    {
        "task" : {
            "task_type" : "regression",
            "dataset" : "mydataset.csv",
            "function_set" : ["add", "sub", "mul", "div", "log", "sin", "cos", "exp", "const"]
        },
        "controller": {"max_length": 16},
        "prior": {"length": {"max_": 16}}
    
    }
    

    mydataset.csv:

    9.00e+01, 1.22e-03
    9.50e+01, 1.39e-03
    1.00e+02, 1.58e-03
    1.05e+02, 1.77e-03
    1.10e+02, 1.95e-03
    1.15e+02, 2.11e-03
    1.20e+02, 2.23e-03
    1.25e+02, 2.28e-03
    1.30e+02, 2.24e-03
    1.35e+02, 2.12e-03
    1.40e+02, 1.93e-03
    1.45e+02, 1.67e-03
    1.50e+02, 1.36e-03
    1.60e+02, 5.32e-04
    1.70e+02, 1.58e-04
    1.80e+02, 1.05e-04
    1.90e+02, 7.05e-05
    2.00e+02, 5.51e-05
    2.10e+02, 4.54e-05
    
    
    opened by wiso 9
  • "Illegal Hardware Instruction"

    Hi Brenden,

    Sorry to bother you again! When I was trying to run the example config file, I encountered an "illegal hardware instruction" error, as shown in the screenshot below. Do you have any idea why this happens? I am currently using a MacBook with an M1 Pro chip. Thank you so much!

    [Screenshot of the error]

    Jiayi

    opened by syrnluo 5
  • Creating a Custom Function that only accepts Input Variables


    Hello again,

    I've been attempting for the past month or so to create a custom function to use in function_set to help DSO find the expression we're trying to recover. This custom function is a simple binomial, (1 - x1), and our goal is for only an input variable to be placed into the custom function's open slot, so the binomial can be used as a building block for the expression. So far I've tried the priors currently in place, such as the relational prior and the const prior, but nothing has made x1 the only thing it attempts to put in that slot. Do you have any suggestions on how to get this to work?

    If you need any clarification, let me know

    • Sean
    opened by Sean-Reilly 5
  • Problems with parallelization within a batch using `const` token


    Hello Again,

    So I've been trying to get the parallelization working for this, and when I set n_cores_batch = 2 in the config.json file it keeps giving me the error below. I'm not sure what is causing this issue, and it persists with any other value for n_cores_batch other than 1. Do you have any insight into why this might be?

    multiprocessing.pool.RemoteTraceback:
    """
    Traceback (most recent call last):
      File "/home/software/anaconda/3/envs/dso-sw/lib/python3.7/multiprocessing/pool.py", line 121, in worker
        result = (True, func(*args, **kwds))
      File "/home/software/anaconda/3/envs/dso-sw/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
        return list(map(*args))
      File "/nas0/tluchko/sandbox/deep-symbolic-optimization/dso/dso/train.py", line 28, in work
        optimized_constants = p.optimize()
      File "/nas0/tluchko/sandbox/deep-symbolic-optimization/dso/dso/program.py", line 393, in optimize
        optimized_constants = Program.const_optimizer(f, x0)
    TypeError: 'NoneType' object is not callable
    """
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "rundso.py", line 9, in <module>
        model.train()
      File "/nas0/tluchko/sandbox/deep-symbolic-optimization/dso/dso/core.py", line 90, in train
        **self.config_training))
      File "/nas0/tluchko/sandbox/deep-symbolic-optimization/dso/dso/train.py", line 278, in learn
        results = pool.map(work, programs_to_optimize)
      File "/home/software/anaconda/3/envs/dso-sw/lib/python3.7/multiprocessing/pool.py", line 268, in map
        return self._map_async(func, iterable, mapstar, chunksize).get()
      File "/home/software/anaconda/3/envs/dso-sw/lib/python3.7/multiprocessing/pool.py", line 657, in get
        raise self._value
    TypeError: 'NoneType' object is not callable
    
    
    opened by Sean-Reilly 5
  • Evaluation taking much longer than the running epochs


    I've adapted this incredible repo to my custom environment and eventually managed to get it to train. It does take a very long time per epoch even with 10 dedicated cores due to the complexity of the environment.

    For the sake of curiosity, I reduced the epochs down to 10 so that I could see what happens once the running phase is over. The 10 epochs under my environment took approximately 32 minutes.

    From train.py, I saw that epochs run until nevals is larger than n_samples, which to my understanding effectively determines the total run time.


    Looking into the functions that follow, they should not have much complexity, so I would expect the evaluation to take about the same time as the running epochs, if not slightly longer. Unfortunately, it is taking much, much longer than that, and this is still all that I see: [screenshot]

    Have I done something wrong, or is this a bug? My config file is attached below as a screenshot.

    opened by robvasistha 4
  • Possible missing components or needing update cyfunc.pyx


    Dear Petersen,

    This import cannot work, since StateChecker and Polynomial are not defined in dso.library, so cython_execute may never be used. https://github.com/brendenpetersen/deep-symbolic-optimization/blob/8023df67b283b358df2a8798368cf1391ac42c8b/dso/dso/cyfunc.pyx#L10

    I noticed that this import is not present in the previous version of cyfunc.pyx. Could you check, please?

    opened by thw1021 4
  • How to select a specific set of operators for the training


    Hi there,

    I have recently found this nice library for symbolic regression. Being used to the sklearn interface, I was wondering whether (and how) it is possible to modify the config file so that only a few operators are chosen from the default set, which includes everything. Using the default set gives me decent results for my regression problem, but the best expression contains exp and trig terms, which I would like to avoid as much as possible since they make the results harder to interpret.

    Best, Sam

    opened by sambatra 4
  • Feature : Custom cost function


    Hello,

    Just wondering, how can we use a custom cost function?

    def mycost(formulae_str):
        cost = myfun(eval(formulae_str), ...)
        return cost

    For many problems, cost is very customized.

    opened by arita37 3
  • Error running PiecewiseFunction-1.json example, of Failed to import malformed source string: state_checker


    Hi Brendon and Team,

    Love the project! However, running PiecewiseFunction-1.json throws the following error:

    python -m dso.run ./config/examples/regression/PiecewiseFunction-1.json
    
    == EXPERIMENT SETUP START ===========
    Task type            : regression
    Dataset              : task/regression/data/PiecewiseFunction-1.csv
    Starting seed        : 0
    Runs                 : 1
    == EXPERIMENT SETUP END =============
    
    == TRAINING SEED 0 START ============
    Traceback (most recent call last):
      File "/home/sam/anaconda3/envs/sd2/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/sam/anaconda3/envs/sd2/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/sam/code/discovery/sd2/dso/dso/run.py", line 156, in <module>
        main()
      File "/home/sam/anaconda3/envs/sd2/lib/python3.7/site-packages/click/core.py", line 1128, in __call__
        return self.main(*args, **kwargs)
      File "/home/sam/anaconda3/envs/sd2/lib/python3.7/site-packages/click/core.py", line 1053, in main
        rv = self.invoke(ctx)
      File "/home/sam/anaconda3/envs/sd2/lib/python3.7/site-packages/click/core.py", line 1395, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/home/sam/anaconda3/envs/sd2/lib/python3.7/site-packages/click/core.py", line 754, in invoke
        return __callback(*args, **kwargs)
      File "/home/sam/code/discovery/sd2/dso/dso/run.py", line 139, in main
        result, summary_path = train_dso(config)
      File "/home/sam/code/discovery/sd2/dso/dso/run.py", line 33, in train_dso
        result = model.train()
      File "/home/sam/code/discovery/sd2/dso/dso/core.py", line 74, in train
        self.setup()
      File "/home/sam/code/discovery/sd2/dso/dso/core.py", line 67, in setup
        self.prior = self.make_prior()
      File "/home/sam/code/discovery/sd2/dso/dso/core.py", line 140, in make_prior
        prior = make_prior(Program.library, self.config_prior)
      File "/home/sam/code/discovery/sd2/dso/dso/prior.py", line 43, in make_prior
        prior_class = import_custom_source(prior_type)
      File "/home/sam/code/discovery/sd2/dso/dso/utils.py", line 214, in import_custom_source
        assert m is not None and m.end() == len(import_source), "*** Failed to import malformed source string: "+import_source
    AssertionError: *** Failed to import malformed source string: state_checker
    

    Could you let me know what I can do to fix this, or point me in the right direction? It seems it is trying to find a custom prior called state_checker that cannot be found and then errors out, but I could be incorrect here.

    Thank you so much, really excited to use DSO for our current research applied projects,

    OS: Ubuntu 20.04 LTS ("Focal Fossa")

    Output of pip freeze is:

    absl-py @ file:///home/conda/feedstock_root/build_artifacts/absl-py_1637088766493/work
    astor @ file:///home/conda/feedstock_root/build_artifacts/astor_1593610464257/work
    atari-py==0.2.9
    attrs==21.4.0
    box2d-py==2.3.8
    cached-property @ file:///home/conda/feedstock_root/build_artifacts/cached_property_1615209429212/work
    certifi==2021.10.8
    click==8.0.4
    cloudpickle==1.2.2
    commentjson==0.9.0
    cycler==0.11.0
    Cython==0.29.28
    deap==1.3.1
    dill==0.3.4
    -e git+ssh://[email protected]/samholt/[email protected]#egg=dso&subdirectory=dso
    fonttools==4.31.2
    future==0.18.2
    gast @ file:///home/conda/feedstock_root/build_artifacts/gast_1636964356021/work
    google-pasta==0.2.0
    grpcio @ file:///home/conda/feedstock_root/build_artifacts/grpcio_1624380491840/work
    gym==0.15.4
    h5py @ file:///home/conda/feedstock_root/build_artifacts/h5py_1624405626125/work
    importlib-metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1647210388949/work
    iniconfig==1.1.1
    joblib==1.1.0
    Keras-Applications==1.0.8
    Keras-Preprocessing @ file:///home/conda/feedstock_root/build_artifacts/keras-preprocessing_1610713559828/work
    kiwisolver==1.4.0
    lark-parser==0.7.8
    llvmlite==0.36.0
    Markdown @ file:///home/conda/feedstock_root/build_artifacts/markdown_1637220118004/work
    matplotlib==3.5.1
    mpi4py==3.1.3
    mpmath==1.2.1
    multiprocess==0.70.12.2
    numba==0.53.1
    numpy==1.19.0
    opencv-python==4.5.5.64
    packaging==21.3
    pandas==1.3.5
    pathos==0.2.8
    Pillow==9.0.1
    pluggy==1.0.0
    pox==0.3.0
    ppft==1.6.6.4
    progress==1.6
    protobuf==3.17.2
    py==1.11.0
    pybullet==3.2.1
    pyglet==1.3.2
    pyparsing==3.0.7
    pytest==7.1.1
    python-dateutil==2.8.2
    pytz==2022.1
    PyYAML==6.0
    scikit-learn==1.0.2
    scipy @ file:///home/conda/feedstock_root/build_artifacts/scipy_1626684342480/work
    seaborn==0.11.2
    six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
    stable-baselines==2.10.0
    sympy==1.10.1
    tensorboard==1.14.0
    tensorflow @ file:///home/conda/feedstock_root/build_artifacts/tensorflow_1594833314895/work/tensorflow_pkg/tensorflow-1.14.0-cp37-cp37m-linux_x86_64.whl
    tensorflow-estimator==1.14.0
    termcolor==1.1.0
    threadpoolctl==3.1.0
    tomli==2.0.1
    tqdm==4.63.0
    typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1644850595256/work
    Werkzeug @ file:///home/conda/feedstock_root/build_artifacts/werkzeug_1644332431572/work
    wrapt @ file:///home/conda/feedstock_root/build_artifacts/wrapt_1610094880759/work
    zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1643828507773/work
    

    Any help is very much appreciated, thank you,

    All the best, Sam

    opened by samholt 3
  • Trouble defining a Log Directory


    Hi Brenden,

    I am having trouble setting a log directory for the results to be saved to. I've set it in my config.json similarly to the config_common.json file you have in your repository. I have a directory named ./log, but for some reason it keeps giving me the same error: WARNING: logdir not provided, results will not be saved to file. Could you specify exactly where logdir needs to be set so I can check where the issue is? For reference, I am running DSO from a Python script (i.e., similarly to the scipy interface).

    opened by Sean-Reilly 3
  • Bump tensorflow from 1.14 to 2.4.0


    Bumps tensorflow from 1.14 to 2.4.0.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.4.0

    Release 2.4.0

    Major Features and Improvements

    • tf.distribute introduces experimental support for asynchronous training of models via the tf.distribute.experimental.ParameterServerStrategy API. Please see the tutorial to learn more.

    • MultiWorkerMirroredStrategy is now a stable API and is no longer considered experimental. Some of the major improvements involve handling peer failure and many bug fixes. Please check out the detailed tutorial on Multi-worker training with Keras.

    • Introduces experimental support for a new module named tf.experimental.numpy which is a NumPy-compatible API for writing TF programs. See the detailed guide to learn more. Additional details below.

    • Adds Support for TensorFloat-32 on Ampere based GPUs. TensorFloat-32, or TF32 for short, is a math mode for NVIDIA Ampere based GPUs and is enabled by default.

    • A major refactoring of the internals of the Keras Functional API has been completed, that should improve the reliability, stability, and performance of constructing Functional models.

    • Keras mixed precision API tf.keras.mixed_precision is no longer experimental and allows the use of 16-bit floating point formats during training, improving performance by up to 3x on GPUs and 60% on TPUs. Please see below for additional details.

    • TensorFlow Profiler now supports profiling MultiWorkerMirroredStrategy and tracing multiple workers using the sampling mode API.

    • TFLite Profiler for Android is available. See the detailed guide to learn more.

    • TensorFlow pip packages are now built with CUDA11 and cuDNN 8.0.2.

    Breaking Changes

    • TF Core:

      • Certain float32 ops run in lower precision on Ampere based GPUs, including matmuls and convolutions, due to the use of TensorFloat-32. Specifically, inputs to such ops are rounded from 23 bits of precision to 10 bits of precision. This is unlikely to cause issues in practice for deep learning models. In some cases, TensorFloat-32 is also used for complex64 ops. TensorFloat-32 can be disabled by running tf.config.experimental.enable_tensor_float_32_execution(False).
      • The byte layout for string tensors across the C-API has been updated to match TF Core/C++; i.e., a contiguous array of tensorflow::tstring/TF_TStrings.
      • C-API functions TF_StringDecode, TF_StringEncode, and TF_StringEncodedSize are no longer relevant and have been removed; see core/platform/ctstring.h for string access/modification in C.
      • tensorflow.python, tensorflow.core and tensorflow.compiler modules are now hidden. These modules are not part of TensorFlow public API.
      • tf.raw_ops.Max and tf.raw_ops.Min no longer accept inputs of type tf.complex64 or tf.complex128, because the behavior of these ops is not well defined for complex types.
      • XLA:CPU and XLA:GPU devices are no longer registered by default. Use TF_XLA_FLAGS=--tf_xla_enable_xla_devices if you really need them, but this flag will eventually be removed in subsequent releases.
    • tf.keras:

      • The steps_per_execution argument in model.compile() is no longer experimental; if you were passing experimental_steps_per_execution, rename it to steps_per_execution in your code. This argument controls the number of batches to run during each tf.function call when calling model.fit(). Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead.
      • A major refactoring of the internals of the Keras Functional API may affect code that is relying on certain internal details:
        • Code that uses isinstance(x, tf.Tensor) instead of tf.is_tensor when checking Keras symbolic inputs/outputs should switch to using tf.is_tensor.
        • Code that is overly dependent on the exact names attached to symbolic tensors (e.g. assumes there will be ":0" at the end of the inputs, treats names as unique identifiers instead of using tensor.ref(), etc.) may break.
        • Code that uses full path for get_concrete_function to trace Keras symbolic inputs directly should switch to building matching tf.TensorSpecs directly and tracing the TensorSpec objects.
        • Code that relies on the exact number and names of the op layers that TensorFlow operations were converted into may have changed.
        • Code that uses tf.map_fn/tf.cond/tf.while_loop/control flow as op layers and happens to work before TF 2.4. These will explicitly be unsupported now. Converting these ops to Functional API op layers was unreliable before TF 2.4, and prone to erroring incomprehensibly or being silently buggy.
        • Code that directly asserts on a Keras symbolic value in cases where ops like tf.rank used to return a static or symbolic value depending on if the input had a fully static shape or not. Now these ops always return symbolic values.
        • Code already susceptible to leaking tensors outside of graphs becomes slightly more likely to do so now.
        • Code that tries directly getting gradients with respect to symbolic Keras inputs/outputs. Use GradientTape on the actual Tensors passed to the already-constructed model instead.
        • Code that requires very tricky shape manipulation via converted op layers in order to work, where the Keras symbolic shape inference proves insufficient.
        • Code that tries manually walking a tf.keras.Model layer by layer and assumes layers only ever have one positional argument. This assumption doesn't hold true before TF 2.4 either, but is more likely to cause issues now.

    ... (truncated)


    dependencies 
    opened by dependabot[bot] 3
  • An unexpected keyword argument 'optimize'


    Thanks for your excellent work. I have a question about an argument: when I fixed the action of the first dimension as a symbolic policy in the "LunarLanderContinuous-v2" environment, the program reported an error: TypeError: from_str_tokens() got an unexpected keyword argument 'optimize' in control.py, line 193. It ran successfully with "optimize" removed. However, the results cannot reach those in the paper (I get r_avg_test lower than 238 many times, while the paper reports 251.66). So I'm wondering how to run it successfully without removing "optimize" and reproduce the results in the paper.

    Here is my config file:

    // This example contains the tuned entropy_weight and entropy_gamma
    // hyperparameters used to solve LunarLanderContinuous-v2
    {
      "task" : {
        "task_type" : "control",
        "env" : "LunarLanderContinuous-v2",
        "action_spec" : [["exp", "cos", "exp", "mul", "div", "add", "sub", "add", "add", "add", "exp",
                          "add", "add", "add", "add", "x2", "x4", "x4", "5.0", "x4", "1.0", "x5", "x4", "x4",
                          "5.0", "x4", "x4"], null]
      },
      "training" : {
        // Recommended to set this to as many cores as you can use!
        "n_cores_batch" : 16
      },
      "controller" : {
        "entropy_weight" : 0.02,
        "entropy_gamma" : 0.85
      }
    }

    opened by 1176358062 0
  • Import ABC from collections.abc for Python 3.10 compatibility.


    Importing ABC from collections was deprecated since Python 3.4 and removed in Python 3.10. Use collections for Python 2 and collections.abc for Python 3.10

    opened by tirkarthi 0