Koopman operator identification library in Python

Overview

pykoop

pykoop is a Koopman operator identification library written in Python. It allows the user to specify Koopman lifting functions and regressors in order to learn a linear model of a given system in the lifted space.

pykoop places heavy emphasis on modular lifting function construction and scikit-learn compatibility. The library aims to make it easy to automatically find good lifting functions and regressor hyperparameters by leveraging scikit-learn's existing cross-validation infrastructure. pykoop also gracefully handles control inputs and multi-episode datasets at every stage of the pipeline.

pykoop also includes several experimental regressors from [1] and [2] that use linear matrix inequalities to regularize or constrain the Koopman matrix.

Example

Consider Tikhonov-regularized EDMD with polynomial lifting functions applied to mass-spring-damper data. Using pykoop, this can be implemented as:

import pykoop
from sklearn.preprocessing import MaxAbsScaler, StandardScaler

# Get sample mass-spring-damper data
X_msd = pykoop.example_data_msd()

# Create pipeline
kp = pykoop.KoopmanPipeline(
    lifting_functions=[
        ('ma', pykoop.SkLearnLiftingFn(MaxAbsScaler())),
        ('pl', pykoop.PolynomialLiftingFn(order=2)),
        ('ss', pykoop.SkLearnLiftingFn(StandardScaler())),
    ],
    regressor=pykoop.Edmd(alpha=0.1),
)

# Fit the pipeline
kp.fit(X_msd, n_inputs=1, episode_feature=True)

# Predict using the pipeline
X_pred = kp.predict_multistep(X_msd)

# Score using the pipeline
score = kp.score(X_msd)
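
Because the pipeline is a scikit-learn estimator, its hyperparameters can be tuned with scikit-learn's usual model-selection tools. The snippet below is a rough sketch rather than a documented example: it tunes only the regressor's alpha with the default scorer, and a plain k-fold split may cut across episodes.

from sklearn.model_selection import GridSearchCV

import pykoop

# Get sample mass-spring-damper data (as in the example above)
X_msd = pykoop.example_data_msd()

# Create a simple pipeline
kp = pykoop.KoopmanPipeline(
    lifting_functions=[('pl', pykoop.PolynomialLiftingFn(order=2))],
    regressor=pykoop.Edmd(alpha=0.1),
)

# Search over the Tikhonov regularization coefficient using standard
# scikit-learn nested-parameter syntax. Note that a plain k-fold split cuts
# across episodes; an episode-aware splitter may be preferable in practice.
gs = GridSearchCV(
    kp,
    param_grid={'regressor__alpha': [0.0, 0.1, 1.0]},
    cv=3,
)

# Fit parameters are forwarded to KoopmanPipeline.fit()
gs.fit(X_msd, n_inputs=1, episode_feature=True)
print(gs.best_params_)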

Library layout

Most of the required classes and functions have been imported into the pykoop namespace. The most important object is the KoopmanPipeline, which requires a list of lifting functions and a regressor.

Some example lifting functions are

  • PolynomialLiftingFn,
  • DelayLiftingFn, and
  • BilinearInputLiftingFn.

scikit-learn preprocessors can be wrapped into lifting functions using SkLearnLiftingFn. States and inputs can be lifted independently using SplitPipeline. This is useful to avoid lifting inputs.
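
As a rough sketch (the lifting_functions_state and lifting_functions_input parameter names are taken from the pykoop documentation; check them against your installed version), a split pipeline that polynomially lifts the states while passing the inputs through unlifted might look like:

import pykoop

# Lift only the states; pass the inputs through unmodified
kp = pykoop.KoopmanPipeline(
    lifting_functions=[
        ('sp', pykoop.SplitPipeline(
            lifting_functions_state=[
                ('pl', pykoop.PolynomialLiftingFn(order=2)),
            ],
            lifting_functions_input=None,
        )),
    ],
    regressor=pykoop.Edmd(),
)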

Some basic regressors included are

  • Edmd (includes Tikhonov regularization),
  • Dmdc, and
  • Dmd.

More advanced (and experimental) LMI-based regressors are included in the pykoop.lmi_regressors namespace. They allow for different kinds of regularization as well as hard constraints on the Koopman operator.
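
As an illustrative sketch only (class and parameter names should be verified against the pykoop.lmi_regressors documentation, and an LMI solver such as Mosek must be installed), a regressor that constrains the spectral radius of the Koopman matrix might be used like this:

import pykoop
import pykoop.lmi_regressors

# Hedged sketch: constrain the spectral radius of the identified Koopman
# matrix. The class and parameter names are assumptions based on the docs.
kp = pykoop.KoopmanPipeline(
    lifting_functions=[('pl', pykoop.PolynomialLiftingFn(order=2))],
    regressor=pykoop.lmi_regressors.LmiEdmdSpectralRadiusConstr(
        spectral_radius=0.99),
)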

You can roll your own lifting functions and regressors by inheriting from KoopmanLiftingFn, EpisodeIndependentLiftingFn, EpisodeDependentLiftingFn, and KoopmanRegressor.

Some sample dynamic models are also included in the pykoop.dynamic_models namespace.

Installation and testing

pykoop can be installed from PyPI using

$ pip install pykoop

Additional LMI solvers can be installed using

$ pip install mosek
$ pip install smcp

Mosek is recommended, but is nonfree and requires a license.

The library can be tested using

$ pip install -r requirements.txt
$ pytest

Note that pytest must be run from the repository's root directory.

To skip slow unit tests, including all doctests and examples, run

$ pytest ./tests -k "not slow"

The documentation can be compiled using

$ cd doc
$ make html

Related packages

Other excellent Python packages for learning dynamical systems exist, summarized in the table below:

Library     Unique features
----------  ------------------------------------------------------------------
pykoop      Modular lifting functions; full scikit-learn compatibility;
            built-in regularization; multi-episode datasets
pykoopman   Continuous-time Koopman operator identification; built-in
            numerical differentiation; detailed DMD outputs; DMDc with known
            control matrix
PyDMD       Extensive library containing pretty much every variant of DMD
PySINDy     Python implementation of the famous SINDy method; related to, but
            not the same as, Koopman operator approximation

References

[1] Steven Dahdah and James Richard Forbes. "Linear matrix inequality approaches to Koopman operator approximation." arXiv:2102.03613 [eess.SY] (2021). https://arxiv.org/abs/2102.03613
[2] Steven Dahdah and James Richard Forbes. "System norm regularization methods for Koopman operator approximation." arXiv:2110.09658 [eess.SY] (2021). https://arxiv.org/abs/2110.09658

Citation

If you use this software in your research, please cite it as below or see CITATION.cff.

@software{dahdah_pykoop_2021,
    title={{decarsg/pykoop}},
    doi={10.5281/zenodo.5576490},
    url={https://github.com/decarsg/pykoop},
    publisher={Zenodo},
    author={Steven Dahdah and James Richard Forbes},
    year={2021},
}

License

This project is distributed under the MIT License, except the contents of ./pykoop/_sklearn_metaestimators/, which are from the scikit-learn project, and are distributed under the BSD-3-Clause License.

Comments
  • Improve unit tests

    Some unit tests are slow and require Mosek, so they can't be run server-side. Only a subset of the tests is currently run. The unit tests need to be reorganized so that as many tests as possible can be run at each merge.

    opened by sdahdah 2
  • Refine release procedure

    Resolves #121

    Proposed Changes

    • Remove date from CITATION.cff
    • Add GitHub action to check version consistency

    Checklist

    • [x] Write unit tests
    • [x] Add new estimators to existing scikit-learn compatibility tests
    • [x] Write examples in docstrings
    • [x] Update Sphinx documentation
    • [x] Bump version number and date in setup.py, CITATION.cff, and README.rst
    documentation 
    opened by sdahdah 1
  • Refine release procedure

    I forgot to bump the version in the source code from v1.1.0 to v1.1.1. PyPI rejected the release, but Zenodo accepted it. When fixing this, PyPI accepted the new package, but Zenodo rejected it. So I had to draft v1.1.2 to make everything consistent again.

    I need to look into an automated check to prevent this from happening. If not that, then at least a written checklist to follow.

    documentation 
    opened by sdahdah 1
  • Automate plotting

    Fixes #73

    Proposed Changes

    • Implements new functions for quick-and-dirty plots:
      • KoopmanLiftingFn.plot_lifted_trajectory()
      • KoopmanRegressor.plot_bode()
      • KoopmanRegressor.plot_eigenvalues()
      • KoopmanRegressor.plot_koopman_matrix()
      • KoopmanRegressor.plot_svd()
      • KoopmanPipeline.plot_predicted_trajectory()
      • KoopmanPipeline.plot_bode()
      • KoopmanPipeline.plot_eigenvalues()
      • KoopmanPipeline.plot_koopman_matrix()
      • KoopmanPipeline.plot_svd()
    • Allows predict_trajectory() to be called with initial condition X0 and input U, or a full data matrix X.
    enhancement 
    opened by sdahdah 1
  • Add option to skip rescaling of original states

    Scale only lifted states without rescaling original states? Maybe the PolynomialLiftingFn should handle its own scaling.

    Either way, it's becoming clearer that normalizing and standardizing should be done outside of the KoopmanPipeline, or within each lifting function as needed. But having a "normalizing" lifting function is a bit awkward, since we want to keep the original state inside the lifted state unmodified.

    enhancement wontfix 
    opened by sdahdah 1
  • `score_trajectory()` sometimes returns `inf` instead of error score

    Resolves #126

    Proposed Changes

    • Add a check to the output of score_trajectory() in case the score becomes NaN or inf during its calculation.

    Checklist

    • [x] Write unit tests
    • [x] Add new estimators to existing scikit-learn compatibility tests
    • [x] Write examples in docstrings
    • [x] Update Sphinx documentation
    • [x] Bump version number and date in setup.py, CITATION.cff, and README.rst
    bug 
    opened by sdahdah 0
  • `KoopmanLiftingFn.transform()` does not check if output is finite

    KoopmanLiftingFn.transform() can return non-finite outputs. I'm not sure if I should add a check before returning the transformed values, or if I should check for this elsewhere. Warnings are already raised by NumPy, so I think it's better to leave it alone for the time being.

    bug wontfix 
    opened by sdahdah 0
  • `score_trajectory()` sometimes returns `-inf` instead of `error_score`

    Expected Behavior

    pykoop.score_trajectory() should always return a finite float (no np.inf or np.nan), return the error_score, or raise a ValueError.

    Actual Behavior

    If the inputs to pykoop.score_trajectory() are finite, but overflow during the calculation of the score, then the function will return -np.inf instead of error_score. This can cause external hyperparameter optimizers to crash.

    Steps to Reproduce the Problem

    import numpy as np
    
    import pykoop
    
    X_predicted = np.array([
        [1e-2, 1e-3],
    ]).T
    
    X_expected = np.array([
        [1e150, 1e250],
    ]).T
    
    score = pykoop.score_trajectory(X_predicted, X_expected, episode_feature=False)
    
    print(score)
    

    Specifications

    • Package version: 1.1.3
    • Python version: 3.10.9
    • Platform: Arch Linux
    bug 
    opened by sdahdah 0
  • Fix incorrect scoring with `NaN` entries

    Resolves #122

    Proposed Changes

    • Add error_score parameter to control behaviour of scorer when predictions diverge.

    Checklist

    • [x] Write unit tests
    • [x] Add new estimators to existing scikit-learn compatibility tests
    • [x] Write examples in docstrings
    • [x] Update Sphinx documentation
    • [x] Bump version number and date in setup.py, CITATION.cff, and README.rst
    bug 
    opened by sdahdah 0
  • Incorrect scoring with `NaN` entries

    Scoring does not work correctly when X has NaN entries. Example to reproduce:

    """Example of how to use the Koopman pipeline."""
    
    from sklearn.preprocessing import MaxAbsScaler, StandardScaler
    
    import pykoop
    import numpy as np
    
    
    def example_pipeline_simple() -> None:
        """Demonstrate how to use the Koopman pipeline."""
        # Get example mass-spring-damper data
        eg = pykoop.example_data_msd()
    
        # Create pipeline
        kp = pykoop.KoopmanPipeline(
            lifting_functions=[
                ('pl', pykoop.PolynomialLiftingFn(order=10)),
            ],
            regressor=pykoop.Edmd(alpha=0),
        )
    
        # Fit the pipeline
        kp.fit(
            eg['X_train'],
            n_inputs=eg['n_inputs'],
            episode_feature=eg['episode_feature'],
        )
    
        # Predict using the pipeline
        X_pred = kp.predict_trajectory(eg['x0_valid'], eg['u_valid'])
        print(np.any(np.isnan(X_pred)))
    
        # Score using the pipeline
        score = kp.score(eg['X_valid'])
        print(score)
    
    
    if __name__ == '__main__':
        example_pipeline_simple()
    

    Solution is to implement the scikit-learn convention:

    error_score: 'raise' or numeric, default=np.nan. Value to assign to the score if an error occurs in estimator fitting. If set to 'raise', the error is raised. If a numeric value is given, FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error.
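
    A minimal sketch of how that convention could be applied to the score (the helper name _guard_score is hypothetical, not part of pykoop):

    import numpy as np

    def _guard_score(score, error_score=np.nan):
        """Return error_score when the computed score is NaN or infinite.

        Hypothetical helper illustrating the scikit-learn error_score
        convention quoted above; not the actual pykoop implementation.
        """
        if not np.isfinite(score):
            return error_score
        return score

    print(_guard_score(-np.inf, error_score=-1e6))  # -1000000.0
    print(_guard_score(-3.2))                       # -3.2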

    bug 
    opened by sdahdah 0
  • Incorrect overflow handling in `predict_trajectory()` when `relift=False`

    Resolves #117

    Proposed Changes

    • Remove reference to X_ikm1

    Checklist

    • [x] Write unit tests
    • [x] Add new estimators to existing scikit-learn compatibility tests
    • [x] Write examples in docstrings
    • [x] Update Sphinx documentation
    bug 
    opened by sdahdah 0
  • Allow creation of a `KoopmanRegressor` object from a Koopman matrix

    Resolves #125

    Proposed Changes

    • Add DataRegressor, which accepts a Koopman matrix in the form of a NumPy array as input.

    Checklist

    • [x] Write unit tests
    • [x] Add new estimators to existing scikit-learn compatibility tests
    • [x] Write examples in docstrings
    • [x] Update Sphinx documentation
    • [x] Bump version number and date in setup.py, CITATION.cff, and README.rst
    enhancement 
    opened by sdahdah 0
  • Allow creation of a `KoopmanRegressor` object from a Koopman matrix

    Desired Behavior

    Given a Koopman matrix U, create a KoopmanRegressor object that functions as if it was fit with pykoop.

    Proposed Solution

    Along these lines:

    class DataRegressor(KoopmanRegressor):

        def __init__(self, U):
            self.U = U

        def fit(self, X, y=None):
            # Set the standard scikit-learn fit attributes, then use the
            # given Koopman matrix directly as the fit coefficients.
            self.n_features_in_ = X.shape[1]
            self.coef_ = self.U.copy()
            return self
    enhancement 
    opened by sdahdah 0
  • Implement Hermite/Lagrange/Legendre polynomial lifting functions

    Refactor PolynomialLiftingFn to support products of other polynomials instead of just monomials. For example, instead of x1^2 * x2, allow H2(x1) * H1(x2), where Hn is the nth Hermite polynomial.

    This can be achieved by removing the wrapped scikit-learn polynomial transformer and using a custom one with similar functionality, specifically its powers_ matrix.
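
    A rough sketch of the idea, using SciPy's Hermite polynomials and a powers_-style matrix (the hermite_lift function below is hypothetical, not part of pykoop):

    import numpy as np
    from scipy.special import eval_hermite  # physicists' Hermite polynomials

    def hermite_lift(X, powers):
        """Evaluate products of Hermite polynomials (hypothetical sketch).

        Each row of powers gives the Hermite degree applied to each state,
        mirroring the powers_ convention of the scikit-learn polynomial
        transformer: for monomials, the row [2, 1] means x1^2 * x2; here it
        means H2(x1) * H1(x2).
        """
        Xt = np.ones((X.shape[0], powers.shape[0]))
        for i, row in enumerate(powers):
            for j, p in enumerate(row):
                Xt[:, i] *= eval_hermite(int(p), X[:, j])
        return Xt

    # Features: [H1(x1), H1(x2), H2(x1), H1(x1)*H1(x2), H2(x2)]
    powers = np.array([[1, 0], [0, 1], [2, 0], [1, 1], [0, 2]])
    X = np.random.default_rng(0).normal(size=(5, 2))
    print(hermite_lift(X, powers).shape)  # (5, 5)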

    enhancement 
    opened by sdahdah 0
Releases (v1.1.3)
  • v1.1.3 (Dec 20, 2022)

    This release fixes a bug where diverging predictions were not scored correctly. It also adds the error_score parameter to KoopmanPipeline.make_scorer() and score_trajectory() to allow more fine-grained control over the behaviour.

    Full changelog: https://github.com/decargroup/pykoop/compare/v1.1.2...v1.1.3

    Bug fixes

    • Fix incorrect scoring with NaN entries (https://github.com/decargroup/pykoop/pull/123)
  • v1.1.2 (Dec 17, 2022)

    This release exists because I forgot to bump the version number in the source code when releasing v1.1.1. Sorry!

    Full changelog: https://github.com/decargroup/pykoop/compare/v1.1.1...v1.1.2

  • v1.1.1 (Dec 17, 2022)

    This release fixes two bugs in KoopmanPipeline.predict_trajectory() and lowers the setup.py minimum Python version to 3.7 for Binder. However, 3.8 is still the lowest officially supported version.

    Full changelog: https://github.com/decargroup/pykoop/compare/v1.1.0...v1.1.1

    Bug fixes

    • Fixed incorrect overflow handling in predict_trajectory() when relift=False (https://github.com/decargroup/pykoop/pull/118)
    • Fixed bug where predict_trajectory() did not account for episode feature if U=None (https://github.com/decargroup/pykoop/pull/116)
    • Lowered required Python version in setup.py so Binder would work again (https://github.com/decargroup/pykoop/pull/114)
  • v1.1.0 (Dec 15, 2022)

    This release features two new types of lifting functions: radial basis functions and random Fourier features. See the documentation for examples, or check them out on Binder!

    You can now also use almost any scikit-learn regressor as a backend for EDMD with EdmdMeta. You can find a cool example of sparse regression with the lasso in the documentation.

    Finally, two quality-of-life changes are introduced in this update. You can access your lifting function feature names with KoopmanLiftingFn.get_feature_names_out(), and you can quickly plot Koopman predictions and Koopman operator properties with a bunch of plot_*() methods scattered throughout the library. See below for more details.
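
    For instance, a fit pipeline might be inspected as sketched below. Method names are taken from the feature list that follows; the exact signatures and return values are assumptions, so check the documentation before relying on them.

    import pykoop

    # Load example data and fit a simple pipeline
    eg = pykoop.example_data_msd()
    kp = pykoop.KoopmanPipeline(
        lifting_functions=[('pl', pykoop.PolynomialLiftingFn(order=2))],
        regressor=pykoop.Edmd(alpha=0.1),
    )
    kp.fit(
        eg['X_train'],
        n_inputs=eg['n_inputs'],
        episode_feature=eg['episode_feature'],
    )

    # Quick-look plots (assumed to return Matplotlib figures)
    kp.plot_predicted_trajectory(eg['X_valid'])  # predicted vs. true states
    kp.plot_eigenvalues()                        # Koopman eigenvalues
    kp.plot_koopman_matrix()                     # Koopman matrix entries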

    Note that in this release, we are dropping official Python 3.7 support, though almost all features should still work.

    Full changelog: https://github.com/decargroup/pykoop/compare/v1.0.5...v1.1.0

    New features

    • Added radial basis function (RBF) lifting functions in RbfLiftingFn, along with several ways to choose centers (https://github.com/decargroup/pykoop/pull/103)
    • Added random Fourier feature (RFF) lifting functions in KernelApproxLiftingFn, along with other kernel approximations (https://github.com/decargroup/pykoop/pull/110)
    • Added constant lifting function in ConstantLiftingFn (https://github.com/decargroup/pykoop/pull/85)
    • Added support for scikit-learn linear regressors in EdmdMeta (https://github.com/decargroup/pykoop/pull/92)
    • Added support for feature name tracking as strings in KoopmanLiftingFn.get_feature_names_in() and KoopmanLiftingFn.get_feature_names_out(). If you pass in a pandas.DataFrame, then pykoop can take the feature names from there (https://github.com/decargroup/pykoop/pull/75)
    • Added easy plotting helpers in
      • KoopmanLiftingFn.plot_lifted_trajectory(),
      • KoopmanRegressor.plot_bode(),
      • KoopmanRegressor.plot_eigenvalues(),
      • KoopmanRegressor.plot_koopman_matrix(),
      • KoopmanRegressor.plot_svd(),
      • KoopmanPipeline.plot_predicted_trajectory(),
      • KoopmanPipeline.plot_bode(),
      • KoopmanPipeline.plot_eigenvalues(),
      • KoopmanPipeline.plot_koopman_matrix(), and
      • KoopmanPipeline.plot_svd() (https://github.com/decargroup/pykoop/pull/83)
    • Added example_data_pendulum() and example_data_duffing().

    Bug fixes

    • Fixed bug where predict_trajectory() indexing was wrong when relift_state=False (https://github.com/decargroup/pykoop/pull/112)
    • Fixed Binder package versions (https://github.com/decargroup/pykoop/pull/108)
  • v1.0.5 (Sep 6, 2022)

    This release features two quality of life improvements: a better lifting function interface for use outside of scikit-learn, and improved trajectory prediction functionality. More importantly, the docs have been reorganized, the unit tests are no longer a mess, and some Jupyter notebook examples have been added to binder.

    Full changelog: https://github.com/decarsg/pykoop/compare/v1.0.4...v1.0.5

    New features

    • Added lift(), lift_state(), lift_input(), retract(), retract_state(), and retract_input() helper methods to KoopmanPipeline and all Koopman lifting functions. These functions provide a more convenient way to use a fit Koopman model outside of scikit-learn (e.g. in control applications) (https://github.com/decarsg/pykoop/pull/61)
    • Added predict_trajectory() as a replacement for predict_multistep(), which is now deprecated. This new function provides a more convenient interface for use outside of scikit-learn, and also supports global Koopman predictions, where states are not retracted and re-lifted between timesteps (https://github.com/decarsg/pykoop/pull/65).
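
    A hedged sketch of the new interface, based on the usage shown in the issues above (the argument order and the relift_state flag are assumptions to be checked against the documentation):

    import pykoop

    # Fit a simple pipeline on the example mass-spring-damper data
    eg = pykoop.example_data_msd()
    kp = pykoop.KoopmanPipeline(
        lifting_functions=[('pl', pykoop.PolynomialLiftingFn(order=2))],
        regressor=pykoop.Edmd(alpha=0.1),
    )
    kp.fit(eg['X_train'], n_inputs=eg['n_inputs'], episode_feature=eg['episode_feature'])

    # Local prediction: retract and re-lift the state at every timestep
    X_local = kp.predict_trajectory(eg['x0_valid'], eg['u_valid'])
    # Global prediction: propagate entirely in the lifted space
    X_global = kp.predict_trajectory(eg['x0_valid'], eg['u_valid'], relift_state=False)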

    Enhancements

    • Overhauled organization of Sphinx docs (https://github.com/decarsg/pykoop/pull/66)
    • Updated examples and added Binder links to Jupyter notebooks (https://github.com/decarsg/pykoop/pull/70, https://github.com/decarsg/pykoop/pull/71).
    • Refactored unit tests, and enabled remote testing and doctests in CI (https://github.com/decarsg/pykoop/pull/67).

    Bug fixes

    • Fixed a serious bug in predict_multistep() where only the first episode was scored (https://github.com/decarsg/pykoop/pull/65).
    • Allowed force quitting LMI regressor using ^C twice (https://github.com/decarsg/pykoop/pull/54).
    • Stopped doctests from failing due to floating point comparisons (https://github.com/decarsg/pykoop/pull/58).
  • v1.0.4 (Nov 9, 2021)

  • v1.0.3 (Oct 19, 2021)

  • v1.0.2 (Oct 19, 2021)

  • v1.0.1 (Oct 18, 2021)

Owner

Dynamics Estimation Control of Aerospace and Robotic (DECAR) Systems Group