sktime companion package for deep learning based on TensorFlow

Overview

NOTE: sktime-dl is currently being updated to work correctly with sktime 0.6 and will be fully relaunched over the summer. The plan is:

  1. Refactor and update classifiers (documentation etc)
  2. Import pytorch, add a pytorch classifier
  3. Add a forecasting module
  4. Review literature on the latest dl classifiers, assimilate and evaluate any worth including
  5. Update devops so it exactly mirrors sktime

Done:

  1. Update it to be compliant with sktime 0.6 (currently works with sktime 0.4)


sktime-dl

An extension package for deep learning with TensorFlow/Keras for sktime, a scikit-learn compatible Python toolbox for learning with time series and panel data.

sktime-dl is under development and currently in a state of flux.

Overview

A repository for off-the-shelf networks

The aim is to define Keras networks that can be used directly within sktime and its pipelining and strategy tools, and by extension scikit-learn, for applications and research. Over time, we wish to interface with or implement a wide range of networks from the literature in the context of time series analysis.

Classification

Currently, we interface with a number of networks for time series classification in particular. A large part of the current toolset serves as an interface to dl-4-tsc, and implements the following network architectures (a minimal usage sketch follows the lists below):

  • Time convolutional neural network (CNN)
  • Encoder (Encoder)
  • Fully convolutional neural network (FCNN)
  • Multi channel deep convolutional neural network (MCDCNN)
  • Multi-scale convolutional neural network (MCNN)
  • Multi layer perceptron (MLP)
  • Residual network (ResNet)
  • Time Le-Net (TLeNet)
  • Time warping invariant echo state network (TWIESN)

We also interface with InceptionTime, as of writing the strongest deep learning approach to general time series classification.

  • Inception network, singular.
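As a rough usage sketch (assuming the import path and dataset loader that appear in the issue reports further down; exact module paths have moved between sktime-dl versions), the classifiers follow the usual scikit-learn fit/predict interface on sktime's nested-DataFrame panel data:

from sklearn.metrics import accuracy_score
from sktime.datasets import load_basic_motions
from sktime_dl.classification import InceptionTimeClassifier  # path used in sktime_dl 0.2.0

# Load an example panel dataset shipped with sktime (nested-DataFrame format)
X_train, y_train = load_basic_motions(split="train", return_X_y=True)
X_test, y_test = load_basic_motions(split="test", return_X_y=True)

clf = InceptionTimeClassifier(nb_epochs=10)  # small epoch count, for illustration only
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))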

Regression

Most of the classifier architectures have been adapted to also provide regressors (see the sketch after the list). These are:

  • Time convolutional neural network (CNN)
  • Encoder (Encoder)
  • Fully convolutional neural network (FCNN)
  • Multi layer perceptron (MLP)
  • Residual network (ResNet)
  • Time Le-Net (TLeNet)
  • InceptionTime (Inception)
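A similar sketch for regression, assuming a CNNRegressor class exposed under sktime_dl.deeplearning (the class name and parameters here are assumptions; test_regressors.py in the repository shows the actual interface):

import numpy as np
from sktime.datasets import load_basic_motions
from sktime_dl.deeplearning import CNNRegressor  # class name assumed

# Use the example panel data with a toy continuous target, just to show the interface
X, _ = load_basic_motions(split="train", return_X_y=True)
y = np.random.default_rng(0).normal(size=len(X))

reg = CNNRegressor(nb_epochs=10)
reg.fit(X, y)
y_pred = reg.predict(X)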

Forecasting

The regression networks can also be used to perform time series forecasting via sktime's reduction strategies.

We aim to incorporate bespoke forecasting networks in future updates, both specific architectures and general RNNs/LSTMs.
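A rough sketch of reduction-based forecasting, assuming sktime's ReducedRegressionForecaster and an MLPRegressor from sktime-dl (both names and parameters are assumptions, and the reduction API has changed across sktime versions):

import numpy as np
import pandas as pd
from sktime.forecasting.compose import ReducedRegressionForecaster
from sktime_dl.deeplearning import MLPRegressor  # class name assumed

y = pd.Series(np.sin(np.arange(200) / 10.0))  # toy univariate series

# Reduce forecasting to tabular regression over sliding windows of length 20
forecaster = ReducedRegressionForecaster(
    regressor=MLPRegressor(nb_epochs=10),
    window_length=20,
    strategy="recursive",
)
forecaster.fit(y)
y_pred = forecaster.predict(fh=np.arange(1, 11))  # forecast the next 10 steps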

Meta-functionality

  • Hyper-parameter tuning (currently through calls to scikit-learn's GridSearchCV and RandomizedSearchCV tools)
  • Ensembling methods (over different random initialisations for stability)

These act as wrappers to networks, and can be used in high-level and experimental pipelines as with any sktime model.
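For example, a sketch of grid-search tuning through scikit-learn's GridSearchCV (the parameter names in the grid are assumptions about the classifier's constructor):

from sklearn.model_selection import GridSearchCV
from sktime.datasets import load_basic_motions
from sktime_dl.classification import InceptionTimeClassifier  # path assumed, as above

X, y = load_basic_motions(split="train", return_X_y=True)

# Parameter names below are illustrative; check the estimator's get_params()
param_grid = {"nb_epochs": [10, 20], "batch_size": [16, 32]}
search = GridSearchCV(InceptionTimeClassifier(), param_grid, cv=3, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)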

Documentation

sktime-dl is an extension package to sktime, primarily introducing different learning algorithms. All examples and documentation on higher-level functionality and usage from the base sktime package apply to this package.

Documentation specifically for sktime-dl will be produced in due course.

Example notebooks for sktime-dl usage can be found under the examples folder.

Contributors

Former and current active contributors are as follows:

James Large (@James-Large, @jammylarge, [email protected]), Aaron Bostrom (@ABostrom), Hassan Ismail Fawaz (@hfawaz), Markus Löning (@mloning), @Withington

Installation

The simplest installation method is to install in a new environment via pip:

# if using anaconda for environment management
conda create -n sktime-dl python=3.6
conda activate sktime-dl

# if using virtualenv for environment management
virtualenv sktime-dl
source sktime-dl/bin/activate      # unix
sktime-dl\Scripts\activate         # windows

pip install sktime-dl

sktime-dl is under development. To ensure that you're using the most up to date code, you can instead install the development version in your environment:

git clone https://github.com/sktime/sktime-dl.git
cd sktime-dl
git checkout dev
git pull origin dev
pip install .

When installing sktime-dl from scratch, the latest stable version of TensorFlow 2.x will be installed. TensorFlow 1.x (version 1.9 or later) is also supported, if you have an existing installation in your environment that you wish to keep.

Users with TensorFlow versions older than 2.1.0 will also need to install keras-contrib after installing sktime-dl, using the installation instructions for tf.keras.

Using GPUs

With the above instructions, the networks can be run out of the box on your CPU. If you wish to run the networks on an NVIDIA® GPU, you can:

  • use Docker (see below)

or

  • install extra drivers and toolkits (GPU drivers, the CUDA Toolkit, and the cuDNN library). See this page for links and instructions, and this page for a list of tested version compatibilities.

Docker

Follow TensorFlow's instructions to install Docker and nvidia-docker (Linux only).

Build the sktime-dl Docker image:

cd sktime-dl
docker build -t sktime_dl .

Run a container with GPU support using the image:

docker run --gpus all --rm -it sktime_dl:latest

Run all the tests with:

pytest -v --cov=sktime_dl

or exclude the long-running tests with:

pytest -v -m="not slow" --cov=sktime_dl --pyargs sktime_dl

CPU

To run this Docker container on CPU, replace the above docker run command with:

docker run --rm -it sktime_dl:latest
Comments
  • Generalisation to regression

    Generalisation to regression

    Reference Issues/PRs

    Fixes #18

    What does this implement/fix? Explain your changes.

    Draft pull request: do not merge. Please review the structure of the new regressors and networks folders and classes. These enable regressors and classifiers to be built using the same code to define the main part of their networks.

    Any other comments?

    • Implemented MLP and CNN to demonstrate the structure. See test_regressors.py for tests on the regressors and on using them for forecasting.
    • Added utils folder for common functionality, e.g. save_trained_model

    I look forward to your feedback.

    opened by Withington 27
  • Docker examples

    Docker examples

    Reference Issues/PRs

    Fixes #27 and #26.

    What does this implement/fix? Explain your changes.

    Dockerfile added. Jupyter notebook time_series_classification added.

    README.rst provides instructions for building the Docker image and then using it to either run the tests or to install the local sktime-dl project and then launch Jupyter.

    Any other comments?

    Draft pull request: I will add more text to the notebook, a regression and a multivariate classifier.

    This was the best way I found to enable local sktime-dl to be installed in the Jupyter environment. Even so, it only works for notebooks that are saved at the top folder level (hence the Dockerfile COPY examples/time_series_classification.ipynb .). Any better ways to do this?

    opened by Withington 21
  • [BUG] Problem with sktime-dl installation; cannot import e.g. sktime_dl.deeplearning

    [BUG] Problem with sktime-dl installation; cannot import e.g. sktime_dl.deeplearning

    Describe the bug

    I create a new Conda env: conda create -n sktime-dl python=3.6, then run pip install sktime-dl, but then when I try from sktime_dl.deeplearning import CNNClassifier, Python cannot find sktime_dl.deeplearning.

    I also tried to install the development version of sktime-dl, but the package cannot be installed as there are NumPy and h5py conflicts (TensorFlow requires older numpy and h5py). I also tried installing the required NumPy and h5py before running pip install ., but it doesn't help.

    To Reproduce

    conda create -n sktime-dl python=3.6
    conda activate sktime-dl
    pip install sktime-dl
    
    from sktime_dl.deeplearning import CNNClassifier
    

    Expected behavior

    I can run: from sktime_dl.deeplearning import CNNClassifier and I can run locally Jupyter Notebooks from examples.

    Additional context

    Versions
    Linux-4.19.128-microsoft-standard-x86_64-with-debian-buster-sid
    Python 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
    NumPy 1.18.5
    SciPy 1.5.4
    sktime 0.4.3
    sktime_dl 0.1.0

    bug 
    opened by saltazaur 19
  • LSTM

    LSTM

    Reference Issues/PRs

    Added a simple stacked LSTM regressor for forecasting #32

    Any other comments?

    I've put it as a work in progress because it will need modifications to work with the check_is_fitted PR #39.

    Classifier still to do.

    Are you happy with this choice of baseline LSTM? I'll put some suggestions for refs detailing more complex LSTMs in #32, we could add one of those too.

    opened by Withington 18
  • Dev

    Dev

    Long time no PR, getting this into master before working on new things. Proposing that this updates sktime-dl to 0.2.0

    See #14 for most of the changes.

    Major:

    • added the inception time TSC model (https://github.com/hfawaz/InceptionTime)
    • introduction of meta package on top level, housing generic ensemble and tuning classes and their tests
    • capability to save trained models added (back in, to some extent). Default off
    • readme semi-overhaul (somewhat preempting setup.py changes, see below)

    Also included are some updates to the setup.py and travis tests.

    opened by James-Large 13
  • Linting

    Linting

    Reference Issues/PRs

    #50

    What does this implement/fix? Explain your changes.

    Cleans up code and enforces PEP8 on Azure Pipelines using flake8 package

    Any other comments?

    • potential problem in the MCNN classifier with n_train_batch around line 219 in an if-statement (now commented out); the variable is unused, possibly a wrong name?
    • runs all tests, including "slow" ones
    • removed long comment in MCNN classifier, move to relevant issue if important
    • __all__ is required in __init__ files for linting to pass, otherwise unused variable error is raised
    • excludes experimental/ from linting for now
    • I recommend setting the IDE max line length to the PEP8 standard of 79 characters (I had to go through quite a few lines manually because PyCharm couldn't clean them up automatically)
    enhancement 
    opened by mloning 9
  • Update maint/build tools

    Update maint/build tools

    Reference issues

    #28

    Summary

    • Updates binder requirements
    • Updates GitHub PR/issue templates
    • Adds linting based on flake8 to check code quality
    • Adds CI on Azure DevOps
    • Adds testing/building for macOS and Windows
    • Adds CHANGELOG.md
    • Adds Makefile and script for maintenance tasks and release process
    • Reports coverage and test results on Azure DevOps project website

    Comments

    • For Python 3.6, we haven't released any built wheels for sktime and we need to build them too when we build sktime-dl, which is why we currently require Cython. With the new sktime release, we'll no longer need Cython as a build dependency.

    To do

    • [x] Fix issues with Python 3.6 with TensorFlow 1.9 (see below)
    • [ ] Enforce linting (you can see the report here or pip install and run flake8 in the root directory)
    opened by mloning 9
  • Compatibility with sktime v0.6.1

    Compatibility with sktime v0.6.1

    Reference Issues/PRs

    Fixes #74 , #76

    What does this implement/fix? Explain your changes.

    This PR fixes issues #74 and #76 by updating sktime references in the code to that of sktime v0.6.1. Hence this resolves compatibility issues with sktime v0.6.1, bringing sktime_dl up-to-date with the latest version of sktime. It also updates deprecated Keras code to resolve some of the build failures on running Pytest.

    Tested by:

    conda activate venv
    git clone https://github.com/sktime/sktime-dl.git
    cd sktime-dl
    pip install  .
    python
    >> from sktime_dl.deeplearning import CNNClassifier
    

    After this I ran pytest in the root directory and confirmed that no tests fail due to incompatibility with sktime. The remaining failures are associated with version incompatibilities between Keras, TensorFlow, and NumPy, and I debugged them by updating deprecated Keras code.

    opened by Riyabelle25 8
  • Match sktime-dl libs name with official sktime repo

    Match sktime-dl libs name with official sktime repo

    What does this implement/fix? Explain your changes.

    Modifies the names of sktime functions imported by sktime-dl to match the official sktime repo.

    Does your contribution introduce a new dependency? If yes, which one?

    If your contribution does add a new dependency, we may suggest to initially develop your contribution in a separate companion package in https://github.com/sktime/ to keep external dependencies of the core sktime package to a minimum.

    opened by wangyida 8
  • Inheritance and initialisation

    Inheritance and initialisation

    As discussed in PR #43, how best to handle the inheritance and initialisation of attributes:

    1. Leave it as it is
    2. Cooperative classes
    3. Initiate attributes when needed

    With both 1 and 3, this test, as per sklearn's "rolling your own estimator" guidance, fails:

    from sklearn.utils.estimator_checks import check_estimator
    from sktime_dl.deeplearning import CNNClassifier

    def test_api(network=CNNClassifier()):
        check_estimator(network)
    
    opened by Withington 7
  • check_is_fitted

    check_is_fitted

    Reference Issues/PRs

    #37

    Adapted and implemented check_is_fitted for sktime-dl, added under utils._models. Added tests to confirm expected fit-checking behaviour for all networks, and calls to the function where a couple were missing (those networks that are not regressors yet).

    opened by James-Large 7
  • Bump tensorflow from 2.5.1 to 2.9.3 in /build_tools

    Bump tensorflow from 2.5.1 to 2.9.3 in /build_tools

    Bumps tensorflow from 2.5.1 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump numpy from 1.19.2 to 1.22.0 in /build_tools

    Bump numpy from 1.19.2 to 1.22.0 in /build_tools

    Bumps numpy from 1.19.2 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Don't refit onehot_encoder for validation data

    Don't refit onehot_encoder for validation data

    Reference Issues/PRs

    Fixes #131

    What does this implement/fix? Explain your changes.

    Currently, this leads to a bug when the validation data has fewer classes than the training data, since the onehot_encoder will be fit to the validation data categories.
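    A generic scikit-learn illustration of the failure mode (not the sktime-dl code itself): an encoder refit on the validation labels produces a narrower one-hot encoding than the one fit on the training labels, which clashes with the network's fixed output dimension.

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder

    y_train = np.array([[0], [1], [2], [3]])  # four classes seen in training
    y_val = np.array([[0], [1], [2]])         # only three classes in validation

    enc = OneHotEncoder().fit(y_train)
    print(enc.transform(y_val).shape)         # (3, 4): consistent with training

    enc_refit = OneHotEncoder().fit(y_val)
    print(enc_refit.transform(y_val).shape)   # (3, 3): width mismatch, as in the (None, 3) vs (None, 4) error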

    Does your contribution introduce a new dependency? If yes, which one?

    No new dependency

    opened by cedricdonie 0
  • [BUG] InceptionTime does not support validation data with fewer classes than in the training data

    [BUG] InceptionTime does not support validation data with fewer classes than in the training data

    Describe the bug

    When I input a validation dataset that contains fewer categories than the training dataset, I get the following error.

      File "src/models/train_model.py", line 162, in train
        classifier.fit(X, y, validation_X=X_val, validation_y=y_val)
      File "/usr/local/lib/python3.6/dist-packages/sktime_dl/classification/_inceptiontime.py", line 189, in fit
        validation_data=validation_data,
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1225, in fit
        _use_cached_eval_dataset=True)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1489, in evaluate
        tmp_logs = self.test_function(iterator)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 889, in __call__
        result = self._call(*args, **kwds)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 933, in _call
        self._initialize(args, kwds, add_initializers_to=initializers)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 764, in _initialize
        *args, **kwds))
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3050, in _get_concrete_function_internal_garbage_collected
        graph_function, _ = self._maybe_define_function(args, kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3444, in _maybe_define_function
        graph_function = self._create_graph_function(args, kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3289, in _create_graph_function
        capture_by_value=self._capture_by_value),
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 999, in func_graph_from_py_func
        func_outputs = python_func(*func_args, **func_kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 672, in wrapped_fn
        out = weak_wrapped_fn().__wrapped__(*args, **kwds)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 986, in wrapper
        raise e.ag_error_metadata.to_exception(e)
    ValueError: in user code:
    
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1323 test_function  *
            return step_function(self, iterator)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1314 step_function  **
            outputs = model.distribute_strategy.run(run_step, args=(data,))
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1285 run
            return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2833 call_for_each_replica
            return self._call_for_each_replica(fn, args, kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:3608 _call_for_each_replica
            return fn(*args, **kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1307 run_step  **
            outputs = model.test_step(data)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1269 test_step
            y, y_pred, sample_weight, regularization_losses=self.losses)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:204 __call__
            loss_value = loss_obj(y_t, y_p, sample_weight=sw)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:155 __call__
            losses = call_fn(y_true, y_pred)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:259 call  **
            return ag_fn(y_true, y_pred, **self._fn_kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper
            return target(*args, **kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:1644 categorical_crossentropy
            y_true, y_pred, from_logits=from_logits)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper
            return target(*args, **kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:4862 categorical_crossentropy
            target.shape.assert_is_compatible_with(output.shape)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py:1161 assert_is_compatible_with
            raise ValueError("Shapes %s and %s are incompatible" % (self, other))
    
        ValueError: Shapes (None, 3) and (None, 4) are incompatible
    

    To Reproduce

    I'm having trouble producing an MWE. The following code unexpectedly runs without error.

    from sktime.datasets import load_basic_motions
    from sktime_dl.classification import InceptionTimeClassifier
    import numpy as np
    
    
    def test_validation_has_fewer_classes_than_test():
        X, _ = load_basic_motions(return_X_y=True, split="train")
        X_val, _ = load_basic_motions(return_X_y=True, split="test")
    
        y = np.ones(len(X))
        y[0] = 0
        y[1] = 2
    
        y_val = np.ones_like(y)
        y_val[0] = 0
    
        clf = InceptionTimeClassifier(nb_epochs=2)
    
        clf.fit(X, y, validation_data=(X_val, y_val))
    

    Expected behavior

    The label encoder will pad with zeros, allowing training with validation data with fewer classes.

    Additional context

    The onehot encoder is refitted to validation data. Maybe that is the issue?

    Versions

    Linux-5.13.0-51-generic-x86_64-with-Ubuntu-20.04-focal
    Python 3.6.15 (default, Apr 25 2022, 01:55:53) [GCC 9.4.0]
    NumPy 1.19.5
    SciPy 1.5.4
    sktime 0.7.0
    sktime_dl 0.2.0
    bug 
    opened by cedricdonie 0
  • modified the loss function for _cnn.py

    modified the loss function for _cnn.py

    Reference Issues/PRs

    Not an issue fix

    What does this implement/fix? Explain your changes.

    I have modified the loss function in the _cnn.py file. The current loss function in _cnn.py, used when classifying between two classes, is RMSE, but the appropriate loss function for training a binary classification model is binary_crossentropy. This loss function can significantly improve the training and performance of the estimator.
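    An illustrative Keras compile call for the suggested change (a sketch only, not the exact diff; the layer shapes and optimizer are placeholders):

    from tensorflow import keras

    # A sigmoid output with binary_crossentropy is the standard setup for
    # two-class problems, versus a regression-style mean-squared-error loss.
    model = keras.Sequential([
        keras.layers.Dense(32, activation="relu", input_shape=(100,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])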

    Does your contribution introduce a new dependency? If yes, which one?

    No

    Any other comments?

    No

    opened by Vasudeva-bit 0
Releases (v0.1.0)
  • v0.1.0(Jul 31, 2019)

Owner
sktime
A unified framework for machine learning with time series