xarray-einstats

Stats, linear algebra and einops for xarray

⚠️ Caution: This project is still in a very early development stage

Installation

To install, run

(.venv) $ pip install xarray-einstats

Overview

As stated on the xarray website:

xarray makes working with multi-dimensional labeled arrays simple, efficient and fun!

xarray code is often more verbose than the numpy equivalent, but it is generally clearer, more intuitive, and thus less error prone. Here are some examples of this trade-off:

numpy                 xarray
a[2, 5]               da.sel(drug="paracetamol", subject=5)
a.mean(axis=(0, 1))   da.mean(dim=("chain", "draw"))
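
As a minimal, self-contained sketch of the labeled selection above (the drug/subject data here is made up for illustration):

import numpy as np
import xarray as xr

a = np.arange(30).reshape(3, 10)
da = xr.DataArray(
    a,
    dims=("drug", "subject"),
    coords={"drug": ["placebo", "ibuprofen", "paracetamol"], "subject": np.arange(10)},
)
# positional and labeled indexing select the same element
assert a[2, 5] == da.sel(drug="paracetamol", subject=5).item()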

In some other cases, however, using xarray can result in overly verbose code that often also becomes less clear. xarray-einstats provides wrappers around some numpy and scipy functions (mostly numpy.linalg and scipy.stats) and around einops, with an API and features adapted to xarray.

⚠️ Attention: A nicer rendering of the content below is available in our documentation

Data for examples

The examples in this overview page use the DataArrays from the Dataset below (stored as the ds variable) to illustrate xarray-einstats features:

Dimensions:  (dim_plot: 50, chain: 4, draw: 500, team: 6)
Coordinates:
  * chain    (chain) int64 0 1 2 3
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: dim_plot
Data variables:
    x_plot   (dim_plot) float64 0.0 0.2041 0.4082 0.6122 ... 9.592 9.796 10.0
    atts     (chain, draw, team) float64 0.1063 -0.01913 ... -0.2911 0.2029
    sd_att   (draw) float64 0.272 0.2685 0.2593 0.2612 ... 0.4112 0.2117 0.3401
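
To follow along without the original data, here is a minimal sketch that builds a toy Dataset with the same structure (values are random, and the team name hidden behind the ellipsis is assumed, for illustration only, to be Scotland):

import numpy as np
import xarray as xr

rng = np.random.default_rng(3)
teams = ["Wales", "France", "Ireland", "Scotland", "Italy", "England"]
ds = xr.Dataset(
    {
        "x_plot": ("dim_plot", np.linspace(0, 10, 50)),
        "atts": (("chain", "draw", "team"), rng.normal(0, 0.3, size=(4, 500, 6))),
        "sd_att": ("draw", np.abs(rng.normal(0.3, 0.1, size=500))),
    },
    coords={"chain": np.arange(4), "draw": np.arange(500), "team": teams},
)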

Stats

xarray-einstats provides two wrapper classes {class}xarray_einstats.XrContinuousRV and {class}xarray_einstats.XrDiscreteRV that can be used to wrap any distribution in {mod}scipy.stats so they accept {class}~xarray.DataArray as inputs.

We can evaluate the logpdf in a couple of lines, using inputs that wouldn't align if we were using numpy:

import scipy.stats
import xarray_einstats

norm_dist = xarray_einstats.XrContinuousRV(scipy.stats.norm)
norm_dist.logpdf(ds["x_plot"], ds["atts"], ds["sd_att"])

which returns:

array([[[[ 3.06470249e-01,  3.80373065e-01,  2.56575936e-01,
...
          -4.41658154e+02, -4.57599982e+02, -4.14709280e+02]]]])
Coordinates:
  * chain    (chain) int64 0 1 2 3
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: dim_plot
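
The discrete counterpart works the same way. A hedged sketch using {class}xarray_einstats.XrDiscreteRV (the rates below are made up for the example, and reuse the ds Dataset from above):

import numpy as np
import scipy.stats
import xarray_einstats

poisson = xarray_einstats.XrDiscreteRV(scipy.stats.poisson)
# evaluate the pmf at k=3 for a DataArray of rates; dims and coords are kept
pmf = poisson.pmf(3, np.exp(ds["atts"]))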

einops

Only rearrange is wrapped for now.

einops uses a convenient notation inspired by Einstein notation to specify operations on multidimensional arrays. It uses spaces as delimiters between dimensions, parentheses to indicate splitting or stacking of dimensions, and -> to separate the input and output dimension specifications. xarray-einstats uses an adapted notation, which it translates to einops, calling {func}xarray.apply_ufunc under the hood.

Why change the notation? There are three main reasons, each concerning one of these elements: ->, the space as delimiter, and the parentheses:

  • In xarray dimensions are already labeled. In many cases, the left side of the einops pattern is used only to label the dimensions. In fact, 5 of the 7 examples at https://einops.rocks/api/rearrange/ fall in this category. This labeling is not necessary when working with xarray objects.
  • In xarray, dimension names can be any {term}xarray:hashable. xarray-einstats only supports strings as dimension names, but even then, since strings can contain spaces, the space can't be used as a delimiter.
  • In xarray, dimensions are labeled and their order doesn't matter. This might seem the same as the first reason, but it is not. When splitting or stacking dimensions, you need (and want) the names of both the parent and the child dimensions. In some cases, stacking for example, we can autogenerate a default name, but in general you'll want to give the new dimension a name yourself. After all, dimension order in xarray doesn't matter, and there isn't much to be done without knowing the dimension names.

xarray-einstats uses two separate arguments, one for the input pattern (optional) and another for the output pattern. Each is a list of dimensions (strings) or dimension operations (lists or dictionaries). Some examples:

We can combine the chain and draw dimensions and name the resulting dimension sample using a list with a single dictionary. The team dimension is not present in the pattern and is not modified.

from xarray_einstats.einops import rearrange

rearrange(ds.atts, [{"sample": ("chain", "draw")}])

Out:

array([[ 0.10632395,  0.1538294 ,  0.17806237, ...,  0.16744257,
         0.14927569,  0.21803568],
         ...,
       [ 0.30447644,  0.22650416,  0.25523419, ...,  0.28405435,
         0.29232681,  0.20286656]])
Coordinates:
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: sample

Note that, following xarray convention, new dimensions and the dimensions operated on are moved to the end. This only matters when you access the underlying array with .values or .data, and you can always reorder dimensions with {meth}xarray.Dataset.transpose; when it does matter, you can change the pattern to enforce the output dimension order:

rearrange(ds.atts, [{"sample": ("chain", "draw")}, "team"])

Out:

array([[ 0.10632395, -0.01912607,  0.13671159, -0.06754783, -0.46083807,
         0.30447644],
       ...,
       [ 0.21803568, -0.11394285,  0.09447937, -0.11032643, -0.29111234,
         0.20286656]])
Coordinates:
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: sample

Now for a more complicated pattern: we will split the chain and team dimensions, then pair and combine the resulting halves with one another.

rearrange(
    ds.atts,
    # combine split chain and team dims between them
    # here we don't use a dict so the new dimensions get a default name
    out_dims=[("chain1", "team1"), ("team2", "chain2")],
    # use dicts to specify which dimensions to split, here we *need* to use a dict
    in_dims=[{"chain": ("chain1", "chain2")}, {"team": ("team1", "team2")}],
    # set the lengths of split dimensions as kwargs
    chain1=2, chain2=2, team1=2, team2=3
)

Out:

array([[[ 1.06323952e-01,  2.47005252e-01, -1.91260714e-02,
         -2.55769582e-02,  1.36711590e-01,  1.23165119e-01],
...
        [-2.76616968e-02, -1.10326428e-01, -3.99582340e-01,
         -2.91112341e-01,  1.90714405e-01,  2.02866563e-01]]])
Coordinates:
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
Dimensions without coordinates: chain1,team1, team2,chain2

More einops examples at {ref}einops

Linear Algebra

⚠️ Caution: some linear algebra functionality is still missing in the package

There is no one-size-fits-all solution, but knowing the function we are wrapping, we can easily make the code more concise and clear. Without xarray-einstats, inverting a batch of matrices stored in a 4d array looks like this:

inv = xarray.apply_ufunc(   # output is a 4d labeled array
    numpy.linalg.inv,
    batch_of_matrices,      # input is a 4d labeled array
    input_core_dims=[["matrix_dim", "matrix_dim_bis"]],
    output_core_dims=[["matrix_dim", "matrix_dim_bis"]]
)

To calculate its norm instead, it becomes:

norm = xarray.apply_ufunc(  # output is a 2d labeled array
    numpy.linalg.norm,
    batch_of_matrices,      # input is a 4d labeled array
    input_core_dims=[["matrix_dim", "matrix_dim_bis"]],
)

With xarray-einstats, those operations become:

inv = xarray_einstats.inv(batch_of_matrices, dim=("matrix_dim", "matrix_dim_bis"))
norm = xarray_einstats.norm(batch_of_matrices, dim=("matrix_dim", "matrix_dim_bis"))
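
The same apply_ufunc pattern generalizes to other functions. As a sketch, here is a determinant helper written from scratch for illustration (independent of whatever the package itself provides):

import numpy as np
import xarray as xr

def det(matrices, dims):
    """Batched determinant over the two matrix dimensions in dims."""
    return xr.apply_ufunc(
        np.linalg.det,
        matrices,
        input_core_dims=[list(dims)],
        # np.linalg.det returns one scalar per matrix -> no output core dims
    )

# usage mirrors the wrappers above:
# det(batch_of_matrices, dims=("matrix_dim", "matrix_dim_bis"))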

Similar projects

Here we list some similar projects we know of. Note that they are all complementary to xarray-einstats and don't overlap with it:

Comments
  • distutils.errors.DistutilsOptionError: No configuration found for dynamic 'description'.

    Build fails on FreeBSD:

    /usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py:102: _ExperimentalProjectMetadata: Support for project metadata in `pyproject.toml` is still experimental and may be removed (or change) in future releases.
      warnings.warn(msg, _ExperimentalProjectMetadata)
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "setup.py", line 1, in <module>
        import setuptools; setuptools.setup()
      File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
        return distutils.core.setup(**attrs)
      File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 122, in setup
        dist.parse_config_files()
      File "/usr/local/lib/python3.8/site-packages/setuptools/dist.py", line 854, in parse_config_files
        pyprojecttoml.apply_configuration(self, filename, ignore_option_errors)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 54, in apply_configuration
        config = read_configuration(filepath, True, ignore_option_errors, dist)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 134, in read_configuration
        return expand_configuration(asdict, root_dir, ignore_option_errors, dist)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 189, in expand_configuration
        return _ConfigExpander(config, root_dir, ignore_option_errors, dist).expand()
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 236, in expand
        self._expand_all_dynamic(dist, package_dir)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 271, in _expand_all_dynamic
        obtained_dynamic = {
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 272, in <dictcomp>
        field: self._obtain(dist, field, package_dir)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 309, in _obtain
        self._ensure_previously_set(dist, field)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 295, in _ensure_previously_set
        raise OptionError(msg)
    distutils.errors.DistutilsOptionError: No configuration found for dynamic 'description'.
    Some dynamic fields need to be specified via `tool.setuptools.dynamic`
    others must be specified via the equivalent attribute in `setup.py`.
    *** Error code 1
    

    Version: 0.2.2 Python-3.8 FreeBSD 13.1

    opened by yurivict 6
  • test stats on datasets

    If using the right subset of dimensions, the summary stats already work on datasets. This PR adds tests to make sure this behaviour keeps working. It is very convenient for MCMC output, for example, to take the mad over the chain and draw dimensions of all the variables in a dataset at once.
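
    A hedged sketch of the behaviour being tested (assuming the ds Dataset from the overview, and that the summary-stat wrappers in xarray_einstats.stats take the reduce dimensions as a dims argument):

    from xarray_einstats.stats import median_abs_deviation

    # mad over chain and draw for every matching variable in the dataset at once
    median_abs_deviation(ds[["atts"]], dims=("chain", "draw"))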

    opened by OriolAbril 2
  • update readme, index and install pages

    Update the installation page to add conda, and separate the readme and the index page to have slightly different content (expecting different audiences in each page)


    opened by OriolAbril 1
  • Use Read the Docs action v1

    The Read the Docs repository was renamed from readthedocs/readthedocs-preview to readthedocs/actions. The preview action is now under readthedocs/actions/preview and is tagged as v1.


    opened by humitos 1
  • Catch positive definite error

    Even though the docstring from https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.multivariate_normal.html says

    Symmetric positive (semi)definite covariance matrix of the distribution.

    It also accepts matrices whose determinant is close to 0 but negative, generally due to numerical issues, and we shouldn't expect users to solve those numerical issues themselves before passing the covariance matrix to the multivariate_normal class or its methods. This adds a try/except to catch the error related to this issue and checks whether adding an identity matrix scaled by 1e-10 to the provided covariance matrix solves it.
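
    A standalone sketch of the described guard (the helper name and exception handling are illustrative; the actual fix lives inside the package's wrapper methods):

    import numpy as np
    import scipy.stats

    def mvnormal_with_jitter(mean, cov, jitter=1e-10):
        """Retry with a slightly jittered covariance if scipy rejects cov."""
        try:
            return scipy.stats.multivariate_normal(mean, cov)
        except ValueError:
            # assumed: the positive (semi)definite check failed numerically
            return scipy.stats.multivariate_normal(mean, cov + jitter * np.eye(len(cov)))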

    cc @symeneses

    opened by OriolAbril 1
  • Check behaviour on groupby objects

    I think that most functions in the stats module can be used on xarray groupby objects, like az.hdi as shown here. If that is the case, we should document it and add tests to prevent future changes from removing that feature.

    opened by OriolAbril 1
  • try preferred citation to add the doi in generated citation

    The "Cite this repository" button generated by GitHub currently copies the following text to the clipboard:

    @software{Abril_Pla_xarray-einstats,
    author = {Abril Pla, Oriol},
    license = {Apache-2.0},
    title = {{xarray-einstats}},
    url = {https://github.com/arviz-devs/xarray-einstats}
    }
    

    which completely ignores the DOI even though it is provided as an identifier. There is also a "preferred citation" option that can be used to point to an article instead. This PR tries to use it to still generate a software citation, but with the general Zenodo DOI for this repository.

    Note: Zenodo and releases are a cyclic dependency. The DOI is only generated once the release is crafted, so the released code can't include the right DOI.

    This branch currently generates this:

    @software{Abril-Pla_xarray-einstats_2022,
    author = {Abril-Pla, Oriol},
    doi = {10.5281/zenodo.5895451},
    license = {Apache-2.0},
    title = {{xarray-einstats}},
    url = {https://github.com/arviz-devs/xarray-einstats},
    year = {2022}
    }
    

    I will think about what should be present and update accordingly. Some info, like the publisher, is ignored (I guess for software-type citations) even if provided inside the preferred_citation section.

    opened by OriolAbril 1
  • Change the monkeypatch thing to using with contexts?

    It would be nice to be able to do things like

    with matrix_dims(["dim1", "dim3"]):
        chol = xe.linalg.cholesky(da)
        eig = xe.linalg.eig(da)
    

    instead of having to use the monkeypatch trick (currently documented) or needing to pass the dimensions every time.
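
    A sketch of what such a context manager could look like (assuming the documented monkeypatch target is xarray_einstats.linalg.get_default_dims; treat that hook name as an assumption about the package internals):

    from contextlib import contextmanager

    import xarray_einstats.linalg as linalg

    @contextmanager
    def matrix_dims(dims):
        """Temporarily set the default matrix dims used by the linalg wrappers."""
        original = linalg.get_default_dims
        linalg.get_default_dims = lambda *args, **kwargs: list(dims)
        try:
            yield
        finally:
            linalg.get_default_dims = original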

    opened by OriolAbril 0
  • Tests fail: AttributeError: partially initialized module 'einops' has no attribute '_backends'

    collected 140 items / 2 errors                                                                                                                                                               
    
    =========================================================================================== ERRORS ===========================================================================================
    _________________________________________________________________ ERROR collecting src/xarray_einstats/tests/test_einops.py __________________________________________________________________
    tests/test_einops.py:6: in <module>
        from xarray_einstats.einops import raw_rearrange, raw_reduce, rearrange, reduce, translate_pattern
    einops.py:9: in <module>
        import einops
    einops.py:407: in <module>
        class DaskBackend(einops._backends.AbstractBackend):  # pylint: disable=protected-access
    E   AttributeError: partially initialized module 'einops' has no attribute '_backends' (most likely due to a circular import)
    __________________________________________________________________ ERROR collecting src/xarray_einstats/tests/test_numba.py __________________________________________________________________
    tests/test_numba.py:6: in <module>
        from xarray_einstats.numba import histogram
    numba.py:2: in <module>
        import numba
    numba.py:9: in <module>
        @numba.guvectorize(
    E   AttributeError: partially initialized module 'numba' has no attribute 'guvectorize' (most likely due to a circular import)
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    ===================================================================================== 2 errors in 1.83s ======================================================================================
    *** Error code 2
    
    

    OS: FreeBSD 13.1

    opened by yurivict 3
  • Add logsumexp wrapper

    I think it is the only function in scipy.special worth wrapping, so it might be worth adding a misc module for this and other possible "loose ends" functions to wrap.
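
    A sketch of such a wrapper via xarray.apply_ufunc (the function name and placement are the proposal, not existing API):

    import scipy.special
    import xarray as xr

    def logsumexp(da, dims):
        """Reduce dims with scipy.special.logsumexp, keeping the other labels."""
        if isinstance(dims, str):
            dims = [dims]
        return xr.apply_ufunc(
            scipy.special.logsumexp,
            da,
            input_core_dims=[list(dims)],
            kwargs={"axis": tuple(range(-len(dims), 0))},  # core dims are moved last
        )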

    opened by OriolAbril 0
Releases (v0.4.0)
  • v0.4.0(Dec 9, 2022)

  • v0.3.0(Jun 19, 2022)

  • v0.2.2(Apr 2, 2022)

    Patch release to include the license and changelog files in the PyPI package, now using the PEP 621 metadata in pyproject.toml. Packaging the license is needed to add an xarray-einstats feedstock to conda-forge.

  • v0.2.1(Apr 2, 2022)

    Patch release to use a manifest file to include the license and changelog files in the PyPI package. Packaging the license is needed to add an xarray-einstats feedstock to conda-forge.

  • v0.2.0(Apr 1, 2022)

  • v0.1.1(Jan 24, 2022)

    Initial release of xarray_einstats.

    xarray_einstats extends array manipulation libraries for use with xarray. It starts with 4 modules:

    • linalg -> extends functionality from the numpy.linalg module
    • stats -> extends functionality from the scipy.stats module
    • einops -> extends the einops library, which needs to be installed
    • numba -> miscellaneous extensions (only numpy.histogram for now) that need numba to accelerate and/or vectorize the functions; numba needs to be installed to use it

    v0.1.1 indicates the second try at uploading to PyPI.
