An implementation of chunked, compressed, N-dimensional arrays for Python.

Overview

Zarr


What is it?

Zarr is a Python package providing an implementation of compressed, chunked, N-dimensional arrays, designed for use in parallel computing. See the documentation for more information.

Main Features

  • Create N-dimensional arrays with any NumPy dtype.
  • Chunk arrays along any dimension.
  • Compress and/or filter chunks using any NumCodecs codec.
  • Store arrays in memory, on disk, inside a zip file, on S3, etc...
  • Read an array concurrently from multiple threads or processes.
  • Write to an array concurrently from multiple threads or processes.
  • Organize arrays into hierarchies via groups.
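
A minimal usage sketch of these features (names, shapes, and paths are illustrative):

import numpy as np
import zarr

# create a chunked, compressed array on disk (directory store)
z = zarr.open("example.zarr", mode="w", shape=(1000, 1000),
              chunks=(100, 100), dtype="f4")
z[:] = np.random.random((1000, 1000))
print(z.info)

# organize arrays into a hierarchy via groups
root = zarr.open_group("example_group.zarr", mode="w")
root.create_dataset("measurements", shape=(100, 100), chunks=(10, 10), dtype="i8")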

Where to get it

Zarr can be installed from PyPI using pip:

pip install zarr

or via conda:

conda install -c conda-forge zarr

For more details, including how to install from source, see the installation documentation.

Comments
  • Refactoring WIP

    Refactoring WIP

    This PR contains work on refactoring to achieve separation of concerns between compression/decompression, storage, synchronization and array/chunk data management. Primarily motivated by #21.

    Still TODO:

    • [x] Improve test coverage
    • [x] Rework persistence/storage documentation to generalise beyond directory store
    • [x] Improve docstrings, add examples where appropriate
    • [x] Do some benchmarking/profiling
    • [x] Fall back to pure python install
    opened by alimanfoo 103
  • async in zarr

    async in zarr

    I think there are some places where zarr would benefit immensely from some async capabilities when reading and writing data. I will try to illustrate this with the simplest example I can.

    Let's consider a zarr array stored in a public S3 bucket, which we can read with fsspec's HTTPFileSystem interface (no special S3 API needed, just regular http calls).

    import zarr
    import fsspec
    url_base = 'https://mur-sst.s3.us-west-2.amazonaws.com/zarr/time'
    mapper = fsspec.get_mapper(url_base)
    za = zarr.open(mapper)
    za.info
    

    [screenshot: za.info output]

    Note that this is a highly sub-optimal choice of chunks. The 1D array of shape (6443,) is stored in chunks of only (5,) items, resulting in over 1000 tiny chunks. Reading this data takes forever (over 5 minutes):

    %prun tdata = za[:]
    
             20312192 function calls (20310903 primitive calls) in 342.624 seconds
    
       Ordered by: internal time
    
       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
         1289  139.941    0.109  140.077    0.109 {built-in method _openssl.SSL_do_handshake}
         2578   99.914    0.039   99.914    0.039 {built-in method _openssl.SSL_read}
         1289   68.375    0.053   68.375    0.053 {method 'connect' of '_socket.socket' objects}
         1289    9.252    0.007    9.252    0.007 {built-in method _openssl.SSL_CTX_load_verify_locations}
         1289    7.857    0.006    7.868    0.006 {built-in method _socket.getaddrinfo}
         1289    1.619    0.001    1.828    0.001 connectionpool.py:455(close)
       930658    0.980    0.000    2.103    0.000 os.py:674(__getitem__)
    ...
    

    I believe fsspec is introducing some major overhead by not reusing a connection pool. But regardless, zarr is iterating synchronously over each chunk to load the data:

    https://github.com/zarr-developers/zarr-python/blob/994f2449b84be544c9dfac3e23a15be3f5478b71/zarr/core.py#L1023-L1028

    As a lower bound on how fast this approach could be, we bypass zarr and fsspec and just fetch all the chunks with requests:

    import requests
    s = requests.Session()
    
    def get_chunk_http(n):
        r = s.get(url_base + f'/{n}')
        r.raise_for_status()
        return r.content
    
    %prun all_data = [get_chunk_http(n) for n in range(za.shape[0] // za.chunks[0])] 
    
             12550435 function calls (12549147 primitive calls) in 98.508 seconds
    
       Ordered by: internal time
    
       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
         2576   87.798    0.034   87.798    0.034 {built-in method _openssl.SSL_read}
           13    1.436    0.110    1.437    0.111 {built-in method _openssl.SSL_do_handshake}
       929936    1.042    0.000    2.224    0.000 os.py:674(__getitem__)
    

    As expected, reusing a connection pool sped things up, but it still takes 100 s to read the array.

    Finally, we can try the same thing with asyncio

    import asyncio
    import aiohttp
    import time
    
    async def get_chunk_http_async(n, session):
        url = url_base + f'/{n}'
        async with session.get(url) as r:
            r.raise_for_status()
            data = await r.read()
        return data
    
    # note: top-level `async with`/`await` as written assumes an async-aware
    # environment such as IPython/Jupyter; in a plain script, wrap this block
    # in `async def main()` and call `asyncio.run(main())`
    async with aiohttp.ClientSession() as session:
        tic = time.time()
        all_data = await asyncio.gather(*[get_chunk_http_async(n, session)
                                        for n in range(za.shape[0] // za.chunks[0])])
        print(f"{time.time() - tic} seconds")
    
    # > 1.7969944477081299 seconds
    

    This is a MAJOR speedup!

    I am aware that using dask could possibly help me here. But I don't have big data here, and I don't want to use dask. I want zarr to support asyncio natively.

    I am quite new to async programming and have no idea how hard / complicated it would be to do this. But based on this experiment, I am quite sure there are major performance benefits to be had, particularly when using zarr with remote storage protocols.

    Thoughts?

    cc @cgentemann

    opened by rabernat 93
  • Sharding Prototype I: implementation as translating Store

    Sharding Prototype I: implementation as translating Store

    This PR is for an early prototype of sharding support, as described in the corresponding issue #877. It serves mainly to discuss the overall implementation approach for sharding. This PR is not (yet) meant to be merged.

    This prototype

    • allows specifying shards as the number of chunks that should be contained in a shard (e.g. using zarr.zeros((20, 3), chunks=(3, 3), shards=(2, 2), …)). One shard corresponds to one storage key, but can contain multiple chunks.
    • ensures that this setting is persisted in the .zarray config and loaded when opening an array again, adding two entries:
      • "shard_format": "indexed" specifies the binary format of the shards and allows to extend sharding with other formats later
      • "shards": [2, 2] specifies how many chunks are contained in a shard,
    • adds an IndexedShardedStore class that is used to wrap the chunk store when sharding is enabled. This store handles the grouping of multiple chunks into one shard and transparently reads and writes them via the inner store in a binary format which is specified below. The original store API does not need to be adapted; it just stores shards instead of chunks, which are translated back to chunks by the IndexedShardedStore.
    • adds a small script chunking_test.py for demonstration purposes; this is not meant to be merged but serves to illustrate the changes.

    The currently implemented file format is still up for discussion. It implements "Format 2" @jbms describes in https://github.com/zarr-developers/zarr-python/pull/876#issuecomment-973462279.

    Chunks are written successively in a shard (unused space between them is allowed), followed by an index referencing them. The index holds an (offset, length) pair of little-endian uint64 values per chunk; the chunk order in the index is row-major (C) order. For example, for (2, 2) chunks per shard the index would look like:

    | chunk (0, 0)    | chunk (0, 1)    | chunk (1, 0)    | chunk (1, 1)    |
    | offset | length | offset | length | offset | length | offset | length |
    | uint64 | uint64 | uint64 | uint64 | uint64 | uint64 | uint64 | uint64 |
    

    Empty chunks are denoted by setting both offset and length to 2^64 - 1. The index always covers the full grid of possible chunks per shard, even if some of them fall outside of the array size.

    For the default write order of the actual chunk content within a shard I'd propose Morton order, but this can easily be changed and customized, since any order can be read.
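
    As a rough illustration of the binary layout described above, here is a minimal sketch (not part of the PR) that parses the trailing index of a single shard; shard_bytes and chunks_per_shard are hypothetical inputs:

    import itertools
    import numpy as np

    def read_shard_index(shard_bytes, chunks_per_shard):
        """Return {chunk_coord: (offset, length)} for the non-empty chunks of one shard."""
        n_chunks = int(np.prod(chunks_per_shard))
        index_size = n_chunks * 16  # two little-endian uint64 values per chunk
        index = np.frombuffer(shard_bytes[-index_size:], dtype="<u8").reshape(n_chunks, 2)
        empty = 2**64 - 1  # offset and length both set to 2^64 - 1 marks an empty chunk
        coords = itertools.product(*(range(c) for c in chunks_per_shard))  # row-major (C) order
        return {
            coord: (int(offset), int(length))
            for coord, (offset, length) in zip(coords, index)
            if offset != empty
        }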


    If the overall direction of this PR is pursued, the following steps (and possibly more) are missing:

    • Functionality
      • [ ] Use a default write-order (Morton) of chunks in a shard and allow customization
      • [ ] Support deletion in the ShardedStore
      • [ ] Group chunk-wise operations in Array where possible (e.g. in digest & _resize_nosync)
      • [ ] Consider locking mechanisms to guard against concurrency issues within a shard
      • [ ] Allow partial reads and writes when the wrapped store supports them
      • [ ] Add support for prefixes before the chunk-dimensions in the storage key, e.g. for arrays that are contained in a group
      • [ ] Add warnings for inefficient reads/writes (might be configured)
      • [ ] Maybe use the new partial read method on the Store also for the current PartialReadBuffer usage (to detect if this is possible and reading via it)
    • Tests
      • [ ] Add unit tests and/or doctests in docstrings
      • [ ] Test coverage is 100% (Codecov passes)
    • Documentation
      • [ ] also document optional optimization possibilities on the Store or BaseStore class, such as getitems or partial reads
      • [ ] Add docstrings and API docs for any new/modified user-facing classes and functions
      • [ ] New/modified features documented in docs/tutorial.rst
      • [ ] Changes documented in docs/release.rst

    changed 2021-12-07: added file format description and updated TODOs

    opened by jstriebel 69
  • Consolidate zarr metadata into single key

    Consolidate zarr metadata into single key

    A simple way of scanning all the metadata keys ('.zgroup'...) in a dataset and copying them into a single key, so that on systems where there is substantial overhead to reading small files, everything can be grabbed in a single read. This is important in the context of xarray, which traverses all groups when opening a dataset in order to find the various sub-groups and arrays.

    The test shows how you could use the generated key. We could contemplate automatically looking for the metadata key when opening.
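
    For illustration, a rough sketch of the workflow this enables, using the function names that eventually shipped in zarr (zarr.consolidate_metadata and zarr.open_consolidated); treat the exact signatures as assumptions:

    import zarr

    # write a small hierarchy, then copy all metadata keys into a single key
    store = zarr.DirectoryStore("example.zarr")
    root = zarr.group(store=store)
    root.create_dataset("temperature", shape=(100,), chunks=(10,))
    zarr.consolidate_metadata(store)  # writes the combined metadata key ('.zmetadata')

    # later, open the hierarchy with a single metadata read
    root = zarr.open_consolidated(store)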

    REF: https://github.com/pangeo-data/pangeo/issues/309

    TODO:

    • [x] Add unit tests and/or doctests in docstrings
    • [x] Unit tests and doctests pass locally under Python 3.6 (e.g., run tox -e py36 or pytest -v --doctest-modules zarr)
    • [x] Unit tests pass locally under Python 2.7 (e.g., run tox -e py27 or pytest -v zarr)
    • [x] PEP8 checks pass (e.g., run tox -e py36 or flake8 --max-line-length=100 zarr)
    • [x] Add docstrings and API docs for any new/modified user-facing classes and functions
    • [x] New/modified features documented in docs/tutorial.rst
    • [x] Doctests in tutorial pass (e.g., run tox -e py36 or python -m doctest -o NORMALIZE_WHITESPACE -o ELLIPSIS docs/tutorial.rst)
    • [x] Changes documented in docs/release.rst
    • [x] Docs build locally (e.g., run tox -e docs)
    • [x] AppVeyor and Travis CI passes
    • [ ] Test coverage to 100% (Coveralls passes)
    opened by martindurant 66
  • WIP: google cloud storage class

    WIP: google cloud storage class

    First, apologies for submitting an unsolicited pull request. I know that is against the contributor guidelines. I thought this idea would be easier to discuss with a concrete implementation to look at.

    In my highly opinionated view, the killer feature of zarr is its ability to efficiently store array data in cloud storage. Currently, the recommended way to do this is via outside packages (e.g. s3fs, gcsfs), which provide a MutableMapping that zarr can store things in.
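
    For reference, the currently recommended pattern referred to above looks roughly like this (the bucket name is hypothetical and the exact gcsfs keyword arguments should be treated as assumptions):

    import gcsfs
    import zarr

    # gcsfs exposes a MutableMapping view of a bucket path that zarr can use as a store
    fs = gcsfs.GCSFileSystem(token="anon")  # anonymous access; use real credentials for private buckets
    store = fs.get_mapper("example-bucket/path/to/data.zarr")
    z = zarr.open(store, mode="r")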

    In this PR, I have implemented an experimental google cloud storage class directly within zarr.

    Why did I do this? Because, in the pangeo project, we are now making heavy use of the xarray -> zarr -> gcsfs -> cloud storage stack. I have come to the conclusion that a tighter coupling between zarr and gcs, via the google.cloud.storage API, may prove advantageous.

    In addition to performance benefits and easier debugging, I think there are social advantages to having cloud storage as a first-class part of zarr. Lots of people want to store arrays in the cloud, and if zarr can provide this capability more natively, it could increase adoption.

    Thoughts?

    These tests require GCP credentials and the google cloud storage package. It is possible to add encrypted credentials to Travis, but I haven't done that yet. Tests are (mostly) working locally for me.

    TODO:

    • [x] Add unit tests and/or doctests in docstrings
    • [ ] Unit tests and doctests pass locally under Python 3.6 (e.g., run tox -e py36 or pytest -v --doctest-modules zarr)
    • [ ] Unit tests pass locally under Python 2.7 (e.g., run tox -e py27 or pytest -v zarr)
    • [ ] PEP8 checks pass (e.g., run tox -e py36 or flake8 --max-line-length=100 zarr)
    • [ ] Add docstrings and API docs for any new/modified user-facing classes and functions
    • [ ] New/modified features documented in docs/tutorial.rst
    • [ ] Doctests in tutorial pass (e.g., run tox -e py36 or python -m doctest -o NORMALIZE_WHITESPACE -o ELLIPSIS docs/tutorial.rst)
    • [ ] Changes documented in docs/release.rst
    • [ ] Docs build locally (e.g., run tox -e docs)
    • [ ] AppVeyor and Travis CI passes
    • [ ] Test coverage to 100% (Coveralls passes)
    opened by rabernat 52
  • Add N5 Support

    Add N5 Support

    This adds support for reading from and writing to N5 containers. The N5Store, which handles the conversion between the zarr and N5 formats, will automatically be selected whenever the path of a container ends in .n5 (similar to how the ZipStore is used for files ending in .zip).

    The conversion is done mostly transparently, with one exception being the N5ChunkWrapper: this is a Codec with id n5_wrapper that will automatically be wrapped around the requested compressor. For example, if you create an array with a zlib compressor, the n5_wrapper codec will actually be used, delegating to the zlib codec internally. The additional codec was necessary to introduce N5's chunk headers and ensure big-endian storage.
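
    A minimal sketch of what usage might look like with this PR (the container path is illustrative; assumes the PR as described):

    import zarr
    from numcodecs import Zlib

    # explicit store construction; per the PR, a path ending in ".n5" passed to
    # zarr.open would select this store automatically
    store = zarr.N5Store("example.n5")
    z = zarr.zeros((1000, 1000), chunks=(100, 100), dtype="u2",
                   store=store, compressor=Zlib(level=5))
    z[:] = 42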

    On a related note, gzip-compressed N5 arrays cannot currently be read, since numcodecs treats zlib and gzip as synonyms, which they aren't (their compression headers differ). PR https://github.com/zarr-developers/numcodecs/pull/87 solves this issue.

    See also https://github.com/zarr-developers/zarr/issues/231

    TODO:

    • [x] Add unit tests and/or doctests in docstrings
    • [x] Unit tests and doctests pass locally under Python 3.6 (e.g., run tox -e py36 or pytest -v --doctest-modules zarr)
    • [x] Unit tests pass locally under Python 2.7 (e.g., run tox -e py27 or pytest -v zarr)
    • [x] PEP8 checks pass (e.g., run tox -e py36 or flake8 --max-line-length=100 zarr)
    • [x] Add docstrings and API docs for any new/modified user-facing classes and functions
    • [x] New/modified features documented in docs/tutorial.rst
    • [x] Doctests in tutorial pass (e.g., run tox -e py36 or python -m doctest -o NORMALIZE_WHITESPACE -o ELLIPSIS docs/tutorial.rst)
    • [x] Changes documented in docs/release.rst
    • [x] Docs build locally (e.g., run tox -e docs)
    • [x] AppVeyor and Travis CI passes
    • [x] Test coverage to 100% (Coveralls passes)
    opened by funkey 45
  • Confusion about the dimension_separator keyword

    Confusion about the dimension_separator keyword

    I don't really understand how to use the new dimension_separator keyword, in particular:

    1. Creating a DirectoryStore(dimension_separator="/") does not have the effect I would expect (see code and problem description below).
    2. Why does zarr still have the NestedDirectoryStore? Shouldn't it be the same as DirectoryStore(dimension_separator="/")? Hence I would assume that NestedDirectoryStore could either be removed or (if kept for legacy purposes) should just map to DirectoryStore(dimension_separator="/").

    Minimal, reproducible code sample, a copy-pastable example if possible

    import zarr
    store = zarr.DirectoryStore("test.zarr", dimension_separator="/")
    g = zarr.open(store, mode="a")
    ds = g.create_dataset("test", shape=(10, 10, 10))
    ds[:] = 1
    

    Problem description

    Now, I would assume that the chunks are nested, but I get:

    $ ls test.zarr/test
    0.0.0
    

    but, to my confusion, also this:

    $ cat test.zarr/test/.zarray
    ...
    "dimension_separator": "/",
    ...
    

    If I use NestedDirectoryStore instead, the chunks are nested as expected.

    Version and installation information

    Please provide the following:

    • Value of zarr.__version__: 2.8.3
    • Value of numcodecs.__version__: 0.7.3
    • Version of Python interpreter: 3.8.6
    • Operating system: Linux
    • How Zarr was installed: using conda
    opened by constantinpape 39
  • Add support for fancy indexing on get/setitem

    Add support for fancy indexing on get/setitem

    Addresses #657

    This matches NumPy behaviour in that basic, boolean, and vectorized integer (fancy) indexing are all accessible from __{get,set}item__. Users still have access to all the indexing methods if they want to be sure to use only basic indexing (integer + slices).
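
    A quick illustration of the behaviour this enables (a sketch, assuming the PR as described):

    import numpy as np
    import zarr

    z = zarr.array(np.arange(10))

    # vectorized integer (fancy) indexing directly via __getitem__ / __setitem__
    print(z[[1, 3, 5]])  # -> [1 3 5]
    z[[1, 3, 5]] = 0

    # boolean mask indexing via __getitem__
    mask = np.arange(10) % 2 == 0
    print(z[mask])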

    I'm not 100% sure about the approach, but it seemed much easier to use a try/except than to try to detect all the cases when fancy indexing should be used. Happy to hear some guidance about how best to arrange that.

    I still need to update docstrings + docs, will do that now — thanks for the checklist below. 😂

    TODO:

    • [x] Add unit tests and/or doctests in docstrings
    • [x] Add docstrings and API docs for any new/modified user-facing classes and functions
    • [x] New/modified features documented in docs/tutorial.rst
    • [x] Changes documented in docs/release.rst
    • [x] GitHub Actions have all passed
    • [x] Test coverage is 100% (Codecov passes)
    opened by jni 39
  • Migrate to pyproject.toml + cleanup

    Migrate to pyproject.toml + cleanup

    • Zarr has a lot of redundant files in the root directory. The information contained in these files can be easily moved to pyproject.toml.
    • Latest Python PEPs encourage users to get rid of setup.py and switch to pyproject.toml
    • We should not be using setup.cfg until it is a necessity - https://github.com/pypa/setuptools/issues/3214
    • We should not be using setuptools as a frontend (python setup.py install) - this is not maintained (confirmed by setuptools developers, but I cannot find the exact issue number at this moment)
    • Zarr should switch to pre-commit.ci and remove the pre-commit workflow

    I have tried to perform a 1:1 port. No extra information was added and all the existing metadata has been moved to pyproject.toml.

    TODO:

    • [ ] Add unit tests and/or doctests in docstrings
    • [ ] Add docstrings and API docs for any new/modified user-facing classes and functions
    • [ ] New/modified features documented in docs/tutorial.rst
    • [ ] Changes documented in docs/release.rst
    • [ ] GitHub Actions have all passed
    • [ ] Test coverage is 100% (Codecov passes)
    opened by Saransh-cpp 38
  • Add FSStore

    Add FSStore

    Fixes #540 Ref https://github.com/zarr-developers/zarr-python/pull/373#issuecomment-592722584 (@rabernat )

    Introduces a short Store implementation for generic fsspec url+options. Allows both lowercasing of keys and choice between '.' and '/'-based ("nested") keys.

    For testing, I have hijacked the TestNestedDirectoryStore tests just as an example; this is not how things will remain.
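
    As a rough sketch of the intended usage (the URL is illustrative and the keyword names are assumptions based on the description above):

    import zarr
    from zarr.storage import FSStore

    # any fsspec-recognised URL plus storage options; key_separator chooses between
    # "."-based and "/"-based ("nested") chunk keys
    store = FSStore("memory://example.zarr", key_separator="/")
    root = zarr.group(store=store)
    root.create_dataset("a", shape=(100,), chunks=(10,))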

    opened by martindurant 38
  • Mutable mapping for Azure Blob

    Mutable mapping for Azure Blob

    [Description of PR]

    TODO:

    • [ ] Add unit tests and/or doctests in docstrings
    • [ ] Add docstrings and API docs for any new/modified user-facing classes and functions
    • [ ] New/modified features documented in docs/tutorial.rst
    • [ ] Changes documented in docs/release.rst
    • [ ] Docs build locally (e.g., run tox -e docs)
    • [ ] AppVeyor and Travis CI passes
    • [ ] Test coverage is 100% (Coveralls passes)
    opened by shikharsg 34
  • http:// → https://

    http:// → https://

    [Description of PR]

    TODO:

    • [ ] Add unit tests and/or doctests in docstrings
    • [ ] Add docstrings and API docs for any new/modified user-facing classes and functions
    • [ ] New/modified features documented in docs/tutorial.rst
    • [x] Changes documented in docs/release.rst
    • [x] GitHub Actions have all passed
    • [x] Test coverage is 100% (Codecov passes)
    opened by DimitriPapadopoulos 1
  • Allow reading utf-8 encoded json files

    Allow reading utf-8 encoded json files

    Fixes #1308

    Currently, zarr-python errors when reading a JSON file with non-ASCII characters encoded as utf-8. However, Zarr.jl writes JSON files that include non-ASCII characters using utf-8 encoding. This PR will enable zarr attributes written by Zarr.jl to be read by zarr-python.
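
    A minimal sketch of the kind of change involved (the json_loads name follows the traceback in the related issue below; treat the exact location and signature as assumptions):

    import json

    def json_loads(s):
        """Parse JSON from bytes or str, decoding bytes as utf-8 rather than ascii."""
        if isinstance(s, (bytes, bytearray)):
            s = s.decode("utf-8")
        return json.loads(s)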

    TODO:

    • [x] Add unit tests and/or doctests in docstrings
    • [x] Add docstrings and API docs for any new/modified user-facing classes and functions
    • [x] New/modified features documented in docs/tutorial.rst
    • [x] Changes documented in docs/release.rst
    • [ ] GitHub Actions have all passed
    • [ ] Test coverage is 100% (Codecov passes)
    opened by nhz2 0
  • Bump numpy from 1.24.0 to 1.24.1

    Bump numpy from 1.24.0 to 1.24.1

    Bumps numpy from 1.24.0 to 1.24.1.

    Release notes

    Sourced from numpy's releases.

    v1.24.1

    NumPy 1.24.1 Release Notes

    NumPy 1.24.1 is a maintenance release that fixes bugs and regressions discovered after the 1.24.0 release. The Python versions supported by this release are 3.8-3.11.

    Contributors

    A total of 12 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

    • Andrew Nelson
    • Ben Greiner +
    • Charles Harris
    • Clément Robert
    • Matteo Raso
    • Matti Picus
    • Melissa Weber Mendonça
    • Miles Cranmer
    • Ralf Gommers
    • Rohit Goswami
    • Sayed Adel
    • Sebastian Berg

    Pull requests merged

    A total of 18 pull requests were merged for this release.

    • #22820: BLD: add workaround in setup.py for newer setuptools
    • #22830: BLD: CIRRUS_TAG redux
    • #22831: DOC: fix a couple typos in 1.23 notes
    • #22832: BUG: Fix refcounting errors found using pytest-leaks
    • #22834: BUG, SIMD: Fix invalid value encountered in several ufuncs
    • #22837: TST: ignore more np.distutils.log imports
    • #22839: BUG: Do not use getdata() in np.ma.masked_invalid
    • #22847: BUG: Ensure correct behavior for rows ending in delimiter in...
    • #22848: BUG, SIMD: Fix the bitmask of the boolean comparison
    • #22857: BLD: Help raspian arm + clang 13 about __builtin_mul_overflow
    • #22858: API: Ensure a full mask is returned for masked_invalid
    • #22866: BUG: Polynomials now copy properly (#22669)
    • #22867: BUG, SIMD: Fix memory overlap in ufunc comparison loops
    • #22868: BUG: Fortify string casts against floating point warnings
    • #22875: TST: Ignore nan-warnings in randomized out tests
    • #22883: MAINT: restore npymath implementations needed for freebsd
    • #22884: BUG: Fix integer overflow in in1d for mixed integer dtypes #22877
    • #22887: BUG: Use whole file for encoding checks with charset_normalizer.

    Checksums

    ... (truncated)

    Commits
    • a28f4f2 Merge pull request #22888 from charris/prepare-1.24.1-release
    • f8fea39 REL: Prepare for the NumPY 1.24.1 release.
    • 6f491e0 Merge pull request #22887 from charris/backport-22872
    • 48f5fe4 BUG: Use whole file for encoding checks with charset_normalizer [f2py] (#22...
    • 0f3484a Merge pull request #22883 from charris/backport-22882
    • 002c60d Merge pull request #22884 from charris/backport-22878
    • 38ef9ce BUG: Fix integer overflow in in1d for mixed integer dtypes #22877 (#22878)
    • bb00c68 MAINT: restore npymath implementations needed for freebsd
    • 64e09c3 Merge pull request #22875 from charris/backport-22869
    • dc7bac6 TST: Ignore nan-warnings in randomized out tests
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies python needs release notes 
    opened by dependabot[bot] 0
  • Bump actions/setup-python from 4.3.0 to 4.4.0

    Bump actions/setup-python from 4.3.0 to 4.4.0

    Bumps actions/setup-python from 4.3.0 to 4.4.0.

    Release notes

    Sourced from actions/setup-python's releases.

    Add support to install multiple python versions

    In scope of this release we added support to install multiple python versions. For this you can try to use this snippet:

        - uses: actions/setup-python@v4
          with:
            python-version: |
                3.8
                3.9
                3.10
    

    Besides, we changed logic with throwing the error for GHES if cache is unavailable to warn (actions/setup-python#566).

    Improve error handling and messages

    In scope of this release we added improved error message to put operating system and its version in the logs (actions/setup-python#559). Besides, the release

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies github_actions needs release notes 
    opened by dependabot[bot] 0
  • Ensure `zarr.create` uses writeable mode

    Ensure `zarr.create` uses writeable mode

    Closes https://github.com/zarr-developers/zarr-python/issues/1306

    cc @ravwojdyla @djhoese

    TODO:

    • [x] Add unit tests and/or doctests in docstrings
    • [ ] ~Add docstrings and API docs for any new/modified user-facing classes and functions~
    • [ ] ~New/modified features documented in docs/tutorial.rst~
    • [ ] Changes documented in docs/release.rst
    • [x] GitHub Actions have all passed
    • [x] Test coverage is 100% (Codecov passes)
    needs release notes 
    opened by jrbourbeau 4
  • Cannot read attributes that contain non-ASCII characters

    Cannot read attributes that contain non-ASCII characters

    Zarr version

    2.13.4.dev68

    Numcodecs version

    0.11.0

    Python Version

    3.10.6

    Operating System

    Linux

    Installation

    With pip, using the instructions here https://zarr.readthedocs.io/en/stable/contributing.html

    Description

    I expect zarr to be able to read attribute files that contain non-ASCII characters, because JSON files use utf-8 encoding. However, there is currently a check that raises an error if the JSON file contains any non-ASCII characters.

    Steps to reproduce

    import zarr
    import tempfile
    tempdir = tempfile.mkdtemp()
    f = open(tempdir + '/.zgroup','w', encoding='utf-8')
    f.write('{"zarr_format":2}')
    f.close()
    f = open(tempdir + '/.zattrs','w', encoding='utf-8')
    f.write('{"foo": "た"}')
    f.close()
    z = zarr.open(tempdir, mode='r')
    z.attrs['foo']
    
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/root/github/zarr-python/zarr/attrs.py", line 74, in __getitem__
        return self.asdict()[item]
      File "/root/github/zarr-python/zarr/attrs.py", line 55, in asdict
        d = self._get_nosync()
      File "/root/github/zarr-python/zarr/attrs.py", line 48, in _get_nosync
        d = self.store._metadata_class.parse_metadata(data)
      File "/root/github/zarr-python/zarr/meta.py", line 104, in parse_metadata
        meta = json_loads(s)
      File "/root/github/zarr-python/zarr/util.py", line 56, in json_loads
        return json.loads(ensure_text(s, 'ascii'))
      File "/root/pyenv/zarr-dev/lib/python3.10/site-packages/numcodecs/compat.py", line 181, in ensure_text
        s = codecs.decode(s, encoding)
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 9: ordinal not in range(128)
    

    Additional output

    No response

    bug 
    opened by nhz2 0
Releases: v2.13.3
Owner: Zarr Developers (contributors to the Zarr open source project)