Lazydata: Scalable data dependencies for Python projects

Overview

lazydata is a minimalist library for including data dependencies into Python projects.

Problem: Keeping all data files in git (e.g. via git-lfs) results in a bloated repository copy that takes ages to pull. Keeping code and data out of sync is a disaster waiting to happen.

Solution: lazydata only stores references to data files in git, and syncs data files on-demand when they are needed.

Why: The semantics of code and data are different - code needs to be versioned to merge it, and data just needs to be kept in sync. lazydata achieves exactly this in a minimal way.

Benefits:

  • Keeps your git repository clean with just code, while enabling seamless access to any number of linked data files
  • Data consistency assured using file hashes and automatic versioning
  • Choose your own remote storage backend: AWS S3 or (coming soon) a directory over SSH

lazydata is primarily designed for machine learning and data science projects. See this medium post for more.

Getting started

In this section we'll show how to use lazydata on an example project.

Installation

Install with pip (requires Python 3.5+):

$ pip install lazydata

Add to your project

To enable lazydata, run in project root:

$ lazydata init 

This will initialise lazydata.yml which will hold the list of files managed by lazydata.

Tracking a file

To start tracking a file use track("<path_to_file>") in your code:

my_script.py

from lazydata import track

# store the file when loading  
import pandas as pd
df = pd.read_csv(track("data/my_big_table.csv"))

print("Data shape:" + df.shape)

Running the script the first time will start tracking the file:

$ python my_script.py
## lazydata: Tracking a new file data/my_big_table.csv
## Data shape: (10000,100)

The file is now tracked: it has been backed up to your local lazydata cache in ~/.lazydata and added to lazydata.yml:

files:
  - path: data/my_big_table.csv
    hash: 2C94697198875B6E...
    usage: my_script.py

If you re-run the script without modifying the data file, lazydata will just quickly check that the data file hasn't changed and won't do anything else.

If you modify the data file and re-run the script, this will add another entry to the yml file with the new hash of the data file, i.e. data files are automatically versioned. If you don't want to keep past versions, simply remove them from the yml.
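For example, after one change to the file the yml might contain two entries for the same path (the second hash below is purely illustrative):

files:
  - path: data/my_big_table.csv
    hash: 2C94697198875B6E...
    usage: my_script.py
  - path: data/my_big_table.csv
    hash: 9F31D0C2AA54E87B...
    usage: my_script.py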

And you are done! This data file is now tracked and linked to your local repository.
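Because track() simply returns the local file path as a string, it composes with any loader, not just pandas. For instance (the text file here is hypothetical):

from lazydata import track

# track() returns the path, so it works with plain open() as well
with open(track("data/notes.txt"), "r") as f:
    print(f.read())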

Sharing your tracked files

To access your tracked files from multiple machines, add a remote storage backend where they can be uploaded. To use S3 as a remote storage backend, run:

$ lazydata add-remote s3://mybucket/lazydata

This will configure the S3 backend and also add it to lazydata.yml for future reference.
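After this, lazydata.yml records the remote alongside the tracked files. The exact key name below is an assumption for illustration, not the documented format:

remote: s3://mybucket/lazydata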

You can now git commit and push your my_script.py and lazydata.yml files as you normally would.

To copy the stored data files to S3 use:

$ lazydata push

When your collaborator pulls the latest version of the git repository, they will get the script and the lazydata.yml file as usual.

Data files will be downloaded when your collaborator runs my_script.py and the track("data/my_big_table.csv") call is executed:

$ python my_script.py
## lazydata: Downloading stored file my_big_table.csv ...
## Data shape: (10000,100)

To get the data files without running the code, you can also use the command line utility:

# download just this file
$ lazydata pull my_big_table.csv

# download everything used in this script
$ lazydata pull my_script.py

# download everything stored in the data/ directory and subdirs
$ lazydata pull data/

# download the latest version of all data files
$ lazydata pull

Because lazydata.yml is tracked by git you can safely make and switch git branches.

Data dependency scenarios

You can achieve multiple data dependency scenarios by putting lazydata.track() into different parts of the code:

  • Jupyter notebook data dependencies, by calling track() in notebooks
  • Data pipeline output tracking, by tracking saved output files
  • Class-level data dependencies, by tracking files in __init__(self) (see the sketch below)
  • Module-level data dependencies, by tracking files in __init__.py
  • Package-level data dependencies, by tracking files in setup.py
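As a minimal sketch of the class-level scenario, assuming a hypothetical model class and weights file:

from lazydata import track

class SentimentModel:
    def __init__(self):
        # The weights file is fetched on demand the first time the class is instantiated
        self.weights_path = track("data/sentiment_weights.pkl")
        # ... load the model from self.weights_path here ...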

Coming soon...

  • Examine stored file provenance and properties
  • Faceting multiple files into portable datasets
  • Storing data coming from databases and APIs
  • More remote storage options

Stay in touch

This is an early (but stable) beta release. To find out about new releases, subscribe to our new releases mailing list.

Contributing

The library is licensed under the Apache-2 licence. All contributions are welcome!

Comments
  • lazydata command not recognizable on Windows

    Add to your project To enable lazydata, run in project root:

    $ lazydata init

    This resulted in:

    'lazydata' is not recognized as an internal or external command, operable program or batch file.

    on windows 10.

    opened by lastmeta 6
  • Adding support for custom endpoint

    Useful for users that do not want to rely on Amazon S3 while using this package.

    I'm running a Minio Server for storage, which mimics an S3 container.

    boto3 (api doc here) supports custom endpoints that it's going to hit via the S3 API.

It would be good to add tests for this behaviour, along with testing pulls and pushes for normal behaviour. Using mocks, maybe?

    EDIT: I also corrected a line which caused the package not to work for python 3.5, see commit 0d4f8fc

    opened by zbitouzakaria 6
  • corrupted lazydata.yml if application crashes

I noticed that when the python application that is tracking some data crashes at some point, it can leave behind a corrupted yaml file. It is not that uncommon, when you are in an exploratory phase of building an ML model, to write code that can crash, for example because of memory issues. It would be great if the yaml file handle were closed after each track call to ensure the file does not get corrupted! Thanks! I really like this project!

    opened by rmanak 4
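What is being asked for here is essentially an atomic save of the manifest: write to a temporary file, then atomically replace lazydata.yml, so a crash mid-write never leaves a half-written file. A minimal sketch of that pattern (not lazydata's actual implementation):

import os
import tempfile
import yaml

def atomic_save_yaml(data, path):
    """Write YAML to a temp file in the same directory, then atomically replace the target."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            yaml.safe_dump(data, f, default_flow_style=False)
        os.replace(tmp_path, path)  # atomic; never leaves a partial lazydata.yml
    except BaseException:
        os.remove(tmp_path)
        raise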
  • All local file revisions hardlink to the latest revision

    I tested this out by creating and tracking a single file through multiple revisions.

Let's say we have a big_file.csv whose content looks like this:

    a, b, c
    1, 2, 3
    

    We first track it using this script:

    from lazydata import track
    
    # store the file when loading  
    import pandas as pd
    df = pd.read_csv(track("big_file.csv"))
    
    print("Data shape:" + str(df.shape))
    

    Change the file content multiple times, for ex:

    a, b, c
    1, 2, 3
    4, 5, 6
    

    And keep executing the script between the multiple revisions:

    (dev3.5)  ~/test_lazydata > python my_script.py 
    LAZYDATA: Tracking new file `big_file.csv`
    Data shape:(1, 3)
    (dev3.5)  ~/test_lazydata > vim big_file.csv  # changing file
    (dev3.5)  ~/test_lazydata > python my_script.py
    LAZYDATA: Tracked file `big_file.csv` changed, recording a new version...
    Data shape:(2, 3)
    (dev3.5)  ~/test_lazydata > vim big_file.csv  # changing file
    (dev3.5)  ~/test_lazydata > python my_script.py
    LAZYDATA: Tracked file `big_file.csv` changed, recording a new version...
    Data shape:(3, 3)
    (dev3.5)  ~/test_lazydata > vim big_file.csv  # changing file
    (dev3.5)  ~/test_lazydata > python my_script.py
    LAZYDATA: Tracked file `big_file.csv` changed, recording a new version...
    Data shape:(4, 3)
    

    A simple ls afterwards points to the mistake:

    (dev3.5)  ~/test_lazydata > ls -lah
    total 20
    drwxrwxr-x  2 zakaria zakaria 4096 sept.  5 16:14 .
    drwxr-xr-x 56 zakaria zakaria 4096 sept.  5 16:14 ..
    -rw-rw-r--  5 zakaria zakaria   44 sept.  5 16:14 big_file.csv
    -rw-rw-r--  1 zakaria zakaria  482 sept.  5 16:14 lazydata.yml
    -rw-rw-r--  1 zakaria zakaria  158 sept.  5 16:12 my_script.py
    

    Notice the number of hardlinks to big_file.csv. There should only be one. What is happening is that all the revisions point to the same file.

You can also check ~/.lazydata/data directly for the content of the different files. It's all the same.

    opened by zbitouzakaria 3
  • SyntaxError: invalid syntax

    Using Python 2.7.6 and the example script you provide, with a file outside of the github repo (file exists and file path is correct, I checked):

    from lazydata import track
    
    with open(track("/home/lg390/tmp/data/some_data_file.txt"), "r") as f:
        print(f.read())
    

    I get

    -> % python sample_script.py
    Traceback (most recent call last):
      File "sample_script.py", line 1, in <module>
        from lazydata import track
      File "/usr/local/lib/python2.7/dist-packages/lazydata/__init__.py", line 1, in <module>
        from .tracker import track
      File "/usr/local/lib/python2.7/dist-packages/lazydata/tracker.py", line 11
        def track(path:str) -> str:
                      ^
    SyntaxError: invalid syntax
    
    opened by LaurentGatto 3
  • Add http link remote backend

I have a project https://github.com/rominf/profanityfilter that could benefit from lazydata. I think it would be cool to move the badword dictionaries out of the repository and track them alongside hunspell dictionaries with lazydata. The problem is that I want these files to be accessible to end users, which means I don't want them to be stored in AWS. Instead, I would like them to be downloaded via an HTTP link.

    opened by rominf 2
  • Comparison with DVC

    Hello!

    First of all thank you for your contribution to the community! I’ve just found out about this and it seems to be a nice project that is growing!

    You are probably familiar with dvc (https://github.com/iterative/dvc).

    I’ve been investigating it in order to include it in my ML pipeline. Can you explain briefly how/if Lazydata differs from dvc? And any advantages and disadvantages? I understand that there may be some functionalities that maybe are not yet implemented purely due to time constraints or similar. I’m more interested in knowing if there are any differences in terms of paradigm.

PS: if you have a different channel for these kinds of questions, please let me know.

    Thank you very much!

    opened by silverdna 2
  • Publish a release on PyPI

I've made a PR #13 that you've accepted. Unfortunately, I cannot use the library the easy way because you didn't upload the latest changes to PyPI.

    I propose to implement #20 first.

    opened by rominf 0
  • Move backends requirements to extras_require

If the package has optional features that require their own dependencies, you can use extras_require.

I propose making use of extras_require for all backends that have their own dependencies, to minimize the number of installed packages. For example, I do not use s3, but all 11 packages are installed; 6 of them are needed only for s3.

    opened by rominf 0
  • Azure integration

Here's a start on the Azure integration. I haven't written tests yet, but let me know what you think. Also, sorry for some of the style changes; I have an autoformatter on (black). Let me know if you want me to turn that off.

    Ref #18

    opened by avril-affine 3
  • Implementing multiple backends by re-using snakemake.remote or pyfilesystem2

Would it be possible to wrap the classes implementing snakemake.remote.AbstractRemoteObject into the lazydata.remote.RemoteStorage class?

This would allow implementing the following remote storage providers in one go (https://snakemake.readthedocs.io/en/stable/snakefiles/remote_files.html):

    • Amazon Simple Storage Service (AWS S3): snakemake.remote.S3
    • Google Cloud Storage (GS): snakemake.remote.GS
    • File transfer over SSH (SFTP): snakemake.remote.SFTP
    • Read-only web (HTTP[S]): snakemake.remote.HTTP
    • File transfer protocol (FTP): snakemake.remote.FTP
    • Dropbox: snakemake.remote.dropbox
    • XRootD: snakemake.remote.XRootD
    • GenBank / NCBI Entrez: snakemake.remote.NCBI
    • WebDAV: snakemake.remote.webdav
    • GFAL: snakemake.remote.gfal
    • GridFTP: snakemake.remote.gridftp
    • iRODS: snakemake.remote.iRODS
    • EGA: snakemake.remote.EGA

    Pyfilesystem2

    Another alternative would be to write a wrapper around pyfilesystem2: https://github.com/PyFilesystem/pyfilesystem2. It supports the following filesystems: https://www.pyfilesystem.org/page/index-of-filesystems/

    Builtin

    • FTPFS File Transfer Protocol.
    • ...

    Official

    Filesystems in the PyFilesystem organisation on GitHub.

    • S3FS Amazon S3 Filesystem.
    • WebDavFS WebDav Filesystem.

    Third Party

    • fs.archive Enhanced archive filesystems.
    • fs.dropboxfs Dropbox Filesystem.
    • fs-gcsfs Google Cloud Storage Filesystem.
    • fs.googledrivefs Google Drive Filesystem.
    • fs.onedrivefs Microsoft OneDrive Filesystem.
    • fs.smbfs A filesystem running over the SMB protocol.
    • fs.sshfs A filesystem running over the SSH protocol.
    • fs.youtube A filesystem for accessing YouTube Videos and Playlists.
    • fs.dnla A filesystem for accessing DLNA Servers.
    opened by Avsecz 3
  • lazydata track - tracking files produced by other CLI tools

    First, thanks for the amazing package. Exactly what I was looking for!

    It would be great to also have a command lazydata track <file1> <file2> ..., which would run lazydata.track() on the specified files. That way, the user can use CLI tools outside of python while still easily tracking the produced files.

    opened by Avsecz 2
Releases: 1.0.19