Deep Learning Pipelines for Apache Spark

Overview

This repo contains only the HorovodRunner code used for local CI and the API docs. To use HorovodRunner for distributed training, please use Databricks Runtime for Machine Learning; see the Databricks doc HorovodRunner: distributed deep learning with Horovod for details.

To use the previous release that contains the Spark Deep Learning Pipelines API, please go to the Spark Packages page.

API Documentation

class sparkdl.HorovodRunner(*, np, driver_log_verbosity='all')

Bases: object

HorovodRunner runs distributed deep learning training jobs using Horovod.

On Databricks Runtime 5.0 ML and above, it launches the Horovod job as a distributed Spark job. It makes running Horovod easy on Databricks by managing the cluster setup and integrating with Spark. Check out Databricks documentation to view end-to-end examples and performance tuning tips.

The open-source version only runs the job locally inside the same Python process, which is for local development only.

NOTE: Horovod is a distributed training framework developed by Uber.

  • Parameters

    • np – number of parallel processes to use for the Horovod job. This argument only takes effect on Databricks Runtime 5.0 ML and above. It is ignored in the open-source version. On Databricks, each process will take an available task slot, which maps to a GPU on a GPU cluster or a CPU core on a CPU cluster. Accepted values are:

      • If <0, this will spawn -np subprocesses on the driver node to run Horovod locally. Training stdout and stderr messages go to the notebook cell output and are also available in the driver logs in case the cell output is truncated. This is useful for debugging, and we recommend testing your code in this mode first. However, be careful about placing heavy load on the Spark driver of a shared Databricks cluster. Note that np < -1 is only supported on Databricks Runtime 5.5 ML and above.
      • If >0, this will launch a Spark job with np tasks that start together, and run the Horovod job on the task nodes. It will wait until np task slots are available to launch the job. If np is greater than the total number of task slots on the cluster, the job will fail. As of Databricks Runtime 5.4 ML, training stdout and stderr messages go to the notebook cell output. If the cell output is truncated, full logs are available in the stderr stream of task 0 under the second Spark job started by HorovodRunner, which you can find in the Spark UI.
      • If 0, this will use all task slots on the cluster to launch the job. Warning: setting np=0 is deprecated and will be removed in the next major Databricks Runtime release. Choosing np based on the total task slots at runtime is unreliable due to dynamic executor registration; please set the number of parallel processes you need explicitly.
    • driver_log_verbosity – this argument is only available on Databricks Runtime.
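
For illustration, a minimal sketch of the documented np settings; np only takes effect on Databricks Runtime ML, and the open-source build ignores it:

    from sparkdl import HorovodRunner

    HorovodRunner(np=-4)  # debug locally with 4 subprocesses on the driver
    HorovodRunner(np=8)   # distributed job over 8 task slots
    HorovodRunner(np=0)   # all task slots on the cluster; deprecated, avoid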

run(main, **kwargs)

Runs a Horovod training job invoking main(**kwargs).

The open-source version only invokes main(**kwargs) inside the same Python process. On Databricks Runtime 5.0 ML and above, it will launch the Horovod job based on the documented behavior of np. Both the main function and the keyword arguments are serialized using cloudpickle and distributed to cluster workers.

  • Parameters

    • main – a Python function that contains the Horovod training code. The expected signature is def main(**kwargs) or a compatible form. Because the function gets pickled and distributed to workers, please change global state inside the function (e.g., setting the logging level) and be aware of pickling limitations. Avoid referencing large objects in the function, which might result in large pickled data and make the job slow to start.

    • kwargs – keyword arguments passed to the main function at invocation time.

  • Returns

    return value of the main function. With np>=0, this returns the value from the rank 0 process. Note that the returned value should be serializable using cloudpickle.
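
A minimal end-to-end sketch, assuming only that sparkdl is installed; the train function and its steps argument are hypothetical stand-ins for real training code. In the open-source version this simply invokes train in the current process, while on Databricks Runtime ML it would launch a Horovod job over two task slots:

    from sparkdl import HorovodRunner

    def train(steps):
        import logging
        logging.getLogger().setLevel(logging.INFO)  # change global state inside the function
        return sum(range(steps))  # stand-in for a training loop; must be cloudpickle-serializable

    hr = HorovodRunner(np=2)
    result = hr.run(train, steps=10)  # with np >= 0, this is the value from the rank 0 process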

Releases

Visit the GitHub Release Page for the release notes.

License

  • The source code is released under the Apache License 2.0 (see the LICENSE file).
Comments
  • Can't import sparkdl with spark-deep-learning-assembly-0.1.0-spark2.1.jar

    First of all, thank you for a great library!

    I tried to use sparkdl in PySpark, but couldn't import sparkdl. Detailed procedure is as follows:

    # make sparkdl jar
    build/sbt assembly
    
    # run pyspark with sparkdl
    pyspark --master local[4] --jars target/scala-2.11/spark-deep-learning-assembly-0.1.0-spark2.1.jar
    
    # import sparkdl
    import sparkdl
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named sparkdl
    

    After digging a few places, I found that it works if I deflate the jar file as follows.

    cd target/scala-2.11
    mkdir tmp
    cp spark-deep-learning-assembly-0.1.0-spark2.1.jar tmp/
    cd tmp
    jar xf spark-deep-learning-assembly-0.1.0-spark2.1.jar
    
    pyspark --jars spark-deep-learning-assembly-0.1.0-spark2.1.jar
    
    import sparkdl
    Using TensorFlow backend.
    

    Edited-1: The second method works only in the directory where the jar file was deflated.
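
    A likely explanation, offered as an assumption rather than something stated in the thread: Python's zipimport can load packages straight from a zip archive on sys.path, and a jar is a zip file; the deflated copy imports only from the directory where it was unpacked because the current working directory is on sys.path. If the assembly jar bundles the sparkdl Python sources, putting the jar itself on sys.path may work without unpacking:

      import sys

      # zipimport treats the jar as a zip archive on the path
      # (assumption: the jar bundles the sparkdl .py sources)
      sys.path.insert(0, "target/scala-2.11/spark-deep-learning-assembly-0.1.0-spark2.1.jar")
      import sparkdl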

    Best wishes, HanCheol

    opened by priancho 14
  • Porting Keras Estimator API and Reference Implementation

    What changes are proposed in this pull request?

    Creating a Spark MLlib Estimator API for Keras models, with a reference implementation. It provides a taste of how to ingest images from URIs in a DataFrame and use them to train a Keras model.

    The changes consist of these components.

    1. Extracted a few Params types for Keras Transformers/Estimators.
    2. Keras utilities
      • Serialization: model <=> hdf5 <=> bytes (for broadcast)
      • Check available Keras options (optimizers, loss functions, etc.)
    3. Keras Estimator.

    How is this patch tested?

    • [x] Unit tests
    • [x] Manual tests
    opened by phi-dbq 11
  • Not able to import sparkdl in jupyter notebook

    Hi,

    I am trying to use this library in a Jupyter notebook, but I am getting the error "no module found".

    When I run the command pyspark --packages databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11, I am able to import sparkdl in the Spark shell.

    How can I use it in a Jupyter notebook?
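
    One common approach, sketched here under the assumption of a standard local PySpark install (it is not from this thread): set PYSPARK_SUBMIT_ARGS inside the notebook before the SparkSession is created, so the package is fetched when the JVM starts:

      import os

      # The trailing "pyspark-shell" token is required when configuring
      # PySpark through PYSPARK_SUBMIT_ARGS.
      os.environ["PYSPARK_SUBMIT_ARGS"] = (
          "--packages databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11 pyspark-shell"
      )

      from pyspark.sql import SparkSession
      spark = SparkSession.builder.getOrCreate()
      import sparkdl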

    opened by yashwanthmadaka24 7
  • Support and build against Keras 2.2.2 and TF 1.10.0

    • bump spark version to 2.3.1
    • bump tensorframes version to 0.4.0
    • bump keras==2.2.2 and tensorflow==1.10.0 to fix travis issues
    • TF_C_API_GRAPH_CONSTRUCTION added as a temp fix
    • Drop support for Spark <2.3 and hence Scala 2.10
    • add Python 3-friendly print
    • add pooling='avg' to the ResNet50 testing model because the Keras API changed
    • test arrays as almost equal with precision 5 in NamedImageTransformerBaseTestCase, test_bare_keras_module, and keras_load_and_preproc
    • make the Keras model smaller in test_simple_keras_udf

    This is a continued work from https://github.com/databricks/spark-deep-learning/pull/149.

    opened by lu-wang-dl 6
  • Fix KerasImageFileEstimator for model tuning

    • add test for KerasImageFileEstimator used with CrossValidator
    • fix pickling issue
    • use new fitMultiple api and ensure thread safety
    • fix tests to reflect new api
    • bugfix setDefault
    • bugfix HasOutputMode
    • remove _validateParams
    • avoid testing KIFT functionality in KIFEst tests
    opened by yogeshg 6
  • Replace sparkdl's ImageSchema with Spark 2.3's version

    Use Spark 2.3's ImageSchema as the image interface.

    • The biggest change is using the opposite ordering of color channels, BGR instead of RGB, which requires extra reordering in a couple of places.
    • Preserved the ability to read and resize images in Python using PIL to match Keras (resize gives different results, and reading JPEGs also produced images that were off by 1 on some green pixels).
    • Needed a few tweaks to run with Spark 2.3; notably, UDFs are now referenced by SQL identifier and cannot have a dash in the name.

    [TODO] - In order to run on Spark < 2.3, the image schema files have been copied here and need to be removed in the future.

    opened by tomasatdatabricks 6
  • TensorFlow Graph Transformer Part-1: Params and Converters

    This is the first of a series of PRs split out from the TF Transformer API design POC PR.

    It introduces parameters needed for the TFTransformer (minus those for TFInputGraph) and corresponding type converters.

    • Introducing MLlib model Params for TFTransformer.
    • Type conversion utilities for the introduced MLlib Params used in Spark Deep Learning Pipelines.
      • We follow the convention of MLlib and name these utilities "converters", but most of them act as type checkers that return the argument if it is of the desired type and raise TypeError otherwise (see the sketch after this list).
      • We follow the practice that a type converter only returns a single type.
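
    A hypothetical sketch of that convention; the converter name and the checked type are illustrative, not the actual sparkdl API:

      def toListOfStrings(value):
          """Return value if it is a list of strings; raise TypeError otherwise."""
          if isinstance(value, list) and all(isinstance(v, str) for v in value):
              return value
          raise TypeError("Expected a list of strings, got %r" % (value,))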
    opened by phi-dbq 6
  • Add style checks and refactor suggestions

    In this PR, we

    • add python/.pylint/suggested.rc adapted from the default configuration generated by pylint
    • allow both camelCase and snake_case using regexes lifted from pylint source code
    • increase thresholds for the number of arguments and local variables
    • disable checks that are used often in this project: unused-argument, too-many-arguments, no-member, missing-docstring, no-init, protected-access, misplaced-comparison-constant, no-else-return, fixme
    • escape some code with # pylint: disable=... because it was hard to refactor without thorough testing

    Some style decisions that were discussed are:

    • disables are acceptable if there is no other way to do this, in which case a comment must be left explaining that
    • other disables should be removed and should be considered similar to todos
    • we allow todo marks in code because these are acceptable for this project and should be taken care of in the future
    • there are currently 50 todos, fixmes, or pylint disables; we should aim to bring this number down. They can be counted with:
      find python/sparkdl | grep ".*\.py$" | xargs egrep -ino --color=auto "(TODO|FIXME|# pylint).*"
    • function calls and function definitions that span more than one line are left to the committer's and reviewer's discretion
      • pep8 style:
      long_function_name(
          long_argument_one = "one",
          long_argument_two = "two",
          long_argument_three = "three",
          long_argument_four = "four",
          long_argument_five = "five")
      
      • MLlib style:
      long_function_name(
          long_argument_one = "one", long_argument_two = "two", long_argument_three = "three",
          long_argument_four = "four", long_argument_five = "five")
      
    opened by yogeshg 5
  • Fix bug in conversions from row image to/from BufferedImage

    Fix a bug in conversions to/from BufferedImage in which we copied raw byte data to BufferedImage rasters using the wrong channel ordering. The fix in this PR is to use the BufferedImage.setRGB and BufferedImage.getRGB APIs instead of accessing image raster data directly for three- and four-channel images.

    Also enhanced an existing unit test to verify that we correctly convert from row image to BufferedImage for one- and four-channel images.

    opened by smurching 5
  • Update ImageUtils to support resizing one, three, or four channel images

    This PR:

    • Updates conversions from row image to/from BufferedImage (spImageToBufferedImage and spImageFromBufferedImage) to support one-, three-, and four-channel images
    • Updates resizeImage to use the tgtChannels parameter to determine the number of channels in the output image instead of defaulting to three output channels
    • Updates existing tests to verify that resizing, conversions to/from BufferedImage work for one, three, and four-channel images
    opened by smurching 5
  • Make python DeepImageFeaturizer use Scala version.

    • Based on the Image schema PR; do not merge until the Image schema PR is merged.
    • Otherwise mostly straightforward, except that results will not match Keras in general due to different image libraries.
    opened by tomasatdatabricks 5
  • sparkdl.xgboost getting stuck trying to map partitions

    I am running the following code to try to fit a model

    from sparkdl.xgboost import XgboostClassifier

    param = {
        'num_workers': 4,   # number of workers on the cluster, adjust as needed
        'missing': 0,
        'objective': 'binary:logistic',
        'eval_metric': 'logloss',
        'featuresCol': 'features',
        'labelCol': 'objective',
        'nthread': 32,      # equal to the number of CPUs on each worker machine
    }

    train, test = data.randomSplit([0.001, 0.001])
    xgb_classifier = XgboostClassifier(**param)
    xgb_clf_model = xgb_classifier.fit(train)

    When I run the model training on my Databricks cluster, it seems to get stuck while trying to map partitions. It uses almost zero CPU on each node, but the memory usage is slowly increasing.

    Is there anything I can do to get around this issue?

    opened by timpiperseek 0
  • Need to modify kdl/transformers/keras_applications to be able to use resnet50

    Hi,

    Per this Stack Overflow question, one needs to modify /home/user/.local/lib/python3.8/site-packages/sparkdl/transformers/keras_applications.py. This happened in Databricks using v10.3.

    Have to change

      from keras.applications import inception_v3, xception, resnet50

    to

      from keras.applications import inception_v3, xception
      from tensorflow.keras.applications import resnet50

    opened by yobdoy 1
  • Plugin Help with Spark framework

    https://github.com/hongzimao/decima-sim - would you like to help me integrate this deep learning model into your pipeline? How can I integrate or plug it into your framework?

    opened by jahidhasanlinix 0
  • Necessary imports not included in setup.py

    Hi,

    I'm developing a neural network using PyTorch on a non-Databricks cluster to ensure its functionality prior to migrating to a Databricks cluster.

    Since I'm using PyTorch, I don't need Keras or TensorFlow. I successfully installed Horovod and Sparkdl; however, when I try to run the Spark process I have found (so far) three consecutive exceptions related to missing dependencies:

        from sparkdl import HorovodRunner
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/__init__.py", line 17, in <module>
        from sparkdl.transformers.keras_image import KerasImageFileTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/keras_image.py", line 16, in <module>
        import keras.backend as K
      File "/opt/conda/default/lib/python3.8/site-packages/keras/__init__.py", line 21, in <module>
        from tensorflow.python import tf2
    ModuleNotFoundError: No module named 'tensorflow'
    
        from sparkdl import HorovodRunner
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/__init__.py", line 17, in <module>
        from sparkdl.transformers.keras_image import KerasImageFileTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/keras_image.py", line 16, in <module>
        import keras.backend as K
    ModuleNotFoundError: No module named 'keras'
    

    This one is DEPRECATED!!:

        from sparkdl import HorovodRunner
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/__init__.py", line 17, in <module>
        from sparkdl.transformers.keras_image import KerasImageFileTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/keras_image.py", line 27, in <module>
        from sparkdl.transformers.tf_image import TFImageTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/tf_image.py", line 18, in <module>
        import tensorframes as tfs
    ModuleNotFoundError: No module named 'tensorframes'
    

    On one hand, I don't understand why I should need these dependencies if I'm not going to use them... Shouldn't they be checked and disabled instead of forced to be installed?

    On the other hand, if those dependencies are unavoidable, they should be included in the setup.py script to avoid these errors and the lost time, since installing Horovod packages on an ephemeral cluster takes a long time just to discover that you cannot run the program...

    I'm sure I won't have this problem on a Databricks cluster, but I cannot use one yet, and that shouldn't be a problem for testing HorovodRunner functionality, as stated in the warning message shown when running a program on a non-Databricks cluster...
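
    A stopgap sketch, assuming the import chain really cannot be avoided (a workaround, not an official fix): preinstall the modules that sparkdl pulls in at import time, even though the PyTorch code path never uses them:

      import subprocess
      import sys

      # Install the packages sparkdl's import chain requires at import time.
      for pkg in ("tensorflow", "keras", "tensorframes"):
          subprocess.check_call([sys.executable, "-m", "pip", "install", pkg])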

    Kind regards

    opened by carlosfrutos 0
  • I found so many ‘spark-deep-learning’ projects

    I have found so many ‘spark-deep-learning’ projects, such as:

    • elephas: https://github.com/maxpumperla/elephas
    • dist-keras: https://github.com/cerndb/dist-keras
    • sparknet: https://github.com/amplab/sparknet
    • dl4j: https://github.com/deeplearning4j/dl4j-spark-ml
    • TensorFlowOnSpark: https://github.com/yahoo/TensorFlowOnSpark
    • spark-deep-learning: https://github.com/databricks/spark-deep-learning
    • H2O: https://github.com/h2oai/sparkling-water/tree/master/
    • BigDL: https://github.com/intel-analytics/BigDL
    • analytics-zoo: https://github.com/intel-analytics/analytics-zoo

    It looks like BigDL is the most active one. I want to start my deep learning on Spark by using spark-deep-learning, but I am afraid the others will become more popular than databricks/spark-deep-learning. So I am still hesitating over which one to choose.

    opened by shuDaoNan9 1
Releases (v1.6.0)
  • v1.6.0 (Jan 8, 2020)

  • v1.5.0 (Jan 25, 2019)

  • v1.4.0 (Nov 18, 2018)

  • v1.3.0 (Nov 13, 2018)

    • Added HorovodRunner API.
    • Simplified test and doc build w/ Docker and conda.
    • Updated public Python API docs.
    • Removed persistence from DeepImageFeaturizer.
  • v1.2.0 (Aug 28, 2018)

    • ignore nullable in DeepImageFeaturizer.validateSchema
    • upgrade TensorFrames version to 0.5.0
    • upgrade TensorFlow version to 1.10.0 and Keras version to 2.2.2
  • v1.1.0 (Jun 18, 2018)

    • keras_image_file_estimator supports both sparse and dense vectors
    • upgrade TensorFrames version to 0.4.0
    • add style checks to Travis CI
    • doc fixes
  • v1.0.0 (May 1, 2018)

    This is the 1.0.0 release. It brings compatibility with newer versions of Spark (2.3) and TensorFlow (1.6+). The custom image schema formerly defined in this package has been replaced with Spark's ImageSchema, so there may be some breaking changes when updating to this version.

    Notable changes:

    • (breaking change) Using the definition of images from Spark 2.3.0. The new definition uses the BGR channel ordering for 3-channel images instead of the RGB ordering used in this project before the change.
    • Persistence for DeepImageFeaturizer (both Python and Scala).
  • v0.3.0 (Jan 30, 2018)

    This is the final release of dl-pipelines prior to migrating to the new ImageSchema.

    Notable changes:

    • Added vgg16, vgg19 models to DeepImageFeaturizer/DeepImagePredictor (Python).
    • Added a Scala API for DeepImageFeaturizer (for transfer learning for images).
    • Added TFTransformer and KerasTransformer for applying TensorFlow graphs or TensorFlow-backed Keras models to a column of arrays in a Spark DataFrame.
  • v0.2.0 (Oct 31, 2017)

    This is the final release of Deep Learning Pipelines 0.2.0.

    Notable additions since 0.1.0:

    • KerasImageFileEstimator API (train a Keras model on image files)
    • SQL UDF support for Keras models
    • Added Xception, Resnet50 models to DeepImageFeaturizer/DeepImagePredictor.