Microscopy Image Cytometry Toolkit

Overview


Cytokit

Cytokit is a collection of tools for quantifying and analyzing properties of individual cells in large fluorescent microscopy datasets, with a focus on those generated from multiplexed staining protocols. This includes a GPU-accelerated image processing pipeline (via TensorFlow), CLI tools for batch processing of experimental replicates (often requiring conditional configuration, as things tend to go wrong when capturing hundreds of thousands of microscope images over a period of hours or days), and visualization UIs (either Cytokit Explorer or CellProfiler Analyst).

Cytokit runs in a Python 3 environment but also comes (via Docker) with CellProfiler (Python 2) and Ilastik installations.
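
Inside the container these tool sets live side by side as separate conda environments. A quick way to orient yourself (a sketch; the environment names are inferred from the image's /opt/conda/envs layout):

# From a shell inside the container
conda env list
/opt/conda/envs/cytokit/bin/python --version       # Python 3 pipeline environment
/opt/conda/envs/cellprofiler/bin/python --version  # Python 2 CellProfiler environment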

For more information, see: Cytokit: A single-cell analysis toolkit for high dimensional fluorescent microscopy imaging

Quick Start

Installing and configuring Cytokit currently involves little more than installing nvidia-docker and building or downloading the Cytokit container image, though this inherently limits GPU-accelerated use to Linux operating systems. Additional limitations include:

  • There is currently no CPU-only docker image
  • Generating and running pipelines requires working knowledge of JupyterLab and a little tolerance for yaml/json files as well as command lines
  • Only tiff files are supported as a raw input image format
  • Deconvolution requires manual configuration of microscope attributes like filter wavelengths, immersion media, and numerical aperture (though support to infer much of this based on the imaging platform may be added in the future)
  • 3 dimensional images are supported but cell segmentation and related outputs are currently 2 dimensional
  • General system requirements include at least 24G of RAM and 8G of GPU memory (per GPU); a quick host-side check is sketched below
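
As a quick sanity check against these requirements (standard Linux/NVIDIA tooling on the host, nothing Cytokit-specific):

free -g                                                # total system RAM (GiB)
nvidia-smi --query-gpu=name,memory.total --format=csv  # memory available per GPU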

Once nvidia-docker is installed, the container can be launched and used as follows:

nvidia-docker pull eczech/cytokit:latest

# Set LOCAL_IMAGE_DATA_DIR variable to a host directory for data sharing
# and persistent storage between container runs
export LOCAL_IMAGE_DATA_DIR=/tmp 

# Run the container with an attached volume to contain raw images and results  
nvidia-docker run --rm -ti -p 8888:8888 -p 8787:8787 -p 8050:8050 \
-v $LOCAL_IMAGE_DATA_DIR:/lab/data \
eczech/cytokit

This will launch JupyterLab on port 8888. After navigating to localhost:8888 and entering the access token printed on the command line following nvidia-docker run, you can then run an example notebook like cellular_marker_profiling_example, which can be found at /lab/repos/cytokit/python/notebooks/examples in the JupyterLab file navigator.
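
Before launching a pipeline it is also worth confirming that the container actually sees the GPU. A minimal sketch, assuming the TensorFlow 1.x API bundled in the image, run from a JupyterLab terminal:

# Inside the container (e.g. a JupyterLab terminal)
nvidia-smi
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"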

Using a Specific Release

To use a release-specific container, modify the instructions above as shown below, which launches the 0.1.1 container:

nvidia-docker pull eczech/cytokit:0.1.1
export LOCAL_IMAGE_DATA_DIR=/tmp   
nvidia-docker run --rm -ti -p 8888:8888 -p 8787:8787 -p 8050:8050 \
-v $LOCAL_IMAGE_DATA_DIR:/lab/data \
eczech/cytokit:0.1.1

Example

One of the goals of Cytokit is to make it as easy as possible to reproduce complicated workflows on big image datasets, and to that end the majority of the logic that drives how Cytokit functions is determined by json/yaml configurations.
Starting from template configurations like this sample Test Experiment or, more realistically, this CODEX BALBc1 configuration, pipelines are meant to be run as bash scripts that execute small variants of these parameterizations for evaluation against one another. Here is a bash script demonstrating how this often works:

EXPERIMENT_DATA_DIR=/lab/data/201801-codex-lung

for REPLICATE in "201801-codex-lung-01" "201801-codex-lung-02"; do
    DATA_DIR=$EXPERIMENT_DATA_DIR/$REPLICATE
    
    # This command will generate 3 processing variants to run:
    # v01 - Cell object determined as fixed radius from nuclei
    # v02 - Cell object determined by membrane stain
    # v03 - 5x5 grid subset with deconvolution applied and before/after channels extracted
    cytokit config editor --base-config-path=template_config.yaml --output-dir=$DATA_DIR/output \
      set processor.cytometry.segmentation_params.nucleus_dilation 10 \
    save_variant v01/config reset \
      set processor.cytometry.membrane_channel_name CD45 \
    save_variant v02/config reset \
      set acquisition.region_height 5 \
      set acquisition.region_width 5 \
      set processor.args.run_deconvolution True \
      add operator '{extract: {name:deconvolution, channels:[raw_DAPI,proc_DAPI]}}' \
    save_variant v03/config exit 
    
    # Run everything for each variant of this experiment
    for VARIANT in v01 v02 v03; do
        OUTPUT_DIR=$DATA_DIR/output/$VARIANT
        CONFIG_DIR=$OUTPUT_DIR/config
        cytokit processor run_all --config-path=$CONFIG_DIR --data-dir=$OUTPUT_DIR --output-dir=$OUTPUT_DIR
        cytokit operator run_all  --config-path=$CONFIG_DIR --data-dir=$OUTPUT_DIR 
        cytokit analysis run_all  --config-path=$CONFIG_DIR --data-dir=$OUTPUT_DIR 
    done
done

The above, when executed, would produce several things:

  1. 5D tiles with processed image data (which can be reused without having to restart from raw data)
  2. 5D tile extracts corresponding to user-defined slices (e.g. raw vs processed DAPI images above) as well as montages of these tiles (e.g. stitching 16 2048x2048 images on a 4x4 grid into a single 8192x8192 image)
  3. CSV/FCS files with single-cell data
  4. Final yaml configuration files representing how each variant was defined
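
The single-cell outputs in particular are easy to sanity-check straight from the shell. A minimal sketch, assuming the directory layout from the script above (the cytometry/data.csv location matches the path used by the aggregation step):

OUTPUT_DIR=$DATA_DIR/output/v01
head -n 5 $OUTPUT_DIR/cytometry/data.csv   # one row per cell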

For example, an ad-hoc extraction like this (which could also be defined in the configuration files):

cytokit operator extract --name='primary_markers' --z='best' \
  --channels=['proc_dapi','proc_cd3','proc_cd4','proc_cd8','cyto_cell_boundary','cyto_nucleus_boundary']

Would produce 5D hyperstack images that could be loaded into ImageJ and blended together:

Human T Cells stained for DAPI (gray), CD3 (blue), CD4 (red), CD8 (green) and with nucleus outline (light green), cell outline (light red)

Cytokit Explorer UI

After processing an experiment, the Explorer UI application can be run within the same docker container for fast visualization of the relationship between spatial features of cells and fluorescent signal intensities:

High-Res Version

See the Cytokit Explorer docs for more details.

CellProfiler Analyst

In addition to Cytokit Explorer, exports can also be generated using CellProfiler (CP) directly. This makes it possible to amend a configuration with a block like the following to generate both CP spreadsheets and a SQLite DB compatible with CellProfiler Analyst (see pub/config/codex-spleen/experiment.yaml):

analysis:
  - cellprofiler_quantification:
      export_csv: true
      export_db: true
      export_db_objects_separately: true
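
With that block in place, the exports are produced by the same analysis step shown in the pipeline script earlier (directory variables as defined there):

cytokit analysis run_all --config-path=$CONFIG_DIR --data-dir=$OUTPUT_DIR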

These screenshots from CellProfiler Analyst 2.2.1 show a reconstruction of plots used in the CODEX publication based on data generated by dynamic construction and execution of a CP 3.1.8 pipeline (see pub/analysis/codex-spleen/pipeline_execution.sh):

CellProfiler Integration

CellProfiler is not easy to use programmatically in the way it is used here: there is no official Python API, and working with the internals has to be informed largely by tests and other source code. For any interested power-users, here are some parts of this project that may be useful resources:

  • Installation: The Dockerfile shows how to bootstrap a minimal Python 2.7 environment compatible with CellProfiler 3.1.8
  • Configuration: The cpcli.py script demonstrates how to build a CP pipeline programmatically (in this case segmented objects are provided to the pipeline that only does quantification and export)
  • Analysis: When data is exported from CP inside a docker container, the paths in csv files or inserted into a database are all relative to the container. One simple solution to this problem is to create a local /lab/data folder with copies of the information from the container that you would like to analyze (see the sketch after this list).
    A little more information on this can be found at pub/analysis/codex-spleen/README.md.
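
One way to set up that local mirror, sketched here assuming the data volume was mounted as in the Quick Start ($LOCAL_IMAGE_DATA_DIR:/lab/data); a symlink works as well as real copies:

# Make container-relative paths like /lab/data/... resolve on the host
sudo mkdir -p /lab
sudo ln -s $LOCAL_IMAGE_DATA_DIR /lab/data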

Custom Segmentation

While the purpose of this pipeline is to perform image preprocessing and segmentation, the semantics of that segmentation often change. Depending on the experimental context, the provided cell nucleus segmentation may not be adequate; if a different segmentation methodology is required, custom logic can be added to the pipeline, as in the mc38-spheroid example, where a custom segmentation implementation is used to identify spheroids rather than cells.

Messaging Caveats

Errors in processor logs that can safely be ignored:

  • tornado.iostream.StreamClosedError: Stream is closed: These often follow the completion of successful pipeline runs. These can hopefully be eliminated in the future with a dask upgrade but for now they can simply be ignored.

CODEX Backport

As a small piece of standalone functionality, instructions can be found here for how to run deconvolution on CODEX samples: Standalone Deconvolution Instructions

Comments
  • Is there a way to change threshold for nucleus segmentation?

    I've noticed that Cytokit segmentation performs poorly on some CODEX datasets. Is there a way to change a threshold that determines which nuclei are allowed to pass? Or maybe there are some other parameters that can influence the quality of nucleus segmentation? I could only find some options that influence the size of the nuclei masks.

    opened by VasylVaskivskyi 15
  • Error during analysis step

    Hello,

    I am running Cytokit using the example CODEX BALBc-1 mouse spleen dataset, with the config file set up as in this YAML file, with line 130 uncommented to try to produce the CellProfiler exports. I am using Cytokit via a Singularity container created from the Docker image at docker://eczech/cytokit:latest. I ran the first two steps (with "processor run_all" and "operator run_all") set out in this script, without any problems. But then I ran into problems during the "analysis" step.

    First, I got this error:

    python /lab/repos/cytokit/python/pipeline/cytokit/cli/main.py analysis run_all --config-path=/users/keays/cytokit_testing/Goltsev_mouse_spleen/experiment.yaml --data-dir=/users/keays/cytokit_testing/Goltsev_mouse_spleen/output
    2019-12-02 10:26:00,281:INFO:39536:root: Running cytometry statistics aggregation
    2019-12-02 10:26:30,798:INFO:39536:cytokit.function.core: Saved cytometry aggregation results to csv at "/users/keays/cytokit_testing/Goltsev_mouse_spleen/output/cytometry/data.csv"
    2019-12-02 10:26:31,949:INFO:39536:cytokit.function.core: Saved cytometry aggregation results to fcs at "/users/keays/cytokit_testing/Goltsev_mouse_spleen/output/cytometry/data.fcs"
    Traceback (most recent call last):
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/main.py", line 32, in <module>
        main()
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/main.py", line 28, in main
        fire.Fire(Cytokit)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 127, in Fire
        component_trace = _Fire(component, args, context, name)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 366, in _Fire
        component, remaining_args)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 542, in _CallCallable
        result = fn(*varargs, **kwargs)
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/__init__.py", line 167, in run_all
        fn(**{**config[op], **params})
    TypeError: cellprofiler_quantification() got an unexpected keyword argument 'export_db_objects_separately'

    I removed the export_db_objects_separately: true from line 130, and then I got this error instead:

    python /lab/repos/cytokit/python/pipeline/cytokit/cli/main.py analysis run_all --config-path=/users/keays/cytokit_testing/Goltsev_mouse_spleen/experiment.yaml --data-dir=/users/keays/cytokit_testing/Goltsev_mouse_spleen/output
    2019-12-02 10:31:37,242:INFO:39625:root: Running cytometry statistics aggregation
    2019-12-02 10:32:07,818:INFO:39625:cytokit.function.core: Saved cytometry aggregation results to csv at "/users/keays/cytokit_testing/Goltsev_mouse_spleen/output/cytometry/data.csv"
    2019-12-02 10:32:08,988:INFO:39625:cytokit.function.core: Saved cytometry aggregation results to fcs at "/users/keays/cytokit_testing/Goltsev_mouse_spleen/output/cytometry/data.fcs"
    2019-12-02 10:32:08,989:INFO:39625:root: Running CellProfiler image quantification pipeline
    INFO:__main__:Loading experiment configuration from file "/users/keays/cytokit_testing/Goltsev_mouse_spleen/experiment.yaml"
    INFO:__main__:Extracting expression channel images
    INFO:__main__:Extracting object images
    Traceback (most recent call last):
      File "/lab/repos/cytokit/python/external/cellprofiler/cpcli.py", line 450, in <module>
        sys.exit(main())
      File "/lab/repos/cytokit/python/external/cellprofiler/cpcli.py", line 442, in main
        do_extraction=options.do_extraction == 'true'
      File "/lab/repos/cytokit/python/external/cellprofiler/cpcli.py", line 307, in run_quantification
        run_extraction(output_dir, cp_input_dir, channels)
      File "/lab/repos/cytokit/python/external/cellprofiler/cpcli.py", line 275, in run_extraction
        for channel_images in extract(filters, cytometry_image_dir):
      File "/lab/repos/cytokit/python/external/cellprofiler/cpcli.py", line 229, in extract
        raise ValueError('Expecting 5D tile image, got shape {}'.format(img.shape))
    ValueError: Expecting 5D tile image, got shape (15, 4, 1008, 1344)
    Traceback (most recent call last):
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/main.py", line 32, in <module>
        main()
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/main.py", line 28, in main
        fire.Fire(Cytokit)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 127, in Fire
        component_trace = _Fire(component, args, context, name)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 366, in _Fire
        component, remaining_args)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 542, in _CallCallable
        result = fn(*varargs, **kwargs)
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/__init__.py", line 167, in run_all
        fn(**{**config[op], **params})
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/analysis.py", line 39, in cellprofiler_quantification
        log_level=self.py_log_level
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/cellprofiler.py", line 19, in run_quantification
        raise ValueError('CellProfiler cli command returned code {}; Command:\n{}'.format(rc.returncode, cmd))
    ValueError: CellProfiler cli command returned code 1; Command:
    /opt/conda/envs/cellprofiler/bin/python /lab/repos/cytokit/python/external/cellprofiler/cpcli.py --do-extraction=true --export-csv=true --config-path=/users/keays/cytokit_testing/Goltsev_mouse_spleen/experiment.yaml --output-dir=/users/keays/cytokit_testing/Goltsev_mouse_spleen/output --log-level=20 --export-db=true

    I looked at the images under output/cytometry/tile, generated during earlier processing steps, and they are indeed 4D, with 4 "channels" and 15 "slices" according to tiffinfo, but no mention of cycles/frames. Is this expected?

    opened by mkeays 13
  • CODEX BALBc1 not available

    Good Afternoon,

    I am trying to run the CODEX BALBc1 example but the dataset is not available. Where can I find this dataset, and is there any better explanation of how to run cytokit on CODEX datasets?

    Kind Regards

    opened by jesusdpa1 10
  • Compute Error due to Memory Error

    Good Afternoon,

    In the last run I tried with cytokit, I started observing the following error:

    [error screenshots attached to the issue]

    I tried modifying the parameter target_shape: [504, 672] # 1024, 1344 and changing:

    tile_height: 1007
    tile_width: 1344
    tile_overlap_x: 576
    tile_overlap_y: 433

    Without any luck

    experiment_2.txt

    pipeline_execution_2.txt

    Thanks in advance for your help,

    Kind Regards

    opened by jesusdpa1 9
  • Error during preprocessing

    Hello,

    I am trying Cytokit with some CODEX data from a collaborator, and have run into the error below and I'm not sure how to overcome it. Would you have any idea what might be causing this?

    Thanks, Maria

    2019-11-26 10:52:38,705:INFO:69377:root: Execution arguments and environment saved to "/nfs/cg/hm/cytokit_testing/analysis/dataset2/output/processor/execution/201911261052.json"
    2019-11-26 10:52:47,400:INFO:69377:cytokit.exec.pipeline: Starting Pre-processing pipeline for 2 tasks (2 workers)
    Using TensorFlow backend.
    Using TensorFlow backend.
    distributed.worker - WARNING - Compute Failed
    Function:  run_preprocess_task
    args:      ({'op_flags': <cytokit.exec.pipeline.OpFlags object at 0x7fcda3353080>, 'tile_prefetch_capacity': 1, 'output_dir': '/nfs/cg/hm/cytokit_testing/analysis/dataset2/output', 'data_dir': '/nfs/cg/hm/cytokit_testing/analysis/dataset2/data', 'region_indexes': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 'gpu': 2, 'tile_indexes': array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])})
    kwargs:    {}
    Exception: ValueError('A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 250, 336, 512), (None, 251, 336, 256)]',)
    Traceback (most recent call last):
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/main.py", line 32, in <module>
        main()
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/main.py", line 28, in main
        fire.Fire(Cytokit)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 127, in Fire
        component_trace = _Fire(component, args, context, name)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 366, in _Fire
        component, remaining_args)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 542, in _CallCallable
        result = fn(*varargs, **kwargs)
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/cli/__init__.py", line 167, in run_all
        fn(**{**config[op], **params})
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/cli/processor.py", line 131, in run
        pipeline.run(pl_config, logging_init_fn=self._logging_init_fn)
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 458, in run
        run_tasks(pl_conf, 'Pre-processing', run_preprocess_task, logging_init_fn)
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 421, in run_tasks
        res = [r.result() for r in res]
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 421, in <listcomp>
        res = [r.result() for r in res]
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/distributed/client.py", line 227, in result
        six.reraise(*result)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/six.py", line 692, in reraise
        raise value.with_traceback(tb)
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 441, in run_preprocess_task
        return run_task(task, ops, preprocess_tile)
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 355, in run_task
        with ops:
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/ops/op.py", line 200, in __enter__
        v.__enter__()
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/ops/op.py", line 152, in __enter__
        self.initialize()
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/ops/cytometry.py", line 136, in initialize
        self.cytometer.initialize()
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/cytometry/cytometer.py", line 609, in initialize
        self.model = self._get_model(input_shape)
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/cytometry/cytometer.py", line 885, in _get_model
        return unet_model.get_model(3, input_shape)
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/cytometry/models/unet_v2.py", line 82, in get_model
        [x, y] = get_model_core(n_class, input_shape, **kwargs)
      File "/nfs/cg/hm/cytokit_testing/lab/repos/cytokit/python/pipeline/cytokit/cytometry/models/unet_v2.py", line 47, in get_model_core
        y = keras.layers.merge.concatenate([d, c])
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/keras/layers/merge.py", line 649, in concatenate
        return Concatenate(axis=axis, **kwargs)(inputs)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/keras/engine/base_layer.py", line 431, in __call__
        self.build(unpack_singleton(input_shapes))
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/keras/layers/merge.py", line 362, in build
        'Got inputs shapes: %s' % (input_shape))
    ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 250, 336, 512), (None, 251, 336, 256)]

    opened by mkeays 7
  • Issue Executing marker_profiling_example.ipynb

    I started the docker image, and I am trying to execute the example notebook marker_profiling_example.ipynb. I get an error in the 8th cell. My guess is that the h5 file is not correctly downloaded in the background.

    variant_dir = osp.join(out_dir, 'v00')
    !cytokit processor run_all --config-path=$variant_dir/config --data-dir=$raw_dir --output-dir=$variant_dir
    

    Error message:

    2021-04-20 11:05:20,412:INFO:381:root: Execution arguments and environment saved to "/tmp/cytokit-example/cellular-marker/20181116-d40-r1-20x-5by5/output/v00/processor/execution/202104201105.json"
    2021-04-20 11:05:34,536:INFO:381:cytokit.exec.pipeline: Starting Pre-processing pipeline for 1 tasks (1 workers)
    Using TensorFlow backend.
    distributed.worker - WARNING -  Compute Failed
    Function:  run_preprocess_task
    args:      ({'output_dir': '/tmp/cytokit-example/cellular-marker/20181116-d40-r1-20x-5by5/output/v00', 'op_flags': <cytokit.exec.pipeline.OpFlags object at 0x7f2202d777b8>, 'data_dir': '/tmp/cytokit-example/cellular-marker/20181116-d40-r1-20x-5by5/raw', 'tile_indexes': array([0]), 'gpu': 0, 'tile_prefetch_capacity': 1, 'region_indexes': array([0])})
    kwargs:    {}
    Exception: OSError('Unable to open file (file signature not found)',)
    
    Traceback (most recent call last):
      File "/usr/local/bin/cytokit", line 32, in <module>
        main()
      File "/usr/local/bin/cytokit", line 28, in main
        fire.Fire(Cytokit)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 127, in Fire
        component_trace = _Fire(component, args, context, name)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 366, in _Fire
        component, remaining_args)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 542, in _CallCallable
        result = fn(*varargs, **kwargs)
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/__init__.py", line 167, in run_all
        fn(**{**config[op], **params})
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/processor.py", line 131, in run
        pipeline.run(pl_config, logging_init_fn=self._logging_init_fn)
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 458, in run
        run_tasks(pl_conf, 'Pre-processing', run_preprocess_task, logging_init_fn)
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 421, in run_tasks
        res = [r.result() for r in res]
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 421, in <listcomp>
        res = [r.result() for r in res]
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/distributed/client.py", line 227, in result
        six.reraise(*result)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/six.py", line 702, in reraise
        raise value.with_traceback(tb)
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 441, in run_preprocess_task
        return run_task(task, ops, preprocess_tile)
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 355, in run_task
        with ops:
      File "/lab/repos/cytokit/python/pipeline/cytokit/ops/op.py", line 200, in __enter__
        v.__enter__()
      File "/lab/repos/cytokit/python/pipeline/cytokit/ops/op.py", line 152, in __enter__
        self.initialize()
      File "/lab/repos/cytokit/python/pipeline/cytokit/ops/cytometry.py", line 136, in initialize
        self.cytometer.initialize()
      File "/lab/repos/cytokit/python/pipeline/cytokit/cytometry/cytometer.py", line 610, in initialize
        self.model.load_weights(self.weights_path or self._get_weights_path())
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/keras/engine/network.py", line 1157, in load_weights
        with h5py.File(filepath, mode='r') as f:
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/h5py/_hl/files.py", line 408, in __init__
        swmr=swmr)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/h5py/_hl/files.py", line 173, in make_fid
        fid = h5f.open(name, flags, fapl=fapl)
      File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
      File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
      File "h5py/h5f.pyx", line 88, in h5py.h5f.open
    OSError: Unable to open file (file signature not found)
    

    Any ideas to solve that?

    opened by r0f1 5
  • Using more than two GPUs

    Hi @eric-czech,

    I'm now trying to use Cytokit to process a CODEX dataset with 20 cycles, and I see the following issue when using two GPUs:

    2020-01-29 06:12:55,608:INFO:43129:cytokit.exec.pipeline: Loaded tile 33 for region 1 [shape = (20, 11, 4, 1440, 1920)]
    2020-01-29 06:12:55,609:INFO:43129:cytokit.ops.drift_compensation: Calculating drift translations
    2020-01-29 06:12:56,185:INFO:43125:cytokit.exec.pipeline: Loaded tile 1 for region 1 [shape = (20, 11, 4, 1440, 1920)]
    2020-01-29 06:12:56,186:INFO:43125:cytokit.ops.drift_compensation: Calculating drift translations
    2020-01-29 06:13:30,956:INFO:43125:cytokit.ops.drift_compensation: Applying drift translations
    2020-01-29 06:13:30,968:INFO:43129:cytokit.ops.drift_compensation: Applying drift translations
    distributed.worker - WARNING - Memory use is high but worker has no data to store to disk. Perhaps some other process is leaking memory? Process memory: 33.74 GB -- Worker memory limit: 48.00 GB
    distributed.worker - WARNING - Memory use is high but worker has no data to store to disk. Perhaps some other process is leaking memory? Process memory: 33.98 GB -- Worker memory limit: 48.00 GB

    I tried processing a subset of the data with 10 cycles, and this began to work as expected.

    I then tried to process the full 20-cycle dataset using a node with more GPUs in case this would help (setting gpus: [0, 1, 2, 3, 4, 5, 6, 7]). However for some reason, although 8 workers were set up, Cytokit still only appeared to use two of them (note the "Loaded tile 1" and "Loaded tile 33", out of 63 tiles total):

    2020-01-29 06:11:41,623:INFO:42970:root: Execution arguments and environment saved to "output/processor/execution/202001291111.json"
    2020-01-29 06:11:50,772:INFO:42970:cytokit.exec.pipeline: Starting Pre-processing pipeline for 8 tasks (8 workers)
    Using TensorFlow backend.
    Using TensorFlow backend.
    Using TensorFlow backend.
    Using TensorFlow backend.
    Using TensorFlow backend.
    Using TensorFlow backend.
    Using TensorFlow backend.
    Using TensorFlow backend.
    2020-01-29 06:12:55,608:INFO:43129:cytokit.exec.pipeline: Loaded tile 33 for region 1 [shape = (20, 11, 4, 1440, 1920)]
    2020-01-29 06:12:55,609:INFO:43129:cytokit.ops.drift_compensation: Calculating drift translations
    2020-01-29 06:12:56,185:INFO:43125:cytokit.exec.pipeline: Loaded tile 1 for region 1 [shape = (20, 11, 4, 1440, 1920)]
    2020-01-29 06:12:56,186:INFO:43125:cytokit.ops.drift_compensation: Calculating drift translations
    2020-01-29 06:13:30,956:INFO:43125:cytokit.ops.drift_compensation: Applying drift translations
    2020-01-29 06:13:30,968:INFO:43129:cytokit.ops.drift_compensation: Applying drift translations
    distributed.worker - WARNING - Memory use is high but worker has no data to store to disk. Perhaps some other process is leaking memory? Process memory: 33.74 GB -- Worker memory limit: 48.00 GB

    How can I get it to use all of the GPUs? Do you think it would help with the memory issue even if it does use them?

    Thanks, Maria

    opened by mkeays 4
  • running marker_profiling_example

    Greetings and thank you for sharing this fantastic tool. I'm running the example script and got stuck at the last processing step; any comment is appreciated! [screenshot attached to the issue]

    opened by AlexBouz 4
  • Implementation of CODEXProcessing on GPU

    Hi @nsamusik (cc: @ashstatic),

    I was working on getting the CODEXProcessor part of the pipeline onto GPUs (or at least most of it) and thought I'd share some results of that:

    • I put together implementations of each CODEXProcessor step as individual operations mostly representing TensorFlow graphs and found, in particular, that moving drift compensation onto a GPU makes a big difference
    • For the tonsil dataset, here are the differences in times it takes to run the whole thing on a 2xGPU Windows 10 machine (drift compensation, cropping, best focus, deconvolution, etc):
      • CODEXProcessor w/ Microvolution - ~50 minutes
      • This project w/ TensorFlow - ~15 minutes
        • Should be about ~10 minutes on Linux though because there was one step, applying the translations from drift compensation, that I had to do on CPU because CUDA is missing an operation on Windows that TensorFlow needs for it
    • For best focal plane selection (best_focus.py), I used the deep learning model in this Google Research blog post about Using Deep Learning to Facilitate Scientific Image Analysis. Here's an example I made when testing it out, showing how it scores images as they are blurred more and more (lower scores = better focus/quality):

    [example figure: image-quality scores under increasing blur]

    • I also tried orchestrating the tasks using Dask, which made it easy to use multiple GPUs and theoretically easy to distribute the whole process amongst a cluster. There certainly might be better tools out there for this, though it was super simple to integrate and is surprisingly good at automatically profiling the whole application as part of its built-in Dashboard. Here's what it shows for the CODEX steps:

    [Dask dashboard profiling screenshot]

    Apparently it can also be used to report progress, which could be nice in a distributed cluster.

    Please let me know if you have any thoughts or suggestions, otherwise I'll try to wrap this up and document a script to run the whole CODEXProcessor component like before with the deconvolution script.

    Thanks!

    opened by eric-czech 4
  • file not found error

    I am trying to run "marker_profiling_example" notebook and while trying to visualize the 6-channel image, I get the following error: No such file or directory: '/tmp/cytokit-example/cellular-marker/20181116-d40-r1-20x-5by5/output/v00/montage/best_z_segm/R001.tif'

    I think the montage folder is missing within v00. Can you please help me with this?

    opened by akashparvatikar 3
  • Tensorflow not downloading the NN

    Hi Eric,

    I am trying to run cytokit on a new PC and got the following error:

    [error screenshot attached to the issue]

    I have noticed that the network is not being downloaded when doing a clean start. I am not sure if that could be the problem.

    The new PC has the following configuration: Memory: 128Gb, CPU: Intel Xeon W-2155, GPU: Quadro RTX 5000 12Gb

    The image set that I am trying to analyze is 9x7, 20 planes, 9 cycles.

    I have tried the normal configuration, and adjusted it as suggested in the previous error report with fewer planes and cycles, but I still got the same error.

    opened by jesusdpa1 3
  • Example file didn't work well

    When I tried to run the example, it reported the following errors.

    2022-09-07 21:04:53,032:INFO:91:root: Execution arguments and environment saved to "/tmp/cytokit-example/cellular-marker/20181116-d40-r1-20x-5by5/output/v00/processor/execution/202209072104.json"
    2022-09-07 21:05:00,776:INFO:91:cytokit.exec.pipeline: Starting Pre-processing pipeline for 1 tasks (1 workers)
    2022-09-07 21:05:00,794:INFO:107:cytokit.data: Downloading url "https://storage.googleapis.com/microscope-image-quality/static/model/model.ckpt-1000042.index" to local path "/lab/data/.cytokit/cache/best_focus/model/model.ckpt-1000042.index"
    2022-09-07 21:05:02,492:INFO:107:cytokit.exec.pipeline: Loaded tile 1 for region 1 [shape = (1, 7, 4, 1440, 1920)]
    2022-09-07 21:05:05,007:INFO:107:cytokit.data: Downloading url "https://storage.googleapis.com/microscope-image-quality/static/model/model.ckpt-1000042.meta" to local path "/lab/data/.cytokit/cache/best_focus/model/model.ckpt-1000042.meta"
    2022-09-07 21:05:06,764:INFO:107:cytokit.data: Downloading url "https://storage.googleapis.com/microscope-image-quality/static/model/model.ckpt-1000042.data-00000-of-00001" to local path "/lab/data/.cytokit/cache/best_focus/model/model.ckpt-1000042.data-00000-of-00001"
    Using TensorFlow backend.
    distributed.worker - WARNING - Compute Failed
    Function:  run_preprocess_task
    args:      ({'tile_prefetch_capacity': 1, 'op_flags': <cytokit.exec.pipeline.OpFlags object at 0x7f334cab47b8>, 'output_dir': '/tmp/cytokit-example/cellular-marker/20181116-d40-r1-20x-5by5/output/v00', 'gpu': 0, 'tile_indexes': array([0]), 'region_indexes': array([0]), 'data_dir': '/tmp/cytokit-example/cellular-marker/20181116-d40-r1-20x-5by5/raw'})
    kwargs:    {}
    Exception: OSError('Unable to open file (file signature not found)',)

    Traceback (most recent call last):
      File "/usr/local/bin/cytokit", line 32, in <module>
        main()
      File "/usr/local/bin/cytokit", line 28, in main
        fire.Fire(Cytokit)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 127, in Fire
        component_trace = _Fire(component, args, context, name)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 366, in _Fire
        component, remaining_args)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 542, in _CallCallable
        result = fn(*varargs, **kwargs)
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/__init__.py", line 167, in run_all
        fn(**{**config[op], **params})
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/processor.py", line 131, in run
        pipeline.run(pl_config, logging_init_fn=self._logging_init_fn)
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 458, in run
        run_tasks(pl_conf, 'Pre-processing', run_preprocess_task, logging_init_fn)
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 421, in run_tasks
        res = [r.result() for r in res]
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 421, in <listcomp>
        res = [r.result() for r in res]
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/distributed/client.py", line 227, in result
        six.reraise(*result)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/six.py", line 702, in reraise
        raise value.with_traceback(tb)
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 441, in run_preprocess_task
        return run_task(task, ops, preprocess_tile)
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/pipeline.py", line 355, in run_task
        with ops:
      File "/lab/repos/cytokit/python/pipeline/cytokit/ops/op.py", line 200, in __enter__
        v.__enter__()
      File "/lab/repos/cytokit/python/pipeline/cytokit/ops/op.py", line 152, in __enter__
        self.initialize()
      File "/lab/repos/cytokit/python/pipeline/cytokit/ops/cytometry.py", line 136, in initialize
        self.cytometer.initialize()
      File "/lab/repos/cytokit/python/pipeline/cytokit/cytometry/cytometer.py", line 610, in initialize
        self.model.load_weights(self.weights_path or self._get_weights_path())
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/keras/engine/network.py", line 1157, in load_weights
        with h5py.File(filepath, mode='r') as f:
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/h5py/_hl/files.py", line 408, in __init__
        swmr=swmr)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/h5py/_hl/files.py", line 173, in make_fid
        fid = h5f.open(name, flags, fapl=fapl)
      File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
      File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
      File "h5py/h5f.pyx", line 88, in h5py.h5f.open
    OSError: Unable to open file (file signature not found)
    Traceback (most recent call last):
      File "/usr/local/bin/cytokit", line 32, in <module>
        main()
      File "/usr/local/bin/cytokit", line 28, in main
        fire.Fire(Cytokit)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 127, in Fire
        component_trace = _Fire(component, args, context, name)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 366, in _Fire
        component, remaining_args)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 542, in _CallCallable
        result = fn(*varargs, **kwargs)
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/__init__.py", line 167, in run_all
        fn(**{**config[op], **params})
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/operator.py", line 135, in extract
        z_slice_fn = _get_z_slice_fn(z, self.data_dir)
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/operator.py", line 78, in _get_z_slice_fn
        map = function_data.get_best_focus_coord_map(data_dir)
      File "/lab/repos/cytokit/python/pipeline/cytokit/function/data.py", line 45, in get_best_focus_coord_map
        return get_best_focus_data(output_dir).set_index(['region_index', 'tile_x', 'tile_y'])['best_z'].to_dict()
      File "/lab/repos/cytokit/python/pipeline/cytokit/function/data.py", line 31, in get_best_focus_data
        processor_data, path = get_processor_data(output_dir, return_path=True)
      File "/lab/repos/cytokit/python/pipeline/cytokit/function/data.py", line 19, in get_processor_data
        proc_data = exec.read_processor_data(path)
      File "/lab/repos/cytokit/python/pipeline/cytokit/exec/__init__.py", line 34, in read_processor_data
        with open(path, 'r') as fd:
    FileNotFoundError: [Errno 2] No such file or directory: '/tmp/cytokit-example/cellular-marker/20181116-d40-r1-20x-5by5/output/v00/processor/data.json'
    2022-09-07 21:07:19,050:INFO:189:root: Running cytometry statistics aggregation
    2022-09-07 21:07:19,051:WARNING:189:cytokit.cytometry.data: Expected cytometry data file at "/tmp/cytokit-example/cellular-marker/20181116-d40-r1-20x-5by5/output/v00/cytometry/statistics/R001_X001_Y001.csv" does not exist. It will be ignored but this may be worth investigating
    Traceback (most recent call last):
      File "/usr/local/bin/cytokit", line 32, in <module>
        main()
      File "/usr/local/bin/cytokit", line 28, in main
        fire.Fire(Cytokit)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 127, in Fire
        component_trace = _Fire(component, args, context, name)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 366, in _Fire
        component, remaining_args)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/fire/core.py", line 542, in _CallCallable
        result = fn(*varargs, **kwargs)
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/__init__.py", line 167, in run_all
        fn(**{**config[op], **params})
      File "/lab/repos/cytokit/python/pipeline/cytokit/cli/analysis.py", line 28, in aggregate_cytometry_statistics
        self.data_dir, self.config, mode=mode, export_csv=export_csv, export_fcs=export_fcs, variant=variant)
      File "/lab/repos/cytokit/python/pipeline/cytokit/function/core.py", line 19, in aggregate_cytometry_statistics
        res = function_data.get_cytometry_data(output_dir, config, mode=mode)
      File "/lab/repos/cytokit/python/pipeline/cytokit/function/data.py", line 59, in get_cytometry_data
        cyto_data = cytometry_data.aggregate(config, output_dir)
      File "/lab/repos/cytokit/python/pipeline/cytokit/cytometry/data.py", line 32, in aggregate
        df = pd.concat(df)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/pandas/core/reshape/concat.py", line 212, in concat
        copy=copy)
      File "/opt/conda/envs/cytokit/lib/python3.5/site-packages/pandas/core/reshape/concat.py", line 245, in __init__
        raise ValueError('No objects to concatenate')
    ValueError: No objects to concatenate

    Do you have any ideas to figure out the issue? Thanks a lot.

    opened by d1015 1
  • Bump distributed from 1.28.1 to 2021.10.0 in /python/pipeline

    Bumps distributed from 1.28.1 to 2021.10.0.

    dependencies 
    opened by dependabot[bot] 0
  • how to generate a template configuration file?

    This tool looks really cool and I'd like to implement it at our organization! However, I'm new to image analysis and cannot for the life of me figure out how to obtain/create a template configuration file for the image I wish to process.

    I've managed to extract a utf-16 XML file from the qptiff tags. The XML contained a lot of information about the cameras and channels, but it doesn't have all the information contained in the example yaml config files.

    What is the usual way to go about getting the template configuration file? Do you just write it from scratch and hope for the best?

    opened by amcrab 3
  • Bump numpy from 1.16.0 to 1.22.0 in /python/pipeline

    Bumps numpy from 1.16.0 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across application such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)

    dependencies 
    opened by dependabot[bot] 0
  • Bump dask from 1.0.0 to 2021.10.0 in /python/pipeline

    Bumps dask from 1.0.0 to 2021.10.0.

    dependencies 
    opened by dependabot[bot] 0
  • Bump opencv-python from 3.4.3.18 to 4.2.0.32 in /python/pipeline

    Bumps opencv-python from 3.4.3.18 to 4.2.0.32.

    Release notes

    Sourced from opencv-python's releases.

    4.2.0.32

    OpenCV version 4.2.0.

    Changes:

    • macOS environment updated from xcode8.3 to xcode 9.4
    • macOS uses now Qt 5 instead of Qt 4
    • Nasm version updated to Docker containers
    • multibuild updated

    Fixes:

    • don't use deprecated brew tap-pin, instead refer to the full package name when installing #267
    • replace get_config_var() with get_config_vars() in setup.py #274
    • add workaround for DLL errors in Windows Server #264

    3.4.9.31

    OpenCV version 3.4.9.

    Changes:

    • macOS environment updated from xcode8.3 to xcode 9.4
    • macOS uses now Qt 5 instead of Qt 4
    • Nasm version updated to Docker containers
    • multibuild updated

    Fixes:

    • don't use deprecated brew tap-pin, instead refer to the full package name when installing #267
    • replace get_config_var() with get_config_vars() in setup.py #274
    • add workaround for DLL errors in Windows Server #264

    4.1.2.30

    OpenCV version 4.1.2.

    Changes:

    ... (truncated)

    dependencies 
    opened by dependabot[bot] 0
Releases: v0.1.1

Owner: Hammer Lab, a lab working to understand and improve the immune response to cancer.