This repository gives an example of how to preprocess the data of the HECKTOR challenge.

Overview

HECKTOR 2021 challenge

This repository shows how to preprocess the data of the HECKTOR challenge. Any other preprocessing is welcome and any framework can be used for the challenge; the only requirement is to submit the results in the same coordinate system as the original CT images (same spacing and same origin). This repository also contains the code used to prepare the data of the challenge (DICOM to NIFTI conversion, SUV computation and bounding-box generation), which is not needed by the participants. Moreover, it contains an example implementation that resamples the data within the bounding boxes and resamples the results back to the original resolution.
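
As a minimal sketch of that last step, and assuming SimpleITK is installed (the file paths and patient name below are hypothetical), a prediction produced in the cropped/resampled frame can be resampled back onto the original CT grid like this:

    import SimpleITK as sitk

    # Hypothetical paths: the original CT and a prediction in the cropped frame.
    ct = sitk.ReadImage("data/hecktor_nii/CHGJ029_ct.nii.gz")
    pred = sitk.ReadImage("predictions/CHGJ029_pred.nii.gz")

    # Use the CT as the reference grid so the output has the same spacing,
    # origin and size as the original image; nearest-neighbour interpolation
    # preserves the integer labels of the mask.
    pred_orig = sitk.Resample(pred, ct, sitk.Transform(),
                              sitk.sitkNearestNeighbor, 0, pred.GetPixelID())
    sitk.WriteImage(pred_orig, "predictions/CHGJ029_pred_original_frame.nii.gz")

Voxels of the reference grid that fall outside the cropped prediction are filled with the default value 0 (background).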

Download Data

To access the data, visit the challenge website: https://www.aicrowd.com/challenges/miccai-2021-hecktor and follow the instructions. The code included here is intended to work with the specific repository structure described in the Project Organization section. After running git clone https://github.com/voreille/hecktor.git, create a data/ folder in the repository and place the unzipped data in it.

Install Dependencies

To install the necessary dependencies, run pip install -r requirements.txt. It is advised to do this within a python3 virtual environment.

Resample Data

Run python src/resampling/resample.py to crop and resample the data following the repository structure, or use command-line arguments (type python src/resampling/resample.py --help for more information).
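
For illustration, a simplified version of the crop-and-resample step could look as follows with SimpleITK. This is a sketch, not the repository's code (that lives in src/resampling/resample.py); the function name, the bounding-box ordering (x1, y1, z1, x2, y2, z2) in mm, and the 1 mm isotropic spacing are assumptions for the example:

    import SimpleITK as sitk

    def crop_and_resample(image, bb, spacing=(1.0, 1.0, 1.0),
                          interpolator=sitk.sitkBSpline):
        # bb = (x1, y1, z1, x2, y2, z2) in physical (mm) coordinates.
        # Assumes an identity direction matrix, as is typical for these NIFTIs.
        origin = bb[:3]
        # Output size in voxels = physical extent / voxel spacing.
        size = [int(round((bb[i + 3] - bb[i]) / spacing[i])) for i in range(3)]
        resampler = sitk.ResampleImageFilter()
        resampler.SetOutputOrigin(origin)
        resampler.SetOutputSpacing(spacing)
        resampler.SetSize(size)
        resampler.SetInterpolator(interpolator)
        return resampler.Execute(image)

For the ground-truth masks, sitk.sitkNearestNeighbor should be used instead of a B-spline so that the labels stay binary.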

Evaluation

An example of how the segmentation task (task 1) will be evaluated is illustrated in the notebook notebooks/evaluate_segmentation.ipynb. Note that the Hausdorff distance at the 95th percentile implemented in https://github.com/deepmind/surface-distance will be used in the challenge (not the one found in src/evaluation/scores.py).
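
As a sketch of how these metrics can be computed with the surface-distance package, given two binary numpy masks on the same grid (the file names and spacing below are placeholders):

    import numpy as np
    import surface_distance

    mask_gt = np.load("gt.npy").astype(bool)      # placeholder file
    mask_pred = np.load("pred.npy").astype(bool)  # placeholder file
    spacing_mm = (1.0, 1.0, 1.0)                  # voxel spacing of the common grid

    dice = surface_distance.compute_dice_coefficient(mask_gt, mask_pred)
    distances = surface_distance.compute_surface_distances(mask_gt, mask_pred, spacing_mm)
    hd95 = surface_distance.compute_robust_hausdorff(distances, 95)
    print(f"Dice: {dice:.4f}, HD95: {hd95:.2f} mm")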

The concordance index used to evaluate tasks 2 and 3 is implemented in the function concordance_index(event_times, predicted_scores, event_observed=None) in the file src/aicrowd_evaluator/survival_metrics.py. It was adapted from https://github.com/CamDavidsonPilon/lifelines/blob/master/lifelines/utils/concordance.py to account for missing predictions (missing predictions are handled as non-concordant).
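
A minimal usage example with the original lifelines implementation (the numbers are made up for illustration):

    from lifelines.utils import concordance_index

    event_times = [5.0, 10.0, 12.0, 30.0]      # observed times
    predicted_scores = [4.0, 12.0, 9.0, 25.0]  # predicted survival times (higher = longer survival)
    event_observed = [1, 1, 0, 1]              # 1 = event observed, 0 = censored

    print(concordance_index(event_times, predicted_scores, event_observed))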

Submission

Dummy examples of correct submissions for tasks 1 and 2 can be found in notebooks/example_seg_submission.ipynb and notebooks/example_surv_submission.ipynb, respectively.

Project Organization

├── README.md                     
├── data                              <- NOT in the version control
│   ├── resampled                     <- The data in NIFTI resampled and cropped according to the bounding boxes (bbox.csv).
│   ├── hecktor_nii                   <- The data converted to the NIFTI format in the original geometric frame,
|   |                                    i.e. the one downloaded from AIcrowd
│   └── bbox.csv                      <- The bounding box for each patient, computed with the bbox_auto function from src.data.bounding_box
├── requirements.txt                  <- The requirements file for reproducing the analysis environment, e.g.
│                                        generated with `pip freeze > requirements.txt`
├── Makefile                          <- Used to set up the environment and convert DICOM to NIFTI
├── notebooks
|   ├── example_seg_submission.ipynb  <- Example of a correct submission for the segmentation task (task 1).
|   ├── example_surv_submission.ipynb <- Example of a correct submission for the survival task (task 2).
│   └── evaluate_segmentation.ipynb   <- Example of how the evaluation will be computed.
└── src                               <- Source code for use in this project
    ├── aicrowd_evaluator             <- Source code for the evaluation on the AIcrowd platform
    │   ├── __init__.py
│   ├── surface-distance/         <- Code to compute the robust Hausdorff distance, available at https://github.com/deepmind/surface-distance
│   ├── evaluator.py              <- Defines the evaluator classes for tasks 1 and 2
│   ├── segmentation_metrics.py   <- Defines the metrics used in the segmentation task.
|   ├── requirements.txt          <- The requirements file specific to this submodule
│   └── survival_metrics.py       <- Defines the metrics used for the survival task.
    ├── data                          <- Scripts to generate data
    │   ├── __init__.py
    │   ├── bounding_box.py        
│   ├── utils.py                  <- Defines functions used in make_dataset.py
│   └── make_dataset.py           <- Conversion of DICOM to NIFTI and computation of the bounding boxes
    ├── evaluation
    |   ├── __init__.py
│   └── scores.py                 <- (DEPRECATED) Used to illustrate how the segmentation is evaluated. Refer to the `src/aicrowd_evaluator`
    |                                    submodule for the actual evaluation of the challenge.
    └── resampling                    <- Code to resample the data 
        ├── __init__.py
        └── resample.py

Project based on the cookiecutter data science project template. #cookiecutterdatascience

Comments
  • The validation dice is similar when 10% or 100% train datasets were used with the same validation sets

    Hello! I found an uncommon result. The validation results were similar when I used different numbers of patient cases, i.e. 10% vs. 100% of the training dataset. I have tested different codes, including this repository (3D dense_vnet results: 23 training cases: 0.5914; 180 training cases: 0.6233). In the end I reached the same conclusion, especially in 2D, where the results are almost the same. This did not happen when the dataset was randomly split into training and validation sets by shuffling all slices of all patient cases, instead of shuffling all patient cases. Why? Could the cause be the data distribution? I would appreciate it if you could help me. Thanks!!!

    opened by szhang963 5
  • Resampling issue

    I'm facing a problem when running python src/resampling/cli_resampling.py:

    C:\Users\x\y\hecktor_project\hecktor-master1>python src/resampling/cli_resampling.py
    2020-07-02 22:30:28,050 - __main__ - INFO - Resampling
    resampling is (1.0, 1.0, 1.0)
    multiprocessing.pool.RemoteTraceback:
    """
    Traceback (most recent call last):
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexes\base.py", line 2646, in get_loc
        return self._engine.get_loc(key)
      File "pandas\_libs\index.pyx", line 111, in pandas._libs.index.IndexEngine.get_loc
      File "pandas\_libs\index.pyx", line 138, in pandas._libs.index.IndexEngine.get_loc
      File "pandas\_libs\hashtable_class_helper.pxi", line 1619, in pandas._libs.hashtable.PyObjectHashTable.get_item
      File "pandas\_libs\hashtable_class_helper.pxi", line 1627, in pandas._libs.hashtable.PyObjectHashTable.get_item
    KeyError: 'hecktor'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 125, in worker
        result = (True, func(*args, **kwds))
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 48, in mapstar
        return list(map(*args))
      File "c:\users\x\y\hecktor_project\hecktor-master1\src\resampling\resampling.py", line 35, in __call__
        bb = (self.bb_df.loc[patient_name, 'x1'], self.bb_df.loc[patient_name,
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexing.py", line 1762, in __getitem__
        return self._getitem_tuple(key)
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexing.py", line 1272, in _getitem_tuple
        return self._getitem_lowerdim(tup)
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexing.py", line 1389, in _getitem_lowerdim
        section = self._getitem_axis(key, axis=i)
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexing.py", line 1965, in _getitem_axis
        return self._get_label(key, axis=axis)
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexing.py", line 625, in _get_label
        return self.obj._xs(label, axis=axis)
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\generic.py", line 3537, in xs
        loc = self.index.get_loc(key)
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexes\base.py", line 2648, in get_loc
        return self._engine.get_loc(self._maybe_cast_indexer(key))
      File "pandas\_libs\index.pyx", line 111, in pandas._libs.index.IndexEngine.get_loc
      File "pandas\_libs\index.pyx", line 138, in pandas._libs.index.IndexEngine.get_loc
      File "pandas\_libs\hashtable_class_helper.pxi", line 1619, in pandas._libs.hashtable.PyObjectHashTable.get_item
      File "pandas\_libs\hashtable_class_helper.pxi", line 1627, in pandas._libs.hashtable.PyObjectHashTable.get_item
    KeyError: 'hecktor'
    """
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "src/resampling/cli_resampling.py", line 76, in <module>
        main()
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\click\core.py", line 829, in __call__
        return self.main(*args, **kwargs)
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\click\core.py", line 782, in main
        rv = self.invoke(ctx)
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\click\core.py", line 1066, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\site-packages\click\core.py", line 610, in invoke
        return callback(*args, **kwargs)
      File "src/resampling/cli_resampling.py", line 68, in main
        p.map(resampler, files_list)
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 364, in map
        return self._map_async(func, iterable, mapstar, chunksize).get()
      File "C:\Users\mahaw\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 771, in get
        raise self._value
    KeyError: 'hecktor'
    
    
    opened by Mahaals 5
  • No module named 'src.resampling.utils'

    When I try to run evaluate_predictions.ipynb in the notebook dir, I get the following message:

    No module named 'src.resampling.utils'
    

    I cannot find the module in the src directory. Is there any solution?

    opened by JYeonLee 4
  • update crop_dataset.ipynb

    Hi @voreille ,

    Thanks for organizing the great challenge. It seems that the cropping and resampling notebook has not been updated. https://github.com/voreille/hecktor/blob/master/notebooks/crop_dataset.ipynb

    Looking forward to your update:)

    opened by JunMa11 2
  • wrong when using "evaluate_predictions.ipynb" to evaluate 3d dice

    I found the evaluated result to be wrong when I used the code "evaluate_predictions.ipynb". Afterwards, I made a test: I used the resampled GTVt data generated by python src/resampling/cli_resampling.py as the "prediction_folder" to evaluate the 3D dice score, and I got a dice of 0.9528 instead of 1. Why?

    opened by szhang963 2
  • Bump tensorflow-gpu from 1.15 to 2.4.0

    Bumps tensorflow-gpu from 1.15 to 2.4.0.

    Release notes

    Sourced from tensorflow-gpu's releases.

    TensorFlow 2.4.0

    Release 2.4.0

    Major Features and Improvements

    • tf.distribute introduces experimental support for asynchronous training of models via the tf.distribute.experimental.ParameterServerStrategy API. Please see the tutorial to learn more.

    • MultiWorkerMirroredStrategy is now a stable API and is no longer considered experimental. Some of the major improvements involve handling peer failure and many bug fixes. Please check out the detailed tutorial on Multi-worker training with Keras.

    • Introduces experimental support for a new module named tf.experimental.numpy which is a NumPy-compatible API for writing TF programs. See the detailed guide to learn more. Additional details below.

    • Adds Support for TensorFloat-32 on Ampere based GPUs. TensorFloat-32, or TF32 for short, is a math mode for NVIDIA Ampere based GPUs and is enabled by default.

    • A major refactoring of the internals of the Keras Functional API has been completed, that should improve the reliability, stability, and performance of constructing Functional models.

    • Keras mixed precision API tf.keras.mixed_precision is no longer experimental and allows the use of 16-bit floating point formats during training, improving performance by up to 3x on GPUs and 60% on TPUs. Please see below for additional details.

    • TensorFlow Profiler now supports profiling MultiWorkerMirroredStrategy and tracing multiple workers using the sampling mode API.

    • TFLite Profiler for Android is available. See the detailed guide to learn more.

    • TensorFlow pip packages are now built with CUDA11 and cuDNN 8.0.2.

    Breaking Changes

    • TF Core:

      • Certain float32 ops run in lower precision on Ampere based GPUs, including matmuls and convolutions, due to the use of TensorFloat-32. Specifically, inputs to such ops are rounded from 23 bits of precision to 10 bits of precision. This is unlikely to cause issues in practice for deep learning models. In some cases, TensorFloat-32 is also used for complex64 ops. TensorFloat-32 can be disabled by running tf.config.experimental.enable_tensor_float_32_execution(False).
      • The byte layout for string tensors across the C-API has been updated to match TF Core/C++; i.e., a contiguous array of tensorflow::tstring/TF_TStrings.
      • C-API functions TF_StringDecode, TF_StringEncode, and TF_StringEncodedSize are no longer relevant and have been removed; see core/platform/ctstring.h for string access/modification in C.
      • tensorflow.python, tensorflow.core and tensorflow.compiler modules are now hidden. These modules are not part of TensorFlow public API.
      • tf.raw_ops.Max and tf.raw_ops.Min no longer accept inputs of type tf.complex64 or tf.complex128, because the behavior of these ops is not well defined for complex types.
      • XLA:CPU and XLA:GPU devices are no longer registered by default. Use TF_XLA_FLAGS=--tf_xla_enable_xla_devices if you really need them, but this flag will eventually be removed in subsequent releases.
    • tf.keras:

      • The steps_per_execution argument in model.compile() is no longer experimental; if you were passing experimental_steps_per_execution, rename it to steps_per_execution in your code. This argument controls the number of batches to run during each tf.function call when calling model.fit(). Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead.
      • A major refactoring of the internals of the Keras Functional API may affect code that is relying on certain internal details:
        • Code that uses isinstance(x, tf.Tensor) instead of tf.is_tensor when checking Keras symbolic inputs/outputs should switch to using tf.is_tensor.
        • Code that is overly dependent on the exact names attached to symbolic tensors (e.g. assumes there will be ":0" at the end of the inputs, treats names as unique identifiers instead of using tensor.ref(), etc.) may break.
        • Code that uses full path for get_concrete_function to trace Keras symbolic inputs directly should switch to building matching tf.TensorSpecs directly and tracing the TensorSpec objects.
        • Code that relies on the exact number and names of the op layers that TensorFlow operations were converted into may have changed.
        • Code that uses tf.map_fn/tf.cond/tf.while_loop/control flow as op layers and happens to work before TF 2.4. These will explicitly be unsupported now. Converting these ops to Functional API op layers was unreliable before TF 2.4, and prone to erroring incomprehensibly or being silently buggy.
        • Code that directly asserts on a Keras symbolic value in cases where ops like tf.rank used to return a static or symbolic value depending on if the input had a fully static shape or not. Now these ops always return symbolic values.
        • Code already susceptible to leaking tensors outside of graphs becomes slightly more likely to do so now.
        • Code that tries directly getting gradients with respect to symbolic Keras inputs/outputs. Use GradientTape on the actual Tensors passed to the already-constructed model instead.
        • Code that requires very tricky shape manipulation via converted op layers in order to work, where the Keras symbolic shape inference proves insufficient.
        • Code that tries manually walking a tf.keras.Model layer by layer and assumes layers only ever have one positional argument. This assumption doesn't hold true before TF 2.4 either, but is more likely to cause issues now.

    ... (truncated)

    Changelog

    Sourced from tensorflow-gpu's changelog.

    Release 2.4.0

    Major Features and Improvements

    Breaking Changes

    • TF Core:
      • Certain float32 ops run in lower precision on Ampere based GPUs, including

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 1
  • Bump tensorflow-gpu from 1.15 to 2.3.1

    Bumps tensorflow-gpu from 1.15 to 2.3.1.

    Release notes

    Sourced from tensorflow-gpu's releases.

    TensorFlow 2.3.1

    Release 2.3.1

    Bug Fixes and Other Changes

    TensorFlow 2.3.0

    Release 2.3.0

    Major Features and Improvements

    • tf.data adds two new mechanisms to solve input pipeline bottlenecks and save resources:

    In addition checkout the detailed guide for analyzing input pipeline performance with TF Profiler.

    • tf.distribute.TPUStrategy is now a stable API and no longer considered experimental for TensorFlow. (earlier tf.distribute.experimental.TPUStrategy).

    • TF Profiler introduces two new tools: a memory profiler to visualize your model’s memory usage over time and a python tracer which allows you to trace python function calls in your model. Usability improvements include better diagnostic messages and profile options to customize the host and device trace verbosity level.

    • Introduces experimental support for Keras Preprocessing Layers API (tf.keras.layers.experimental.preprocessing.*) to handle data preprocessing operations, with support for composite tensor inputs. Please see below for additional details on these layers.

    • TFLite now properly supports dynamic shapes during conversion and inference. We’ve also added opt-in support on Android and iOS for XNNPACK, a highly optimized set of CPU kernels, as well as opt-in support for executing quantized models on the GPU.

    • Libtensorflow packages are available in GCS starting this release. We have also started to release a nightly version of these packages.

    • The experimental Python API tf.debugging.experimental.enable_dump_debug_info() now allows you to instrument a TensorFlow program and dump debugging information to a directory on the file system. The directory can be read and visualized by a new interactive dashboard in TensorBoard 2.3 called Debugger V2, which reveals the details of the TensorFlow program including graph structures, history of op executions at the Python (eager) and intra-graph levels, the runtime dtype, shape, and numerical composition of tensors, as well as their code locations.

    Breaking Changes

    • Increases the minimum bazel version required to build TF to 3.1.0.
    • tf.data
      • Makes the following (breaking) changes to the tf.data.
      • C++ API: - IteratorBase::RestoreInternal, IteratorBase::SaveInternal, and DatasetBase::CheckExternalState become pure-virtual and subclasses are now expected to provide an implementation.
      • The deprecated DatasetBase::IsStateful method is removed in favor of DatasetBase::CheckExternalState.
      • Deprecated overrides of DatasetBase::MakeIterator and MakeIteratorFromInputElement are removed.

    ... (truncated)

    Changelog

    Sourced from tensorflow-gpu's changelog.

    Release 2.3.1

    Bug Fixes and Other Changes

    Release 2.2.1

    ... (truncated)

    Commits
    • fcc4b96 Merge pull request #43446 from tensorflow-jenkins/version-numbers-2.3.1-16251
    • 4cf2230 Update version numbers to 2.3.1
    • eee8224 Merge pull request #43441 from tensorflow-jenkins/relnotes-2.3.1-24672
    • 0d41b1d Update RELEASE.md
    • d99bd63 Insert release notes place-fill
    • d71d3ce Merge pull request #43414 from tensorflow/mihaimaruseac-patch-1-1
    • 9c91596 Fix missing import
    • f9f12f6 Merge pull request #43391 from tensorflow/mihaimaruseac-patch-4
    • 3ed271b Solve leftover from merge conflict
    • 9cf3773 Merge pull request #43358 from tensorflow/mm-patch-r2.3
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 1
  • Bump tensorflow-gpu from 1.15 to 1.15.4

    Bumps tensorflow-gpu from 1.15 to 1.15.4.

    Release notes

    Sourced from tensorflow-gpu's releases.

    TensorFlow 1.15.4

    Release 1.15.4

    Bug Fixes and Other Changes

    TensorFlow 1.15.3

    Bug Fixes and Other Changes

    TensorFlow 1.15.2

    Release 1.15.2

    Note that this release no longer has a single pip package for GPU and CPU. Please see #36347 for history and details

    Bug Fixes and Other Changes

    Changelog

    Sourced from tensorflow-gpu's changelog.

    Release 1.15.4

    Bug Fixes and Other Changes

    Release 2.3.0

    Major Features and Improvements

    • tf.data adds two new mechanisms to solve input pipeline bottlenecks and save resources:

    ... (truncated)

    Commits
    • df8c55c Merge pull request #43442 from tensorflow-jenkins/version-numbers-1.15.4-31571
    • 0e8cbcb Update version numbers to 1.15.4
    • 5b65bf2 Merge pull request #43437 from tensorflow-jenkins/relnotes-1.15.4-10691
    • 814e8d8 Update RELEASE.md
    • 757085e Insert release notes place-fill
    • e99e53d Merge pull request #43410 from tensorflow/mm-fix-1.15
    • bad36df Add missing import
    • f3f1835 No disable_tfrt present on this branch
    • 7ef5c62 Merge pull request #43406 from tensorflow/mihaimaruseac-patch-1
    • abbf34a Remove import that is not needed
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 1
  • Bump tensorflow-gpu from 1.15 to 1.15.2

    Bumps tensorflow-gpu from 1.15 to 1.15.2.

    Commits


    dependencies 
    opened by dependabot[bot] 1
  • Bump numpy from 1.13.0 to 1.22.0 in /src/aicrowd_evaluator

    Bumps numpy from 1.13.0 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)

    Commits


    dependencies 
    opened by dependabot[bot] 0
  • Question about okapy.dicomconverter

    Hi. I am getting an "okapy.dicomconverter.converter could not be resolved" error when I try to use make_dataset2022.py (I have installed okapy using pip). Is the okapy library deprecated? Looking forward to your advice.

    opened by TravisL24 0
  • baseline CNN (niftynet)

    Hi, I am interested in the HECKTOR challenges although they are closed now. I am quite new to the idea of CNNs. The README says that a baseline CNN (NiftyNet) is available in this repository, but I cannot find it here. Has it been deleted, or could you share it with me? That would be very helpful. Thank you.

    opened by Wenhui-Zhang-5 1
  • Error occurs when running baseline model

    When I run "net_segment evaluation -c config3D.ini" after inference, "KeyError: ("label",)" is raised. I just followed the steps mentioned in the README. Is there any solution to this problem?

    opened by charlieisacat 1
Releases (hecktor2021)