nnDetection is a self-configuring framework for 3D (volumetric) medical object detection which can be applied to new data sets without manual intervention. It includes guides for 12 data sets that were used to develop and evaluate the performance of the proposed method.

Overview

What is nnDetection?

Simultaneous localization and categorization of objects in medical images, also referred to as medical object detection, is of high clinical relevance because diagnostic decisions depend on ratings of objects rather than, e.g., pixels. For this task, the cumbersome and iterative process of method configuration constitutes a major research bottleneck. Recently, nnU-Net has tackled this challenge for the task of image segmentation with great success. Following nnU-Net's agenda, in this work we systematize and automate the configuration process for medical object detection. The resulting self-configuring method, nnDetection, adapts itself without any manual intervention to arbitrary medical detection problems while achieving results on par with or superior to the state of the art. We demonstrate the effectiveness of nnDetection on two public benchmarks, ADAM and LUNA16, and propose 10 further public data sets for a comprehensive evaluation of medical object detection methods.

If you use nnDetection please cite our paper:

Baumgartner, M., Jaeger, P. F., Isensee, F., & Maier-Hein, K. H. (2021). nnDetection: A Self-configuring Method for Medical Object Detection. arXiv preprint arXiv:2106.00817

🎉 nnDetection was early accepted to the International Conference on Medical Image Computing & Computer Assisted Intervention 2021 (MICCAI21) 🎉

Installation

Docker

The easiest way to get started with nnDetection is to build a Docker container with the provided Dockerfile.

Please install docker and nvidia-docker2 before continuing.

All projects based on nnDetection assume that the base image was built with the following tagging scheme nndetection:[version]. To build a container (nnDetection version 0.1), run the following command from the base directory:

docker build -t nndetection:0.1 --build-arg env_det_num_threads=6 --build-arg env_det_verbose=1 .

(--build-arg env_det_num_threads=6 and --build-arg env_det_verbose=1 are optional and are used to overwrite the provided default parameters)

The docker container expects data and models in its own /opt/data and /opt/models directories respectively. The directories need to be mounted via docker -v. For simplicity and speed, the ENV variables det_data and det_models can be set in the host system to point to the desired directories. To run:

docker run --gpus all -v ${det_data}:/opt/data -v ${det_models}:/opt/models -it --shm-size=24gb nndetection:0.1 /bin/bash

Warning: When running a training inside the container it is necessary to increase the shared memory (via --shm-size).

Source

  1. Install CUDA (>10.1) and cuDNN (make sure to select compatible versions!)
  2. [Optional] Depending on your GPU you might need to set TORCH_CUDA_ARCH_LIST; check compute capabilities here.
  3. Install torch (1.7 or later; make sure to match the PyTorch and CUDA versions!) and a matching version of torchvision.
  4. Clone nnDetection, cd [path_to_repo] and run pip install -e .
  5. Set environment variables (more info can be found below; see the example after this list):
    • det_data: [required] Path to the source directory where all the data will be located
    • det_models: [required] Path to the directory where all models will be saved
    • OMP_NUM_THREADS=1: [required] Needs to be set! Otherwise bad things will happen... Refer to the batchgenerators documentation.
    • det_num_threads: [recommended] Number of processes to use for augmentation (at least 6, default 12)
    • det_verbose: [optional] Can be used to deactivate progress bars (activated by default)
    • MLFLOW_TRACKING_URI: [optional] Specify the logging directory of mlflow. Refer to the mlflow documentation for more information.
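
A minimal host setup could look like this (a sketch with example paths; adjust them to your system, e.g. in ~/.bashrc):

export det_data=/home/user/nndet_data
export det_models=/home/user/nndet_models
export OMP_NUM_THREADS=1
export det_num_threads=12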

Note: nnDetection was developed on Linux, so Windows is not supported.

Test Installation
Run the following command in the terminal (!not! in the pytorch root folder) to verify that the compilation of the C++/CUDA code was successful:
python -c "import torch; import nndet._C; import nndet"

To test the whole installation please run the Toy Data set example.

Maximising Training Speed
To get the best possible performance we recommend using CUDA 11.0+ with cuDNN 8.1.X+ and a (!)locally compiled version(!) of PyTorch 1.7+.

nnDetection

nnDetection Module Overview

nnDetection uses multiple Registries to keep track of different modules and easily switch between them via the config files.

Config Files nnDetection uses Hydra to dynamically configure and compose configurations. The configuration files are located in nndet.conf and can be overwritten to customize the behavior of the pipeline.

AUGMENTATION_REGISTRY The augmentation registry can be imported from nndet.io.augmentation and contains different augmentation configurations. Examples can be found in nndet.io.augmentation.bg_aug.

DATALOADER_REGISTRY The dataloader registry contains different dataloader classes to customize the IO of nnDetection. It can be imported from nndet.io.datamodule and examples can be found in nndet.io.datamodule.bg_loader.

PLANNER_REGISTRY New plans can be registered via the planner registry which contains classes to define and perform different architecture and preprocessing schemes. It can be imported from nndet.planning.experiment and examples can be found in nndet.planning.experiment.v001.

MODULE_REGISTRY The module registry contains the core modules of nnDetection, which inherit from the PyTorch Lightning module. These are the main modules used for training and inference and contain all the necessary steps to build the final models. The registry can be imported from nndet.ptmodule and examples can be found in nndet.ptmodule.retinaunet.
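
Since all registries follow the same pattern, plugging in a custom component boils down to registering a new class. A minimal sketch for a custom training module (this assumes a decorator-style register interface and the base class name; check nndet.ptmodule.retinaunet for the actual pattern):

from nndet.ptmodule import MODULE_REGISTRY
from nndet.ptmodule.retinaunet import RetinaUNetModule  # assumed base class name, see nndet.ptmodule.retinaunet

@MODULE_REGISTRY.register  # assumed decorator-style registration
class MyRetinaUNet(RetinaUNetModule):
    # custom training module; selectable via the config files by its class name
    pass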

nnDetection Functional Details

Experiments & Data

The data sets used for our experiments are not hosted or maintained by us; please give credit to the authors of the data sets. For some of the data sets we converted, corrected labels are available for download (links can be found in the guides). The Experiments section contains multiple guides which explain the preparation of the data sets via the provided scripts.

Toy Data set

Running nndet_example will automatically generate an example data set with 3D squares and squares with holes which can be used to test the installation or experiment with prototype code (it is still necessary to run the other nndet commands to process/train/predict the data set).

# create data to test installation/environment (10 train 10 test)
nndet_example

# create full data set for prototyping (1000 train 1000 test)
nndet_example --full [--num_processes]

The full problem is very easy and the final results should be near perfect. After running the generation script follow the Planning, Training and Inference instructions below to construct the whole nnDetection pipeline.
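
Putting the commands described below together, a complete run on the toy data set could look like this (a shell sketch based on the documented examples; only fold 0 of the five folds is shown):

nndet_example                           # generate the toy data set (Task 000)
nndet_prep 000                          # planning & preprocessing
nndet_unpack ${det_data}/Task000D3_Example/preprocessed/D3V001_3d/imagesTr 6
nndet_train 000 --sweep -o exp.fold=0   # repeat for folds 1-4
nndet_consolidate 000 RetinaUNetV001_D3V001_3d --sweep_boxes
nndet_predict 000 RetinaUNetV001_D3V001_3d --fold -1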

Experiments

Besides the self-configuring method, nnDetection acts as a standard interface for many data sets. We provide guides to prepare all data sets from our evaluation into the correct format, which makes it easy to reproduce our results. Furthermore, we provide pretrained models which can be used without investing large amounts of compute to rerun our experiments (see Section Pretrained Models).

Adding New Data sets

nnDetection relies on a standardized input format which is very similar to nnU-Net and allows easy integration of new data sets. More details about the format can be found below.

Folders

All data sets should reside inside Task[Number]_[Name] folders inside the specified detection data folder (the path to this folder can be set via the det_data environment variable). To avoid conflicts with our provided pretrained models we recommend using task numbers starting from 100. An overview is provided below ([Name] denotes folders, - denotes files, indentation refers to substructures).

${det_data}
    [Task000_Example]
        - dataset.yaml # dataset.json works too
        [raw_splitted_data]
            [imagesTr]
                - case0000_0000.nii.gz # case0000 modality 0
                - case0000_0001.nii.gz # case0000 modality 1
                - case0001_0000.nii.gz # case0001 modality 0
                - case0001_0001.nii.gz # case0001 modality 1
            [labelsTr]
                - case0000.nii.gz # instance segmentation case0000
                - case0000.json # properties of case0000
                - case0001.nii.gz # instance segmentation case0001
                - case0001.json # properties of case0001
            [imagesTs] # optional, same structure as imagesTr
             ...
            [labelsTs] # optional, same structure as labelsTr
             ...
    [Task001_Example1]
        ...

Data set Info

dataset.yaml or dataset.json provides general information about the data set. Note [Important]: classes and modalities start at index 0!

task: Task000D3_Example

name: "Example" # [Optional]
dim: 3 # number of spatial dimensions of the data

# Note: use the integer value of the target class (as defined in labels below)!
target_class: 1 # [Optional] define class of interest for patient level evaluations
test_labels: True # manually split test set

labels: # classes of data set; need to start at 0
    "0": "Square"
    "1": "SquareHole"

modalities: # modalities of data set; need to start at 0
    "0": "CT"

Image Format

nnDetection uses the same image format as nnU-Net. Each case consists of at least one 3D NIfTI file, one file per modality, saved in the images folders. If multiple modalities are available, each modality uses a separate file and the sequence number at the end of the name indicates the modality (these need to correspond to the numbers specified in the data set file and be consistent across the whole data set).

An example with two modalities could look like this:

- case001_0000.nii.gz # Case ID: case001; Modality: 0
- case001_0001.nii.gz # Case ID: case001; Modality: 1

- case002_0000.nii.gz # Case ID: case002; Modality: 0
- case002_0001.nii.gz # Case ID: case002; Modality: 1

If multiple modalities are available, please check beforehand if they need to be registered and perform registration before nnDetection preprocessing. nnDetection does (!)not(!) include automatic registration of multiple modalities.
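
If the modalities are already spatially aligned but sampled on different voxel grids, a minimal SimpleITK sketch to bring a secondary modality onto the grid of the first could look like this (plain resampling with an identity transform, not registration; hypothetical file names):

import SimpleITK as sitk

fixed = sitk.ReadImage("case001_0000.nii.gz")   # reference modality
moving = sitk.ReadImage("case001_0001.nii.gz")  # modality to resample

# identity transform: only valid if both images are already aligned in world space
resampled = sitk.Resample(moving, fixed, sitk.Transform(), sitk.sitkLinear, 0.0, moving.GetPixelID())
sitk.WriteImage(resampled, "case001_0001.nii.gz")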

Label Format

Labels are encoded with two files per case: one NIfTI file which contains the instance segmentation and one json file which includes the "meta" information of each instance. The NIfTI file should contain all annotated instances, where each instance has a unique, consecutive number (0 ALWAYS refers to background, 1 refers to the first instance, 2 refers to the second instance ...). The case[XXXX].json label file needs to provide the class of every instance in the segmentation. In this example the first instance is assigned to class 0 and the second instance is assigned to class 1:

{
    "instances": {
        "1": 0,
        "2": 1
    }
}

Each label file needs a corresponding json file to define the classes. We also wrote a Detection Annotation Guide which includes a dedicated section on the nnDetection format with additional visualizations :)
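
To catch formatting problems early, a small script can verify that the instance IDs in the segmentation and in the json file agree. A sketch using SimpleITK and numpy (a hypothetical helper, not part of nnDetection; the official check runs via nndet_prep with --full_check, see below):

import json
import numpy as np
import SimpleITK as sitk

def check_case_labels(nii_path: str, json_path: str) -> None:
    # collect all instance IDs present in the segmentation (0 is always background)
    seg = sitk.GetArrayFromImage(sitk.ReadImage(nii_path))
    ids_in_seg = {int(i) for i in np.unique(seg)} - {0}
    # collect the instance IDs declared in the json mapping
    with open(json_path) as f:
        ids_in_json = {int(k) for k in json.load(f)["instances"]}
    if ids_in_seg != ids_in_json:
        raise ValueError(f"instance mismatch: {ids_in_seg} vs {ids_in_json}")

check_case_labels("case0000.nii.gz", "case0000.json")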

Using nnDetection

The following paragraphs provide a high-level overview of the functionality of nnDetection and the available commands. A typical flow of commands would look like this:

nndet_prep -> nndet_unpack -> nndet_train -> nndet_consolidate -> nndet_predict

Each of these commands is explained below and more detailed information can be obtained by running nndet_[command] -h in the terminal.

Planning & Preprocessing

Before training the networks, nnDetection needs to preprocess and analyze the data. The preprocessing stage normalizes and resamples the data while the analyzed properties are used to create a plan which will be used for configuring the training. To perform a live estimation of the VRAM used by the network, nnDetection v0 requires a GPU with approximately the same amount of VRAM you are planning to use for training (we used an RTX2080TI with no monitor attached to it). Future releases aim at improving this process.

nndet_prep [tasks] [-o / --overwrites] [-np / --num_processes] [-npp / --num_processes_preprocessing] [--full_check]

# Example
nndet_prep 000

# Script
# /scripts/preprocess.py - main()

The -o option can be used to overwrite parameters for planning and preprocessing (refer to the config files to see all parameters). The number of processes used for cropping and analysis can be adjusted via -np and the number of processes used for resampling via -npp. The default values are fairly safe if 64 GB of RAM is available. The --full_check option will iterate over the data before starting any preprocessing and check correct formatting of the data and labels. If any problems occur during preprocessing, please run the full check to make sure that the format is correct.
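
For example, a run with fewer processes (for machines with less RAM) and a full format check beforehand combines the documented options like this:

nndet_prep 000 -np 2 -npp 2 --full_check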

After planning and preprocessing the resulting data folder structure should look like this:

[Task000_Example]
    [raw_splitted]
    [raw_cropped] # only needed for different resampling strategies
        [imagesTr] # stores cropped image data; contains npz files
        [labelsTr] # stores labels
    [preprocessed]
        [analysis] # some plots to visualize properties of the underlying data set
        [properties] # sufficient for new plans
        [labelsTr] # labels in original format (original spacing)
        [labelsTs] # optional
        [Data identifier; e.g. D3V001_3d]
            [imagesTr] # preprocessed data
            [labelsTr] # preprocessed labels (resampled spacing)
        - {name of plan}.pkl e.g. D3V001_3d.pkl

Before starting the training, copy the data (Task folder, data set info and preprocessed folder are needed) to an SSD (highly recommended) and unpack the image data with

nndet_unpack [path] [num_processes]

# Example (unpack example with 6 processes)
nndet_unpack ${det_data}/Task000D3_Example/preprocessed/D3V001_3d/imagesTr 6

# Script
# /scripts/utils.py - unpack()

Training and Evaluation

After the planning and preprocessing stage is finished, the training phase can be started. The default setup of nnDetection is trained in a 5-fold cross-validation scheme. First, check which plans were generated during planning by looking for the pickled plan files in the preprocessed folder. In most cases only the default plan will be generated (D3V001_3d) but there might be instances (e.g. Kits) where the low resolution plan will be generated too (D3V001_3dlr1).
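
The generated plans can be listed directly, e.g.:

ls ${det_data}/Task000D3_Example/preprocessed/*.pkl
# -> D3V001_3d.pkl (and e.g. D3V001_3dlr1.pkl if a low resolution plan was created)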

nndet_train [task] [-o / --overwrites] [--sweep]

# Example (train default plan D3V001_3d and search best inference parameters)
nndet_train 000 --sweep

# Script
# /scripts/train.py - train()
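
All five folds are needed for the consolidation step later on; training them one after another looks like this (using the exp.fold overwrite explained below):

nndet_train 000 --sweep -o exp.fold=0
nndet_train 000 --sweep -o exp.fold=1
nndet_train 000 --sweep -o exp.fold=2
nndet_train 000 --sweep -o exp.fold=3
nndet_train 000 --sweep -o exp.fold=4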

Use -o exp.fold=X to overwrite the trained fold; as shown above, this should be run for all folds X = 0, 1, 2, 3, 4! The --sweep option tells nnDetection to look for the best hyperparameters for inference by empirically evaluating them on the validation set. Sweeping can also be performed later by running the following command:

nndet_sweep [task] [model] [fold]

# Example (sweep Task 000 of model RetinaUNetV001_D3V001_3d in fold 0)
nndet_sweep 000 RetinaUNetV001_D3V001_3d 0

# Script
# /experiments/train.py - sweep()

Evaluation can be invoked by the following command (requires access to the model and preprocessed data):

nndet_eval [task] [model] [fold] [--test] [--case] [--boxes] [--seg] [--instances] [--analyze_boxes]

# Example (evaluate and analyze box predictions of default model)
nndet_eval 000 RetinaUNetV001_D3V001_3d 0 --boxes --analyze_boxes

# Script
# /scripts/train.py - evaluate()

# Note: --test invokes evaluation of the test set
# Note: --seg, --instances are placeholders for future versions and not working yet

Inference

After running all folds, it is time to collect the models and create a unified inference plan. The following command will copy all the models and predictions from the folds. By adding the sweep_ options, the empirical hyperparameter optimization across all folds can be started. This will generate a unified plan for all models which will be used during inference.

nndet_consolidate [task] [model] [--overwrites] [--consolidate] [--num_folds] [--no_model] [--sweep_boxes] [--sweep_instances]

# Example
nndet_consolidate 000 RetinaUNetV001_D3V001_3d --sweep_boxes

# Script
# /scripts/consolidate.py - main()

For the final test set predictions simply select the best model according to the validation scores and run the prediction command below. Data which is located in raw_splitted/imagesTs will be automatically preprocessed and predicted by running the following command:

nndet_predict [task] [model] [--fold] [--num_tta] [--no_preprocess] [--check] [-npp / --num_processes_preprocessing] [--force_args]

# Example
nndet_predict 000 RetinaUNetV001_D3V001_3d --fold -1

# Script
# /scripts/predict.py - main()

If a manually split test set was used, evaluation can be performed by invoking nndet_eval with --test as described above.
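
For example, to evaluate the box predictions of fold 0 on the test split (a recombination of the documented flags above):

nndet_eval 000 RetinaUNetV001_D3V001_3d 0 --test --boxes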

nnU-Net for Detection

Besides nnDetection we also include the scripts to prepare and evaluate nnU-Net in the context of object detection. Both frameworks need to be configured correctly before running the scripts to assure correctness. After preparing the data set in the nnDetection format (which is a superset of the nnU-Net format) it is possible to export it to nnU-Net via scripts/nnunet/nnunet_export.py. Since nnU-Net needs task ids without any additions it may be necessary to overwrite the task name via the -nt option for some data sets (e.g. Task019FG_ADAM needs to be renamed to Task019_ADAM). Follow the usual nnU-Net preprocessing and training pipeline to generate the needed models. Use the --npz option during training to save the predicted probabilities which are needed to generate the detection results. After determining the best ensemble configuration from nnU-Net, pass all paths to scripts/nnunet/nnunet_export.py which will ensemble and postprocess the predictions for object detection. By default the nnU-Net Plus scheme will be used, which incorporates the empirical parameter optimization step. Use the --simple flag to switch to the nnU-Net basic configuration.

Pretrained models

Coming Soon

FAQ

GPU requirements
nnDetection v0.1 was developed for GPUs with at least 11GB of VRAM (e.g. RTX2080TI, TITAN RTX). All of our experiments were conducted with an RTX2080TI. While the memory consumption can be adjusted by manipulating the corresponding setting, we recommend using the default values for now. Future releases will refactor the planning stage to improve the VRAM estimation and add support for different memory budgets.
Error: Undefined CUDA symbols when importing `nndet._C`
Please double check that the CUDA versions of your system, pytorch, torchvision and the nnDetection build match! Follow the installation instructions at the beginning!
Error: No kernel image is available for execution
You are probably executing the build on a machine with a GPU architecture which was not present/set during the build.

Please check the link to find the correct SM architecture and set TORCH_CUDA_ARCH_LIST appropriately (see the Dockerfile for an example). Make sure to delete all caches before rebuilding!
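
For example (a sketch; replace the compute capabilities with those of your GPUs):

export TORCH_CUDA_ARCH_LIST="6.1;7.5;8.6"  # e.g. Pascal, Turing and Ampere consumer cards
pip install -e .                           # rebuild nnDetection afterwards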

Training with bounding boxes
The first release of nnDetection focuses on 3D medical images and Retina U-Net. As a consequence, training (specifically planning and augmentation) requires segmentation annotations. In many cases this limitation can be circumvented by converting the bounding boxes into segmentations; a sketch of such a conversion is shown below.
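A minimal sketch of such a conversion (a hypothetical helper, not part of nnDetection; it assumes axis-aligned boxes in voxel coordinates and non-overlapping instances):

import json
import numpy as np
import SimpleITK as sitk

def boxes_to_instance_seg(boxes, classes, reference_image, out_nii, out_json):
    # sitk size is (x, y, z); the corresponding numpy array is indexed (z, y, x)
    seg = np.zeros(reference_image.GetSize()[::-1], dtype=np.uint8)
    instances = {}
    for idx, (box, cls) in enumerate(zip(boxes, classes), start=1):
        z1, z2, y1, y2, x1, x2 = box
        seg[z1:z2, y1:y2, x1:x2] = idx        # instance IDs start at 1; 0 is background
        instances[str(idx)] = int(cls)
    seg_itk = sitk.GetImageFromArray(seg)
    seg_itk.CopyInformation(reference_image)  # keep spacing/origin/direction of the image
    sitk.WriteImage(seg_itk, out_nii)
    with open(out_json, "w") as f:
        json.dump({"instances": instances}, f)
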
Mask RCNN and 2D Data sets
2D data sets and Mask R-CNN are not supported in the first release. We hope to provide these in the future.
Multi GPU Training
Multi GPU training is not officially supported yet. Inference and the metric computation are not properly designed to support these use cases!
Prebuilt packages
We are planning to provide prebuilt wheels in the future, but none are available right now. Please use the provided Dockerfile or the installation instructions to run nnDetection.

Acknowledgements

nnDetection combines the information from multiple open source repositories we wish to acknowledge for their awesome work, please check them out!

nnU-Net

nnU-Net is a self-configuring method for semantic segmentation and many steps of nnDetection follow in the footsteps of nnU-Net.

Medical Detection Toolkit

The Medical Detection Toolkit introduced the first codebase for 3D Object Detection and multiple tricks were transferred to nnDetection to assure optimal configuration for medical object detection.

Torchvision

nnDetection tried to follow the interfaces of torchvision to make it easy to understand for everyone coming from the 2D (and video) detection scene. As a result, we based some of our core modules on the torchvision implementation.

Funding

Part of this work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 410981386 and the Helmholtz Imaging Platform (HIP), a platform of the Helmholtz Incubator on Information and Data Science.

Comments
  • [Question] test error of undefined symbol

    Python 3.8, PyTorch 1.9.0/1.8.0, CUDA 11.1/11.0, GCC 7.5.0. Everything installed successfully, but when I run python -c "import torch; import nndet._C; import nndet" I get the following error:
    ImportError: /nnDetection/nndet/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _Z8nms_cudaRKN2at6TensorES2_f

    Does anybody have an idea about this error?

    opened by Aliengmx 11
  • [Bug] nndet_prep fails in instance.py with KeyError 'instances'

    Running nndet_prep 000 on the example data set created by nndet_example: processing fails suddenly at case_9, after several successful iterations, with the following error message. This also happens with my own data set.

    multiprocessing.pool.RemoteTraceback: 
    """
    Traceback (most recent call last):
      File "/home/USERNAME/miniconda3/lib/python3.9/multiprocessing/pool.py", line 125, in worker
        result = (True, func(*args, **kwds))
      File "/home/USERNAME/miniconda3/lib/python3.9/multiprocessing/pool.py", line 51, in starmapstar
        return list(itertools.starmap(args[0], args[1]))
      File "/home/USERNAME/git/nnDetection/nndet/planning/properties/instance.py", line 155, in analyze_instances_per_case
        props["num_instances"] = count_instances(props, all_classes)
      File "/home/USERNAME/git/nnDetection/nndet/planning/properties/instance.py", line 176, in count_instances
        instance_classes = list(map(int, props["instances"].values()))
    KeyError: 'instances'
    """
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/home/USERNAME/miniconda3/bin/nndet_prep", line 33, in <module>
        sys.exit(load_entry_point('nndet', 'console_scripts', 'nndet_prep')())
      File "/home/USERNAME/git/nnDetection/nndet/utils/check.py", line 58, in wrapper
        return func(*args, **kwargs)
      File "/home/USERNAME/git/nnDetection/scripts/preprocess.py", line 406, in main
        run(OmegaConf.to_container(cfg, resolve=True),
      File "/home/USERNAME/git/nnDetection/scripts/preprocess.py", line 325, in run
        run_dataset_analysis(cropped_output_dir=Path(cfg["host"]["cropped_output_dir"]),
      File "/home/USERNAME/git/nnDetection/scripts/preprocess.py", line 129, in run_dataset_analysis
        _ = analyzer.analyze_dataset(properties)
      File "/home/USERNAME/git/nnDetection/nndet/planning/analyzer.py", line 80, in analyze_dataset
        props.update(property_fn(self))
      File "/home/USERNAME/git/nnDetection/nndet/planning/properties/instance.py", line 46, in analyze_instances
        props_per_case = run_analyze_instances(analyzer, all_classes)
      File "/home/USERNAME/git/nnDetection/nndet/planning/properties/instance.py", line 77, in run_analyze_instances
        props = p.starmap(analyze_instances_per_case, zip(
      File "/home/USERNAME/miniconda3/lib/python3.9/multiprocessing/pool.py", line 372, in starmap
        return self._map_async(func, iterable, starmapstar, chunksize).get()
      File "/home/USERNAME/miniconda3/lib/python3.9/multiprocessing/pool.py", line 771, in get
        raise self._value
    KeyError: 'instances'
    

    Environment

    Please provide some information about the used environment.

    ----- PyTorch Information -----
    PyTorch Version: 1.13.0
    PyTorch Debug: False
    PyTorch CUDA: 11.7
    PyTorch Backend cudnn: 8500
    PyTorch CUDA Arch List: ['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'compute_37']
    PyTorch Current Device Capability: (3, 7)
    PyTorch CUDA available: True
    
    
    ----- System Information -----
    System NVCC: nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2022 NVIDIA Corporation
    Built on Wed_Jun__8_16:49:14_PDT_2022
    Cuda compilation tools, release 11.7, V11.7.99
    Build cuda_11.7.r11.7/compiler.31442593_0
    
    System Arch List: None
    System OMP_NUM_THREADS: 1
    System CUDA_HOME is None: True
    System CPU Count: 12
    Python Version: 3.9.15 (main, Nov  4 2022, 16:13:54) 
    [GCC 11.2.0]
    
    
    ----- nnDetection Information -----
    det_num_threads 12
    det_data is set True
    det_models is set True
    

    How was nnDetection installed [docker | source]:

    source
    

    Environment Information:

    GPU: Tesla K80
    Nvidia driver 470
    CUDA 11.7
    CUDNN 8.6
    PyTorch 1.13
    Python 3.9.15
    

    If necessary, please provide the used run command with all overwrites:

    nndet_prep 000
    
    opened by xaratustrah 10
  • [Question] np.pad() consumes a lot of time when the dataset is big?

    Hi, great job! I've been using nnDetection in my experiments and noticed the following: one epoch takes about 30 minutes (2600/2600, patch size [96 192 160]) when the data set has 163 cases, but about 6 hours (2600/2600, patch size [96 160 160]) when it has 534 cases. Searching for the cause, top shows %CPU 0.0 wa with 163 cases but 18.0-36.0 wa with 534 cases, so something seemed wrong with the data IO. Debugging generate_train_batch() in /XXX/XXX/nndet/io/datamodule/bg_loader.py I finally found that np.pad() in /XXX/XXX/nndet/io/patching.py line 432 was the reason: about 20 ms per call with 163 cases, but 6-8 s per call with 534 cases. How can I solve this problem?

    opened by xuelicheng1992 9
  • [Bug] ITK ERROR: ITK only supports orthonormal direction cosines. No orthonormal definition found! (SimpleITK >= 2.1.0)

    Hello everyone,

    Thanks for this great tool :)

    When using the nndet_prep [task] --full_check command, the following error occurred:

    File "<nnDetection>/nndet/utils/check.py", line 213, in _full_check
    	img_itk_seq = [load_sitk(cp) for cp in case_paths]
    File "<nnDetection>/nndet/utils/check.py", line 213, in <listcomp>
    	img_itk_seq = [load_sitk(cp) for cp in case_paths]
    File "<nnDetection>/nndet/io/itk.py", line 107, in load_sitk
    	return sitk.ReadImage(str(path), **kwargs)
    File "<CondaEnv>/lib/python3.9/site-packages/SimpleITK/extra.py", line 346, in ReadImage
    	return reader.Execute()
    File "<CondaEnv>/lib/python3.9/site-packages/SimpleITK/SimpleITK.py", line 8015, in Execute
    	return _SimpleITK.ImageFileReader_Execute(self)
    RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: /tmp/SimpleITK-build/ITK/Modules/IO/NIFTI/src/itkNiftiImageIO.cxx:1980:
    ITK ERROR: ITK only supports orthonormal direction cosines.  No orthonormal definition found!
    

    This was also noticed with the nnU-Net framework when mixing different versions of SimpleITK, and an issue was raised (https://github.com/SimpleITK/SimpleITK/issues/1433).

    When using SimpleITK 2.0.2, this error does not occur. This seems to be due to recent changes with ITK when handling Nifti headers (see https://github.com/InsightSoftwareConsortium/ITK/issues/2674 for further details and ongoing conversation).

    I would think that freezing SimpleITK to SimpleITK < 2.1.0 would temporarily solve the issue.

    Best

    Environment

    Environment Information:

    ----- PyTorch Information -----
    PyTorch Version: 1.9.0
    PyTorch Debug: False
    PyTorch CUDA: 10.2
    PyTorch Backend cudnn: 7605
    PyTorch CUDA Arch List: ['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'compute_37']
    PyTorch Current Device Capability: (7, 0)
    PyTorch CUDA available: True
    
     
    
    
    ----- System Information -----
    System NVCC: nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2019 NVIDIA Corporation
    Built on Sun_Jul_28_19:07:16_PDT_2019
    Cuda compilation tools, release 10.1, V10.1.243
    
     
    
    System Arch List: None
    System OMP_NUM_THREADS: 1
    System CUDA_HOME is None: True
    System CPU Count: 6
    Python Version: 3.9.6 (default, Jul 30 2021, 16:35:19)
    [GCC 7.5.0]
    
     
    
    
    ----- nnDetection Information -----
    det_num_threads 12
    det_data is set True
    det_models is set True
    
    opened by alexandreroutier 9
  • FileNotFoundError: [Errno 2] No such file or directory: 'saved/Task010_Colon/RetinaUNetV001_D3V001_3d/fold0/plan.pkl'

    I have strictly followed the instructions and got the following error; could you please help me solve it?

    FileNotFoundError: [Errno 2] No such file or directory: 'saved/Task010_Colon/RetinaUNetV001_D3V001_3d/fold0/plan.pkl'
    
    opened by Kyfafyd 8
  • RuntimeError: MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of your workers crashed.

    Hello,

    nndet_prep finishes without any error message, but when I run nndet_train this error usually occurs after some epochs. I use an A6000. I tried to modify the number of batches, but it seems to have no effect. Can you give me some advice?

    Thank you!

    warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). " Epoch 0: 1%|▌ | 24/2600 [00:58<1:41:10, 2.36s/it, loss=2.7, v_num=dc82]Exception in background worker 0: could not broadcast input array from shape (1,352,114,408) into shape (1,352,114,114) Epoch 0: 1%|▌ | 25/2600 [01:01<1:40:56, 2.35s/it, loss=2.7, v_num=dc82]Traceback (most recent call last):Traceback (most recent call last): File "/home/huangjun/anaconda3/envs/nndetection/bin/nndet_train", line 33, in sys.exit(load_entry_point('nndet', 'console_scripts', 'nndet_train')()) File "/data1/huangjun/git_nndetection/nnDetection/nndet/utils/check.py", line 58, in wrapper return func(*args, **kwargs) File "/data1/huangjun/git_nndetection/nnDetection/scripts/train.py", line 69, in train _train( File "/data1/huangjun/git_nndetection/nnDetection/scripts/train.py", line 289, in _train trainer.fit(module, datamodule=datamodule) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit self._run(model) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run self._dispatch() File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch self.accelerator.start_training(self) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training self.training_type_plugin.start_training(trainer) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training self._results = trainer.run_stage() File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage return self._run_train() File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train self.fit_loop.run() File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run self.advance(*args, **kwargs) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance epoch_output = self.epoch_loop.run(train_dataloader) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run self.advance(*args, **kwargs) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 118, in advance _, (batch, is_last) = next(dataloader_iter) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/profiler/base.py", line 104, in profile_iterable value = next(iterator) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/trainer/supporters.py", line 629, in prefetch_iterator for val in it: File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/trainer/supporters.py", line 546, in next return self.request_next_batch(self.loader_iters) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/trainer/supporters.py", line 574, in request_next_batch return apply_to_collection(loader_iters, Iterator, next_fn) File 
"/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 96, in apply_to_collection return function(data, *args, **kwargs) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/pytorch_lightning/trainer/supporters.py", line 561, in next_fn batch = next(iterator) File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 205, in next item = self.__get_next_item() File "/home/huangjun/anaconda3/envs/nndetection/lib/python3.8/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 189, in __get_next_item raise RuntimeError("MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of " RuntimeError: MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of your workers crashed. This is not the actual error message! Look further up your stdout to see what caused the error. Please also check whether your RAM was full Epoch 0: 1%| | 25/2600 [01:01<1:41:22, 2.36s/it, loss=2.7, v_num=dc82]

    opened by hj950815 8
  • [Question] Should training take so long?

    I prepared and preprocessed dataset Task010_Colon and there were no issues, but when I started training the net with nndet_train 010 --sweep it took an extremely long time to finish Epoch 0 - almost 9 hours. I am using CUDA 11.2, GPU Nvidia Tesla K40 XL, Pytorch 1.7.1+cu110 and working on a university cluster which should allow me to train my net fast. Is there anything I can do to make it faster? I'm not sure if it's my environment or my nnDetection configuration.

    opened by GabrielaProszczuk 8
  • [Question] Replicate result on paper

    Hello,

    I tried to replicate the results of nnDetection on LUNA16 as in your paper, but I cannot achieve the performance reported in the paper (CPM = 0.92); what I obtain is CPM = 0.84. Did you use a different set of parameters than presented in the paper, or did you fine-tune the model?

    Thanks,

    opened by vankhoa21991 7
  • Error during training

    Bug: during the training phase:

    File "/anaconda/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 161, in scale
        assert outputs.is_cuda or outputs.device.type == 'xla'
    AssertionError
    Exception ignored in: <function tqdm.__del__ at 0x7f9ba338de50>
    Traceback (most recent call last):
      File "/anaconda/lib/python3.8/site-packages/tqdm/std.py", line 1145, in __del__
      File "/anaconda/lib/python3.8/site-packages/tqdm/std.py", line 1299, in close
      File "/anaconda/lib/python3.8/site-packages/tqdm/std.py", line 1492, in display
      File "/anaconda/lib/python3.8/site-packages/tqdm/std.py", line 1148, in __str__
      File "/anaconda/lib/python3.8/site-packages/tqdm/std.py", line 1450, in format_dict
    TypeError: cannot unpack non-iterable NoneType object

    Environment: set up from source (not docker). Command: nndet_train 1000 --sweep

    It seems the issue is related to TensorMetric not being moved to the CUDA device; the same issue is addressed in https://github.com/PyTorchLightning/pytorch-lightning/issues/2274.

    opened by theodupuis 7
  • Slow training speed

    Hello, I'm trying to train task 16 with the Luna dataset, but training seems very slow and the GPU is idle most of the time because it has to wait for the volumes to be prepared. How can I increase the volume preparation speed to take advantage of the GPU?

    question 
    opened by vankhoa21991 7
  • AttributeError

    Hi,

    I have been running nnDetection until epoch 57 and suddenly it crashes. I have also been trying to rerun nnDetection on backups that I made of the data and models folders, but it also tends to crash sometimes. Do you have an idea why this could be?

    Thanks in advance!

    Epoch 50: 0%| | 0/2600 [00:00<00:02, 1282.27it/s]
    Traceback (most recent call last):
      File "/opt/conda/bin/nndet_train", line 33, in <module>
        sys.exit(load_entry_point('nndet', 'console_scripts', 'nndet_train')())
      File "/opt/nnDetection/nndet/utils/check.py", line 58, in wrapper
        return func(*args, **kwargs)
      File "/opt/nnDetection/scripts/train.py", line 69, in train
        _train(
      File "/opt/nnDetection/scripts/train.py", line 289, in _train
        trainer.fit(module, datamodule=datamodule)
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
        self._run(model)
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
        self._dispatch()
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
        self.accelerator.start_training(self)
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
        self.training_type_plugin.start_training(trainer)
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
        self._results = trainer.run_stage()
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
        return self._run_train()
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
        self.fit_loop.run()
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
        self.advance(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
        epoch_output = self.epoch_loop.run(train_dataloader)
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 106, in run
        self.on_run_start(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 106, in on_run_start
        self.trainer.call_hook("on_train_epoch_start")
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1217, in call_hook
        trainer_hook(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/callback_hook.py", line 92, in on_train_epoch_start
        callback.on_train_epoch_start(self, self.lightning_module)
      File "/opt/nnDetection/nndet/training/swa.py", line 102, in on_train_epoch_start
        self.update_parameters(self._average_model, pl_module, self.n_averaged, self.avg_fn)
    AttributeError: 'SWACycleLinear' object has no attribute 'n_averaged'
    bypassing sigterm
    bypassing sigterm
    bypassing sigterm

    opened by rooskraaijveld 6
  • [Question] VersionConflict

    Dear Dr. Jaeger,

    it is a wonderful project, I have some questions:

    1. I would like to try it with my own dataset and already built the docker container. However, when I run nndet_prep I get a version conflict error. May I ask how I can address this issue?
       raise VersionConflict(dist, req).with_context(dependent_req)
       pkg_resources.ContextualVersionConflict: (packaging 20.4 (/opt/conda/lib/python3.8/site-packages), Requirement.parse('packaging>20.9'), {'shap'})

    2. For research purposes, can the Medical Detection Toolkit still be used? Thank you so much.

    opened by EvelyneCalista 1
  • when will the Mask R-CNN be implemented

    Hello, since medicaldetectiontoolkit has been deprecated, I want to know when the 3D Mask R-CNN version will be integrated into this framework. Looking forward to it, thank you very much!

    opened by puppy2000 2
  • how are the coordinates arranged in pred_boxes in val_predictions?

    original_size_of_raw_data: array([ 82, 512, 512]); my pred_boxes in the pkl file look like: array([[ 51, 225, 80, 291, 144, 199], [ 0, 192, 32, 291, 321, 397], [ 1, 223, 33, 295, 345, 394], [ 17, 220, 47, 307, 337, 406], [ 33, 271, 55, 316, 333, 376], ... How are zmin, zmax, xmin, xmax, ymin, ymax ordered accordingly?

    opened by leetesua 1
  • [Question] 2D --> 3D dataset

    I understand that in the readme.md you write that 2D data sets are not supported in the first release. I nevertheless tried it: similar to this approach, I created my own dataset and nifti files accordingly, i.e. they are 3D but contain only one slice. The first place nndet_prep complains is on this line, where it tries to get the dimensions of a tensor([], size=(0, 3)), which fails with the torch error:

    IndexError: max(): Expected reduction dim 0 to have non-zero size.
    

    basically one can reproduce this error in a simple way like this:

    import torch
    a = torch.empty((0, 3))
    a.max(dim=0)
    

    My question is: regarding 2D data, is this approach (giving 2D arrays a higher dimension) tenable for using 2D datasets with nnDetection with only minor fixes needed, or is there much more under the hood that would come to light one error after the other?

    I am not really an expert on this, and maybe this is too simplistic, but more specifically, would it be possible to change the stack call on this line in order to always return a 3D slice, in order to fix the issue with 2D datasets?

    I was really hoping to use nnDetection with my 2D dataset.

    Many thanks!

    opened by xaratustrah 3
  • [Question] Detection evaluation

    Hi @mibaumgartner,

    I've run nnDetection on BRATS dataset to get familiar with the framework and I have the following questions:

    1. Do you save the learning curves somewhere? There are a lot of files and I'm not sure if I just can't find them or need to generate them by myself from training logs/pkls.
    2. I've been analyzing "analysis.csv" file and I don't understand the column "iou_tp" (3rd column). Would you please advise? I think all other metrics are pretty clear.
    3. From what I've seen the final results are available in file test_results\results_boxes.json, but these are aggregates. Do you save "partial" values per patient somewhere? I can't wrap my head around the transition from "analysis.csv" to "results_boxes.json". Any help would be much appreciated.
    opened by machur 1
  • change network

    First of all, thank you for your work. I have a question: I want to change the network structure and use Mask R-CNN instead of Retina U-Net. What should I change?

    opened by yangqiqi 1