Clinica is a software platform for clinical research studies involving patients with neurological and psychiatric diseases and the acquisition of multimodal data

Overview

Clinica

Software platform for clinical neuroimaging studies


Homepage | Documentation | Paper | Forum | See also: AD-ML, AD-DL ClinicaDL

About The Project

Clinica is a software platform for clinical research studies involving patients with neurological and psychiatric diseases and the acquisition of multimodal data (neuroimaging, clinical and cognitive evaluations, genetics...), most often with longitudinal follow-up.

Clinica is command-line driven and written in Python. It uses the Nipype system for pipelining and combines widely-used software packages for neuroimaging data analysis (ANTs, FreeSurfer, FSL, MRtrix, PETPVC, SPM), machine learning (Scikit-learn) and the BIDS standard for data organization.

Clinica provides tools to convert publicly available neuroimaging datasets into BIDS, namely converters for the ADNI, AIBL, OASIS, OASIS-3, NIFD, HABS and UK Biobank datasets.

Clinica can process any BIDS-compliant dataset with a set of complex processing pipelines that combine different software packages for the analysis of neuroimaging data (T1-weighted MRI, diffusion MRI and PET). It also provides integration between feature extraction and statistical analysis, machine learning or deep learning.

(Figure: overview of the Clinica pipelines)

Clinica is also showcased as a framework for the reproducible classification of Alzheimer's disease using machine learning and deep learning.

Getting Started

Full instructions for installation and additional information can be found in the user documentation.

Clinica currently supports macOS and Linux. It can be installed by typing the following command:

pip install clinica

To avoid conflicts with other versions of the dependency packages installed by pip, it is strongly recommended to create a virtual environment before installation. For example, use Conda to create a virtual environment and activate it before installing Clinica (you can also use virtualenv):

conda create --name clinicaEnv python=3.7
conda activate clinicaEnv
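
Once the environment is active, installing and sanity-checking Clinica can be as simple as the following. This is a minimal sketch: since the command-line interface is built with click, `clinica --help` and `clinica run --help` should list the available sub-commands and pipelines, but the exact output depends on your installed version.

pip install clinica
clinica --help        # should list the run, convert and iotools sub-commands
clinica run --help    # should list the available processing pipelines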

Depending on the pipeline that you want to use, you need to install pipeline-specific interfaces. Not all the dependencies are necessary to run Clinica. Please refer to this page to determine which third-party libraries you need to install.
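
As a quick way to see which of these third-party tools are already available, you can check that their usual command-line entry points are on your PATH. This is an illustrative sketch only (it assumes the standard executable names shipped with each package); the exact set of tools you need depends on the pipelines you plan to run:

which recon-all          # FreeSurfer
which antsRegistration   # ANTs
which flirt              # FSL
which mrinfo             # MRtrix
which petpvc             # PETPVC
# SPM runs through MATLAB or its standalone version and is checked differently (see the documentation).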

Example

Diagram illustrating the Clinica pipelines involved when performing a group comparison of FDG PET data projected on the cortical surface between patients with Alzheimer's disease and healthy controls from the ADNI database:


  1. Clinical and neuroimaging data are downloaded from the ADNI website and converted into BIDS with the adni-to-bids converter.
  2. The cortical and white surfaces are then estimated with the t1-freesurfer pipeline.
  3. FDG PET data are projected onto the subject’s cortical surface and normalized to the FsAverage template from FreeSurfer using the pet-surface pipeline.
  4. A TSV file with demographic information about the studied population is given to the statistics-surface pipeline to generate the results of the group comparison (a command sketch is given below).
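
As a rough illustration, the four steps above correspond to a sequence of Clinica commands along the following lines. The uppercase paths are placeholders, and each pipeline takes additional required arguments and options that are detailed in the documentation:

clinica convert adni-to-bids DATASET_DIRECTORY CLINICAL_DATA_DIRECTORY BIDS_DIRECTORY
clinica run t1-freesurfer BIDS_DIRECTORY CAPS_DIRECTORY
clinica run pet-surface BIDS_DIRECTORY CAPS_DIRECTORY        # plus the PET-specific options
clinica run statistics-surface CAPS_DIRECTORY                # plus the group label, TSV file and covariates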

For more examples and details, please refer to the Documentation.

Support

Contributing

We encourage you to contribute to Clinica! Please check out the Contributing to Clinica guide for guidelines on how to proceed. Do not hesitate to ask questions if something is not clear to you, to report an issue, etc.

License

This software is distributed under the MIT License. See license file for more information.

Citing us

  • Routier, A., Burgos, N., Díaz, M., Bacci, M., Bottani, S., El-Rifai, O., Fontanella, S., Gori, P., Guillon, J., Guyot, A., Hassanaly, R., Jacquemont, T., Lu, P., Marcoux, A., Moreau, T., Samper-González, J., Teichmann, M., Thibeau-Sutre, E., Vaillant, G., Wen, J., Wild, A., Habert, M.-O., Durrleman, S., and Colliot, O.: Clinica: An Open Source Software Platform for Reproducible Clinical Neuroscience Studies. Frontiers in Neuroinformatics, 2021. doi:10.3389/fninf.2021.689675

Related Repositories

Comments
  • CAPS outputs were not found for some image(s): FileNotFoundError: No such file or directory


    Hi, I am new to Clinica and have been slowly working through a bunch of issues; however, I cannot solve this one. Can someone help, please?

    clinica run t1-volume BIDS/ test/ test-group

    The t1-volume pipeline is divided into 4 parts:
    t1-volume-tissue-segmentation pipeline: Tissue segmentation, bias correction and spatial normalization to MNI space
    t1-volume-create-dartel pipeline: Inter-subject registration with the creation of a new DARTEL template
    t1-volume-dartel2mni pipeline: DARTEL template to MNI
    t1-volume-parcellation pipeline: Atlas statistics

    Part 1/4: Running t1-volume-segmentation pipeline
    The pipeline will be run on the following 2 image(s):
    sub-S24022 | ses-01,
    sub-S45030 | ses-01,
    The pipeline will last approximately 10 minutes per image.

    [Warning] You did not specify the number of threads to run in parallel (--n_procs argument). Computation time can be shorten as you have 48 CPUs on this computer. We recommend using 47 threads.

    How many threads do you want to use? If you do not answer within 15 sec, default value of 47 will be taken. Use --n_procs argument if you want to disable this message next time.
    [00:07:15] Running pipeline for sub-S45030 | ses-01
    [00:07:15] Running pipeline for sub-S24022 | ses-01

    [00:12:47] Pipeline finished with errors.

    CAPS outputs were not found for some image(s): Implementation on which image(s) failed will appear soon.

    Documentation can be found here: https://aramislab.paris.inria.fr/clinica/docs/public/latest/
    If you need support, do not hesitate to ask: https://groups.google.com/forum/#!forum/clinica-user
    Alternatively, you can also open an issue on GitHub: https://github.com/aramis-lab/clinica/issues

    Here is the error file traceback

    nipypecli crash crash-20210430-001245-user-3-T1wToMni.a0-c9d67331-0f44-454b-b822-fe20293f8872.pklz

    File: /home/user/Documents/ADNI Data/RAW DATA/Uncompressed Data/AD/crash-20210430-001245-user-3-T1wToMni.a0-c9d67331-0f44-454b-b822-fe20293f8872.pklz
    Node: t1-volume-tissue-segmentation.3-T1wToMni
    Working directory: /tmp/tmp7jnpv21k/t1-volume-tissue-segmentation/da6acef93050a86f660ea26f80c21d4beb50efc7/3-T1wToMni

    Node inputs:

    deformation_field = fwhm = in_files = interpolation = mask = 0 matlab_cmd = mfile = True paths = use_mcr = use_v8struct = True

    Traceback:
    Traceback (most recent call last):
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 537, in aggregate_outputs
        setattr(outputs, key, val)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/nipype/interfaces/base/traits_extension.py", line 426, in validate
        value = super(MultiObject, self).validate(objekt, name, newvalue)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/traits/trait_types.py", line 2515, in validate
        return TraitListObject(self, object, name, value)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/traits/trait_list_object.py", line 585, in init
        notifiers=[self.notifier],
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/traits/trait_list_object.py", line 213, in init
        super().init(self.item_validator(item) for item in iterable)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/traits/trait_list_object.py", line 213, in
        super().init(self.item_validator(item) for item in iterable)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/traits/trait_list_object.py", line 865, in _item_validator
        return trait_validator(object, self.name, value)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/nipype/interfaces/base/traits_extension.py", line 330, in validate
        value = super(File, self).validate(objekt, name, value, return_pathlike=True)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/nipype/interfaces/base/traits_extension.py", line 135, in validate
        self.error(objekt, name, str(value))
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/traits/base_trait_handler.py", line 75, in error
        object, name, self.full_info(object, name, value), value
    traits.trait_errors.TraitError: Each element of the 'out_files' trait of an ApplySegmentationDeformationOutput instance must be a pathlike object or string representing an existing file, but a value of '/tmp/tmp7jnpv21k/t1-volume-tissue-segmentation/da6acef93050a86f660ea26f80c21d4beb50efc7/3-T1wToMni/wsub-S24022_ses-01_T1w.nii' <class 'str'> was specified.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
        result["result"] = node.run(updatehash=updatehash)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
        result = self._run_interface(execute=True)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
        return self._run_command(execute)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
        result = self._interface.run(cwd=outdir)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 436, in run
        outputs = self.aggregate_outputs(runtime)
      File "/home/user/miniconda3/envs/clinicaEnv/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 544, in aggregate_outputs
        raise FileNotFoundError(msg)

    FileNotFoundError: No such file or directory '['/tmp/tmp7jnpv21k/t1-volume-tissue-segmentation/da6acef93050a86f660ea26f80c21d4beb50efc7/3-T1wToMni/wsub-S24022_ses-01_T1w.nii']' for output 'out_files' of a ApplySegmentationDeformation interface

    The file that could not be found, "wsub-S24022_ses-01_T1w.nii", was created and saved in my /Documents/MATLAB/ working directory; I'm not sure if this is the issue.

    bug 
    opened by jmcawood 24
  • CAPS outputs were not found for some image(s): Where to find the .pklz error files


    Discussion was slightly off topic so I moved this to a new issue #218

    This discussion was aimed at finding where the .pklz files were saved. See also #217



    opened by jmcawood 18
  • adni-to-bids saves incorrect T1_paths.tsv image IDs in certain cases


    I have downloaded the images and put them in the format specified in the tutorial. However, adni-to-bids is unable to find the path, even though the image is present. (The problem may lie with 002, since none of the few subjects I tried worked.) An example is 'No T1 image path found for subject 002_S_0295 in visit bl with image ID 45108'.

    It works well with other subjects that don't start with 002 (I have only tried 2-3 subjects).

    enhancement converter 
    opened by Chengwei94 14
  • Logging breaks when launched from a directory with no write privileges


    Describe the bug: Given an adequately configured GPU cluster where you cannot run as root in a Docker container, nor write to the location "/", clinica adni-to-bids breaks due to permission issues.

    To Reproduce

    Dockerfile:

    FROM continuumio/miniconda3
    
    ARG USER_ID
    ARG GROUP_ID
    ARG USER
    RUN addgroup --gid $GROUP_ID $USER
    RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID $USER
    
    
    RUN apt-get update && apt-get install -y build-essential
    
    RUN conda install python==3.9
    RUN conda install -c conda-forge dcm2niix
    RUN pip install clinica==0.7.2
    

    Run script

    
    runai submit clinica-adni-to-bids \
      --image 10.202.67.207:5000/danieltudosiu:clinica_0_7_2 \
      --volume /nfs/project/AMIGO/ADNI_MRI/metadata/:/clinica_data_directory/ \
      --volume /nfs/project/AMIGO/ADNI_MRI/sourcedata/ADNI/:/dataset_directory/ \
      --volume /nfs/project/AMIGO/ADNI_MRI/rawdata/:/bids_directory/ \
      --large-shm \
      --host-network \
      --backoff-limit 0 \
      --gpu 0 \
      --cpu 8 \
      --project danieltudosiu \
      --command \
      -- clinica convert adni-to-bids --working_directory /bids_directory --n_procs 16 /dataset_directory /clinica_data_directory /bids_directory
    

    Just run it in a Docker container as a non-root user and you should have the same issue.

    Expected behavior: Be allowed to select the logging directory.

    bug 
    opened by danieltudosiu 13
  • Improve ci data management


    This PR proposes a better organization of the data on the CI agents.

    • [X] A fixture sets up the folders for finding the input data and the working directory when the tests are launched.
    • [X] A unique $TMPDIR is automatically generated by the tests to save outputs (by using tmp_path).
    • [X] Remove macOS tests for build, install and instantiate.
    • [X] Activate non-regression tests for converters and iotools in each PR.
    opened by mdiazmel 13
  • ADNI3 data not being converted (adni-to-bids)


    Dear Clinica team,

    I have been using Clinica version 0.4.1 to convert ADNI3 data to BIDS. I noticed that not all data are being converted, even when DICOM data are available. These are the steps that I took for one example subject:

    1. Download raw dicom T1 and resting state data for Subject 002_S_1261 (including both the raw and processed data)


    Note that I also manually unzipped the data in a separate folder and converted them to nifti images to ensure the data were not corrupted.

    The unzipped folder structure looks like:

    ./Accelerated_Sagittal_MPRAGE
    ./Accelerated_Sagittal_MPRAGE/2017-03-15_11_23_54.0
    ./Accelerated_Sagittal_MPRAGE/2017-03-15_11_23_54.0/S547079
    ./Accelerated_Sagittal_MPRAGE/2018-04-24_08_20_09.0
    ./Accelerated_Sagittal_MPRAGE/2018-04-24_08_20_09.0/S679617
    ./Field_Mapping_AP_Phase
    ./Field_Mapping_AP_Phase/2017-03-15_11_23_54.0
    ./Field_Mapping_AP_Phase/2017-03-15_11_23_54.0/S547085
    ./Field_Mapping_AP_Phase/2017-03-15_11_23_54.0/S547103
    ./Axial_T2_STAR
    ./Axial_T2_STAR/2017-03-15_11_23_54.0
    ./Axial_T2_STAR/2017-03-15_11_23_54.0/S547081
    ./Axial_T2_STAR/2018-04-24_08_20_09.0
    ./Axial_T2_STAR/2018-04-24_08_20_09.0/S679620
    ./Perfusion_Weighted
    ./Perfusion_Weighted/2017-03-15_11_23_54.0
    ./Perfusion_Weighted/2017-03-15_11_23_54.0/S547075
    ./Perfusion_Weighted/2018-04-24_08_20_09.0
    ./Perfusion_Weighted/2018-04-24_08_20_09.0/S679619
    ./HighResHippocampus
    ./HighResHippocampus/2017-03-15_11_23_54.0
    ./HighResHippocampus/2017-03-15_11_23_54.0/S547088
    ./HighResHippocampus/2018-04-24_08_20_09.0
    ./HighResHippocampus/2018-04-24_08_20_09.0/S679624
    ./Field_Mapping
    ./Field_Mapping/2018-04-24_08_20_09.0
    ./Field_Mapping/2018-04-24_08_20_09.0/S679622
    ./Field_Mapping/2018-04-24_08_20_09.0/S679626
    ./Axial_3TE_T2_STAR
    ./Axial_3TE_T2_STAR/2019-05-01_12_14_22.0
    ./Axial_3TE_T2_STAR/2019-05-01_12_14_22.0/S907777
    ./Axial_3D_PASL__Eyes_Open_
    ./Axial_3D_PASL__Eyes_Open_/2017-03-15_11_23_54.0
    ./Axial_3D_PASL__Eyes_Open_/2017-03-15_11_23_54.0/S547077
    ./Axial_3D_PASL__Eyes_Open_/2018-04-24_08_20_09.0
    ./Axial_3D_PASL__Eyes_Open_/2018-04-24_08_20_09.0/S679618
    ./Sagittal_3D_FLAIR
    ./Sagittal_3D_FLAIR/2017-03-15_11_23_54.0
    ./Sagittal_3D_FLAIR/2017-03-15_11_23_54.0/S547087
    ./Sagittal_3D_FLAIR/2018-04-24_08_20_09.0
    ./Sagittal_3D_FLAIR/2018-04-24_08_20_09.0/S679623
    ./Axial_rsfMRI__Eyes_Open_
    ./Axial_rsfMRI__Eyes_Open_/2017-03-15_11_23_54.0
    ./Axial_rsfMRI__Eyes_Open_/2017-03-15_11_23_54.0/S547083
    ./Axial_rsfMRI__Eyes_Open_/2018-04-24_08_20_09.0
    ./Axial_rsfMRI__Eyes_Open_/2018-04-24_08_20_09.0/S679621
    
    2. Download and unzip all tabular data from the ADNI website.

    3. Run clinica:

    # Create a text file with only the subject of interest
    echo "002_S_1261" > my_subject.txt
       
    # Enable the virtual environment
    eval "$(/home/Software/miniconda3/bin/conda shell.bash hook)"
    conda activate clinicaEnv
       
    # Run clinica
    clinica \
     convert \
     adni-to-bids \
     --subjects_list my_subject.txt \
     /input/folder/ADNI \
     /tabular/data/folder \
     /output/folder
    

    The standard output looked like this:

    Loading a subjects lists provided by the user...
    /thalia/data/WelshData/ADR012021/ImagingData/TEST3
    Calculating paths of T1 images. Output will be stored in /thalia/data/WelshData/ADR012021/ImagingData/TEST3/conversion_info/v0.
    More than 60 days for corresponding timepoint in ADNIMERGE for subject 002_S_1261 in visit ADNI3 Initial Visit-Cont Pt on 2017-03-15
    Timepoint 1: m96 - ADNI1 on 2015-06-02 (Distance: 652 days)
    Timepoint 2: m84 - ADNI1 on 2014-03-06 (Distance: 1105 days)
    We prefer m96
    [T1] Subject 002_S_1261 has multiple visits for one timepoint.
    More than 60 days for corresponding timepoint in ADNIMERGE for subject 002_S_1261 in visit ADNI3 Year 1 Visit on 2018-04-24
    Timepoint 1: m96 - ADNI1 on 2015-06-02 (Distance: 1057 days)
    Timepoint 2: m84 - ADNI1 on 2014-03-06 (Distance: 1510 days)
    We prefer m96
    [T1] Subject 002_S_1261 has multiple visits for one timepoint.
    More than 60 days for corresponding timepoint in ADNIMERGE for subject 002_S_1261 in visit ADNI3 Year 2 Visit on 2019-05-01
    Timepoint 1: m96 - ADNI1 on 2015-06-02 (Distance: 1429 days)
    Timepoint 2: m84 - ADNI1 on 2014-03-06 (Distance: 1882 days)
    We prefer m96
    [T1] Subject 002_S_1261 has multiple visits for one timepoint.
    No T1 image path found for subject 002_S_1261 in visit bl with image ID 40503
    No T1 image path found for subject 002_S_1261 in visit m06 with image ID 71097
    No T1 image path found for subject 002_S_1261 in visit m12 with image ID 108155
    No T1 image path found for subject 002_S_1261 in visit m24 with image ID 135395
    No T1 image path found for subject 002_S_1261 in visit m36 with image ID 166845
    No T1 image path found for subject 002_S_1261 in visit m48 with image ID 223901
    No T1 image path found for subject 002_S_1261 in visit m60 with image ID 286516
    No T1 image path found for subject 002_S_1261 in visit m72 with image ID 361610
    No T1 image path found for subject 002_S_1261 in visit m84 with image ID 418006
    No T1 image path found for subject 002_S_1261 in visit m96 with image ID 495946
    Paths of T1 images found. Exporting images into BIDS ...
    [T1] No path specified for 002_S_1261 in session m12 1/10
    [T1] No path specified for 002_S_1261 in session m24 2/10
    [T1] No path specified for 002_S_1261 in session bl 3/10
    [T1] No path specified for 002_S_1261 in session m96 4/10
    [T1] No path specified for 002_S_1261 in session m84 5/10
    [T1] No path specified for 002_S_1261 in session m36 6/10
    [T1] No path specified for 002_S_1261 in session m48 7/10
    [T1] No path specified for 002_S_1261 in session m60 8/10
    [T1] No path specified for 002_S_1261 in session m72 9/10
    [T1] No path specified for 002_S_1261 in session m06 10/10
    T1 conversion done.
    Creating modality agnostic files...
    Creating participants.tsv...
    (cut here, no errors follows)
    

    The output folder remained empty. When I viewed the *_paths files in the conversion_info/v0 folder, I noticed that no paths for ADNI3 data were identified. E.g.,

    cat t1_paths.tsv 
    Subject_ID      VISCODE Visit   Sequence        Scan_Date       Study_ID        Series_ID       Image_ID        Field_Strength  Original        Is_Dicom        Path
    002_S_1261      bl      ADNI Screening  MPR__GradWarp__B1_Correction__N3        2007-02-15      7147    26574   62377   1.5     False   True    
    002_S_1261      m06     ADNI1/GO Month 6        MPR__GradWarp__B1_Correction__N3        2007-08-30      11659   38700   79126   1.5     False   True    
    002_S_1261      m12     ADNI1/GO Month 12       MPR__GradWarp__B1_Correction__N3        2008-05-27      16190   50898   109394  1.5     False   True    
    002_S_1261      m24     ADNI1/GO Month 24       MPR__GradWarp__B1_Correction__N3        2009-02-05      19698   62722   139510  1.5     False   True    
    002_S_1261      m36     ADNI1/GO Month 36       MPR__GradWarp__B1_Correction__N3        2010-02-25      25140   80484   171106  1.5     False   True    
    002_S_1261      m48     ADNI1/GO Month 48       MT1__N3m        2011-03-14      32038   101641  225357  3.0     False   True    
    002_S_1261      m60     ADNI2 Initial Visit-Cont Pt     MT1__N3m        2012-02-23      43088   141744  296428  3.0     False   True    
    002_S_1261      m72     ADNI2 Year 1 Visit      MT1__N3m        2013-02-27      57126   183376  362927  3.0     False   True    
    002_S_1261      m84     ADNI2 Year 2 Visit      MT1__N3m        2014-03-13      64378   214915  418841  3.0     False   True    
    002_S_1261      m96     ADNI2 Year 3 Visit      MT1__N3m        2015-06-02      76235   262106  500243  3.0     False   True
    

    The tabular data was downloaded on August 2nd, so I assume that all ADNI3 information should be in the csv files.

    What could be the reason clinica is not picking up on these ADNI3 data?

    bug converter 
    opened by vnckppl 13
  • t1-linear method does nothing


    I have successfully converted my ADNI data to BIDS using the adni-to-bids method and would now like to use the t1-linear pipeline so that I can apply the pet-linear pipeline later.

    After a few small error fixes, everything now seems to run correctly, but nothing comes of it.

    The pipeline has been running for hours now, but there is neither a result nor any visible progress. I have repeatedly tried with only one image, which should take about 6 minutes, but even after a few hours there was no result.


    Here I tried it with 24 images, but it got stuck at this point.

    So what can I do?

    opened by V-Krause 13
  • Statistics-Volume pipeline


    This PR adds 2 new pipelines: statistics-volume and statistics-volume-correction

    • statistics-volume performs a two-sample t-test using SPM, with covariates indicated in the provided TSV
    • statistics-volume-correction
    enhancement pipeline 
    opened by arnaudmarcoux 13
  • Python implementation for StatisticsSurface


    Fixes #643

    Description

    This PR proposes to replace the current implementation of the StatisticsSurface pipeline, which relies on the MATLAB SurfStats toolbox, with a pure Python implementation relying on BrainStat.

    There are several reasons motivating this migration, the main ones being that SurfStats isn't maintained anymore and that it is written in MATLAB. The current solution implemented in Clinica is to vendor the MATLAB toolbox with a custom wrapper to enable calling it from Python, which is obviously far from ideal...

    Test the PR

    POC repo

    First of all, here is the repo with the proof-of-concept I made before opening this PR: https://github.com/NicolasGensollen/POC_Stat_Pipeline

    It is possible to experiment with the code more easily than through Clinica's pipelining architecture.

    Run the StatisticsSurface pipeline in pure Python

    Obviously, this requires having BrainStat installed (since it is not a dependency of Clinica yet). This can be done easily with pip:

    $ pip install brainstat
    

    For now, I did the minimal amount of work needed to integrate it into Clinica.

    So there is still a decent amount of work to do in order to have a clean integration into Clinica.

    Nonetheless, it should be possible to run the pipeline (without the plots which are crashing atm for some reason...):

    $ clinica run statistics-surface ./GitRepos/clinica_data_ci/data_ci/StatisticsSurface/in/caps/ UnitTest t1-freesurfer group_comparison ./GitRepos/clinica_data_ci/data_ci/StatisticsSurface/in/subjects.tsv group --covariates age --covariates sex -np 1 -wd $HOME/WD
    

    Feel free to try it and take a look at the code.

    Feedback is welcome, as always! 😃

    opened by NicolasGensollen 12
  • Empty dataset detected. Clinical data cannot be extracted.


    When I convert ADNI to BIDS with the following command: clinica convert adni-to-bids -m T1 ./dataset/ADNI ./clinical ./BIDS, it says "Empty dataset detected. Clinical data cannot be extracted."

    (CDL) [[email protected] CDL]$ clinica convert adni-to-bids -m T1 ./dataset/ADNI ./clinical ./BIDS
    /home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/clinica/iotools/converters/adni_to_bids/adni_to_bids_cli.py:67: DtypeWarning: Columns (19,20,21,104,105,106) have mixed types.Specify dtype option on import or set low_memory=False.
      force_new_extraction,
    /home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/clinica/iotools/converters/adni_to_bids/adni_to_bids.py:107: DtypeWarning: Columns (19,20,21,104,105,106) have mixed types.Specify dtype option on import or set low_memory=False.
      "ADNI", clinic_specs_path, clinical_data_dir, bids_ids
    Traceback (most recent call last):
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/bin/clinica", line 8, in <module>
        sys.exit(main())
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/clinica/cmdline.py", line 78, in main
        cli()
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/click/core.py", line 1128, in __call__
        return self.main(*args, **kwargs)
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/click/core.py", line 1053, in main
        rv = self.invoke(ctx)
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/click/core.py", line 1659, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/click/core.py", line 1659, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/click/core.py", line 1395, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/click/core.py", line 754, in invoke
        return __callback(*args, **kwargs)
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/clinica/iotools/converters/adni_to_bids/adni_to_bids_cli.py", line 74, in cli
        subjects_list_path=subjects_list,
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/clinica/iotools/converters/adni_to_bids/adni_to_bids.py", line 129, in convert_clinical_data
        bids_ids, clinic_specs_path, clinical_data_dir, bids_subjs_paths
      File "/home/chenyuliu/anaconda3/anaconda3/envs/CDL/lib/python3.7/site-packages/clinica/iotools/converters/adni_to_bids/adni_utils.py", line 722, in create_adni_sessions_dict
        raise ValueError("Empty dataset detected. Clinical data cannot be extracted.")
    ValueError: Empty dataset detected. Clinical data cannot be extracted.

    bug duplicate 
    opened by theguardsgod 12
  • t1-volume pipeline crashing with (Matlab/SPMSegmentation) standard error


    Description: What is happening?


    Pipeline: Which of Clinica's pipelines is concerned?

    • t1-volume

    Converters: Which of Clinica's converters is concerned?

    • adni-2-bids

    ...

    I/O tools: Which of Clinica's tools is concerned? Sorry, I have not found it yet.

    opened by asd567 12
  • [CI] Remove catch error and DWI non regression fast


    • Remove the DWI non-regression "fast" tests stage since there aren't any atm
    • Remove the catchError for stages of the Linux pipeline in order to be consistent with the macOS pipeline. Atm most stages of the Linux pipeline get tagged as unstable for some reason. Now that we've increased the execution frequency of these tests, I'd prefer them to fail fast with a clear error message if they should fail.
    opened by NicolasGensollen 0
  • Ambiguous statistics-volume arguments when not using t1-volume gray matter maps as input


    This is not a bug I guess but more an improvement request.

    I used statistics-volume on t1-volume output but not on gray matter maps. So I set measure_label to "whitematter" and the custom_file argument accordingly. It ran, but the results were stored in a statistics_volume/group_comparison_measure-graymatter folder. This is because t1-volume assumes that the input is going to be gray matter, which I find strange because other probability maps are outputted by the t1-volume pipeline.

    What I suggest would be either:

    • update the t1-volume argument to something like t1-volume-gm
    • or remove the assumption that t1-volume necessarily means gray matter maps, and use the measure_label in the output folder / file names if it is set (and remove this line)
    opened by AudreyDuran 0
  • Factorize `install_nifti`


    Both converters habs2bids and nifd2bids have their own implementation of install_nifti:

    https://github.com/aramis-lab/clinica/blob/83d6d9958b3b3e2ffe82e5fedf09cd70b3109a44/clinica/iotools/converters/habs_to_bids/habs_to_bids.py#L178-L185

    https://github.com/aramis-lab/clinica/blob/83d6d9958b3b3e2ffe82e5fedf09cd70b3109a44/clinica/iotools/converters/nifd_to_bids/nifd_utils.py#L320-L328

    I believe these could be unified and factored out (in bids_utils maybe ?).

    Also, the name of the function could be changed to something more explicit.

    good first issue iotools 
    opened by NicolasGensollen 2
Releases (v0.7.3)
  • v0.7.3(Nov 18, 2022)

    Clinica 0.7.3

    Enhanced

    • [CI] Add caching support for unit tests
    • [CI] Refactor testing tools
    • [Dependencies] Bump lxml from 4.9.0 to 4.9.1
    • [Dependencies] Upgrade joblib to 1.2.0
    • [Dependencies] build: Install nipype up to version 1.8.2
    • [SurfStat] Pure python implementation
    • [IOTools] Fix warnings in merge-tsv
    • [Adni2BIDS] Deal with new data from ADNI3
    • [DWIPreprocessingUsingT1] Optimized disk usage of Pipeline DWIPreprocessingUsingT1
    • [IOTools] Allow setting a custom logging directory via environment variable
    • [IOTools] Center all modalities if no modality is specified
    • [Pipelines] Report uncompliant BIDS subjects

    Added

    • [Converters] Add support for BIDS Readme
    • [IOTools] Extend the create-subjects-sessions iotool to CAPS directories
    • [IOTools] Add pet-linear to checks for missing processing

    Fixed

    • [UKB2BIDS] Add error if data is not found or filtered
    • [DWIPreprocessingUsingT1] Add missing out_file parameter to DWIBiasCorrect
    • [Converters] UKB2BIDS drop directories labeled as unusable
    • [Adni2BIDS] Handle empty lines in create_subs_sess_list
    • [IOTools] Fix vox_to_world_space_method_1
    Source code(tar.gz)
    Source code(zip)
  • v0.7.2(Jul 25, 2022)

    Clinica 0.7.2

    Fixed

    • [Pipelines] Fix bug introduced in previous version with the use of the gunzip interface
    • [DWIConnectome] Use ConstrainedSphericalDeconvolution instead of buggy EstimateFOD

    Enhanced

    • [Adni2Bids] Add compatibility for edge cases introduced in Adni3
    Source code(tar.gz)
    Source code(zip)
  • v0.7.1(Jun 14, 2022)

    Added

    • [Doc] add ukbiobank documentation
    • [DWIConnectome] Fetch meta data directly from MRtrix github repository

    Changed

    • [Core] Enable parallelization when grabbing files

    Fixed

    • [Converters] Fix several warnings
    Source code(tar.gz)
    Source code(zip)
  • v0.7.0(May 13, 2022)

    New

    • [flair-linear] new pipeline to affinely align FLAIR images to the MNI space
    • [Ukbiobank] new converter to convert T1W/T2/DWI/SWI/tfmri/rsfMRI UK Biobank data into the BIDS standard
    Source code(tar.gz)
    Source code(zip)
  • v0.6.0(Apr 5, 2022)

    Changed

    • [PET*] Use trc instead of acq for BIDS compliance
    • [Converters] Remove superfluous use of the acq entity in filenames for BIDS compliance

    Added

    • [adni-to-bids] allow extraction of metadata from xml
    • [CI] Initiate use of unit tests

    Fixed

    • [adni-to-bids] fix edge case for supporting nan session-ids
    Source code(tar.gz)
    Source code(zip)
  • v0.5.6(Mar 4, 2022)

    Fixed

    • [DWIPreprocessUsingT1] Updated call to antsApplyTransform
    • [Utils] Replace deprecated call to pandas append by concat

    Changed

    • Upgrade minimum Python version to 3.8 and upgrade dependencies
    • Set BIDS version to 1.7.0 by default (overwritten for some converters)
    Source code(tar.gz)
    Source code(zip)
  • v0.5.5(Feb 2, 2022)

  • v0.5.4(Jan 28, 2022)

    Added

    • [merge-tsv] Add t1-freesurfer-longitudinal and dwi-dti results

    Changed

    • [t1-freesurfer] Enable t1-freesurfer to run with missing files
    • [all converters] Normalize subprocess calls to dcm2niix

    Fixed

    • [OASIS3/NIFD/HABS] add data_description file to BIDS
    • [DWI-DTI] Remove thresholding for DECFA
    • [adni-to-bids] Tighten check on session-id values
    • [adni-to-bids] Fix bug related to multiple conversions
    Source code(tar.gz)
    Source code(zip)
  • v0.5.3(Nov 25, 2021)

    Added

    • [t1-freesurfer] Add option to t1-freesurfer to project the results of recon-all onto another atlas
    • [CI] Use poetry for dependency management

    Changed

    • [CI] Refactor non-regression tests for easier parallelization
    • [Atlas] Update checksum to make pipelines compatible with fsl 6.0.5

    Fixed

    • [t1-volume*/pet*] Add command line argument yes for turning interactivity off
    • [t1-volume-existing-template] Fix chained invocation
    • [t1-volume*/pet-volume*] Fix default value of --smooth parameter for click compatibility
    • [dwi-connectome] Set --n_tracks's type for click compatibility
    • [dwi-preprocessing*] Change type of initrand and use_cuda to bool
    • [t1-freesurfer-longitudinal] Fix broken pipeline due to typo in code
    • [Documentation] Update OASIS3_to_bids instructions for conversion
    • [StatisticsSurface] Fix type in covariate argument
    • [StatisticsVolume] Fix bug in feature argument
    Source code(tar.gz)
    Source code(zip)
  • v0.5.2(Oct 12, 2021)

    Clinica 0.5.2

    Changed

    [DWI-preprocessing] Rewrite of DWI-preprocessing pipelines using FSL's eddy tool

    Removed

    [DeepLearningPrepareData] Migrated deeplearning-prepare-data to ClinicaDL

    Fixed

    [Oasis3/NIFD] Fix code for backward compatibility with pandas 1.1.x
    [T1-Freesurfer/DWI] Remove Typing for compatibility with Nipype
    [T1Volume] Add command line option to prevent interactive prompts

    Source code(tar.gz)
    Source code(zip)
  • v0.5.1(Sep 22, 2021)

    Added

    • [Oasis3-to-bids] Add converter
    • [Github] Add citation file

    Changed

    • [adni-to-bids] Improve fetching of participants
    • [adni-to-bids] Image path finder more robust
    • [doc] Update the OASIS3 documentation
    • [CI] Code refactoring/cleanup

    Fixed

    • [Atlas] Fix ROI index for left amygdala in AAL2 atlas
    • [adni-to-bids] Prevent crash when files exist
    • [adni-to-bids] Revert behavior to encode Dementia as AD
    • [adni-to-bids] Remove entries with incoherent session names
    • [nifd-to-bids] Several bugfixes and enhancements
    • [iotools] Fix bug on empty dataframe
    • [CI] Fix bash instruction to init conda
    • [doc] Correct DWI-Connectome description paragraph
    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Aug 6, 2021)

    Clinica 0.5 -

    Added

    • [Docs] Add missing documentation on check-missing-processing iotool
    • [deeplearning-prepare-data]: Add option to run the pipeline with ROI as tensor_format option, to extract a region of interest according to a mask.

    Changed

    • [Core] Improve Logging for Clinica
    • [Core] Improve CLI through using click
    • [Core] Nibabel: replace get_data() by the get_fdata() method for dataobj_images
    • [ADNI converter] Optimization of adni2bids clinical data extraction
    • [ADNI converter] Replace xlsx by tsv files for clinical data specification
    • [Converters] Remove dcm2nii fallback
    • [AIBL converter] Remove freesurfer fallback and dependency

    Fixed

    • [Core] fix bug in write_scan_tsv
    • [Docs] Add documentation for check-missing-processing
    • [Docs] Fix several small typos
    • [Docs] Instructions for installing the SPM dependency on macOS Big Sur
    • [CI] Fix several small issues with non-regression tests
    • [CI] Fix typo in Jenkins script
    • [CI] Automatically delete conda environments after PR is merged
    • [Iotools] fix indices in merge-tsv
    • [ML] Fix unresolved reference in SVM pipeline
    • [ML] Fix typo in parameters for SVC pipeline
    Source code(tar.gz)
    Source code(zip)
  • v0.5.0rc1(Jul 23, 2021)

    Clinica 0.5rc1 -

    Added

    • [Docs] Add missing documentation on check-missing-processing iotool

    • [deeplearning-prepare-data]: Add option to run the pipeline with ROI as tensor_format option, to extract a region of interest according to a mask.

    Changed

    • [Core] Improve Logging for Clinica

    • [Core] Improve CLI through using click

    • [Core] Nibabel: replace get_data() by the get_fdata() method for dataobj_images

    • [Adni converter] Optimization of adni2bids clinical data extraction

    • [Adni converter] Replace xlsx by tsv files for clinical data specification

    Fixed

    • [Core] fix bug in write_scan_tsv
    • [Docs] Add documentation for check-missing-processing
    • [Docs] Fix several small typos
    • [Docs] Instructions for installing the SPM dependency on macOS Big Sur
    • [CI] Fix typo in Jenkins script
    • [CI] Automatically delete conda environments after PR is merged
    • [ML] Fix unresolved reference in SVM pipeline
    • [ML] Fix typo in parameters for SVC pipeline
    Source code(tar.gz)
    Source code(zip)
  • v0.4.1(May 12, 2021)

    Added

    • [deeplearning-prepare-data]: Add option to run pipeline with pet-linear outputs

    Changed

    • [Oasis-to-bids]: Remove FSL library dependency for OASIS-to-bids conversion.
    • [Clinica]: Replace exception by warning when CAPS folder not recognized.
    • [AIBL-to-bids]: Center output nifti files of AIBL.
    • [AIBL-to-bids]: Extracts DICOM metadata in JSON files.
    • [ADNI-to-bids]: Update image selection to always consider non-processed (original) images
    • [merge-tsv]: Fetch sub-cortical volumes generated by t1-freesurfer pipelines and JSON files (if exists).

    Fixed

    • [pet-surface]: Verify SPM12 installation when running pipeline
    • [docs]: Update instruction to install some third party software
    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Apr 13, 2021)

    Changes

    Clinica core:

    • [Enh] The source code was completely reformatted using the Black code style.
    • [Enh] Documentation for the project is now versioned (versions from 0.3.8 are publicly available).
    • [Enh] Functions used for multiple pipelines are now mutualized (e.g. the container_from_filename function).
    • [Enh] f-strings are used massively.

    Pipelines:

    Converters:

    • [Enh] Conversion information is added once the converter is run to facilitate traceability.
    • [Enh] Add new keywords available in ADNI3 to the adni-2-bids converter.

    IOtools:

    • [New] check-missing-processing tool allows creating a TSV file containing information about the pipelines executed into a specific CAPS folder.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.8(Dec 30, 2020)

    Added

    • Add option to run deeplearning-prepare-data in output of t1-extension pipeline and also custom pipelines (PR #150).
    • Add Build and publish documentation with CI (PR #146).
    • Add CHANGELOG.md file

    Changed

    • Harmonize PET tracers handling (ML/DL/Stats) (PR #137).
    • Behaviour of ADNI converter: some minor bug fixes and updates w.r.t. ADNI3. E.g., the field age_bl was added (PR #139, #138, #140, #142).

    Fixed

    • Add DataDictionary_NIFD_.xlsx file when using NIFD2BIDS.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.7(Oct 21, 2020)

    Changes

    Clinica Core:

    • [New] Remove CAT12 from dependencies
    • [New] Add checksum for volume atlases
    • [Fix] Remove duplicated lines in AICHA ROI file
    • [New] Integrate clinica.wiki repository into clinica repository: New versions of Clinica will now have their own version of documentation

    Pipelines:

    • [New] t1-freesurfer-longitudinal pipeline: FreeSurfer-based longitudinal processing of T1-weighted MR images [Reuter et al., 2012]. More info in the wiki: http://www.clinica.run/doc/Pipelines/T1_FreeSurfer_Longitudinal/
    • [Change] The fmri-preprocessing pipeline is removed from the Clinica software as we will not actively maintain it. It is now in a separate repository: https://github.com/aramis-lab/clinica_pipeline_fmri_preprocessing

    Converters:

    • [Enh] Improve how dependencies are checked for converters
    • [Fix] Add diagnosis conversion for ADNI3
    • [Fix] Avoid creation of sessions ses-V01 in *_sessions.tsv files
    Source code(tar.gz)
    Source code(zip)
  • v0.3.6(Aug 10, 2020)

    Changes

    Clinica Core:

    • [Change] Pip is the main way to install Clinica. Conda packages are not available for the new versions.
    • [Update] Set the minimal version of Python to 3.7.
    • [Fix] Remove non-breaking spaces.

    Pipelines:

    • [New] Display failed image(s) when running t1-linear pipeline.

    Converters:

    • [Fix] The aibl-2-bids converter now handles new version of clinical data.
    • [Update] The oasis-2-bids converter now uses NiBabel instead of FreeSurfer to convert OASIS dataset.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.5(Aug 10, 2020)

    Changes

    Clinica Core:

    • [CI] Improve Jenkins configuration (Automatic generation of testing reports in order to be displayed in the CI interface; Recreate Python environment if requirements.txt changes)

    Pipelines:

    • [New] deeplearning-prepare-data pipeline: Prepare input data for deep learning with PyTorch. More info on the Wiki: http://www.clinica.run/doc/Pipelines/DeepLearning_PrepareData/
    • [Change] The t1-linear pipeline now crops the image by default. If --uncropped_image is added to the command line, the image is not cropped.
    • [Change] Refactor machine learning modules. Main changes involve the use of the CamelCase convention for classes and the use of dictionaries for parameters.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.4(Apr 17, 2020)

    Changes

    Clinica Core:

    • [Improvement] Remove Clinica dependencies while updating and unfreezing some of them

    Pipelines:

    • [Enh] Improve how template files are downloaded for t1-linear and statistics-volume pipelines
    Source code(tar.gz)
    Source code(zip)
  • v0.3.3(Apr 1, 2020)

    Changes

    Clinica Core:

    • [New] Use PEP8 Speaks tool for PEP8 checks.
    • [Improvement] Improve Jinja2 template.

    Pipelines:

    • [New] t1-linear - Affine registration of T1w images to the MNI standard space. More info on the Wiki: http://www.clinica.run/doc/Pipelines/T1_Linear .
    • [New] statistics-volume and (experimental) statistics-volume-correction - Volume-based mass-univariate analysis with SPM. More info on the Wiki: http://www.clinica.run/doc/Pipelines/Stats_Volume .
    • [Fix] Error for pet-surface pipeline with unzipped files (Issue #68).
    • [Improvement] Improve error message in pet-surface when metadata are missing in JSON file (Issue #69).
    • [Fix] Prevent Clinica from running PET pipelines on 4D volumes (Issue #70).
    • [Fix] Display clear error message when trying to create a DARTEL template with one image (Issue #74).
    • [Fix] Fix bug where CAPS outputs for t1-freesurfer pipeline were not saved (Google Groups).

    Converters:

    • [Fix] Fix wrong path for BIDS FLAIR MR image, missing JSON files for DWI, and error when generating paths for AV45 and Florbetaben PET images for adni-2-bids (Issue #50).
    • [New] Add new fields to clinical data in adni-2-bids e.g. Clinical Dementia Rating Scale (CDR), Montreal Cognitive Assessment (MOCA). The full list is located in the file clinica/iotools/data/clinical_specifications_adni.xlsx.
    • [Improvement] General improvements for adni-2-bids are detailed in PR #55 and merged in PR #64.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.2(Jan 31, 2020)

    Changes

    Clinica Core:

    • [Enh] Harmonize how working directory is handled in Clinica.
    • [New] For any pipeline, automatically delete the working directory if it is not specified in the CLI.
    • [Fix] Catch case when no image is given to a pipeline (e.g. because they were already processed in CAPS directory).
    • [Enh] Harmonize use of pipeline parameters and how they are handled.

    Pipelines:

    • [Change] Pipeline classes associated to t1-volume and t1-volume-existing-template command lines are removed. They are simply replaced by successive calls of t1-volume* sub-pipelines.
    • [New] For t1-freesurfer, display failed images when an error occurs during the execution of the pipeline.
    • [New] For t1-freesurfer, add --overwrite-outputs flag. When used, images already run in the CAPS directory will be rewritten. Otherwise, they will be skipped before the execution of the pipeline.
    • [Fix] Remove mentions of modulation in t1-volume-parcellation arguments (modulated images are always chosen).
    • [Fix] Remove --smooth flag in t1-volume-tissue-segmentation (should not exist).
    • [Enh] (All pipelines) Remove discrepancy between CLI flag and pipeline parameter.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.1(Nov 19, 2019)

    Changes

    Clinica Core:

    • [Fix] Remove the double equal from conda environment file, to be compatible with conda > 4.7
    • [New] Add clinica_file_reader / clinica_group_reader functions to read input files for Clinica
    • [New] Add ux.py to centralize output message for each pipeline (currently used in t1-freesurfer pipeline)
    • [Enh] Improve verifications in check_bids_folder / check_caps_folder
    • [Enh] Improve how Clinica handles SPM/SPM-Standalone on Mac/Linux and how Clinica extracts TPM file.
    • [Enh] Replace duplicated code by calls of check_spm, check_cat12 and check_environment function
    • [Enh] Remove obsolete functions

    Pipelines:

    • [Fix] Check volume locations in t1-freesurfer and t1-volume pipelines
      • For t1-freesurfer, this check is useful if the user plans to use pet-surface afterwards.
      • For t1-volume, this check is important since SPM Segment() expects the centers of T1w images to be close to 0.
    • [Fix] Check relative volume locations in pet-surface and pet-volume pipelines. In these pipelines, the SPM Coregister() method is used to register T1w and PET images. However, if the centers of these images are far from each other, this registration may fail.
    • [Enh] Rewrite build_input_node method for all pipelines
    • [Enh] Harmonize exception handling when missing or duplicated BIDS/CAPS files are present for all pipelines
    • [Enh] Change how to find NaN values in images
    • [Fix] t1-freesurfer: Fix the sending of argument for -raa/--recon_all_args flag

    IOTools:

    • [New] Add center-nifti IOTool to center NIfTI files of a BIDS directory. This tool is mainly used when SPM is not able to segment some T1w images because the centers of these volumes are not aligned with the origin of the world coordinate system. By default, only problematic images are converted. The rest of the images are also copied to the new BIDS directory, but left untouched.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Sep 18, 2019)

    Changes

    Clinica Core:

    • [Improvement] General improvements include better error handling/message when executing Clinica (missing dependency, BIDS/CAPS handling, typos, warning messages).
    • [Fix] After fixing "Assuming non interactive session since isatty found missing" for Clinica 0.2.2, Clinica became too verbose when running a pipeline (Issue #7). An alternative bugfix was used.
    • [New] Add timeout when asking if pipeline starts even though disks are full
    • [New] Add timeout when asking user for n_procs
    • [New] Detect if extra flags are given and if -v is used after clinica {run|iotools|convert}
    • [New] Add warning message if PYTHONPATH is not empty (should avoid this situation)

    Pipelines:

    • [New] dwi-connectome pipeline: Construction of structural connectome with computation of fiber orientation distributions, tractogram and connectome. More info on the Wiki: http://www.clinica.run/doc/Pipelines/DWI_Connectome
    • [Change] t1-freesurfer pipeline was rewritten. It includes a complete refactoring and:
      • The pipeline is now executed in the working directory and outputs without errors from FreeSurfer or Nipype are now copied from the working directory to the CAPS folder;
      • the symbolic links (e.g. fsaverage) are not present in the CAPS folder anymore;
      • If Clinica fails on some images, it detects & displays the subjects who failed.
    • [Change] The TSV files generated by the t1-freesurfer* pipelines are now ordered in two columns (name, value).
    • [Fix] Fix parallelization in t1-volume-tissue-segmentation and t1-volume-existing-template pipelines: if SPM failed on an image, Clinica did not manage to save successful results of other images in the CAPS directory.
    • [Fix] Remove atlas statistics generation in t1-volume-dartel2mni pipeline
    • [Fix] Check the presence of CAT12 in volume pipelines. Otherwise, these pipelines failed when trying to generate statistics for atlases that are part of the CAT12 toolbox.
    • [Fix] Include Matlab files from the ‘pipelines’ folder when using the user installation (i.e. without pip install -e .). Otherwise, the pet-volume pipeline failed.
    • [Fix] Sort diagnosis list to avoid mixing labels when testing for ML modules.
    • [Update] Edit the path used to search for the Tissue Probability Map in SPM Standalone.
    • [Update] Remove several obsolete functions and workflows.
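    As an illustration of the new dwi-connectome pipeline, a minimal invocation could look like the following (a sketch; the pipeline reads its inputs from a CAPS folder, and the exact arguments may differ between Clinica versions):

        clinica run dwi-connectome CAPS_DIRECTORY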

    Converters:

    • [Update] The adni-2-bids converter now includes ADNI3 and new modalities (fMRI and PET data); see the example command after this list
    • [New] nifd-2-bids converter: You can now convert the NIFD dataset (http://4rtni-ftldni.ini.usc.edu) into BIDS. More info on the Wiki: http://www.clinica.run/doc/DatabasesToBIDS/#nifd-to-bids
    • [Fix] Fix age in sessions.tsv file for adni-2-bids (the age was the same across sessions)
    • [New] Compute age from date of birth and date of exam for aibl-2-bids
    • [Improvement] Deduce missing examination dates in aibl-2-bids
    • [Fix] Remove duplicates in oasis-2-bids converter
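    For illustration, converting ADNI data (including the new modalities) could look like the following (a sketch; the exact argument order and any option controlling which modalities are converted may differ between Clinica versions):

        clinica convert adni-to-bids DATASET_DIRECTORY CLINICAL_DATA_DIRECTORY BIDS_DIRECTORY

    where DATASET_DIRECTORY contains the imaging data downloaded from the ADNI website, CLINICAL_DATA_DIRECTORY the clinical CSV files, and BIDS_DIRECTORY the output BIDS dataset.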

    IOTools:

    • [Fix] Fix a missing-modalities bug where FLAIR and T1w images could be miscounted
    • [Improvement] Improve error messages for iotools commands (e.g. when -tsv is not a file, or when the BIDS parameter is not a folder)
  • v0.2.2(Jun 6, 2019)

    Changes

    • [CI] Continuous integration updated with Jenkins following GitHub migration
    • [Update] adni-to-bids converter improvements: multiprocessing, fewer crashes, better display
    • [Update] Test folder for CI is reorganized
    • [Fix] pet-volume: the TSV file needed for partial volume correction can now be used
    • [Fix] A bug that prevented SPM Standalone from being used in pipelines is now fixed
    • [Fix] A bug in the aibl-to-bids converter that could cause Clinica to crash is now fixed
    • [Fix] In t1-volume-existing-template, a better check is carried out on the group name
    • [Fix] The message "Assuming non interactive session since isatty found missing" has disappeared
    • [New] Clinica now handles cross-sectional datasets by automatically proposing to convert them into longitudinal datasets
    • [New] Implementation of a new multi-class classification algorithm
    • [New] Improved information displayed when a crash occurs
    • [New] Warning messages are no longer displayed
    • [New] Python dependencies are now frozen to avoid regressions
  • v0.2.1(May 21, 2019)

    Changes

    • [Fix] Fix a bug in the aibl-to-bids converter.
    • [Improvement] Improvements to the oasis-to-bids converter.
    • [Fix] Include duecredit as a dependency.
  • v0.2.0(May 21, 2019)

    Changes

    Clinica Core:

    • [Update] Clinica now uses Python 3.6.0 and Nipype 1.1.2.
    • [New] A CI framework is set up for Clinica.
    • [Improvement] General improvements include better error handling/message when executing Clinica.
    • [New] Clinica checks if there is sufficient space before running a pipeline.
    • [New] Clinica ensures that multithreading is used before running a pipeline, and warns the user otherwise.
    • [Fix] Relative paths in the command line now work properly.
    • [New] The Clinica project is now PEP8 compliant.

    Pipelines:

    • [New] machinelearning-prepare-spatial-svm pipeline: Prepare input data for spatially regularized SVM [Cuingnet et al, 2013]. More info on the Wiki: http://www.clinica.run/doc/Pipelines/MachineLearning_PrepareSVM/
    • [New] fmri-preprocessing pipeline: fMRI pre-processing with slice timing and motion correction, brain extraction and spatial normalization. More info on the Wiki: http://www.clinica.run/doc/Pipelines/fMRI_Preprocessing/
    • [Change] The dwi-processing-noddi pipeline (and its prerequisite dwi-preprocessing-multi-shell) is removed from the Clinica software as we will not actively maintain it. It is now in a separate repository: https://github.com/aramis-lab/clinica_pipeline_noddi
    • [Fix] t1-freesurfer now handles FreeSurfer 6.0.0.
    • [Change] New pipeline names (see the example commands after this list):
      • t1-freesurfer-cross-sectional becomes t1-freesurfer;
      • t1-spm-* pipelines become t1-volume-*. In particular, t1-spm-full-prep is now t1-volume and t1-volume-existing-dartel is now t1-volume-existing-template;
      • dwi-preprocessing-using-phasediff-fieldmap is now dwi-preprocessing-using-fieldmap;
      • dwi-processing-dti pipeline becomes dwi-dti;
      • pet-preprocessing-volume pipeline becomes pet-volume.
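    For illustration, the renaming changes the command line as follows (a sketch; the GROUP_LABEL argument and the exact set of arguments are shown only as examples and may differ between Clinica versions):

        # Before v0.2.0
        clinica run t1-spm-full-prep BIDS_DIRECTORY CAPS_DIRECTORY GROUP_LABEL
        # From v0.2.0 onwards
        clinica run t1-volume BIDS_DIRECTORY CAPS_DIRECTORY GROUP_LABEL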

    Converters:

    • [New] The ADNI-2-BIDS converter now includes the diffusion MRI and FLAIR sequences.

    IOTools:

    • [Fix] Fix the merge-tsv command line when no path was given to the TSV file.
    • [New] merge-tsv can now parse TSV files from CAPS. Currently, it only supports the t1-volume and pet-volume pipelines.
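    For illustration, merging metadata into a single TSV file could look like the following (a sketch; the option used to point to a CAPS folder is an assumption and may differ between Clinica versions):

        clinica iotools merge-tsv BIDS_DIRECTORY OUTPUT_TSV
        # Hypothetical variant also parsing pipeline outputs from a CAPS folder:
        clinica iotools merge-tsv BIDS_DIRECTORY OUTPUT_TSV -caps CAPS_DIRECTORY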

  • v0.2.0b3(May 21, 2019)

  • v0.2.0b2(May 21, 2019)

  • v0.1.3(May 21, 2019)

Owner
ARAMIS Lab
The Aramis Lab is a joint research team between CNRS, Inria, Inserm and Sorbonne University and belongs to the Paris Brain Institute (ICM).