Evaluating deep transfer learning for whole-brain cognitive decoding

Overview

This README file contains the following sections:

  • Project description
  • Repository organization
  • Installation
  • Packages (HCPrep, DeepLight, Modules)
  • Basic usage

Project description

This project provides two main packages (see src/) that allow you to apply DeepLight (see below) to the task-fMRI data of the Human Connectome Project (HCP):

  • deeplight is a simple python package that provides easy access to two pre-trained DeepLight architectures (2D-DeepLight and 3D-DeepLight; see below), designed for cognitive decoding of whole-brain fMRI data. Both architectures were pre-trained with the fMRI data of 400 individuals in six of the seven HCP experimental tasks (all tasks except the working memory task, which we held out for testing purposes; click here for details on the HCP data).
  • hcprep is a simple python package that makes it easy to download the HCP task-fMRI data in a preprocessed format via the Amazon Web Services (AWS) S3 storage system and to transform these data into the TFRecords data format.

Repository organization

├── poetry.lock         <- Locked versions of the project dependencies
├── pyproject.toml      <- Project metadata and dependency specifications
├── README.md           <- This README file
├── .gitignore          <- Specifies files that git should ignore
|
├── scripts/
|    ├── decode.py      <- An example of how to decode fMRI data with `deeplight`
|    ├── download.py    <- An example of how to download the preprocessed HCP fMRI data with `hcprep`
|    ├── interpret.py   <- An example of how to interpret fMRI data with `deeplight`
|    ├── preprocess.sh  <- An example of how to preprocess fMRI data with `hcprep`
|    └── train.py       <- An example of how to train `deeplight` on data prepared with `hcprep`
|
└── src/
|    ├── deeplight/
|    |    └──           <- `deeplight` package
|    ├── hcprep/
|    |    └──           <- `hcprep` package
|    ├── modules/
|    |    └──           <- `modules` package
|    └── setup.py       <- Makes `deeplight`, `hcprep`, and `modules` pip-installable (pip install -e .)

Installation

deeplight and hcprep are written for python 3.6 and require a working python environment on your computer (we generally recommend pyenv for python version management).

First, clone and switch to this repository:

git clone https://github.com/athms/evaluating-deeplight-transfer.git
cd evaluating-deeplight-transfer

This project uses python poetry for dependency management. To install all required dependencies with poetry, run:

poetry install

To then install deeplight, hcprep, and modules in your poetry environment, run:

cd src/
poetry run pip3 install -e .
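
To verify that the installation succeeded, you can try importing the three packages in your poetry environment (a quick sanity check):

poetry run python -c "import deeplight, hcprep, modules"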

Packages

HCPrep

hcprep stores the HCP task-fMRI data locally in the Brain Imaging Data Structure (BIDS) format.

To make the fMRI data usable for DL analyses with TensorFlow, hcprep can clean the downloaded fMRI data and store them in the TFRecords data format.

Getting data access: To download the HCP task-fMRI data, you will need AWS access to the HCP public data directory. Detailed instructions can be found here. Make sure to store the ACCESS_KEY and SECRET_KEY safely; they are required to access the data via the AWS S3 storage system.

AWS configuration: Set up your local AWS client (as described here) and add the following profile to '~/.aws/config':

[profile hcp]
region=eu-central-1

Choose the region based on your location.
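
To check that your credentials and profile work, you can list the HCP bucket with the AWS CLI (a quick sanity check; the bucket name hcp-openaccess and the HCP_1200 prefix are assumptions here, see the HCP documentation for the authoritative paths):

aws s3 ls s3://hcp-openaccess/HCP_1200/ --profile hcp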

TFR data storage: hcprep stores the preprocessed fMRI data locally in TFRecords format, with one entry for each input fMRI volume of the data, each containing the following features:

  • volume: the flattened voxel activations of the 91x109x91 volume (flattened over the X (91), Y (109), and Z (91) dimensions)
  • task_id, subject_id, run_id: numerical id of task, subject, and run
  • tr: TR of the volume in the underlying experimental task
  • state: numerical label of the cognitive state associated with the volume within its task (e.g., [0,1,2,3] for the four cognitive states of the working memory task)
  • onehot: one-hot encoding of the state across all experimental tasks that are used for training (e.g., there are 20 cognitive states across the seven experimental tasks of the HCP; the four cognitive states of the working memory task could thus be mapped to the last four positions of the one-hot encoding, with indices [16: 0, 17: 1, 18: 2, 19: 3])
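
For illustration, a TFRecords entry with these features could be parsed roughly as follows (a minimal sketch; the exact feature names, dtypes, and shapes used for serialization are assumptions here, see the hcprep source for the authoritative reading code):

import tensorflow as tf

N_VOXELS = 91 * 109 * 91  # flattened X * Y * Z dimensions
N_STATES = 20             # cognitive states across all HCP experimental tasks

# hypothetical feature specification; names follow the list above,
# dtypes are assumptions
feature_spec = {
    'volume': tf.io.FixedLenFeature([N_VOXELS], tf.float32),
    'task_id': tf.io.FixedLenFeature([], tf.int64),
    'subject_id': tf.io.FixedLenFeature([], tf.int64),
    'run_id': tf.io.FixedLenFeature([], tf.int64),
    'tr': tf.io.FixedLenFeature([], tf.int64),
    'state': tf.io.FixedLenFeature([], tf.int64),
    'onehot': tf.io.FixedLenFeature([N_STATES], tf.float32),
}

def parse_example(serialized):
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    # restore the 3D volume shape from the flattened voxel vector
    volume = tf.reshape(parsed['volume'], (91, 109, 91))
    return volume, parsed['onehot']

dataset = tf.data.TFRecordDataset(['example.tfrecords']).map(parse_example)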

Note that hcprep also provides basic descriptive information about the HCP task-fMRI data in info.basics:

hcp_info = hcprep.info.basics()

basics contains the following information:

  • tasks: names of all HCP experimental tasks ('EMOTION', 'GAMBLING', 'LANGUAGE', 'MOTOR', 'RELATIONAL', 'SOCIAL', 'WM')
  • subjects: dictionary containing 1000 subject IDs for each task
  • runs: run IDs ('LR', 'RL')
  • t_r: repetition time of the fMRI data in seconds (0.72)
  • states_per_task: dictionary containing the label of each cognitive state of each task
  • onehot_idx_per_task: indices used to assign the cognitive states of each task to the one-hot encoding of the TFR-files (see onehot above)

For further details on the experimental tasks and their cognitive states, click here.
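
For example, this information could be used as follows (a minimal sketch, assuming the attribute-style access shown above; the exact return types are assumptions):

import hcprep

hcp_info = hcprep.info.basics()

# print the cognitive states of each HCP experimental task
for task in hcp_info.tasks:
    print(task, hcp_info.states_per_task[task])

print(hcp_info.runs)  # run IDs: 'LR', 'RL'
print(hcp_info.t_r)   # repetition time in seconds: 0.72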

DeepLight

deeplight implements two DeepLight architectures ("2D" and "3D"), which can be accessed as deeplight.two (2D) and deeplight.three (3D).

Importantly, both DeepLight architectures operate on the level of individual whole-brain fMRI volumes (e.g., individual TRs).

2D-DeepLight: A whole-brain fMRI volume is first sliced into a sequence of axial 2D-images (from bottom-to-top). These images are passed to a DL model, consisting of a 2D-convolutional feature extractor as well as an LSTM unit and output layer. First, the 2D-convolutional feature extractor reduces the dimensionality of the axial brain images through a sequence of 2D-convolution layers. The resulting sequence of higher-level slice representations is then fed to a bi-directional LSTM, modeling the spatial dependencies of brain activity within and across brain slices. Lastly, 2D-DeepLight outputs a decoding decision about the cognitive state underlying the fMRI volume, through a softmax output layer with one output unit per cognitive state in the data.

3D-DeepLight: The whole-brain fMRI volume is passed to a 3D-convolutional feature extractor, consisting of a sequence of twelve 3D-convolution layers. The 3D-convolutional feature extractor directly projects the fMRI volume into a higher-level, but lower dimensional, representation of whole-brain activity, without the need for an LSTM. To make a decoding decision, 3D-DeepLight utilizes an output layer that is composed of a 1D-convolution and global average pooling layer as well as a softmax activation function. The 1D-convolution layer maps the higher-level representation of whole-brain activity of the 3D-convolutional feature extractor to one representation for each cognitive state in the data, while the global average pooling layer and softmax function then reduce these to a decoding decision.
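
To make this description concrete, a rough Keras sketch of 3D-DeepLight could look as follows (illustrative only; all filter counts, kernel sizes, and strides are assumptions, not the pre-trained configuration):

import tensorflow as tf
from tensorflow.keras import layers

n_states = 20  # one output unit per cognitive state in the data

def build_3d_deeplight_sketch():
    inputs = tf.keras.Input(shape=(91, 109, 91, 1))
    x = inputs
    # 3D-convolutional feature extractor: twelve 3D-convolution layers
    # (filter counts and strides here are placeholders, not the trained values)
    for i in range(12):
        x = layers.Conv3D(filters=min(8 * 2 ** (i // 3), 64),
                          kernel_size=3,
                          strides=2 if i % 3 == 2 else 1,
                          padding='same',
                          activation='relu')(x)
    # collapse the spatial dimensions into a 1D sequence of features
    x = layers.Reshape((-1, x.shape[-1]))(x)
    # 1D-convolution maps the representation to one channel per cognitive state
    x = layers.Conv1D(filters=n_states, kernel_size=1)(x)
    # global average pooling and softmax reduce this to a decoding decision
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Softmax()(x)
    return tf.keras.Model(inputs, outputs)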

To interpret the decoding decisions of the two DeepLight architectures, relating these decisions back to the fMRI data, deeplight makes use of the layer-wise relevance propagation (LRP) technique. LRP decomposes individual decoding decisions of a DL model into the contributions of the individual input features (here, individual voxel activities) to these decisions.
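
To illustrate the general idea behind LRP (independent of deeplight's specific implementation), the widely used epsilon-rule redistributes the relevance of a layer's outputs to its inputs in proportion to their contributions, layer by layer, down to the input features:

import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    # relevance redistribution for one dense layer z = a @ W + b:
    # each input receives relevance proportional to its contribution to z
    z = a @ W + b
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratio
    return a * (s @ W.T)  # relevance of each input feature

# toy example: 4 input features (e.g., voxels), 2 output states
rng = np.random.default_rng(0)
a = rng.normal(size=4)
W = rng.normal(size=(4, 2))
b = np.zeros(2)
R_out = np.array([1.0, 0.0])  # all relevance placed on the decoded state
print(lrp_epsilon_dense(a, W, b, R_out))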

Both deeplight architectures implement basic fit, decode, and interpret methods, alongside other functionality. For details on how to {train, decode, interpret} with deeplight, see scripts/.
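
A hypothetical end-to-end use could then look as follows (the constructor and method signatures in this sketch are assumptions for illustration; see scripts/decode.py and scripts/interpret.py for the authoritative examples):

import numpy as np
import deeplight

# a dummy batch of one whole-brain fMRI volume (placeholder data)
volumes = np.zeros((1, 91, 109, 91))

# load the pre-trained 3D-DeepLight architecture (hypothetical constructor)
model = deeplight.three.model(pretrained=True)

# decode the cognitive state underlying each volume
predictions = model.decode(volumes)

# relate the decoding decisions back to individual voxels via LRP
relevance_maps = model.interpret(volumes)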

For further methodological details regarding the two DeepLight architectures, see the upcoming preprint.

Note that we currently recommend running any applications of interpret with 2D-DeepLight on CPU instead of GPU, due to its high memory demand (assuming that your available CPU memory is larger than your available GPU memory). This switch can be made by setting the environment variable CUDA_VISIBLE_DEVICES to an empty string (export CUDA_VISIBLE_DEVICES=""). We are currently working on reducing the overall memory demand of interpret with 2D-DeepLight and will push a code update soon.

Modules

modules is a fork of the modules module from interprettensor, which deeplight uses to build the 2D-DeepLight architecture. Note that modules is licensed differently from the other python packages in this repository (see modules/LICENSE).

Basic usage

You can find a set of example python scripts in scripts/, which illustrate how to download and preprocess task-fMRI data from the Human Connectome Project with hcprep and how to {train on, decode, interpret} fMRI data with the two DeepLight architectures of deeplight.

You can run individual scripts in your poetry environment with:

cd scripts/
poetry run python <SCRIPT NAME>
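
For example, to download the preprocessed HCP data and then train on them (each script may expose further command-line options; see the script headers for details):

poetry run python download.py
poetry run python train.py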