NeonatalSeizureDetection

Implementation of neonatal seizure detection using EEG signals, intended for deployment on edge devices such as the Raspberry Pi.

Description

Link: https://arxiv.org/abs/2111.15569

Citation:

@misc{nagarajan2021scalable,
      title={Scalable Machine Learning Architecture for Neonatal Seizure Detection on Ultra-Edge Devices}, 
      author={Vishal Nagarajan and Ashwini Muralidharan and Deekshitha Sriraman and Pravin Kumar S},
      year={2021},
      eprint={2111.15569},
      archivePrefix={arXiv},
      primaryClass={eess.SP}
}

This repository contains the implementation of the paper "Scalable Machine Learning Architecture for Neonatal Seizure Detection on Ultra-Edge Devices", accepted at AISP '22: 2nd International Conference on Artificial Intelligence and Signal Processing. Typical neonatal seizure and non-seizure events are illustrated below. Continuous EEG signals are filtered and segmented with varying window lengths of 1, 2, 4, 8, and 16 seconds. The data used for experimentation is the neonatal EEG dataset with seizure annotations available on Zenodo [1].

Figure: a typical seizure event and a non-seizure event.
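
As a rough illustration of the segmentation step described above, the sketch below splits a single-channel EEG trace into the listed window lengths. The 256 Hz sampling rate, 50% overlap, and the function name segment_signal are assumptions for illustration, not necessarily the exact settings used in segmentation.ipynb.

    import numpy as np

    def segment_signal(x, fs, window_s, overlap=0.5):
        """Split a 1-D EEG trace into fixed-length, overlapping windows."""
        win = int(window_s * fs)                      # samples per window
        step = max(1, int(win * (1 - overlap)))       # hop size in samples
        windows = [x[i:i + win] for i in range(0, len(x) - win + 1, step)]
        return np.stack(windows) if windows else np.empty((0, win))

    # Example: 60 s of synthetic single-channel EEG at an assumed 256 Hz
    fs = 256
    eeg = np.random.randn(60 * fs)
    for w in (1, 2, 4, 8, 16):
        print(f"{w:>2} s windows -> {segment_signal(eeg, fs, w).shape}")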

This end-to-end architecture receives the raw EEG signal, processes it, and classifies it as ictal or normal activity. After preprocessing, the signal is passed to a feature extraction engine that extracts the necessary feature set F_d. This is followed by a scalable machine learning (ML) classifier that performs the prediction, as illustrated in the figure below.

Figure: pipeline architecture.
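
To make the stages of this pipeline concrete, here is a minimal sketch with hypothetical helper names (preprocess, extract_features, classify). The features shown (line length, variance, RMS) are illustrative stand-ins for the actual feature set F_d computed in features_final.ipynb, and the classifier is a placeholder for ProtoNN or kNN.

    import numpy as np

    def preprocess(window, fs):
        # Placeholder preprocessing: remove the DC offset.
        # The repository's notebooks apply band-pass filtering at this stage.
        return window - window.mean()

    def extract_features(window):
        # Illustrative features only; the real feature set F_d differs.
        line_length = np.sum(np.abs(np.diff(window)))
        variance = np.var(window)
        rms = np.sqrt(np.mean(window ** 2))
        return np.array([line_length, variance, rms])

    def classify(features, model):
        # `model` is any trained classifier exposing predict(), e.g. ProtoNN or kNN.
        return model.predict(features.reshape(1, -1))[0]   # 1 = ictal, 0 = normal

    # Usage with a stand-in model that always predicts "normal":
    class MajorityModel:
        def predict(self, X):
            return np.zeros(len(X), dtype=int)

    fs = 256                                  # assumed sampling rate
    window = np.random.randn(4 * fs)          # one 4-second window
    label = classify(extract_features(preprocess(window, fs)), MajorityModel())
    print("ictal" if label == 1 else "normal")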

Files description

  1. dataprocessing.ipynb -> Notebook for converting EDF files to CSV files.
  2. filtering.ipynb -> Notebook for filtering the input EEG signals in order to isolate the frequency bands of interest (a minimal filtering sketch follows this list).
  3. segmentation.ipynb -> Notebook for segmenting the input into the appropriate window lengths and overlaps.
  4. features_final.ipynb -> Notebook for extracting relevant features from the segmented data.
  5. protoNN_example.py -> Script for running the ProtoNN model using .npy files.
  6. inference_time.py -> Script for recording and reporting inference times.
  7. knn.ipynb -> Notebook for comparing the results of the ProtoNN and kNN models.
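
The sketch below shows the kind of band-pass filtering performed before segmentation. The 0.5-32 Hz passband, 256 Hz sampling rate, and filter order are assumptions for illustration; the exact cutoffs used in filtering.ipynb may differ.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass_filter(x, fs, low=0.5, high=32.0, order=4):
        """Zero-phase Butterworth band-pass filter for a 1-D EEG trace."""
        nyq = 0.5 * fs
        b, a = butter(order, [low / nyq, high / nyq], btype="band")
        return filtfilt(b, a, x)

    fs = 256                               # assumed sampling rate
    eeg = np.random.randn(10 * fs)         # 10 s of synthetic EEG
    print(bandpass_filter(eeg, fs).shape)  # filtering preserves the shape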

Dependencies

If you are using conda, it is recommended to create and switch to a new environment:

    $ conda create -n myenv
    $ conda activate myenv
    $ conda install pip
    $ pip install -r requirements.txt

If you wish to use a virtual environment:

    $ pip install virtualenv
    $ virtualenv myenv
    $ source myenv/bin/activate
    $ pip install -r requirements.txt

Usage

  1. Clone the ProtoNN package (part of Microsoft's EdgeML repository), the antropy package, and the envelope_derivative_operator package (from the otoolej repository).

  2. Replace the protoNN_example.py in the cloned EdgeML ProtoNN example with the protoNN_example.py provided in this repository.

  3. Prepare the train and test data as .npy files and save them in a DATA_DIR directory (see the data-preparation sketch after this list).

  4. Execute the following command in the terminal after preparing the data files. If you need to save the weights of the ProtoNN object, create an output directory OUT_DIR.

        $ python protoNN_example.py -d DATA_DIR -e 500 -o OUT_DIR
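
If it helps, the snippet below shows one way the train and test .npy files might be laid out before running the command above. The file names (train.npy, test.npy) and the label-in-first-column layout are assumptions based on the EdgeML ProtoNN example; check protoNN_example.py for the format it actually expects.

    import os
    import numpy as np

    DATA_DIR = "data/"                     # directory later passed via -d
    os.makedirs(DATA_DIR, exist_ok=True)

    # X_*: (num_windows, num_features) feature matrices from features_final.ipynb
    # y_*: (num_windows,) labels, 1 = seizure window, 0 = non-seizure window
    X_train = np.random.randn(800, 12)
    y_train = np.random.randint(0, 2, 800)
    X_test = np.random.randn(200, 12)
    y_test = np.random.randint(0, 2, 200)

    # Assumed layout: label in the first column, features in the remaining columns.
    np.save(os.path.join(DATA_DIR, "train.npy"), np.column_stack([y_train, X_train]))
    np.save(os.path.join(DATA_DIR, "test.npy"), np.column_stack([y_test, X_test]))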
    

Authors

Vishal Nagarajan

Ashwini Muralidharan

Deekshitha Sriraman

Acknowledgements

ProtoNN is built using the EdgeML library provided by Microsoft. Features are extracted using the antropy and otoolej repositories.

References

[1] Nathan Stevenson, Karoliina Tapani, Leena Lauronen, & Sampsa Vanhatalo. (2018). A dataset of neonatal EEG recordings with seizures annotations [Data set]. Zenodo. https://doi.org/10.5281/zenodo.1280684.

[2] Gupta, Ankit, et al. "ProtoNN: Compressed and Accurate kNN for Resource-scarce Devices." Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017.
