Fight Recognition from Still Images in the Wild @ WACVW2022, Real-world Surveillance Workshop

Related tags

Deep Learning, SMFI
Overview

Fight Detection from Still Images in the Wild

Detecting fights from still images is an important task, needed to limit the distribution of social media images with fight content and to prevent the negative effects of such violent media items. For this reason, in this study we addressed the problem of fight detection from still images collected from the web and social media, and explored how well one can detect fights from just a single still image.

In this context, a new image dataset for the task of fight recognition from still images, named the Social Media Fight Images (SMFI) dataset, was collected. The dataset samples were gathered from social media (Twitter and Google) and from the NTU CCTV-Fights [1] dataset. Since the main concern is recognizing fight actions in the wild, the dataset consists of real-world scenarios, most of which are spontaneous recordings of fight actions. By using different keywords while crawling the data, regional diversity is also maintained, since social media uploads are mostly regional and users share content in their own language. Some example images from the dataset are given below:

[Figure: example fight and non-fight images from the SMFI dataset]

Both fight and non-fight samples are collected from the same domain, so the non-fight samples are also content likely to be shared on social media. Hard non-fight samples are also included in the dataset, depicting actions that might be misinterpreted as fighting, such as hugging, throwing a ball, and dancing. This reduces dataset bias, so that the trained models focus on the actions and the performers in the scene instead of exploiting other characteristics such as motion blur. The distribution of the dataset samples among classes and sources is given below:

            Twitter   Google   NTU CCTV-Fights   Total
Fight          2247      162               330    2739
Non-fight      2642      146               164    2952
Total          4889      308               494    5691

Due to copyright issues, the dataset images are not shared directly; instead, links to the images / videos are provided. As the dataset samples might be deleted over time by the users or the authorities, the size of the dataset is subject to change.

Dataset Format

The dataset samples are shared through a CSV file with the following columns (a minimal loading sketch is given after the list):

  • Image ID: Unique ID assigned to each image.
  • Class: The class of the image, either fight or nofight.
  • Source: The source of the image or video: twitter_img / twitter_video / google / ntu-cctv.
  • URL: The link to the image / video.
    • For Twitter and Google data, image and video URLs are shared.
    • For the NTU CCTV-Fights data, the path to the original video is shared.
  • Frame number: If the image is extracted from a video, this column gives the index of the frame within the video.
    • For Twitter videos, the frame number is the index (0-9) of the frame among 10 uniformly sampled frames from each video.
    • For NTU CCTV-Fights videos, the frame number is the index (0-N) of the frame among all N frames extracted from each video.
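
Below is a minimal sketch of how the annotation CSV could be loaded and split by source using pandas. The column names ("image_id", "class", "source", "url", "frame_number") and the file name smfi_dataset.csv are assumptions for illustration; adjust them to the header of the released CSV.

```python
# Minimal loading sketch (assumed column names and file name; adjust to the
# actual header of the released CSV).
import pandas as pd

df = pd.read_csv("smfi_dataset.csv")

# Per-class / per-source counts, comparable to the distribution table above.
print(df.groupby(["class", "source"]).size())

# Rows that point directly to an image URL vs. rows that reference a video frame.
direct_images = df[df["source"].isin(["twitter_img", "google"])]
video_frames = df[df["source"].isin(["twitter_video", "ntu-cctv"])]
```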

In order to retrieve the full dataset, you should first download the NTU CCTV-Fights dataset [1] from its original source.
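
Once the NTU CCTV-Fights videos (and any downloaded Twitter videos) are available locally, a referenced frame can be recovered, for example with OpenCV. The sketch below is illustrative only: the exact uniform-sampling convention for Twitter videos is not specified in this README, so the np.linspace mapping is an assumption, and the helper name and paths are hypothetical.

```python
# Illustrative frame-retrieval sketch (not part of the official release).
import cv2
import numpy as np

def load_dataset_frame(video_path, frame_number, source):
    """Return the frame referenced by a CSV row, given a local copy of the video.

    For twitter_video rows, frame_number is an index (0-9) into 10 uniformly
    sampled frames (sampling convention assumed here); for ntu-cctv rows it is
    taken as the absolute frame index.
    """
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

    if source == "twitter_video":
        # Map the 0-9 index to an absolute position via uniform sampling.
        absolute_index = int(np.linspace(0, total - 1, 10)[frame_number])
    else:  # "ntu-cctv": the CSV already stores the absolute frame index
        absolute_index = int(frame_number)

    cap.set(cv2.CAP_PROP_POS_FRAMES, absolute_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"Could not read frame {absolute_index} from {video_path}")
    return frame
```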

Citation

TBA

References

[1] Mauricio Perez, Alex C. Kot, Anderson Rocha, "Detection of Real-world Fights in Surveillance Videos", in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019.

Owner
Şeymanur Aktı