Measures input lag without dedicated hardware, performing motion detection on recorded or live video

Overview

What is InputLagTimer?

This tool measures input lag by analyzing video, from a webcam or a video file, in which both the game controller and the game screen are visible.

Here's how it looks in action:

Usage demo

Even though the typical use case is game latency, InputLagTimer can measure any latency as long as it's captured on video. For example, if you point a camera at both your car key and its door lock, you can measure how fast that remote unlocks your car.

How does it measure input lag?

You first mark two rectangles in the video you provide:

  • 🟦 Input rectangle (blue): where the input motion happens, such as a gamepad stick.
  • 🟪 Output rectangle (purple): where the response will be visible, such as the middle left of your TV screen, where the front wheels of your car simulator can be seen turning.

InputLagTimer will detect motion on the input area, and time how long it takes to detect motion on the output area.

The tool should work for latencies of up to 700 ms; if you need to measure slower events, the limit can be trivially edited in the code.
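For reference, here is a minimal sketch of that principle, assuming OpenCV frame differencing: motion is the mean absolute pixel difference between consecutive frames inside each rectangle, timing starts when the input rectangle passes its threshold and stops when the output rectangle passes its own. The video name, rectangle coordinates, thresholds and safety limit below are illustrative assumptions, not InputLagTimer's actual code.

```python
# Minimal sketch of the measurement principle (illustrative values throughout).
import cv2

VIDEO = "capture.mp4"              # hypothetical recording
INPUT_ROI = (50, 300, 120, 120)    # x, y, w, h of the 🟦 input rectangle
OUTPUT_ROI = (400, 100, 200, 150)  # x, y, w, h of the 🟪 output rectangle
INPUT_THRESHOLD = 8.0              # mean absolute pixel difference per frame
OUTPUT_THRESHOLD = 8.0
MAX_LATENCY_MS = 700               # safety timeout, like the limit mentioned above

def roi_motion(prev_gray, gray, roi):
    """Mean absolute difference between consecutive frames inside a rectangle."""
    x, y, w, h = roi
    return cv2.absdiff(prev_gray[y:y+h, x:x+w], gray[y:y+h, x:x+w]).mean()

cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS) or 60.0
ok, frame = cap.read()
if not ok:
    raise SystemExit(f"Could not read {VIDEO}")
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
start_frame = None
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if start_frame is None:
        # Waiting for input motion: start timing when the 🟦 rectangle moves.
        if roi_motion(prev, gray, INPUT_ROI) > INPUT_THRESHOLD:
            start_frame = frame_idx
    else:
        elapsed_ms = (frame_idx - start_frame) * 1000.0 / fps
        if roi_motion(prev, gray, OUTPUT_ROI) > OUTPUT_THRESHOLD:
            print(f"Latency: {elapsed_ms:.1f} ms")   # output motion detected
            start_frame = None                       # ready for the next measurement
        elif elapsed_ms > MAX_LATENCY_MS:
            start_frame = None                       # timed out, discard this attempt
    prev = gray

cap.release()
```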

How to use it:

  1. Download InputLagTimer (some Windows binaries are available on GitHub if you prefer that)
  2. Open InputLagTimer:
    • Plug in your webcam, then run the program.
    • Or drag and drop your video file onto the program.
    • Or, from the command line, type InputLagTimer 2 to open the 3rd webcam, or InputLagTimer file.mp4 to open a video file (see the sketch after these steps).
  3. Press S then follow screen instructions to select the 🟦 input and 🟪 output rectangles.
  4. Observe the input and output motion bars at the top, and press 1/2 and 3/4 to adjust the motion detection thresholds (white indicator). Latency timing will start when the input motion passes the threshold, and stop when the output motion does.

Note: a .cfg file will be created for each video, allowing you to reproduce the same latency analysis later.
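As a rough illustration of the source selection described in step 2, a numeric command-line argument could be treated as a webcam index and anything else as a video file path. This is only a sketch of that behaviour, not the tool's actual argument handling.

```python
# Sketch of the source selection described in step 2 (not the tool's actual code):
# a numeric argument is treated as a webcam index, anything else as a file path.
import sys
import cv2

arg = sys.argv[1] if len(sys.argv) > 1 else "0"   # default to the first webcam
source = int(arg) if arg.isdigit() else arg       # "2" -> 3rd webcam, "file.mp4" -> file
cap = cv2.VideoCapture(source)
if not cap.isOpened():
    raise SystemExit(f"Could not open source: {arg}")
print("Opened", "webcam index" if isinstance(source, int) else "video file", arg)
```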

Tips and gotchas

  • Use a tripod to hold the camera. InputLagTimer is based on motion detection, so hand-held footage is doomed to produce constant false positives.
  • Disable gamepad vibration and rest the gamepad on a table (unless you want to measure vibration latency!): in other words, reduce unnecessary motion in both the input and output rectangles.
  • Select the 🟦 input and 🟪 output rectangles as accurately as possible. E.g. to measure keyboard key travel time, draw an input rectangle including the entire key height. If you don't want to include key travel latency, draw the input rectangle as close to the key activation point as possible.
  • If recording under certain artificial lights, enable the camera's anti-flicker feature when available (press C in InputLagTimer when using a webcam), or choose a recording framerate different from the powerline frequency used in your country (often 50 Hz or 60 Hz). This removes video flicker, vastly improving motion detection.
  • Prefer a higher recording framerate; this provides finer-grained latency measurements (a quick calculation follows at the end of this section):
    • Some phones and actioncams can reach hundreds of FPS.
    • Recording equipment may not reach its advertised framerate if the scene is not bright enough. If in doubt, add more lighting.
  • If your camera cannot reach the requested framerate (e.g. it only manages to capture 120FPS out of 240FPS, due to lack of light), consider recording directly at the reachable framerate. This eliminates the useless filler frames your camera was forced to duplicate, making it easier to tune the motion detection thresholds in InputLagTimer.
  • Prefer global shutter over rolling shutter cameras. Rolling shutter can slightly skew latency measurements, as one corner of the image is recorded earlier than the opposite corner.

Rolling Shutter example

(source: Axel1963 - CC BY-SA 3.0)

  • Screens normally refresh pixels at the top earlier than pixels at the bottom (or left before right, etc.). The location of the 🟦 input and 🟪 output rectangles on the screen can therefore slightly skew latency measurements.
  • Screen pixels can take more or less time to update, depending on:
    • Pixel color. E.g. white-to-black response time might be longer than black-to-white.
    • Panel type. E.g. OLED panels will normally be much quicker than LCD panels.
    • Screen configuration. E.g. enabling 'overdrive', enabling 'game mode', etc.
  • Press A (Advanced mode) to see more keys and additional information.

Advanced Mode screenshot
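To make the framerate tip above concrete: latency is only resolved to whole frames, so the recording framerate directly sets the measurement granularity.

```python
# Measurement granularity at common recording framerates: latency is resolved
# to the nearest frame, so one frame period is the best possible precision.
for fps in (30, 60, 120, 240):
    print(f"{fps} FPS -> one frame every {1000 / fps:.1f} ms")
# 30 FPS -> one frame every 33.3 ms
# 60 FPS -> one frame every 16.7 ms
# 120 FPS -> one frame every 8.3 ms
# 240 FPS -> one frame every 4.2 ms
```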

Dependencies

To run the EXE, you don't need anything else. So move along, nothing to see in this section :)

To run the Python code directly, you'll need OpenCV for Python, NumPy, and whichever Python interpreter you prefer.

To build the binary (with compile.py), you'll need PyInstaller.

Credits and licenses

InputLagTimer software:

Copyright 2021 Bruno Gonzalez Campo | [email protected] | @stenyak

Distributed under MIT license (see license.txt)

InputLagTimer icon:

Copyright 2021 Bruno Gonzalez Campo | [email protected] | @stenyak

Distributed under CC BY 3.0 license (see license_icon.txt)

Icon derived from:

Releases (v1.2)
  • v1.2 (Mar 29, 2022)

    • Display summary of measured latencies: min/avg/max latencies and a histogram
    • Added display with the current framerate
    • Fixed incorrect timing when a webcam dropped below the advertised framerate
    • The 'a' key will now cycle between varying amounts of detail (more detail can lead to lower framerates)
    • Add CC license links on readme
    • Minor cleanups here and there

    Full Changelog: https://github.com/stenyak/inputLagTimer/compare/v1.1...v1.2

    Source code (tar.gz)
    Source code (zip)
    InputLagTimer.exe (50.81 MB)
  • v1.1 (Jan 8, 2022)

    • Fix safety timeout kicking in too soon if using a custom maxLatency
    • Fix first webcam being ignored when running the program without arguments
    • Rename compiled file from camelCase to CamelCase

    Full Changelog: https://github.com/stenyak/inputLagTimer/compare/v1.0...v1.1

    Source code (tar.gz)
    Source code (zip)
    InputLagTimer.exe (49.22 MB)
  • v1.0 (Jan 8, 2022)
