baller2vec++

A look-ahead multi-entity Transformer for modeling coordinated agents.

Overview

This is the repository for the paper:

Michael A. Alcorn and Anh Nguyen. baller2vec++: A Look-Ahead Multi-Entity Transformer For Modeling Coordinated Agents. arXiv:2104.11980. 2021.

To learn statistically dependent agent trajectories, baller2vec++ uses a specially designed self-attention mask to simultaneously process three different sets of feature vectors in a single Transformer. The three sets consist of location feature vectors like those found in baller2vec, look-ahead trajectory feature vectors, and starting location feature vectors. This design allows the model to integrate information about concurrent agent trajectories through multiple Transformer layers without seeing the future (in contrast to baller2vec).
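As a rough illustration of the general mechanism (a toy sketch only, not the mask from the paper), PyTorch's Transformer modules accept a boolean self-attention mask in which True entries block attention, so a custom pattern of allowed and blocked positions can be injected into a standard encoder:

import torch
import torch.nn as nn

# Toy sketch: a plain causal boolean mask over a short token sequence.
# True entries mark pairs that may NOT attend to each other; baller2vec++'s
# actual mask would additionally expose look-ahead and starting-location
# tokens of preceding agents, i.e., selectively unmask entries here.
seq_len, d_model = 8, 32
mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

x = torch.randn(1, seq_len, d_model)  # (batch, sequence, features)
out = encoder(x, mask=mask)  # the mask controls which positions see which
print(out.shape)  # torch.Size([1, 8, 32])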
[Animation: a training sample, together with the corresponding sequences generated by baller2vec and baller2vec++.]

When trained on a dataset of perfectly coordinated agent trajectories, the trajectories generated by baller2vec are completely uncoordinated while the trajectories generated by baller2vec++ are perfectly coordinated.

[Animations: ground truth trajectories alongside three trajectories sampled from baller2vec (top row) and three sampled from baller2vec++ (bottom row).]

While baller2vec occasionally generates realistic trajectories for the red defender, it also makes egregious errors. In contrast, the trajectories generated by baller2vec++ often seem plausible. The red player was placed last in the player order when generating his trajectory with baller2vec++.

Citation

If you use this code for your own research, please cite:

@article{alcorn2021baller2vec,
   title={\texttt{baller2vec++}: A Look-Ahead Multi-Entity Transformer For Modeling Coordinated Agents},
   author={Alcorn, Michael A. and Nguyen, Anh},
   journal={arXiv preprint arXiv:2104.11980},
   year={2021}
}

Training baller2vec++

Setting up .basketball_profile

After you've cloned the repository to your desired location, create a file called .basketball_profile in your home directory:

nano ~/.basketball_profile

and copy in the contents of the .basketball_profile template from the repository, replacing each of the variable values with paths relevant to your environment. Next, add the following line to the end of your ~/.bashrc:

source ~/.basketball_profile

and either log out and log back in again or run:

source ~/.bashrc

You should now be able to copy and paste all of the commands in the various instructions sections. For example:

echo ${PROJECT_DIR}

should print the path you set for PROJECT_DIR in .basketball_profile.
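
For reference, a filled-in .basketball_profile might look something like this (the paths here are hypothetical, and the exact set of variables should match the template in the repository):

export PROJECT_DIR=/home/you/baller2vecplusplus
export DATA_DIR=/home/you/baller2vec_data
export GAMES_DIR=${DATA_DIR}/games
export EXPERIMENTS_DIR=/home/you/experiments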

Installing the necessary Python packages

cd ${PROJECT_DIR}
pip3 install --upgrade -r requirements.txt

Organizing the play-by-play and tracking data

  1. Copy events.zip (which I acquired from here [mirror here] using https://downgit.github.io) to the DATA_DIR directory and unzip it:
mkdir -p ${DATA_DIR}
cp ${PROJECT_DIR}/events.zip ${DATA_DIR}
cd ${DATA_DIR}
unzip -q events.zip
rm events.zip

Descriptions for the various EVENTMSGTYPEs can be found here (mirror here).

  2. Clone the tracking data from here (mirror here) to the DATA_DIR directory:
cd ${DATA_DIR}
git clone git@github.com:linouk23/NBA-Player-Movements.git

A description of the tracking data can be found here.

Generating the training data

cd ${PROJECT_DIR}
nohup python3 generate_game_numpy_arrays.py > data.log &

You can monitor its progress with:

top

or:

ls -U ${GAMES_DIR} | wc -l

There should be 1,262 NumPy arrays (corresponding to 631 X/y pairs) when finished.
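
As an optional sanity check (a hypothetical snippet, since the exact file names and array shapes depend on the generation script), you can inspect one of the generated arrays with NumPy:

import os
import numpy as np

games_dir = os.environ["GAMES_DIR"]
fname = sorted(os.listdir(games_dir))[0]
arr = np.load(os.path.join(games_dir, fname), allow_pickle=True)
print(fname, arr.shape, arr.dtype)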

Running the training script

Run (or copy and paste) the following script, editing the variables as appropriate.

#!/usr/bin/env bash

# Use the current timestamp as a unique job ID.
JOB=$(date +%Y%m%d%H%M%S)

echo "train:" >> ${JOB}.yaml
task=basketball  # "basketball" or "toy".
echo "  task: ${task}" >> ${JOB}.yaml
if [[ "$task" = "basketball" ]]
then

    echo "  train_valid_prop: 0.95" >> ${JOB}.yaml
    echo "  train_prop: 0.95" >> ${JOB}.yaml
    echo "  train_samples_per_epoch: 20000" >> ${JOB}.yaml
    echo "  valid_samples: 1000" >> ${JOB}.yaml
    echo "  workers: 10" >> ${JOB}.yaml
    echo "  learning_rate: 1.0e-5" >> ${JOB}.yaml
    echo "  patience: 20" >> ${JOB}.yaml

    echo "dataset:" >> ${JOB}.yaml
    echo "  hz: 5" >> ${JOB}.yaml
    echo "  secs: 4.2" >> ${JOB}.yaml
    echo "  player_traj_n: 11" >> ${JOB}.yaml
    echo "  max_player_move: 4.5" >> ${JOB}.yaml

    echo "model:" >> ${JOB}.yaml
    echo "  embedding_dim: 20" >> ${JOB}.yaml
    echo "  sigmoid: none" >> ${JOB}.yaml
    echo "  mlp_layers: [128, 256, 512]" >> ${JOB}.yaml
    echo "  nhead: 8" >> ${JOB}.yaml
    echo "  dim_feedforward: 2048" >> ${JOB}.yaml
    echo "  num_layers: 6" >> ${JOB}.yaml
    echo "  dropout: 0.0" >> ${JOB}.yaml
    echo "  b2v: False" >> ${JOB}.yaml

else

    echo "  workers: 10" >> ${JOB}.yaml
    echo "  learning_rate: 1.0e-4" >> ${JOB}.yaml

    echo "model:" >> ${JOB}.yaml
    echo "  embedding_dim: 20" >> ${JOB}.yaml
    echo "  sigmoid: none" >> ${JOB}.yaml
    echo "  mlp_layers: [64, 128]" >> ${JOB}.yaml
    echo "  nhead: 4" >> ${JOB}.yaml
    echo "  dim_feedforward: 512" >> ${JOB}.yaml
    echo "  num_layers: 2" >> ${JOB}.yaml
    echo "  dropout: 0.0" >> ${JOB}.yaml
    echo "  b2v: True" >> ${JOB}.yaml

fi

# Save experiment settings.
mkdir -p ${EXPERIMENTS_DIR}/${JOB}
mv ${JOB}.yaml ${EXPERIMENTS_DIR}/${JOB}/

# Index of the GPU to train on.
gpu=0
cd ${PROJECT_DIR}
nohup python3 train_baller2vecplusplus.py ${JOB} ${gpu} > ${EXPERIMENTS_DIR}/${JOB}/train.log &
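
You can then follow the training progress (where ${JOB} is the timestamp generated by the script) with:

tail -f ${EXPERIMENTS_DIR}/${JOB}/train.log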