A graph adversarial learning toolbox based on PyTorch and DGL.

Overview

GraphWar: Arms Race in Graph Adversarial Learning

NOTE: GraphWar is still in the early stages and the API will likely continue to change.

πŸš€ Installation

Please make sure you have installed PyTorch and the Deep Graph Library (DGL).

# Coming soon
pip install -U graphwar

or

# Recommended
git clone https://github.com/EdisonLeeeee/GraphWar.git && cd GraphWar
pip install -e . --verbose

where -e enables "editable" mode, so you don't have to reinstall the package every time you make changes.

Get Started

Assume that you have a dgl.DGLGraph instance g that describes your dataset.

A simple targeted attack

from graphwar.attack.targeted import RandomAttack
attacker = RandomAttack(g)
attacker.attack(1, num_budgets=3) # attack target node `1` with a budget of `3` edge flips
attacked_g = attacker.g()
edge_flips = attacker.edge_flips()
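
To see what changed, you can compare the original and perturbed graphs. A minimal sanity-check sketch, assuming g is a dgl.DGLGraph and attacked_g is the perturbed copy returned above:

# Hedged sanity check on the attack above
print(f"edges before: {g.num_edges()}, after: {attacked_g.num_edges()}")
print(edge_flips)  # the edge pairs the attacker chose to flip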

A simple untargeted attack

from graphwar.attack.untargeted import RandomAttack
attacker = RandomAttack(g)
attacker.attack(num_budgets=0.05) # perturb 5% of the edges
attacked_g = attacker.g()
edge_flips = attacker.edge_flips()
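
A float num_budgets is interpreted as a fraction of the graph's edges. A minimal sketch of the conversion (the exact rounding rule here is an assumption):

# Hedged illustration: a float budget is a ratio of the edge count,
# so num_budgets=0.05 corresponds to roughly 5% of the edges.
budget = int(0.05 * g.num_edges())
print(f"~{budget} edge flips allowed")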

Implementations

In detail, the following methods are currently implemented:

Attack

Targeted Attack

Methods | Venue
RandomAttack | A simple random method that chooses edges to flip randomly.
DICEAttack | Marcin Waniek et al. πŸ“ Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16
Nettack | Daniel ZΓΌgner et al. πŸ“ Adversarial Attacks on Neural Networks for Graph Data, KDD'18
FGAttack | Jinyin Chen et al. πŸ“ Fast Gradient Attack on Network Embedding, arXiv'18
 | Jinyin Chen et al. πŸ“ Link Prediction Adversarial Attack Via Iterative Gradient Attack, IEEE Trans'20
 | Hanjun Dai et al. πŸ“ Adversarial Attack on Graph Structured Data, ICML'18
GFAttack | Heng Chang et al. πŸ“ A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20
IGAttack | Huijun Wu et al. πŸ“ Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19
SGAttack | Jintang Li et al. πŸ“ Adversarial Attack on Large Scale Graph, TKDE'21

Untargeted Attack

Methods | Venue
RandomAttack | A simple random method that chooses edges to flip randomly.
DICEAttack | Marcin Waniek et al. πŸ“ Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16
FGAttack | Jinyin Chen et al. πŸ“ Fast Gradient Attack on Network Embedding, arXiv'18
 | Jinyin Chen et al. πŸ“ Link Prediction Adversarial Attack Via Iterative Gradient Attack, IEEE Trans'20
 | Hanjun Dai et al. πŸ“ Adversarial Attack on Graph Structured Data, ICML'18
Metattack | Daniel ZΓΌgner et al. πŸ“ Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19
PGD, MinmaxAttack | Kaidi Xu et al. πŸ“ Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19

Defense

Model-Level

Methods | Venue
MedianGCN | Liang Chen et al. πŸ“ Understanding Structural Vulnerability in Graph Convolutional Networks, IJCAI'21
RobustGCN | Dingyuan Zhu et al. πŸ“ Robust Graph Convolutional Networks Against Adversarial Attacks, KDD'19

Data-Level

Methods | Venue
JaccardPurification | Huijun Wu et al. πŸ“ Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19

More details of the literature and the official code can be found in Awesome Graph Adversarial Learning.

Comments
  • Benchmark Results of Attack Performance

    Hi, thanks for sharing the awesome repo with us! I recently ran the attack example scripts pgd_attack.py and random_attack.py under examples/attack/untargeted, but the accuracies under both the evasion and poisoning attacks barely decrease.

    I'm pretty confused by these results. For CV models, a PGD attack easily drives the accuracy down to nearly random guessing, but the results of GreatX seem inconsistent with that. Is it because the number of perturbed edges is too small?

    Here are the results of pgd_attack.py

    Processing...
    Done!
    Training...
    100/100 [==============================] - Total: 874.37ms - 8ms/step- loss: 0.0524 - acc: 0.996 - val_loss: 0.625 - val_acc: 0.815
    Evaluating...
    1/1 [==============================] - Total: 1.82ms - 1ms/step- loss: 0.597 - acc: 0.843
    Before attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.59718  β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.842555 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    PGD training...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 200/200 [00:02<00:00, 69.74it/s]
    Bernoulli sampling...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 804.86it/s]
    Evaluating...
    1/1 [==============================] - Total: 2.11ms - 2ms/step- loss: 0.603 - acc: 0.842
    After evasion attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.603293 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.842052 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    100/100 [==============================] - Total: 535.83ms - 5ms/step- loss: 0.124 - acc: 0.976 - val_loss: 0.728 - val_acc: 0.779
    Evaluating...
    1/1 [==============================] - Total: 1.74ms - 1ms/step- loss: 0.766 - acc: 0.827
    After poisoning attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.76604  β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.826962 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    

    Here are the results of random_attack.py

    Training...
    100/100 [==============================] - Total: 600.92ms - 6ms/step- loss: 0.0615 - acc: 0.984 - val_loss: 0.626 - val_acc: 0.811
    Evaluating...
    1/1 [==============================] - Total: 1.93ms - 1ms/step- loss: 0.564 - acc: 0.832
    Before attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.564449 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.832495 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Peturbing graph...: 253it [00:00, 4588.44it/s]
    Evaluating...
    1/1 [==============================] - Total: 2.14ms - 2ms/step- loss: 0.585 - acc: 0.826
    After evasion attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.584646 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.826459 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    100/100 [==============================] - Total: 530.04ms - 5ms/step- loss: 0.0767 - acc: 0.98 - val_loss: 0.574 - val_acc: 0.791
    Evaluating...
    1/1 [==============================] - Total: 1.77ms - 1ms/step- loss: 0.695 - acc: 0.813
    After poisoning attack
     Objects in BunchDict:
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.695349 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.81338  β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    
    opened by ziqi-zhang 2
  • SG Attack example cannot run as expected on cuda

    Hello, I got an error when running the SGAttack example code on a CUDA device:

    Traceback (most recent call last):
      File "src/test.py", line 50, in <module>
        attacker.attack(target)
      File "/greatx/attack/targeted/sg_attack.py", line 212, in attack
        subgraph = self.get_subgraph(target, target_label, best_wrong_label)
      File "/greatx/attack/targeted/sg_attack.py", line 124, in get_subgraph
        self.label == best_wrong_label)[0].cpu().numpy()
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cpu!
    

    I found that self.label is on the CUDA device, but best_wrong_label is on the CPU. https://github.com/EdisonLeeeee/GreatX/blob/73eac351fdae842dbd74967622bd0e573194c765/greatx/attack/targeted/sg_attack.py#L123-L124

    If I remove the .cpu() at the end of line 94, everything works and no error is reported:

    https://github.com/EdisonLeeeee/GreatX/blob/73eac351fdae842dbd74967622bd0e573194c765/greatx/attack/targeted/sg_attack.py#L94-L96

    I found there is a commit that adds .cpu() at the end of line 94, so I don't know whether it's a bug or something else 🀨 (a generic fix is sketched below).

    opened by beiyanpiki 1
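
    A generic fix for this class of device-mismatch error is to move both tensors onto the same device before comparing them. Below is a hedged sketch, not the repository's actual patch; the tensors are hypothetical stand-ins for self.label and best_wrong_label:

    import torch

    # Hypothetical stand-ins: `label` mimics self.label on the GPU (when
    # available) and `best_wrong_label` mimics the CPU-resident scalar.
    device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
    label = torch.tensor([0, 1, 2, 1], device=device)
    best_wrong_label = torch.tensor(1)

    # Align devices before the comparison, then move the result to CPU for numpy:
    idx = torch.where(label == best_wrong_label.to(label.device))[0].cpu().numpy()
    print(idx)  # indices of nodes whose label equals the best wrong label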
  • problem with metattack

    Thanks for this wonderful repo. However, when I run the Metattack example, the result is not promising. Here is my result when attacking Cora with Metattack:

    Training...
    100/100 [====================] - Total: 520.68ms - 5ms/step- loss: 0.0713 - acc: 0.996 - val_loss: 0.574 - val_acc: 0.847
    Evaluating...
    1/1 [====================] - Total: 2.01ms - 2ms/step- loss: 0.522 - acc: 0.847
    Before attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.521524 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.846579 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Peturbing graph...: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 253/253 [01:00<00:00, 4.17it/s]
    Evaluating...
    1/1 [====================] - Total: 2.08ms - 2ms/step- loss: 0.528 - acc: 0.844
    After evasion attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.528431 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.844064 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›
    Training...
    32/100 [=====>..............] - ETA: 0s- loss: 0.212 - acc: 0.956 - val_loss: 0.634 - val_acc: 0.807
    100/100 [====================] - Total: 407.58ms - 4ms/step- loss: 0.0601 - acc: 0.996 - val_loss: 0.704 - val_acc: 0.787
    Evaluating...
    1/1 [====================] - Total: 1.66ms - 1ms/step- loss: 0.711 - acc: 0.819
    After poisoning attack
    ╒═════════╀═══════════╕
    β”‚ Names   β”‚   Objects β”‚
    β•žβ•β•β•β•β•β•β•β•β•β•ͺ═══════════║
    β”‚ loss    β”‚  0.710625 β”‚
    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
    β”‚ acc     β”‚  0.818913 β”‚
    β•˜β•β•β•β•β•β•β•β•β•β•§β•β•β•β•β•β•β•β•β•β•β•β•›

    opened by shanzhiq 1
  • Fix a venue of FeaturePropagation

    Rossi's FeaturePropagation paper was submitted to ICLR'21 but was rejected, so how about updating the venue to arXiv? The paper has not yet been accepted at another conference.

    bug documentation 
    opened by jeongwhanchoi 1
  • Add ratio for `attacker.data()`, `attacker.edge_flips()`, and `attacker.feat_flips()`

    This PR adds a ratio argument to attacker.edge_flips() and attacker.feat_flips(), which determines how many of the generated perturbations are used for further evaluation/visualization. Correspondingly, attacker.data() accepts edge_ratio and feat_ratio for these two methods when constructing the perturbed graph.

    • Example
    attacker = ...
    attacker.reset()
    attacker.attack(...)
    
    # Case 1: only 50% of the generated edge perturbations are used
    trainer.evaluate(attacker.data(edge_ratio=0.5), mask=...)
    
    # Case 2: only 50% of the generated feature perturbations are used
    trainer.evaluate(attacker.data(feat_ratio=0.5), mask=...)
    
    # NOTE: both arguments can be used simultaneously
    
    enhancement 
    opened by EdisonLeeeee 0
Releases (0.1.0)
  • 0.1.0 (Jun 9, 2022)

    GraphWar 0.1.0 πŸŽ‰

    The first major release, built upon PyTorch and PyTorch Geometric (PyG).

    About GraphWar

    GraphWar is a graph adversarial learning toolbox based on PyTorch and PyTorch Geometric (PyG). It implements a wide range of adversarial attack and defense methods for graph data. To facilitate benchmark evaluation on graphs, we also provide implementations of popular Graph Neural Networks (GNNs).

    Usages

    For more details, please refer to the documentation and examples.

    How fast can you train and evaluate your own GNN?

    Take GCN as an example:

    from graphwar.nn.models import GCN
    from graphwar.training import Trainer
    from torch_geometric.datasets import Planetoid
    dataset = Planetoid(root='.', name='Cora') # Any PyG dataset is available!
    data = dataset[0]
    model = GCN(dataset.num_features, dataset.num_classes)
    trainer = Trainer(model, device='cuda:0')
    trainer.fit({'data': data, 'mask': data.train_mask})
    trainer.evaluate({'data': data, 'mask': data.test_mask})
    

    A simple targeted manipulation attack

    from graphwar.attack.targeted import RandomAttack
    attacker = RandomAttack(data)
    attacker.attack(1, num_budgets=3) # attack target node `1` with a budget of `3` edge flips
    attacked_data = attacker.data()
    edge_flips = attacker.edge_flips()
    
    

    A simple untargeted (non-targeted) manipulation attack

    from graphwar.attack.untargeted import RandomAttack
    attacker = RandomAttack(data)
    attacker.attack(num_budgets=0.05) # perturb 5% of the edges
    attacked_data = attacker.data()
    edge_flips = attacker.edge_flips()
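
    To quantify the attack, one might re-evaluate the trained model on the perturbed data. A hedged sketch reusing trainer, data, and attacker from the examples above (the exact call pattern is an assumption based on this release's examples):

    # Hedged sketch (evasion setting): compare test accuracy on the clean
    # graph vs. the perturbed graph produced by the attacker above.
    clean = trainer.evaluate({'data': data, 'mask': data.test_mask})
    perturbed = trainer.evaluate({'data': attacker.data(), 'mask': data.test_mask})
    print(clean, perturbed)  # a drop in accuracy reflects the attack's strength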
    

    We will continue to develop this project and introduce more state-of-the-art implementations of papers in the field of graph adversarial attacks and defenses.

    Source code(tar.gz)
    Source code(zip)
    graphwar-0.1.0-py3-none-any.whl(155.84 KB)
Owner
Jintang Li
Ph.D. student @ Sun Yat-sen University (SYSU), China.