
F-Clip — Fully Convolutional Line Parsing

This repository contains the official PyTorch implementation of the paper: Xili Dai, Xiaojun Yuan, Haigang Gong, Yi Ma. "Fully Convolutional Line Parsing."

Introduction

Our method (F-Clip) is a simple and effective neural network for detecting line segments in a given image or video. It outperforms previous state-of-the-art wireframe and line detectors by a large margin in both accuracy and speed. We hope that this repository serves as a new reproducible baseline for future research in this area.

Main results

The accuracy and speed trade-off among recent wireframe detection methods on the ShanghaiTech dataset

Qualitative Measures

More randomly sampled results can be found in the paper.

Quantitative Measures

The following table reports the performance metrics of several wireframe and line detectors on the ShanghaiTech dataset.

Reproducing Results

Installation

For ease of reproducibility, we suggest installing miniconda (or anaconda if you prefer) before executing the following commands.

git clone https://github.com/Delay-Xili/F-Clip
cd F-Clip
conda create -y -n fclip
source activate fclip
# Replace cudatoolkit=10.1 with your CUDA version: https://pytorch.org/
conda install -y pytorch cudatoolkit=10.1 -c pytorch
conda install -y pyyaml docopt matplotlib scikit-image opencv
mkdir data logs post
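
As an optional sanity check (not part of the original instructions), you can verify that PyTorch was installed correctly and can see your GPU:

# should print the installed PyTorch version and whether CUDA is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"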

Testing Pre-trained Models

You can download our six reference pre-trained models HG1_D2, HG1_D3, HG1, HG2, HG2_LB, and HR from Google Drive. These models were trained with their corresponding settings in config/fclip_xxx.yaml.
To generate wireframes on the validation dataset with the pretrained model, execute

python test.py -d 0 -i <directory-to-storage-results> config/fclip_xxx.yaml <path-to-xxx-ckpt-file> shanghaiTech/york <path-to-validation-set>
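
For example, to evaluate the HR model on the ShanghaiTech validation set, the command might look like the following (the checkpoint and dataset paths are illustrative placeholders; substitute the locations you actually use):

python test.py -d 0 -i post/HR config/fclip_HR.yaml pretrained/HR.pth.tar shanghaiTech data/wireframe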

Detect Wireframes for Your Own Images or Videos

To test F-Clip on your own images or videos, you need to download the pre-trained models and execute

CUDA_VISIBLE_DEVICES=0 python demo.py <path-to-image-or-video> --model HR --output_dir logs/demo_result --ckpt <path-to-pretrained-pth> --display True

Here, --output_dir specifies the directory where the results will be stored, and you can set --display to view the results in real time.

Downloading the Processed Dataset

You can download the processed datasets wireframe.zip and york.zip manually from Google Drive (link1, link2).
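
A minimal sketch for placing them, assuming both zips sit in the repository root and the extracted folders should live under the data/ directory created during installation (adjust the paths if your layout differs):

# extract the processed datasets into data/
unzip wireframe.zip -d data/
unzip york.zip -d data/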

Processing the Dataset

Optionally, you can pre-process the dataset from scratch (e.g., generate heat maps and perform data augmentation) rather than downloading the processed one.

dataset/wireframe.py data/wireframe_raw data/wireframe
dataset/wireframe_line.py data/wireframe_raw data/wireframe

Evaluation

To evaluate the sAP (recommended) of all your checkpoints under logs/, execute

python eval-sAP.py logs/*/npz/*

MATLAB is required for APH evaluation, and matlab should be on your $PATH. The Parallel Computing Toolbox is highly recommended because the evaluation uses parfor. After post-processing, execute

python eval-APH.py path/to/input/npz path/to/output/dir
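
For example, assuming a checkpoint's predictions were written under logs/HR/npz/<iteration> (matching the glob used by eval-sAP.py above) and you want the APH results under post/, the call might look like this (both paths are illustrative):

python eval-APH.py logs/HR/npz/<iteration> post/HR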

Due to the use of pixel-wise matching, the evaluation of APH may take up to an hour, depending on your CPUs. See the source code of eval-sAP.py, eval-APH.py, and FClip/postprocess.py for more details on evaluation.

Training

To train the neural network on GPU 0 (specified by -d 0) with the six different configurations, execute

python train.py -d 0 -i HG1_D2 config/fclip_HG1_D2.yaml
python train.py -d 0 -i HG1_D3 config/fclip_HG1_D3.yaml
python train.py -d 0 -i HG1 config/fclip_HG1.yaml
python train.py -d 0 -i HG2 config/fclip_HG2.yaml
python train.py -d 0 -i HG2_LB config/fclip_HG2_LB.yaml
python train.py -d 0 -i HR config/fclip_HR.yaml

Citation

If you find F-Clip useful in your research, please consider citing:

@article{dai2021fully,
 author={Xili Dai and Xiaojun Yuan and Haigang Gong and Yi Ma},
 title={Fully Convolutional Line Parsing},
 journal={CoRR},
 year={2021}
}