Scene-Text-Detection-and-Recognition (Pytorch)

Overview


1. Proposed Method

The models

Our model comprises two parts: scene text detection and scene text recognition. The descriptions of these two models are as follows:

  • Scene Text Detection
    We employ YoloV5 [1] to detect ROIs (Regions Of Interest) in an image and Resnet50 [2] to implement the ROI transformation algorithm. This algorithm transforms the coordinates detected by YoloV5 into a better-aligned location that fits the text tightly: YoloV5 detects every ROI that might be a string, while the ROI transformation makes the bbox fit the string region more closely. The visualization result is illustrated below, where the dark-green bbox is the ROI detected by YoloV5 and the red bbox is the ROI after the ROI transformation (a minimal sketch of this refinement step is given after the training-process figure).

  • Scene Text Recognition
    We employ ViT [3] to recognize the string inside each bbox detected by YoloV5, since our task is not single-character recognition. Transformer-based models achieve state-of-the-art performance in Natural Language Processing (NLP), and the attention mechanism lets the model attend to the character it has to output at each decoding step. The model architecture is demonstrated below.

The whole training process is shown in the figure below.
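This README does not spell out the ROI transformation interface, so the following is only a minimal sketch of the idea under stated assumptions: a Resnet50 backbone whose final layer regresses the four refined corner points of a text box from a crop around the YoloV5 detection. The class name ROITransform and the normalized-corner output format are illustrative, not the repository's actual API.

import torch
import torch.nn as nn
import torchvision

class ROITransform(nn.Module):
    """Regress four refined corner points (8 values) from a cropped ROI.
    A sketch only: Resnet50 features -> linear head -> normalized corners."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 8)
        self.net = backbone

    def forward(self, crops):  # crops: (B, 3, 224, 224)
        # sigmoid keeps the corners inside the crop, in [0, 1] coordinates
        return torch.sigmoid(self.net(crops)).view(-1, 4, 2)

model = ROITransform().eval()
crop = torch.rand(1, 3, 224, 224)  # ROI cropped around one YoloV5 box
with torch.no_grad():
    corners = model(crop)  # (1, 4, 2): corner points relative to the crop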

Data augmentation

  • Random Scale Resize
    We found that the image sizes in the public dataset vary widely. If we simply resize a small image up to a large one, most of the image features are lost. To address this, we apply a random-scale resize in the training phase: each high-resolution image is downsampled to a random smaller size and then resized back, yielding a low-resolution version of the image. The visualization results are demonstrated as follows (a combined sketch of all three augmentations is given after this list).
[Figure: original image downsampled to 72x72, 96x96, 121x121, 146x146, and 196x196, then resized back to 224x224]
  • ColorJitter
    In the training phase, the model's input is an RGB image. To make the model more robust, we apply the ColorJitter algorithm so that the model sees images with different contrast, brightness, saturation, and hue values; this method is also widely used in image classification. The visualization results are demonstrated as follows.
[Figure: input image jittered with brightness=0.5, contrast=0.5, saturation=0.5, and hue=0.5]
  • Random Rotation
    After observing the training data, we found that most training images are square-shaped (like the original image), while some of the testing data is slightly skewed. Therefore, we apply random rotation to make the model generalize better. The visualization results are demonstrated as follows.
[Figure: original image, random rotation, random horizontal flip, and both combined]
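Below is a minimal sketch of the three augmentations as a single torchvision pipeline. The scale set mirrors the figure captions above; the rotation range and flip probability are assumptions, not the repository's verified settings.

import random
from torchvision import transforms
from torchvision.transforms import functional as F

class RandomScaleResize:
    """Downsample to a random smaller size, then resize back to 224x224,
    producing a low-resolution version of a high-resolution image."""
    def __init__(self, scales=(72, 96, 121, 146, 196), out_size=224):
        self.scales = scales
        self.out_size = out_size

    def __call__(self, img):
        s = random.choice(self.scales)
        img = F.resize(img, (s, s))
        return F.resize(img, (self.out_size, self.out_size))

train_transform = transforms.Compose([
    RandomScaleResize(),
    transforms.ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.5),
    transforms.RandomRotation(degrees=15),   # assumed rotation range
    transforms.RandomHorizontalFlip(p=0.5),  # assumed flip probability
    transforms.ToTensor(),
])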

2. Demo

  • Predicted results
    Before recognizing the strings in the bboxes detected by YoloV5, we filter out every bbox smaller than 45x45 pixels, because the resolution of such a bbox is too low to recognize the string correctly (a minimal sketch of this filter is given at the end of this section).
[Demo table: input image / scene text detection / scene text recognition. Example recognized strings (Chinese signboard text, shown verbatim), grouped per demo image:]
驗車, 委託汽車代檢, 元力汽車公司, 新竹區監理所, 3c配件, 玻璃貼, 專業包膜, 台灣大哥大, myfone, 新店中正, 加盟門市, 西門町
排骨酥麵, 非常感謝, tvbs食尚玩家, 蘋果日報, 壹週刊, 財訊, 錢櫃雜誌, 聯合報, 飛碟電台, 等報導, 排骨酥專賣店, 西門町
排骨酥麵, 排骨酥麵, 嘉義店, 永晟, 電動工具行, 492913338
  • Attention maps in ViT
    We also visualize the attention maps in ViT to check whether the model focuses on the correct locations in the image. The visualization results are demonstrated as follows (a hedged attention-rollout sketch is given below).
[Figure: original image and its attention map]
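A minimal sketch of the 45x45 size filter mentioned under "Predicted results" above, assuming axis-aligned boxes given as (x1, y1, x2, y2) pixel tuples; the repository actually stores four corner points per bbox, so this layout is an assumption.

def filter_small_boxes(boxes, min_size=45):
    """Keep only boxes whose width and height are both at least min_size."""
    kept = []
    for x1, y1, x2, y2 in boxes:
        if (x2 - x1) >= min_size and (y2 - y1) >= min_size:
            kept.append((x1, y1, x2, y2))
    return kept

boxes = [(10, 10, 40, 60), (100, 100, 180, 160)]
print(filter_small_boxes(boxes))  # keeps only the 80x60 box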
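The attention-map figures cannot be reproduced from this README alone. Below is a hedged sketch of attention rollout, one common way to collapse per-layer ViT attentions into a single map; it assumes you can collect a head-averaged, post-softmax attention tensor per layer (dummy tensors stand in here, not the recognizer's real attentions).

import torch

def attention_rollout(attns):
    """Combine per-layer attention maps by recursive matrix multiplication
    (Abnar & Zuidema, 2020). attns: list of (tokens, tokens) tensors."""
    tokens = attns[0].size(-1)
    rollout = torch.eye(tokens)
    for a in attns:
        a = a + torch.eye(tokens)            # account for residual connections
        a = a / a.sum(dim=-1, keepdim=True)  # renormalize rows
        rollout = a @ rollout
    return rollout

# dummy stand-ins: 12 layers, 197 tokens (1 class token + 14x14 patches)
attns = [torch.softmax(torch.rand(197, 197), dim=-1) for _ in range(12)]
cls_to_patches = attention_rollout(attns)[0, 1:].reshape(14, 14)
# upsample cls_to_patches to the input size and overlay it as a heat map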

3. Competition Results

  • Public Scores
    We conducted extensive experiments, and the results are demonstrated below. They show the improvement contributed by each module at each stage. At first we only employed YoloV5 to detect all ROIs in the images, and the detection result was not good enough. We also compare ViT trained with and without data augmentation; the results show that our augmentation is effective for this task (compare the last row with the sixth row). In addition, we filter out bboxes smaller than 45x45, since their resolution is too low to recognize the strings correctly. One plausible implementation of the "reduce overlap bbox" step is sketched after the tables.
Models (Detection / Recognition) | Final score | Precision | Recall
--- | --- | --- | ---
YoloV5(L) / ViT(aug) | 0.60926 | 0.7794 | 0.9084
YoloV5(L) + ROI_transformation(Resnet50) / ViT(aug) | 0.73148 | 0.9261 | 0.9017
YoloV5(L) + ROI_transformation(Resnet50) + reduce overlap bbox / ViT(aug) | 0.78254 | 0.9324 | 0.9072
YoloV5(L) + ROI_transformation(SEResnet50) + reduce overlap bbox / ViT(aug) | 0.78527 | 0.9324 | 0.9072
YoloV5(L) + ROI_transformation(SEResnet50) + reduce overlap bbox / ViT(aug) + filter bbox(40x40) | 0.79373 | 0.9333 | 0.9029
YoloV5(L) + ROI_transformation(SEResnet50) + reduce overlap bbox / ViT(aug) + filter bbox(45x45) | 0.79466 | 0.9335 | 0.9011
YoloV5(L) + ROI_transformation(SEResnet50) + reduce overlap bbox / ViT(aug) + filter bbox(50x50) | 0.79431 | 0.9338 | 0.8991
YoloV5(L) + ROI_transformation(SEResnet50) + reduce overlap bbox / ViT(no aug) + filter bbox(45x45) | 0.73802 | 0.9335 | 0.9011
  • Private Scores

Models (Detection / Recognition) | Final score | Precision | Recall
--- | --- | --- | ---
YoloV5(L) + ROI_transformation(SEResnet50) + reduce overlap bbox / ViT(aug) + filter bbox(40x40) | 0.7828 | 0.9328 | 0.8919
YoloV5(L) + ROI_transformation(SEResnet50) + reduce overlap bbox / ViT(aug) + filter bbox(45x45) | 0.7833 | 0.9323 | 0.8968
YoloV5(L) + ROI_transformation(SEResnet50) + reduce overlap bbox / ViT(aug) + filter bbox(50x50) | 0.7830 | 0.9325 | 0.8944
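The "reduce overlap bbox" step in the tables is not detailed in this README. One plausible reading is a standard IoU-based suppression pass, sketched below with torchvision's nms; the 0.5 threshold and the (x1, y1, x2, y2) box layout are assumptions, not the repository's settings.

import torch
from torchvision.ops import nms

def reduce_overlap(boxes, scores, iou_threshold=0.5):
    """Drop boxes that overlap a higher-scoring box above the IoU threshold.
    boxes: (N, 4) float tensor of (x1, y1, x2, y2); scores: (N,) tensor."""
    keep = nms(boxes, scores, iou_threshold)
    return boxes[keep], scores[keep]

boxes = torch.tensor([[0., 0., 100., 50.], [5., 2., 102., 50.], [200., 0., 300., 40.]])
scores = torch.tensor([0.9, 0.8, 0.7])
print(reduce_overlap(boxes, scores))  # the second box is suppressed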

4. Computer Equipment

  • System: Windows 10, Ubuntu 20.04

  • PyTorch version: PyTorch 1.7 or higher

  • Python version: Python 3.6

  • Testing:
    CPU: AMD Ryzen 7 4800H with Radeon Graphics
    RAM: 32GB
    GPU: NVIDIA GeForce GTX 1660 Ti 6GB

  • Training:
    CPU: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
    RAM: 256GB
    GPU: NVIDIA GeForce RTX 3090 24GB * 2

5. Getting Started

  • Clone this repo to your local machine
$ git clone https://github.com/come880412/Scene-Text-Detection-and-Recognition.git
$ cd Scene-Text-Detection-and-Recognition

Download pretrained models

  • Scene Text Detection
    Please download the pretrained models from Scene_Text_Detection. There are three folders: "ROI_transformation", "yolo_models", and "yolo_weight". First, put the weights from "ROI_transformation" into the path ./Scene_Text_Detection/Tranform_card/models/. Second, put all the models from "yolo_models" into ./Scene_Text_Detection/yolov5-master/. Finally, put the weight from "yolo_weight" into the path ./Scene_Text_Detection/yolov5-master/runs/train/expl/weights/.

  • Scene Text Recognition
    Please download the pretrained models from Scene_Text_Recognition. There are two files in this folder, "best_accuracy.pth" and "character.txt". Please put these files in the path ./Scene_Text_Recogtion/saved_models/.

Inference

  • You should first download the pretrained models and change your working directory to ./Scene_Text_Detection/yolov5-master/
$ python Text_detection.py
  • The results will be saved in the path '../output/', where the folder "example" contains the images detected by YoloV5 and after the ROI transformation, the file "example.csv" records the coordinates of each bbox, clockwise from the upper-left corner as (x1, y1), (x2, y2), (x3, y3), and (x4, y4), and the file "example_45.csv" is the predicted result (a hedged parsing sketch is given below).
  • If you would like to visualize the bboxes detected by YoloV5, you can use the function public_crop() in the script ../../data_process.py to extract the bboxes from the images.
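The exact header of "example.csv" is not shown in this README; the sketch below assumes one row per bbox, with the image path followed by the eight clockwise coordinates described above, and should be adapted to the real column layout.

import csv
import cv2
import numpy as np

# assumed row layout: image_path, x1, y1, x2, y2, x3, y3, x4, y4
with open("../output/example.csv", newline="") as f:
    for row in csv.reader(f):
        path = row[0]
        quad = np.array(row[1:9], dtype=np.float32).reshape(4, 2).astype(np.int32)
        img = cv2.imread(path)
        if img is None:
            continue  # skip rows whose image cannot be read
        cv2.polylines(img, [quad], isClosed=True, color=(0, 0, 255), thickness=2)
        cv2.imwrite("vis_" + path.split("/")[-1], img)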

Training

  • You should first download the dataset provided by the competition organizers, then put the data in the path '../dataset/'. After that, you can use the following script to transform the original data into the training format.
$ python data_process.py
  • Scene_Text_Detection
    There are two models for the scene text detection task: ROI transformation and YoloV5. You can use the following scripts to train these two models.
$ cd ./Scene_Text_Detection/yolov5-master # YoloV5
$ python train.py

$ cd ../Tranform_card/ # ROI Transformation
$ python Trainer.py
  • Scene_Text_Recognition
$ cd ./Scene_Text_Recogtion # ViT for text recognition
$ python train.py

References

[1] https://github.com/ultralytics/yolov5
[2] https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py
[3] https://github.com/roatienza/deep-text-recognition-benchmark
[4] https://www.pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/
[5] Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).

Owner
Gi-Luen Huang