"Target detection" + "visual understanding" to realize the understanding and translation of the input image (with source code)
2022-07-01 11:01:00 [Computer Vision Research Institute]
Paper: https://arxiv.org/pdf/2206.05836.pdf
Code: https://github.com/microsoft/GLIP
Computer Vision Institute column
Author: Edison_G
This paper proposes GLIPv2, a grounded vision-language (VL) understanding model that serves both localization tasks (e.g., object detection, instance segmentation) and VL understanding tasks (e.g., VQA, image captioning).
01
Overview
GLIPv2 elegantly unifies localization pre-training and vision-language pre-training (VLP) through three pre-training tasks: phrase grounding as a VL reformulation of the detection task, region-word contrastive learning as a novel region-word-level contrastive learning task, and masked language modeling. This unification not only simplifies the previous multi-stage VLP procedure, but also achieves mutual benefit between localization and understanding tasks. Experimental results show that a single GLIPv2 model (with all model weights shared) achieves near-SoTA performance on various localization and understanding tasks. The model also shows:
Strong zero-shot and few-shot adaptation performance on open-vocabulary object detection tasks;
Superior grounding ability on VL understanding tasks.
02
Background
Recently, there has been broad interest in building general-purpose vision systems, also known as vision foundation models, that can solve a variety of visual tasks simultaneously, such as image classification, object detection, and vision-language (VL) understanding. Of particular interest is the unification of localization tasks (e.g., object detection and segmentation) and VL understanding tasks (e.g., VQA and image captioning).
Localization pre-training benefits VL tasks, and the two-stage "localization -> VLP" pre-training procedure has become common practice in the VL community. A long-standing challenge is the unification of localization and understanding, which aims to achieve mutual benefit between these two kinds of tasks, simplify the pre-training procedure, and reduce pre-training cost.
However, the two kinds of tasks appear very different: localization tasks are vision-only and require fine-grained outputs (e.g., bounding boxes or pixel masks), while VL understanding tasks emphasize fusion between the two modalities and require high-level semantic outputs (e.g., answers or captions).
03
New framework
Left: GLIPv2, a pre-trained grounded VL understanding model, unifies various localization and VL understanding tasks. These two kinds of tasks mutually benefit each other and enable new capabilities such as language-guided detection/segmentation and grounded VQA/captioning. Right: additional examples from ODinW (detection), LVIS (segmentation), VQA, and COCO Captioning.
A Unified VL Formulation and Architecture
At the core of GLIPv2's unified formulation is the classification-to-matching technique, which reformulates any task-specific, fixed-vocabulary classification problem as a task-agnostic, open-vocabulary vision-language matching problem. The best-known example is CLIP, which recasts image classification as image-text matching, enabling the model to learn directly from raw image-text data and achieve strong zero-shot results on open-vocabulary classification. In GLIPv2, every semantic classification linear layer in the traditional uni-modal vision model is replaced with a vision-language matching dot-product layer.
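The classification-to-matching idea can be sketched in a few lines of PyTorch. The class and parameter names below are illustrative, not GLIPv2's actual implementation: instead of a linear layer over a fixed label set, each region feature is scored against the embeddings of the words in the text prompt, so the "vocabulary" is whatever text is supplied at inference time.

```python
import torch
import torch.nn as nn


class VLMatchingHead(nn.Module):
    """Illustrative sketch of a vision-language matching dot-product layer
    that replaces a fixed-vocabulary linear classifier (names are hypothetical)."""

    def __init__(self, region_dim: int, word_dim: int, proj_dim: int = 256):
        super().__init__()
        # Project both modalities into a shared embedding space
        self.region_proj = nn.Linear(region_dim, proj_dim)
        self.word_proj = nn.Linear(word_dim, proj_dim)

    def forward(self, region_feats: torch.Tensor, word_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (num_regions, region_dim) visual region features
        # word_feats:   (num_words, word_dim) embeddings of the prompt tokens
        r = self.region_proj(region_feats)  # (num_regions, proj_dim)
        w = self.word_proj(word_feats)      # (num_words, proj_dim)
        # Matching logits: every region scored against every prompt word,
        # so new categories only require new text, not a new classifier.
        return r @ w.t()                    # (num_regions, num_words)
```

Swapping the prompt changes the effective label space without retraining the head, which is what enables the open-vocabulary zero-shot behavior described above.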
GLIPv2 Pre-training
GLIPv2 is pre-trained with three losses: the phrase grounding loss Lground, a VL reformulation of the object detection task; the region-word contrastive loss Linter from the new region-word-level contrastive learning task; and the standard masked language modeling loss Lmlm proposed in BERT.
Transfer GLIPv2 to Localization and VL Tasks
We introduce two simple ways to transfer GLIPv2 to various downstream tasks. In addition, GLIPv2 brings grounding to traditional VL tasks (e.g., VQA), effectively turning every task we consider into a "grounded VL understanding" task.
GLIPv2 pre-training losses: the intra-image alignment loss Lintra (right) takes features after VL fusion and computes the loss over region-word pairs within each image-text pair; the inter-image contrastive loss Linter (left) takes features before VL fusion and computes the loss over all region-word pairs across a batch of image-text pairs. Label propagation is used to determine the off-diagonal blocks of the Linter target matrix.
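The inter-image part of the caption above can be sketched as follows. This is a schematic, not the paper's implementation: regions and words from the whole batch are scored in one similarity matrix (features taken before VL fusion), and a binary target matrix, whose off-diagonal blocks would come from label propagation, is assumed to be given.

```python
import torch
import torch.nn.functional as F


def inter_image_contrastive(region_feats: torch.Tensor,
                            word_feats: torch.Tensor,
                            target: torch.Tensor) -> torch.Tensor:
    """Sketch of a batch-level region-word contrastive loss (hypothetical form).
    region_feats: (total_regions, d) regions pooled from all images in the batch.
    word_feats:   (total_words, d) words pooled from all captions in the batch.
    target:       (total_regions, total_words) binary alignment matrix; in GLIPv2
                  its off-diagonal blocks are filled by label propagation."""
    # Score every region against every word across the entire batch,
    # so regions must discriminate matching words from all other captions.
    sim = region_feats @ word_feats.t()  # (total_regions, total_words)
    # Treat each region row as a multi-label matching problem over all words
    return F.binary_cross_entropy_with_logits(sim, target)
```

Computing the loss across the batch (rather than within one image-text pair) is what makes this term contrastive: negatives come from other images' captions.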
04
Experiments and visualization