"Target detection" + "visual understanding" realizes the understanding of the input image
2022-07-01 04:00:00 【AI vision netqi】
This post introduces GLIPv2, a grounded VL understanding model that serves both localization tasks (e.g., object detection, instance segmentation) and visual-language (VL) understanding tasks (e.g., VQA, image captioning).
Paper: https://arxiv.org/pdf/2206.05836.pdf
Code: https://github.com/microsoft/GLIP
The smallest pre-trained model is about 2.5 GB.
01 Overview
GLIPv2 elegantly combines localization pre-training and vision-language pre-training (VLP) through three pre-training tasks: phrase grounding, a VL reformulation of the detection task; region-word contrastive learning, a new region-word-level contrastive learning task; and masked language modeling. This unification not only simplifies the previous multi-stage VLP procedure, but also achieves mutual benefit between localization and understanding tasks. Experimental results show that a single GLIPv2 model (with all model weights shared) achieves SoTA performance on various localization and understanding tasks. The model also shows:
Strong zero-shot and few-shot adaptation performance on open-vocabulary object detection tasks;
Excellent grounding ability on VL understanding tasks.
02 Background
Recently, there has been broad interest in building general-purpose vision systems, also known as vision foundation models, that can solve various vision tasks simultaneously, e.g., image classification, object detection, and visual-language (VL) understanding. Of particular interest is the unification between localization tasks (e.g., object detection and segmentation) and VL understanding tasks (e.g., VQA and image captioning).
Localization pre-training benefits VL tasks, and the two-stage "localization → VLP" pre-training process has become standard in the VL community. A long-standing challenge is unifying localization and understanding, with the aim of achieving mutual benefit between the two tasks, simplifying the pre-training procedure, and reducing pre-training cost.
However, the two kinds of tasks appear very different: localization tasks are vision-only and require fine-grained outputs (e.g., bounding boxes or pixel masks), whereas VL understanding tasks emphasize fusion between the two modalities and require high-level semantic outputs (e.g., answers or captions).
03 New Framework

Left: GLIPv2, a pre-trained grounded VL understanding model, unifies various localization and VL understanding tasks. These two kinds of tasks mutually benefit each other and enable new capabilities such as language-guided detection/segmentation and grounded VQA/captioning. Right: Additional examples from ODinW (detection), LVIS (segmentation), VQA, and COCO Captioning.
A Unified VL Formulation and Architecture
The core of GLIPv2's unified formulation is the classification-to-matching technique, which reformulates any task-specific, fixed-vocabulary classification problem as a task-agnostic, open-vocabulary visual-language matching problem. The best-known example is CLIP, which reformulates image classification as image-text matching; this lets the model learn directly from raw image-text data and achieve strong zero-shot results on open-vocabulary classification tasks. In GLIPv2, every semantic classification linear layer in a traditional single-modality vision model is replaced with a visual-language matching dot-product layer.
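To make the classification-to-matching idea concrete, the following is a minimal NumPy sketch (not the paper's implementation; dimensions and the projection-free setup are illustrative assumptions). Instead of producing (R, C) logits over a fixed class list, each region is scored against each word of a free-form text prompt by a normalized dot product:

```python
import numpy as np

rng = np.random.default_rng(0)

def vl_matching_logits(region_feats, word_feats):
    """Open-vocabulary matching: score every region against every word
    with a cosine-similarity dot product, replacing a fixed-vocabulary
    linear classification head."""
    v = region_feats / np.linalg.norm(region_feats, axis=-1, keepdims=True)
    t = word_feats / np.linalg.norm(word_feats, axis=-1, keepdims=True)
    # (R, W) matching scores: one logit per region-word pair.
    return v @ t.T

# 4 detected regions, a 6-word prompt, 256-dim features (all assumed sizes).
logits = vl_matching_logits(rng.normal(size=(4, 256)),
                            rng.normal(size=(6, 256)))
print(logits.shape)  # (4, 6)
```

Because the "vocabulary" is just the tokenized prompt, swapping in a new prompt changes the label space with no retraining of the head.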

GLIPv2 Pre-training
GLIPv2 is pre-trained with three losses: the phrase grounding loss Lground, a VL reformulation of the object detection task; the region-word contrastive loss Linter from the new region-word-level contrastive learning task; and the standard masked language modeling loss Lmlm introduced in BERT.
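A minimal sketch of how the three losses combine into one pre-training objective; the loss weights below are illustrative assumptions, not values from the paper:

```python
# Total GLIPv2-style pre-training objective: a weighted sum of the
# phrase grounding, region-word contrastive, and MLM losses.
def total_pretraining_loss(l_ground, l_inter, l_mlm,
                           w_ground=1.0, w_inter=1.0, w_mlm=1.0):
    """Combine the three pre-training losses (weights are assumptions)."""
    return w_ground * l_ground + w_inter * l_inter + w_mlm * l_mlm

loss = total_pretraining_loss(0.8, 0.5, 0.3)
print(loss)
```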

Transfer GLIPv2 to Localization and VL Tasks
We have introduced two easy ways to GLIPv2 Methods of transmitting to various downstream tasks . Besides ,GLIPv2 Traditional VL Mission ( for example VQA), Effectively make every task we think become “ The basis of VL understand ” Mission .

GLIPv2 pre-training losses: the intra-image alignment loss Lintra (right) takes features after VL fusion and computes loss over region-word pairs within each image-text pair; the inter-image contrastive loss Linter (left) takes features before VL fusion and computes loss over all region-word pairs across a batch of image-text pairs. Label propagation is used to determine the off-diagonal blocks of the Linter target matrix.
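The batch-level contrastive loss described above can be sketched as follows. This is a simplified stand-in, not the paper's code: the temperature, feature sizes, and the hand-built target matrix are assumptions, and in GLIPv2 the off-diagonal blocks of the target would come from label propagation rather than being given directly:

```python
import numpy as np

def log_softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def inter_contrastive_loss(region_feats, word_feats, targets, temp=0.07):
    """Symmetric region-word contrastive loss over a batch.
    region_feats: (R, D) regions gathered across the batch (pre-fusion)
    word_feats:   (W, D) words gathered across the batch (pre-fusion)
    targets:      (R, W) 0/1 alignment matrix over all region-word pairs
    """
    v = region_feats / np.linalg.norm(region_feats, axis=-1, keepdims=True)
    t = word_feats / np.linalg.norm(word_feats, axis=-1, keepdims=True)
    logits = (v @ t.T) / temp
    # Normalize targets row-wise in both directions (soft cross-entropy).
    tgt_r = targets / np.clip(targets.sum(axis=1, keepdims=True), 1, None)
    tgt_w = targets.T / np.clip(targets.T.sum(axis=1, keepdims=True), 1, None)
    loss_r = -(tgt_r * log_softmax(logits, axis=1)).sum(axis=1).mean()
    loss_w = -(tgt_w * log_softmax(logits.T, axis=1)).sum(axis=1).mean()
    return 0.5 * (loss_r + loss_w)

# Toy batch: 4 regions, 6 words, with a few positive region-word pairs.
rng = np.random.default_rng(1)
targets = np.zeros((4, 6))
targets[0, 1] = targets[1, 2] = targets[2, 2] = targets[3, 5] = 1
loss = inter_contrastive_loss(rng.normal(size=(4, 64)),
                              rng.normal(size=(6, 64)), targets)
print(loss)
```

Pulling negatives from the whole batch, rather than only from within one image-text pair, is what distinguishes Linter from the intra-image loss Lintra.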
04 Experiments and Visualization


