Paper Reading | TransFG: A Transformer Architecture for Fine-Grained Recognition
2022-08-11 06:32:00 【pontoon】
This paper applies Transformers to fine-grained visual recognition.
Problem: the Transformer had not yet been applied to fine-grained visual classification.
Contributions: 1. The vanilla Vision Transformer splits the input image into non-overlapping patches; this paper splits the image into overlapping patches instead (arguably just a trick). See the sketch right below.
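The overlap can be implemented with a strided convolution whose stride is smaller than its kernel. A minimal PyTorch sketch, with hypothetical names and sizes (not the authors' code):

```python
import torch
import torch.nn as nn

# Overlapping patch embedding: with patch size P and stride S < P,
# neighboring patches share P - S pixels, and the number of patches
# per side becomes (H - P) // S + 1 instead of H // P.
# Names and default sizes here are assumptions for illustration.
class OverlapPatchEmbed(nn.Module):
    def __init__(self, patch_size=16, stride=12, embed_dim=768):
        super().__init__()
        # One conv both extracts overlapping patches and projects them.
        self.proj = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=stride)

    def forward(self, x):                      # x: (B, 3, H, W)
        x = self.proj(x)                       # (B, D, H', W')
        return x.flatten(2).transpose(1, 2)    # (B, N, D) patch tokens

tokens = OverlapPatchEmbed()(torch.randn(1, 3, 448, 448))
print(tokens.shape)  # torch.Size([1, 1369, 768]); (448 - 16) // 12 + 1 = 37 per side
```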
2. Part Selection Module

In plain terms, the input to the last layer differs from the vanilla Vision Transformer: the attention weights of all layers before the last (shown in the red box of the architecture figure) are multiplied together, and the tokens with the largest resulting weights are selected and concatenated to form the input of the L-th layer.
First, the output of layer L−1 is:

$$z_{L-1} = \left[z_{L-1}^{0};\ z_{L-1}^{1},\ z_{L-1}^{2},\ \ldots,\ z_{L-1}^{N}\right]$$

The attention weights of each preceding layer l are:

$$a_{l} = \left[a_{l}^{1},\ a_{l}^{2},\ \ldots,\ a_{l}^{K}\right], \qquad l \in \{1, 2, \ldots, L-1\}$$

where the subscript l ranges over the L−1 layers before the last. Assuming there are K self-attention heads, the weight within head i is:

$$a_{l}^{i} = \left[a_{l}^{i_{0}},\ a_{l}^{i_{1}},\ \ldots,\ a_{l}^{i_{N}}\right], \qquad i \in \{1, 2, \ldots, K\}$$

The attention weights of all layers before the last are then multiplied together:

$$a_{\mathrm{final}} = \prod_{l=1}^{L-1} a_{l}$$

For each head k, the index A_k of the token with the largest weight in a_final is selected, and these K tokens are concatenated with the classification token as the input of the last layer:

$$z_{\mathrm{local}} = \left[z_{L-1}^{0};\ z_{L-1}^{A_{1}},\ z_{L-1}^{A_{2}},\ \ldots,\ z_{L-1}^{A_{K}}\right]$$

From the model-architecture figure, the tokens marked with arrows in the red box are the selected ones, i.e., the tokens whose multiplied weights are large; the blue boxes on the right mark the image patches corresponding to the selected tokens.
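The selection step can be sketched as follows; this is my own minimal PyTorch rendering under assumed shapes, not the authors' implementation:

```python
import torch

# Part Selection Module sketch (assumed shapes, hypothetical names).
# attn_maps: attention matrices of the L-1 layers before the last,
# each (K, N+1, N+1) for K heads, N patch tokens + 1 class token.
# hidden: (N+1, D) output of layer L-1 for a single image.
def select_tokens(attn_maps, hidden):
    a_final = attn_maps[0]
    for a_l in attn_maps[1:]:
        a_final = a_l @ a_final                # multiply weights across layers
    cls_row = a_final[:, 0, 1:]                # (K, N): class-token attention to patches
    idx = cls_row.argmax(dim=-1) + 1           # A_1..A_K; +1 skips the class token
    # Class token plus the K selected tokens -> input of the last layer.
    return torch.cat([hidden[:1], hidden[idx]], dim=0)   # (K+1, D)

K, N, D = 12, 196, 768
attn = [torch.softmax(torch.randn(K, N + 1, N + 1), dim=-1) for _ in range(11)]
z_local = select_tokens(attn, torch.randn(N + 1, D))
print(z_local.shape)  # torch.Size([13, 768])
```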
3. Contrastive loss
The author notes that in fine-grained recognition the features of different categories are very similar, so the cross-entropy loss alone is not enough to learn discriminative features. A contrastive loss is therefore added on top of the cross-entropy loss. It is built on cosine similarity (a measure of how similar two vectors are): the more similar the vectors, the larger their cosine similarity.
The purpose of this loss is to minimize the similarity between classification tokens of different categories and to maximize the similarity between classification tokens of the same category. The contrastive loss is:

$$\mathcal{L}_{con} = \frac{1}{N^{2}} \sum_{i=1}^{N} \left[ \sum_{j:\, y_{i} = y_{j}} \left(1 - \operatorname{sim}(z_{i}, z_{j})\right) + \sum_{j:\, y_{i} \neq y_{j}} \max\left(\operatorname{sim}(z_{i}, z_{j}) - \alpha,\ 0\right) \right]$$

where sim(·,·) is cosine similarity and α is a manually set margin constant.
So the overall loss is:

$$\mathcal{L} = \mathcal{L}_{cross} + \mathcal{L}_{con}$$
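A minimal sketch of the contrastive term in PyTorch (my own rendering of the formula above; variable names and the margin default are assumptions, not the paper's values):

```python
import torch
import torch.nn.functional as F

# Contrastive loss over a batch of classification tokens z (N, D) with
# labels y (N,). alpha is the margin constant from the formula above;
# its default here is an assumption.
def contrastive_loss(z, y, alpha=0.4):
    n = z.size(0)
    z = F.normalize(z, dim=-1)
    sim = z @ z.T                                            # pairwise cosine similarity
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float()        # 1 where y_i == y_j
    pos = (1.0 - sim) * same                                 # pull same-class pairs together
    neg = torch.clamp(sim - alpha, min=0.0) * (1.0 - same)   # push apart beyond margin
    return (pos + neg).sum() / n ** 2

loss = contrastive_loss(torch.randn(8, 768), torch.randint(0, 4, (8,)))
print(loss)
```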
Experiments:
The model is compared with CNN and ViT baselines on several fine-grained classification datasets and achieves state-of-the-art (SOTA) results.