
ViT (Vision Transformer): Principles and Code Walkthrough

2022-07-06 07:39:00 bai666

Course link: https://edu.51cto.com/course/30169.html

The Transformer has achieved SOTA results on many NLP (natural language processing) tasks. ViT (Vision Transformer) is the milestone work that applied the Transformer to the CV (computer vision) field, and many variants have since been developed, such as Swin Transformer.


The ViT (Vision Transformer) model was introduced in the paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" and performs image classification with a pure Transformer. After pre-training on the JFT-300M dataset, ViT can exceed the performance of the convolutional neural network ResNet while using less training compute.
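As a rough illustration of the core idea from the paper (not code from the course), the sketch below splits an image into 16x16 patches and projects each patch into a token embedding; the 224x224 input size and embedding dimension 768 follow ViT-Base and are assumptions made for this example.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into fixed-size patches and project each patch to a token embedding."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is equivalent to flattening each 16x16 patch
        # and applying a shared linear projection.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x)                     # (B, 768, 14, 14)
        return x.flatten(2).transpose(1, 2)  # (B, 196, 768): one token per patch

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```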


This course walks through the principles of ViT and its PyTorch implementation code in detail, to help you master both the underlying theory and the concrete implementation. Two implementation approaches are covered: one based on the timm library, and the other based on einops/einsum.
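For the timm route, a minimal sketch of what creating a ready-made ViT looks like is shown below; the model name vit_base_patch16_224 and the dummy input are assumptions for illustration rather than the course's exact notebook code.

```python
import timm
import torch

# Standard ViT-Base/16 from timm (downloading pretrained weights requires internet access).
model = timm.create_model('vit_base_patch16_224', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)   # dummy 224x224 RGB image batch
with torch.no_grad():
    logits = model(x)             # (1, 1000) ImageNet class logits
print(logits.shape)
```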


The principles part covers: an overview of the Transformer architecture, the Transformer Encoder, the Transformer Decoder, an overview of the ViT architecture, the ViT model, and ViT performance and analysis. A small sketch of an encoder stack follows.
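To make the Encoder part concrete, here is a minimal sketch of a 12-layer pre-norm encoder stack built from PyTorch's nn.TransformerEncoderLayer; the dimensions follow ViT-Base and are assumptions for the example. ViT uses only the encoder side of the Transformer, with a [CLS] token prepended to the 196 patch tokens.

```python
import torch
import torch.nn as nn

# One pre-norm encoder block: multi-head self-attention + MLP, each preceded by
# LayerNorm and wrapped in a residual connection (norm_first=True gives pre-norm).
layer = nn.TransformerEncoderLayer(
    d_model=768, nhead=12, dim_feedforward=3072,
    activation='gelu', batch_first=True, norm_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=12)

tokens = torch.randn(1, 197, 768)   # 1 [CLS] token + 196 patch tokens
out = encoder(tokens)               # same shape: (1, 197, 768)
print(out.shape)
```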


The code walkthrough part reads the PyTorch implementation of ViT line by line in a Jupyter Notebook, covering: installing PyTorch, a walkthrough of the timm-library implementation of ViT, an introduction to einops/einsum, and a walkthrough of the einops/einsum implementation of ViT.
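As a taste of the einops style, the sketch below uses rearrange to cut an image into 16x16 patches in a single expression; the shapes are assumptions for illustration and are not taken from the course notebook.

```python
import torch
from einops import rearrange

img = torch.randn(1, 3, 224, 224)   # (batch, channels, height, width)

# Cut the image into non-overlapping 16x16 patches and flatten each patch:
# (1, 3, 224, 224) -> (1, 196, 768), i.e. (batch, num_patches, patch_dim).
patches = rearrange(img, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1=16, p2=16)
print(patches.shape)  # torch.Size([1, 196, 768])
```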


Copyright notice: this article was written by [bai666]. Please include a link to the original when reposting.
https://yzsam.com/2022/02/202202131908081214.html