NLP generative models 2017: those "whys" in the Transformer
2022-07-06 00:23:00 【Ninja luantaro】
1. Briefly describe the feed-forward network (FFN) in the Transformer. What activation function does it use? What are its advantages and disadvantages?
The feed-forward network applies two linear transformations with a ReLU activation in between. The formula is as follows:
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2
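To make the formula concrete, below is a minimal PyTorch-style sketch of this position-wise FFN. The class name `PositionwiseFFN` and the widths `d_model=512`, `d_ff=2048` (the values used in the original Transformer paper) are illustrative choices, not something specified in this article.

```python
import torch
import torch.nn as nn

class PositionwiseFFN(nn.Module):
    """FFN(x) = max(0, x·W1 + b1)·W2 + b2, applied independently at every position."""

    def __init__(self, d_model: int = 512, d_ff: int = 2048):
        super().__init__()
        self.linear1 = nn.Linear(d_model, d_ff)   # x·W1 + b1
        self.linear2 = nn.Linear(d_ff, d_model)   # (...)·W2 + b2
        self.relu = nn.ReLU()                     # max(0, ·)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear2(self.relu(self.linear1(x)))

# Usage: a batch of 2 sequences, 10 tokens each, hidden width 512.
y = PositionwiseFFN()(torch.randn(2, 10, 512))
print(y.shape)  # torch.Size([2, 10, 512])
```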
Advantages:
- SGD converges faster with ReLU than with sigmoid or tanh (the gradient does not saturate in the positive region, which mitigates the vanishing-gradient problem).
- Low computational cost: no exponential operations are needed.
- Well suited to backpropagation.
Disadvantages:
- The output of ReLU is not zero-centered.
- ReLU is quite "fragile" during training, and carelessness can cause neurons to "die". Because the gradient of ReLU is 0 for x < 0, a large negative update can push a neuron into a region where no input ever activates it again; from then on the gradient flowing through it is always 0, the ReLU neuron is dead, and it no longer responds to any data. In practice, if the learning rate is too large, as much as 40% of the neurons in the network may end up dead; with a suitably small learning rate this happens much less often. This is the Dead ReLU problem (neuron "necrosis"): some neurons are never activated, so their parameters are never updated (the gradient is 0 in the negative region). Two common causes are poor parameter initialization and a learning rate so high that parameter updates become too large during training. Remedies: use Xavier initialization, avoid setting the learning rate too large, or use an adaptive-learning-rate method such as Adagrad (see the sketch after this list).
- ReLU does not compress the magnitude of its inputs, so the range of the activations keeps expanding as the number of layers in the model increases.
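As a hedged illustration of the remedies mentioned above, the sketch below applies Xavier initialization to a single linear layer and measures how many ReLU units are inactive on one random batch; the layer sizes, batch size, and the idea of counting always-zero units are illustrative assumptions, not part of the article.

```python
import torch
import torch.nn as nn

# A toy layer; sizes are illustrative.
layer = nn.Linear(512, 2048)
nn.init.xavier_uniform_(layer.weight)   # Xavier initialization, as suggested above
nn.init.zeros_(layer.bias)

x = torch.randn(64, 512)                # one random batch
act = torch.relu(layer(x))

# Fraction of units that output zero for every example in this batch.
# Units that stay at zero across many batches are candidates for "dead" ReLUs.
dead_fraction = (act == 0).all(dim=0).float().mean().item()
print(f"units inactive on this batch: {dead_fraction:.2%}")
```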
2. Why add a Layer Normalization module?
Motivation: the Transformer stacks many layers, which makes gradients prone to vanishing or exploding.
Reason:
After data passes through a network layer it is no longer normalized; the deviation grows larger and larger, so the data has to be normalized again.
Purpose:
Before the data is fed into the activation function, the input is normalized to zero mean and unit variance, so that it does not fall into the saturation region of the activation function, which avoids the vanishing-gradient problem.
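A minimal sketch of that normalization step (zero mean, unit variance over the feature dimension), written in plain PyTorch for illustration; the tensor shape and the epsilon value are assumptions.

```python
import torch

x = torch.randn(2, 10, 512)                       # (batch, tokens, features), illustrative shape

mean = x.mean(dim=-1, keepdim=True)               # per-token mean over the features
var = x.var(dim=-1, unbiased=False, keepdim=True) # per-token variance over the features
x_norm = (x - mean) / torch.sqrt(var + 1e-5)      # zero mean, unit variance

print(x_norm.mean(dim=-1).abs().max())            # ~0
print(x_norm.var(dim=-1, unbiased=False).mean())  # ~1
```

PyTorch's `nn.LayerNorm` performs the same per-token normalization and additionally applies a learnable scale and bias.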
3. Why does the Transformer block use LayerNorm instead of BatchNorm? Where is LayerNorm placed in the Transformer?
The goal of normalization is to stabilize the distribution (reduce the variance of the data in each dimension).
Another way to ask this question: why does batch normalization work well for image processing, while layer normalization works better for natural language processing?
LayerNorm normalizes over the hidden-state (feature) dimension, while BatchNorm normalizes over the batch dimension. Unlike image tasks, NLP tasks do not have a constant batch composition (batch size and sequence length usually vary), so the statistics that BatchNorm estimates have larger variance; LayerNorm, which does not depend on the batch at all, alleviates this problem. In the original Transformer, LayerNorm is applied together with the residual connection around each sub-layer, i.e. LayerNorm(x + Sublayer(x)).
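To illustrate the difference in normalization axes, here is a hedged PyTorch comparison; the shapes are illustrative, and the transpose is only there because `nn.BatchNorm1d` expects the feature dimension in position 1.

```python
import torch
import torch.nn as nn

x = torch.randn(4, 10, 512)     # (batch, sequence length, hidden size), illustrative

# LayerNorm: statistics computed per token over the 512 hidden features,
# independent of batch size and sequence length.
ln = nn.LayerNorm(512)
y_ln = ln(x)

# BatchNorm: statistics computed per feature over all batch * sequence positions,
# so they depend on the (possibly varying) batch composition.
bn = nn.BatchNorm1d(512)
y_bn = bn(x.transpose(1, 2)).transpose(1, 2)   # BatchNorm1d wants (N, C, L)

print(y_ln.shape, y_bn.shape)   # both torch.Size([4, 10, 512])
```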
References:
About those "whys" of the Transformer