NLP generation models 2017: those whys in the Transformer
2022-07-06 00:23:00 【Ninja luantaro】
1. Briefly describe the feed-forward network in the Transformer. What activation function does it use, and what are its advantages and disadvantages?
The feed-forward network consists of two linear transformations with a ReLU activation in between. The formula is:
$$\mathrm{FFN}(x) = \max(0,\, xW_1 + b_1)\, W_2 + b_2$$
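A minimal sketch of this position-wise feed-forward layer, assuming PyTorch; the sizes d_model = 512 and d_ff = 2048 follow the original Transformer paper, and the class name is illustrative:

```python
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    """Position-wise FFN: FFN(x) = max(0, x @ W1 + b1) @ W2 + b2."""
    def __init__(self, d_model: int = 512, d_ff: int = 2048):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_ff)   # first linear map (W1, b1)
        self.w2 = nn.Linear(d_ff, d_model)   # second linear map (W2, b2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w2(torch.relu(self.w1(x)))  # ReLU implements max(0, .)

x = torch.randn(2, 10, 512)    # (batch, seq_len, d_model)
print(FeedForward()(x).shape)  # torch.Size([2, 10, 512])
```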
Advantages:
- With ReLU, SGD converges faster than with sigmoid or tanh (the gradient does not saturate in the positive region, which avoids the vanishing-gradient problem);
- Low computational cost, since no exponential operations are needed;
- Well suited to backpropagation.
Disadvantages:
- The output of ReLU is not zero-centered;
- ReLU units are "fragile" during training and can "die" (see the sketch after this list). Because the gradient of ReLU is 0 for x < 0, a large negative update can push a unit into a region where no input ever activates it again: from then on its gradient stays 0, it stops responding to any data, and its parameters are never updated (the "dead ReLU" problem). Two common causes: bad parameter initialization, and a learning rate set so high that updates become too large. In practice, with a large learning rate as much as 40% of a network's units can die, while with a suitably small learning rate the problem rarely occurs. Remedies: use Xavier initialization, avoid setting the learning rate too high, or use an adaptive-learning-rate method such as Adagrad;
- ReLU does not compress magnitudes, so the range of the activations keeps growing as the number of layers increases.
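A tiny sketch of the zero-gradient behavior behind the dead-ReLU problem, again assuming PyTorch:

```python
import torch

# Once a unit's pre-activation is negative, ReLU outputs 0 and passes back a
# zero gradient, so the weight receives no update from this example.
w = torch.tensor(-1.0, requires_grad=True)
x = torch.tensor(2.0)
out = torch.relu(w * x)  # pre-activation is -2.0, so the output is 0
out.backward()
print(w.grad)            # tensor(0.): no learning signal, a "dead" unit
```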
2. Why add a Layer Normalization module?
Motivation: the Transformer stacks many layers, which makes it prone to vanishing or exploding gradients.
Reason: after data passes through a network layer it is no longer normalized, and the deviation grows larger and larger, so the data has to be normalized again.
Purpose: before being fed into the activation function, the input is normalized to mean 0 and variance 1, so that it does not fall into the activation function's saturation region, which avoids the vanishing-gradient problem.
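A minimal sketch of this normalization step, assuming PyTorch (the real LayerNorm also has a learnable scale and bias, omitted here):

```python
import torch

def layer_norm(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Normalize each position's hidden vector to mean 0, variance 1 over the
    # last (feature) dimension; eps guards against division by zero.
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)

x = torch.randn(2, 10, 512)                      # (batch, seq_len, d_model)
y = layer_norm(x)
print(y.mean(-1).abs().max().item())             # close to 0
print(y.var(-1, unbiased=False).mean().item())   # close to 1
```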
3. Why do Transformer blocks use LayerNorm instead of BatchNorm? Where is LayerNorm placed in the Transformer?
The goal of normalization is to stabilize the data distribution (reduce the variance along each dimension).
Another way to ask this question: why does batch normalization work well in image processing, while layer normalization works better in natural language processing?
LayerNorm normalizes over the hidden-state (feature) dimension, whereas BatchNorm normalizes over the sample (batch) dimension. Unlike image tasks, NLP tasks do not have a constant batch size; it usually varies, so BatchNorm's batch statistics have high variance, and LayerNorm alleviates this problem. As for its location: in the original Transformer, LayerNorm is applied after each sub-layer together with the residual connection, i.e. LayerNorm(x + Sublayer(x)).
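A small sketch of the difference in normalization axes, assuming PyTorch; only the means are computed, to show which dimensions each method averages over:

```python
import torch

x = torch.randn(8, 10, 512)  # (batch, seq_len, d_model)

# LayerNorm: per-token statistics over the feature dimension; independent of
# batch size and of the other sequences in the batch.
ln_mean = x.mean(dim=-1, keepdim=True)

# BatchNorm applied to this tensor: per-feature statistics over the batch and
# sequence dimensions; these shift whenever batch size or lengths change.
bn_mean = x.mean(dim=(0, 1), keepdim=True)

print(ln_mean.shape)  # torch.Size([8, 10, 1]):  one mean per token
print(bn_mean.shape)  # torch.Size([1, 1, 512]): one mean per feature
```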
Reference:
About those whys in the Transformer