Deep Learning Basics: Overfitting, Underfitting, and Regularization
2022-08-02 05:29:00 【hello689】
Quoting from Li Hang's *Statistical Learning Methods*: when the hypothesis space contains models of different complexity (for example, different numbers of parameters), we face the problem of model selection. We want to select a model of suitable complexity to learn. If a "true" model exists in the hypothesis space, then the selected model should be close to that true model. Concretely, the selected model should have the same number of parameters as the true model, and its parameter vector should be close to the true model's parameter vector.
1. Overfitting
The phenomenon: the model predicts the data it was trained on very well but predicts unknown data poorly (good results on the training set, poor results on the validation and test sets).
The reason behind it: if we keep chasing predictive accuracy on the training data, the complexity of the selected model tends to end up higher than that of the true model (Li Hang, *Statistical Learning Methods*).
From the model-complexity perspective: the model is too complex, so it learns the noise in the data as well, which degrades its generalization performance.
From the dataset perspective: the dataset is too small relative to the model's complexity, so the model over-mines the features of the dataset.
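To make this gap concrete, here is a minimal NumPy sketch (my own example, not from the article; the data and polynomial degrees are arbitrary choices for illustration) that fits polynomials of increasing degree to a few noisy samples. The training error keeps shrinking as the degree grows, while the held-out error eventually blows up, which is exactly the overfitting pattern described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small noisy dataset: y = sin(2*pi*x) + noise, few samples relative to model capacity.
x_train = rng.uniform(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = rng.uniform(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, x_test.size)

for degree in (1, 3, 9, 14):
    coeffs = np.polyfit(x_train, y_train, degree)   # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree={degree:2d}  train MSE={train_mse:.4f}  test MSE={test_mse:.4f}")
```

With degree 14 and only 15 training points the fit interpolates the training data almost exactly (training MSE near zero) while the test MSE explodes: good on known data, poor on unknown data.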
Common ways to address overfitting:
- Enlarge the dataset: data augmentation, collecting more data, or synthesizing new data with generative adversarial networks.
- Regularization methods: BN and dropout (see the sketch after this list).
  - Adding BN layers: batch normalization can improve the model's generalization to some extent.
  - Dropout: randomly drop some hidden neurons, so not every neuron is updated at each training step.
- Reduce model complexity: remove network layers, or switch to a model with fewer parameters.
- Reduce the number of training epochs (also called early stopping: stop the iterations before the model fully converges on the training set, to prevent overfitting). In practice: record the best result after every epoch; if the accuracy on the test set has not improved after ten epochs, training can be stopped there (a sketch follows this list).
- Ensemble methods: combine several models so that the overfitting risk of any single model is reduced.
- Cross-validation: this one is a bit involved; I have hardly used it and have not looked into it carefully.
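As a concrete illustration of the BN and dropout items above: the article names the techniques but gives no code, so the choice of PyTorch, the layer sizes, and the dropout probability below are my own assumptions for illustration.

```python
import torch
from torch import nn

# A small classifier using the two regularization methods mentioned above:
# BatchNorm1d normalizes each hidden activation over the mini-batch, and Dropout
# randomly zeroes hidden units during training (it is disabled in eval mode).
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.BatchNorm1d(256),
    nn.ReLU(),
    nn.Dropout(p=0.5),      # each hidden unit is dropped with probability 0.5 per step
    nn.Linear(256, 10),
)

model.train()                          # dropout active, BN uses batch statistics
logits = model(torch.randn(32, 784))   # dummy batch of 32 flattened 28x28 inputs
model.eval()                           # dropout off, BN uses running statistics
```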
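The early-stopping rule from the list (stop once the held-out accuracy has not improved for ten epochs, then keep the best checkpoint) can be sketched as below; `train_one_epoch` and `evaluate` are hypothetical helpers standing in for a real training and evaluation loop, not functions from any particular library.

```python
import copy

def fit_with_early_stopping(model, train_loader, val_loader, max_epochs=100, patience=10):
    """Stop training once held-out accuracy has not improved for `patience` epochs."""
    best_acc, best_state, epochs_without_improvement = float("-inf"), None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model, train_loader)    # hypothetical helper: one pass over the data
        val_acc = evaluate(model, val_loader)   # hypothetical helper: accuracy on held-out data
        if val_acc > best_acc:
            best_acc = val_acc
            best_state = copy.deepcopy(model.state_dict())  # remember the best model so far
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                            # no improvement for `patience` epochs: stop
    model.load_state_dict(best_state)            # roll back to the best checkpoint
    return best_acc
```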
2. Underfitting
The phenomenon: the model performs poorly on both the training set and the test set.
Causes:
- The model is too simple, and its learning capacity is poor.
- The extracted features are poor: when the features of the training data are insufficient, or the existing features correlate only weakly with the sample labels, the model easily underfits.
Solutions:
- Increase model complexity, for example by replacing a linear model with a nonlinear one, or by adding layers or neurons to the neural network (see the sketch after this list).
- Add new features: consider feature-engineering techniques such as feature combination.
- If the loss function includes a regularization term, consider reducing the regularization coefficient λ.
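To illustrate the first two remedies, here is a small self-contained NumPy sketch (my own example, not from the article): a plain linear model underfits a quadratic relationship on both splits, while adding an x² feature (a tiny piece of feature engineering that also makes the model more expressive) removes the underfitting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Quadratic ground truth; a straight line underfits it on train AND test.
x_train, x_test = rng.uniform(-3, 3, 100), rng.uniform(-3, 3, 100)
y_train = x_train ** 2 + rng.normal(0, 0.3, 100)
y_test = x_test ** 2 + rng.normal(0, 0.3, 100)

def fit_and_report(feats_train, feats_test, label):
    # Ordinary least squares with a bias column, then MSE on both splits.
    A_train = np.column_stack([feats_train, np.ones(len(y_train))])
    A_test = np.column_stack([feats_test, np.ones(len(y_test))])
    w, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)
    train_mse = np.mean((A_train @ w - y_train) ** 2)
    test_mse = np.mean((A_test @ w - y_test) ** 2)
    print(f"{label}: train MSE={train_mse:.3f}, test MSE={test_mse:.3f}")

fit_and_report(x_train[:, None], x_test[:, None], "linear model (underfits)")
fit_and_report(np.column_stack([x_train, x_train ** 2]),
               np.column_stack([x_test, x_test ** 2]), "with added x^2 feature")
```

Both errors are high for the linear model, which is the underfitting signature described above; with the extra feature both drop to roughly the noise level.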
3. Regularization
A note up front: what regularization actually is can be hard to grasp at first. Supervised learning has two basic strategies: empirical risk minimization and structural risk minimization. When there are enough samples, the model that minimizes the empirical risk can be taken as the optimal model; when the sample size is small, empirical risk minimization does not learn well and overfitting occurs. Structural risk minimization (equivalent to regularization) was proposed precisely to prevent overfitting.
Regularization is the implementation of the structural risk minimization strategy: a regularization term (or penalty term) is added to the empirical risk. The regularization term is generally a monotonically increasing function of model complexity: the more complex the model, the larger the regularization term.
The regularized objective generally has the following form:
$$\min_{f \in \mathcal{F}} \frac{1}{N} \sum_{i=1}^{N} L\left(y_{i}, f\left(x_{i}\right)\right) + \lambda J(f)$$
The first term is the empirical risk and the second term is the regularization term; λ is the coefficient that adjusts the trade-off between the two.
A model with a smaller empirical risk in the first term may be more complex (it has more non-zero parameters), in which case the second term, the model complexity, will be larger. The role of regularization is to select a model for which both the empirical risk and the model complexity are small.
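A minimal NumPy sketch of this objective (my own example, not from the book), taking L to be the squared error and J(f) to be the squared L2 norm of the weight vector, i.e. a ridge-style penalty; λ is the trade-off coefficient from the formula above.

```python
import numpy as np

def regularized_risk(w, X, y, lam):
    """Empirical risk (mean squared error) plus lambda * J(f), with J(f) = ||w||^2."""
    empirical_risk = np.mean((X @ w - y) ** 2)   # (1/N) * sum_i L(y_i, f(x_i))
    complexity = np.sum(w ** 2)                  # J(f): grows with model complexity
    return empirical_risk + lam * complexity

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.0, 0.5])
y = X @ true_w + rng.normal(0, 0.1, 50)

w = rng.normal(size=5)
print(regularized_risk(w, X, y, lam=0.0))   # pure empirical risk minimization
print(regularized_risk(w, X, y, lam=0.1))   # structural risk: large weights are penalized
```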
Reference: Li Hang, *Statistical Learning Methods*, p. 18.