12. Neural network model
2022-07-26 00:18:00 【WuJiaYFN】
Main topics:
- Why neural networks are introduced
- Introduction to the neural network model
- Vectorized computation in neural networks
- The relationship between neural networks and logistic regression
1. Why neural networks are introduced
- Linear regression and logistic regression share a drawback: when there are too many features, the computational load becomes very large.
- Most machine learning problems involve many features. For nonlinear classification, polynomial terms are often needed to capture the relationships in the data, and an ordinary logistic regression model cannot handle that many features effectively. This is where neural networks come in.
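To see how quickly polynomial features blow up, the following sketch counts the distinct quadratic terms x_i·x_j (with i ≤ j) for n raw features. The 50×50-pixel image scenario is a hypothetical illustration, not from the original text:

```python
from math import comb

def num_quadratic_terms(n):
    """Number of distinct quadratic terms x_i * x_j (i <= j) for n features."""
    return comb(n, 2) + n  # cross terms plus the n squared terms

# For n = 3 features: x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3
print(num_quadratic_terms(3))     # 6

# For a hypothetical 50x50 grayscale image, n = 2500 pixel features:
print(num_quadratic_terms(2500))  # 3126250 — over 3 million quadratic features
```

Even for modest inputs, the quadratic feature count grows as O(n²), which is why hand-built polynomial logistic regression becomes impractical.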
2. Introduction to the neural network model
2.1 The single-neuron model

- In a neural network, the small yellow circles in the figure are called artificial neurons with a sigmoid (logistic) activation function.
- The x_0 node is called the bias unit or bias neuron; x_0 always equals 1. It may be drawn or omitted, depending on which is more convenient in a given example.
- In neural networks, "activation function" is another term for the nonlinear function g(z).
- In neural networks, the parameters θ are called the weights of the model.
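The single-neuron computation described above can be sketched in a few lines of numpy. The specific input and weight values below are hypothetical, chosen only to illustrate the shape of the computation:

```python
import numpy as np

def sigmoid(z):
    """The logistic activation function g(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(theta, x):
    """A single sigmoid neuron; x includes the bias unit x_0 = 1."""
    return sigmoid(theta @ x)

x = np.array([1.0, 2.0, 3.0])        # x_0 = 1 (bias unit), x_1 = 2, x_2 = 3
theta = np.array([-1.0, 0.5, 0.25])  # hypothetical weights
print(neuron(theta, x))              # sigmoid(-1 + 1 + 0.75) ≈ 0.679
```

The neuron simply forms the weighted sum θᵀx and passes it through g.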
2.2 Neural network model
A neural network is a group of neurons connected together.

A neural network has three kinds of layers:
- The input layer, which holds the input features x_1, x_2, …
- The hidden layer, whose values are not observed in the training set; they are neither x nor y. There may be more than one hidden layer, and any layer that is neither the input layer nor the output layer is called a hidden layer.
- The output layer, which outputs the final computed value of the hypothesis h_θ(x).
How the neural network model computes its output:

- Activation: the value computed and output by a specific neuron.
- θ^(j): the weight matrix controlling the mapping from layer j to layer j+1.
- For example, in the figure, θ^(1) is the parameter matrix controlling the mapping from the three input units to the three hidden units; it is a 3 × 4 matrix.
- If a neural network has s_j units in layer j and s_(j+1) units in layer j+1, then θ^(j), which controls the mapping from layer j to layer j+1, is an s_(j+1) × (s_j + 1) matrix.
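The dimension rule above can be checked with a small helper. This function is a hypothetical convenience for verifying shapes, not part of the original notes:

```python
def theta_shape(s_j, s_j_plus_1):
    """Shape of the weight matrix theta^(j) mapping layer j (s_j units)
    to layer j+1 (s_j_plus_1 units)."""
    return (s_j_plus_1, s_j + 1)  # the +1 column multiplies the bias unit

# 3 input units -> 3 hidden units, as in the figure:
print(theta_shape(3, 3))  # (3, 4), i.e. the 3 x 4 matrix mentioned above
```

The extra column exists because each unit in layer j+1 also receives the bias unit a_0 = 1 from layer j.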
The mathematical definition of the neural network's hypothesis function:
A neural network defines a function h_θ(x) mapping from input x to output y. These hypothesis functions are parameterized by θ, so simply by changing θ we obtain different hypothesis functions.
3. Vectorized computation in neural networks

- For convenience of notation and computation, the input x at the input layer can be regarded as the activation of the first layer, i.e. a^(1) = x.
- The weighted linear combination of the inputs x_1, x_2, x_3 into a particular neuron is denoted z.
- a_0^(2) is called the bias unit and equals 1.
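The vectorized forward pass described above can be sketched as follows: at each layer, prepend the bias unit, compute z = θ·a, and apply g elementwise. The network sizes and random weights are hypothetical, used only to make the sketch runnable:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(thetas, x):
    """Vectorized forward propagation: a^(1) = x, then
    z^(j+1) = theta^(j) a^(j) and a^(j+1) = g(z^(j+1))."""
    a = x  # a^(1) = x
    for theta in thetas:
        a = np.concatenate(([1.0], a))  # prepend the bias unit a_0 = 1
        z = theta @ a
        a = sigmoid(z)
    return a  # h_theta(x)

rng = np.random.default_rng(0)
theta1 = rng.normal(size=(3, 4))  # 3 input units -> 3 hidden units
theta2 = rng.normal(size=(1, 4))  # 3 hidden units -> 1 output unit
h = forward([theta1, theta2], np.array([1.0, 0.5, -0.5]))
print(h)  # h_theta(x), a single value in (0, 1)
```

Because every layer is a matrix-vector product followed by an elementwise sigmoid, the whole forward pass avoids explicit per-neuron loops.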
4. The relationship between neural networks and logistic regression

- In a neural network, the final part is in fact a logistic regression algorithm.
- However, the neural network does not directly use the original x_1, x_2, …, x_n as input features; instead it uses the learned values a_1, a_2, …, a_n as input features.
- The neural network does not use the raw input features x_1, x_2, …, x_n to train the logistic regression; instead it trains the logistic model on the learned features a_1, a_2, …, a_n. In this way, complex and varied nonlinear hypothesis functions can be obtained.
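The point above can be made concrete: the output layer is literally a logistic regression applied to the hidden-layer activations a rather than to the raw inputs x. The activation values and weights below are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical hidden-layer activations; a_0 = 1 is the bias unit.
a = np.array([1.0, 0.3, 0.8, 0.6])

# Hypothetical output-layer weights theta^(2).
theta_out = np.array([-0.5, 1.0, -2.0, 1.5])

# The output unit is exactly logistic regression on the learned features a:
h = sigmoid(theta_out @ a)
print(h)  # ≈ 0.289
```

Since the hidden activations a are themselves learned functions of x, this final logistic regression operates on flexible, data-driven features, which is what gives the network its power to represent nonlinear hypotheses.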
If you found this article helpful, please give it a like as encouragement.
Follow me, and let's study and make progress together!