
Introduction to deep learning

2022-06-23 04:28:00 Falling flowers and rain

1. What is deep learning?

Before introducing deep learning, let's first look at the relationship between artificial intelligence, machine learning, and deep learning:

[Figure: the relationship between artificial intelligence, machine learning, and deep learning]

Machine learning is one way to realize artificial intelligence, and deep learning is a subset of machine learning; in other words, deep learning is one method of realizing machine learning. The main differences between deep learning and traditional machine learning algorithms are shown in the figure below:

[Figure: how traditional machine learning differs from deep learning]

Traditional machine learning algorithms rely on manually designed features and hand-built feature extraction, whereas deep learning methods need no manual feature engineering: they rely on the algorithm to extract features automatically. Deep learning imitates the way the human brain works, learning from experience to acquire knowledge. This is also why deep learning is often regarded as a black box with poor interpretability.
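To make the contrast concrete, here is a minimal sketch (not from the original article) that classifies handwritten digits both ways: once with a few hand-crafted statistics fed to logistic regression, and once with a small network that learns its own features from the raw pixels. It assumes TensorFlow 2.x and scikit-learn are installed; the particular statistics, layer sizes, and epoch count are illustrative choices.

```python
# Contrast: manual feature engineering vs. features learned from raw pixels.
# Assumes TensorFlow 2.x and scikit-learn; all specific choices are illustrative.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# --- Traditional machine learning: a human decides which features to extract ---
def hand_crafted_features(images):
    # Toy manual features: per-image mean, variance, and the fraction of
    # "ink" pixels in the top and bottom halves of the image.
    flat = images.reshape(len(images), -1)
    top = images[:, :14, :].reshape(len(images), -1)
    bottom = images[:, 14:, :].reshape(len(images), -1)
    return np.stack([flat.mean(1), flat.var(1),
                     (top > 0.5).mean(1), (bottom > 0.5).mean(1)], axis=1)

clf = LogisticRegression(max_iter=1000)
clf.fit(hand_crafted_features(x_train), y_train)
print("hand-crafted features:", clf.score(hand_crafted_features(x_test), y_test))

# --- Deep learning: the hidden layer learns a representation from raw pixels ---
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),    # learned feature extractor
    tf.keras.layers.Dense(10, activation="softmax"),  # classifier head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, verbose=0)
print("learned features:", model.evaluate(x_test, y_test, verbose=0)[1])
```

The exact accuracies will vary, but the point is where the features come from: in the first pipeline a human decides what to measure, while in the second the hidden layer learns the representation during training.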

With the rapid development of computer hardware and software, we can now use deep learning to simulate how the human brain interprets data, including images, text, and audio. At present, the main application areas of deep learning include:

  • Smartphones

  • Speech recognition
    For example, Apple's intelligent voice assistant Siri.


  • Machine translation
    Google has embedded deep learning into Google Translate, which supports instant translation across more than 100 languages.


  • Photo translation

  • Autonomous driving

Of course, deep learning can also be found in other fields, such as risk control, security, smart retail, healthcare, and recommendation systems.

In this course, we mainly introduce two parts:

  • Applications of the deep learning framework TensorFlow
  • Deep neural networks
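As a preview of both parts, here is a minimal sketch (not from the original course materials, assuming TensorFlow 2.x is installed): it shows the framework's tensors and automatic differentiation, which is the machinery that later trains deep neural networks, and the definition of a tiny network whose layer sizes are arbitrary illustrative choices.

```python
# A tiny preview of the two topics: TensorFlow as a framework (tensors + autodiff)
# and a deep neural network built on top of it. Assumes TensorFlow 2.x.
import tensorflow as tf

# 1) Tensors and automatic differentiation: compute y = x^2 + 2x and dy/dx.
x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)                    # track the constant so gradients flow through it
    y = x ** 2 + 2.0 * x
print(tape.gradient(y, x).numpy())   # dy/dx = 2x + 2 = 8.0 at x = 3

# 2) A deep neural network is layers stacked on top of this machinery;
#    training repeats "forward pass + gradient + parameter update" many times.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),  # hidden layer
    tf.keras.layers.Dense(1),                      # output layer
])
model.summary()  # prints the layer shapes and parameter counts
```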

2. Development history (overview)


  • Deep learning is not actually a new idea. The neural network technology it relies on originated in the 1950s and was known as the perceptron. At the time, single-layer perceptrons were typically used; despite their simple structure, they were believed to be capable of solving complex problems. The perceptron was later shown to have serious limitations, because it can only learn linearly separable functions and is helpless even against a simple, linearly inseparable problem such as XOR (exclusive OR); a short code sketch after this timeline illustrates this limitation. In 1969, Marvin Minsky published the book "Perceptrons", which made two famous points: 1. a single-layer perceptron is of little use, and multi-layer perceptrons are needed to solve complex problems; 2. there was no effective algorithm for training them.

  • In the 1980s, the invention of the back-propagation algorithm for artificial neural networks (also known as the BP algorithm) brought new hope to machine learning and set off a wave of machine learning based on statistical models, a wave that continues to this day. It was found that with the BP algorithm, an artificial neural network can learn statistical regularities from a large number of training samples and use them to predict unknown events. This statistics-based approach outperformed the earlier systems built on hand-written rules in many respects. The artificial neural networks of this period, although called multi-layer perceptrons (Multi-Layer Perceptron), were in fact shallow models with only a single hidden layer of nodes.

  • In the 1990s, various shallow machine learning models were proposed, such as the support vector machine (SVM, Support Vector Machines), Boosting, and maximum entropy methods (such as LR, Logistic Regression). Structurally, these models can basically be regarded as having one hidden layer of nodes (such as SVM and Boosting) or no hidden layer at all (such as LR). They achieved great success both in theoretical analysis and in applications. By comparison, because its theoretical analysis is difficult and its training demands a great deal of experience and skill, the shallow artificial neural network was relatively quiet during this period.

  • In 2006, Geoffrey Hinton and his student Ruslan Salakhutdinov formally put forward the concept of deep learning. In an article published in the world's top academic journal Science, they described in detail a solution to the "vanishing gradient" problem: pre-train the network layer by layer with unsupervised learning, then fine-tune it with the supervised back-propagation algorithm. The proposal of this deep learning method immediately caused a great stir in academia: many world-renowned universities, led by Stanford University and the University of Toronto, invested enormous manpower and funding in deep learning research, and the field quickly spread to industry.

  • In 2012, the team led by Geoffrey Hinton won the famous ImageNet image recognition competition outright with the deep learning model AlexNet. AlexNet uses the ReLU activation function, which largely resolves the vanishing gradient problem, and uses GPUs to greatly speed up model training. In the same year, the deep neural network (DNN) technology jointly led by the famous Stanford professor Andrew Ng and the top computer expert Jeff Dean achieved astonishing results in image recognition, successfully reducing the error rate in the ImageNet evaluation from 26% to 15%. The success of deep learning algorithms in world-class competitions once again drew the attention of academia and industry to the field.

  • In 2016, AlphaGo, developed by Google on the basis of deep learning, defeated Lee Sedol, one of the world's top Go players, by a score of 4:1, and enthusiasm for deep learning surged. AlphaGo later played many more world-class Go masters and won every match, which showed that in the game of Go, machines based on deep learning technology had surpassed human beings.


  • In 2017, AlphaGo Zero, an upgraded version of AlphaGo based on reinforcement learning, was unveiled. Using a "start from scratch", self-taught learning paradigm with no human game records, it easily beat the previous AlphaGo by a score of 100:0. Its successor, AlphaZero, went on to master chess and other board games as well, a true board-game "prodigy". In the same year, deep learning algorithms also made remarkable achievements in medicine, finance, art, autonomous driving, and many other fields, so some experts regard 2017 as the year in which deep learning, and even artificial intelligence as a whole, developed most rapidly.

  • In 2019, natural language models based on the Transformer continued to grow and spread. The Transformer is a neural network architecture for language modeling that improves the quality of NLP on almost all tasks. Google even adopted it as one of the main relevance signals in its search engine, which it described as one of its most important updates in years.

  • In 2020, deep learning extended to even more application scenarios, such as detecting standing water and road-surface collapses. During the epidemic, deep learning was also applied in intelligent outbound-call systems, crowd temperature screening, and face recognition with masks.
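As promised above, here is a minimal sketch (not from the original article, assuming TensorFlow 2.x) of the XOR limitation that stalled the single-layer perceptron, and of how a multi-layer perceptron trained with back-propagation overcomes it; the layer sizes and training settings are illustrative choices.

```python
# Minimal sketch: a single-layer perceptron vs. a multi-layer perceptron on XOR.
# Assumes TensorFlow 2.x; layer sizes and hyperparameters are illustrative.
import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")  # XOR truth table

def train_and_score(model):
    model.compile(optimizer=tf.keras.optimizers.Adam(0.1),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=500, verbose=0)
    return model.evaluate(X, y, verbose=0)[1]

# Single-layer perceptron: no hidden layer, so only a linear decision boundary.
single_layer = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Multi-layer perceptron: one hidden layer, trained end-to-end by back-propagation.
multi_layer = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# A linear boundary can classify at most 3 of the 4 XOR points correctly,
# so the single-layer model cannot exceed 0.75; the MLP typically reaches 1.0.
print("single-layer accuracy:", train_and_score(single_layer))
print("multi-layer accuracy:", train_and_score(multi_layer))
```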


Copyright notice
This article was written by [Falling flowers and rain]. If you repost it, please include a link to the original. Thank you.
https://yzsam.com/2022/174/202206222253257387.html