How to safely eat apples on the edge of a cliff? DeepMind & OpenAI give the answer with 3D safe reinforcement learning
2022-07-05 01:15:00 【QbitAI】
From Aofei Temple
Qubits | Official account QbitAI
DeepMind and OpenAI have jointly demonstrated a new piece of work on safe reinforcement learning.
They took the 2D safe RL model ReQueST to a more practical 3D scene.
Keep in mind that ReQueST was originally used only for 2D tasks such as navigation and 2D racing, where it learns from human-provided safe trajectories how to keep agents from "self-harm".
△ Caption: the original ReQueST 2D navigation task (avoid the red area) and racing task
But problems in practical 3D environments are more complex: robots performing tasks need to avoid obstacles in their workspace, and self-driving cars need to avoid driving into ditches.
So the question arises: can ReQueST, built for 2D tasks, work in a complex 3D environment? And can the quality and quantity of safe trajectory data that humans provide in a 3D environment meet the needs of training?
To answer these two questions, DeepMind and OpenAI came up with a more complex dynamics model and a reward model that incorporates human feedback, successfully migrating ReQueST to a 3D environment and taking it a step closer to real application.
Safety also improved: in experiments, the number of unsafe actions taken by the agent dropped to one tenth of the baseline's.
How to get an intuitive feel for it? Let's take a look in the simulated 3D environment.
In the scene above, the upper left side of the room is a cliff. The agent needs to wait until the green lights on both sides of the room disappear, then try to eat three apples.
One of the apples can only be eaten by pressing a button to open a door.
In the demo video, the agent presses the button, opens the gate, and successfully eats the locked-away apple in one smooth sequence of operations.
Let's see how it does it.
How the 3D version of the safe reinforcement learning model is trained
Building on ReQueST, the problems DeepMind and OpenAI had to solve were a dynamics model and a reward model suited to 3D scenes.
Let's first look at the roles these two play in the overall process.
The figure below shows the training process of the new model on the apple-eating task.
The light blue boxes represent the steps involving the dynamics model. Starting from the top row, humans provide some safe trajectories that avoid the red danger areas.
The dynamics model is trained on these trajectories and then used to generate some random trajectories.
Moving to the bottom row, humans provide feedback on these random trajectories in the form of reward sketches; the sketches are then used to train the reward model from scratch, and both models are continually optimized.
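The loop just described (train a dynamics model on human safe trajectories, use it to generate candidate trajectories, have humans sketch rewards over them, then fit a reward model) can be sketched in miniature. Everything below is a toy stand-in of my own: 2D points instead of images, a mean-step "dynamics model", and a distance-based simulated "human"; none of these names or shapes come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_dynamics(safe_trajectories):
    """Toy 'dynamics model': the average per-step motion in the safe data."""
    steps = np.concatenate([np.diff(t, axis=0) for t in safe_trajectories])
    return steps.mean(axis=0)

def rollout(dynamics_step, start, horizon, noise=0.1):
    """Generate a hypothetical trajectory with the learned dynamics."""
    traj = [np.asarray(start, dtype=float)]
    for _ in range(horizon):
        traj.append(traj[-1] + dynamics_step + noise * rng.standard_normal(2))
    return np.stack(traj)

def human_reward_sketch(traj, apple=np.array([5.0, 5.0])):
    """Simulated human feedback: +1 near the apple, -1 far away."""
    dist = np.linalg.norm(traj - apple, axis=1)
    return np.where(dist < 2.0, 1.0, -1.0)

# 1. Humans provide safe trajectories (here: straight lines to the apple).
safe = [np.linspace([0, 0], [5, 5], 20) for _ in range(10)]
# 2. Train the dynamics model on them.
step = train_dynamics(safe)
# 3. Use the dynamics model to generate candidate trajectories...
candidates = [rollout(step, [0, 0], 20) for _ in range(5)]
# 4. ...which humans annotate with reward sketches; these become the
#    regression targets a reward model would then be trained on.
sketches = [human_reward_sketch(t) for t in candidates]
```

Note that the agent never has to act unsafely in the real environment: risky behavior only ever happens inside rollouts of the learned dynamics model.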
Next, we introduce these two models .
This time, the dynamics model DeepMind and OpenAI used is an LSTM that predicts future image observations from action sequences and past image observations.
The model is similar to the one in ReQueST, with slightly larger encoder and deconvolutional decoder networks, and is trained on the mean squared error between real image observations and predictions.
Most importantly, this loss is computed on multi-step predictions into the future from each step, so the dynamics model stays consistent over long rollouts.
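The effect of that multi-step loss can be illustrated with a minimal numpy construction (my own toy, not the paper's code): from each time step the model is rolled forward on its own predictions, and every intermediate prediction is compared with the real observation, so compounding errors show up directly in the loss.

```python
import numpy as np

def multi_step_mse(predict, states, actions, horizon):
    """MSE accumulated over open-loop rollouts of length `horizon`.

    From each time step t, the model is rolled forward `horizon` steps
    on its OWN predictions, and every intermediate prediction is
    compared with the real observation.
    """
    total, count = 0.0, 0
    T = len(states)
    for t in range(T - horizon):
        s = states[t]
        for k in range(1, horizon + 1):
            s = predict(s, actions[t + k - 1])   # feed the prediction back in
            total += np.mean((s - states[t + k]) ** 2)
            count += 1
    return total / count

# Toy setup: true dynamics s' = s + a, model slightly biased (0.9 * a).
T = 12
actions = np.ones((T, 1))
states = np.cumsum(np.vstack([[0.0], actions[:-1]]), axis=0)
biased = lambda s, a: s + 0.9 * a

one_step = multi_step_mse(biased, states, actions, horizon=1)
five_step = multi_step_mse(biased, states, actions, horizon=5)
```

A one-step loss barely notices the model's small bias, while the five-step loss penalizes it heavily because the error compounds; that pressure is what keeps long imagined rollouts usable for planning.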
The resulting training curves are shown below, with the number of steps on the horizontal axis and the loss on the vertical axis; curves of different colors correspond to different numbers of training trajectories:
In addition, for the reward model, DeepMind and OpenAI trained an 11-layer residual convolutional network with 2.2 million parameters.
The input is a 96x72 RGB image, the output is a scalar reward prediction, and the loss is again the mean squared error.
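The shape of such a network (image in, residual blocks, one scalar out) can be sketched in a few lines of numpy. This is a deliberately tiny stand-in: it uses 1x1 pointwise convolutions and four blocks for brevity, whereas the real network is an 11-layer residual CNN with about 2.2M parameters; all weight shapes here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """Pointwise convolution: mix channels independently at every pixel."""
    return np.einsum("hwc,cd->hwd", x, w)

def residual_block(x, w1, w2):
    """y = x + f(x): the skip connection is what makes deep stacks trainable."""
    h = np.maximum(conv1x1(x, w1), 0.0)          # ReLU
    return x + conv1x1(h, w2)

def reward_net(image, blocks, w_out):
    """Stack of residual blocks, then global pooling down to one scalar."""
    h = image
    for w1, w2 in blocks:
        h = residual_block(h, w1, w2)
    return float(np.mean(h) * w_out)             # scalar reward prediction

C = 3
image = rng.random((72, 96, C))                  # a 96x72 RGB observation
blocks = [(0.1 * rng.standard_normal((C, C)),
           0.1 * rng.standard_normal((C, C))) for _ in range(4)]
r = reward_net(image, blocks, w_out=1.0)
```

With zero weights a residual block reduces to the identity, which is why adding more blocks cannot make the network worse than a shallower one at initialization.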
In this network, the reward sketches from human feedback also play a very important role.
A reward sketch is simply a manual scoring of the reward value over time.
As shown in the figure below, the upper part is the sketch given by a human: in the predicted observations, when an apple is in view the reward value is 1, and when the apple fades out of sight the reward becomes -1.
These sketches are used to tune the reward model network.
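Following the labeling rule in the figure (+1 while an apple is in view, -1 once it fades), a reward sketch reduces to dense per-frame regression targets for the reward network. A toy sketch-to-loss pipeline, with made-up numbers standing in for the model's predictions:

```python
import numpy as np

def sketch_from_visibility(apple_visible):
    """Turn a reward sketch into dense per-frame targets:
    +1 while the apple is in view, -1 once it fades from sight."""
    return np.where(np.asarray(apple_visible), 1.0, -1.0)

def sketch_loss(predicted_rewards, sketch):
    """MSE between the reward model's per-frame predictions
    and the human-drawn sketch."""
    return float(np.mean((np.asarray(predicted_rewards) - sketch) ** 2))

visible = [True, True, True, False, False]   # apple leaves view at frame 3
targets = sketch_from_visibility(visible)    # [1, 1, 1, -1, -1]
loss = sketch_loss([0.8, 1.0, 0.9, -0.5, -1.0], targets)
```

Because the sketch gives a target at every frame rather than only at the end of an episode, it is a much denser training signal than the sparse labels compared against below.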
How effective the 3D version of the safe reinforcement learning model is
Next, let's see how the new model compares with other models and baselines.
The results are shown in the figure below, where different difficulties correspond to different scene sizes.
The left side of the figure shows the number of times the agent fell off the cliff; the right side shows the number of apples eaten.
Note that in the legend, ReQueST (ours) denotes results where the training set also contains human-provided unsafe paths,
while ReQueST (safe-only) denotes results trained using only the safe paths in the training set.
In addition, ReQueST (sparse) is the result of training without reward sketches, using sparse labels instead.
It can be seen that although the model-free baseline ate all the apples, it did so at the cost of a great deal of safety.
The ReQueST agent, on average, ate two of the three apples while falling off the cliff only one tenth as often as the baseline, an outstanding result.
As for the reward models, ReQueST trained on reward sketches and ReQueST trained on sparse labels differ greatly in effect:
the sparse-label version, on average, failed to eat even one apple.
It seems DeepMind and OpenAI have indeed delivered improvements on both points.
Reference links:
[1]https://www.arxiv-vanity.com/papers/2201.08102/
[2]https://deepmind.com/blog/article/learning-human-objectives-by-evaluating-hypothetical-behaviours