
How do you safely eat apples on the edge of a cliff? DeepMind & OpenAI give an answer with 3D safe reinforcement learning

2022-07-05 01:15:00 QbitAI

From Aofeisi
QbitAI | Official account QbitAI

This time, DeepMind and OpenAI have jointly shown off a nice piece of work on safe reinforcement learning models.

They have taken the two-dimensional safe RL model ReQueST into a more practical 3D scene.

Keep in mind that ReQueST was originally used only for two-dimensional tasks such as navigation and 2D racing, learning from human-provided safe trajectories how to keep the agent from "hurting itself".

(Figure: the original ReQueST 2D navigation task, where the agent avoids the red zone, and the racing task)

But the problems in practical 3D environments are more complex: for example, a robot performing a task needs to avoid obstacles as it works, and a self-driving car needs to avoid driving into a ditch.

So here comes the question: can ReQueST, built for 2D tasks, work in a complex 3D environment? Can the quality and quantity of safe-trajectory data that humans can provide in 3D meet the needs of training?

To solve these two problems, DeepMind and OpenAI came up with a more sophisticated dynamics model and a reward model that incorporates human feedback, successfully migrating ReQueST to 3D environments and taking a step toward real applications.

Safety has also improved: in experiments, the number of unsafe agent behaviors dropped to one-tenth of the baseline's.

How to get an intuitive feel for it? Let's take a look in the simulated 3D environment.

(Figure: the simulated 3D room with a cliff, green lights, and three apples)

In the scene above, the upper-left side of the room is a cliff. The agent needs to wait for the green lights on both sides of the room to go out, then try to eat three apples.

One of the apples can only be reached by pressing a button that opens a door.

In the demo video, the agent presses the button, opens the door, and successfully eats the locked-away apple in one smooth sequence of operations.

(Animation: the agent pressing the button and eating the apple)

Let's see how it does this.

How the 3D version of the safe reinforcement learning model is trained

Building on ReQueST, the problems DeepMind and OpenAI needed to solve were a dynamics model and a reward model suited to 3D scenes.

Let's first look at the roles these two models play in the overall pipeline.

The figure below shows the training process of the new model on the apple-eating task.

(Figure: training pipeline of the new model for the apple-eating task)

The light blue boxes mark the steps that involve the dynamics model. Starting from the top row, humans provide some safe trajectories that avoid the red danger zones.

The dynamics model is trained on these trajectories and is then used to generate some random trajectories.

Moving to the bottom row, humans then give feedback on these random trajectories in the form of reward sketches. These sketches are used to train an initial reward model, and both models are refined continuously.
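To make the loop concrete, here is a minimal Python outline of the pipeline just described. Every callable in it (collecting safe trajectories, querying humans for sketches, and so on) is a hypothetical placeholder passed in as an argument, not an API from the paper:

```python
def request_3d_training_loop(
    collect_safe_trajectories,   # placeholder: human-provided safe trajectories (top row)
    train_dynamics_model,        # placeholder: fits the dynamics model to trajectories
    sample_random_trajectories,  # placeholder: rolls out random actions in the learned model
    ask_for_reward_sketches,     # placeholder: humans annotate imagined trajectories
    train_reward_model,          # placeholder: fits the reward model to the sketches
    num_rounds=10,
):
    """Hedged outline of the ReQueST-style pipeline described above.
    All helpers are placeholders supplied by the caller."""
    safe_trajs = collect_safe_trajectories()
    dynamics = train_dynamics_model(safe_trajs, init=None)
    reward_model = None
    for _ in range(num_rounds):
        imagined = sample_random_trajectories(dynamics)        # bottom row starts here
        sketches = ask_for_reward_sketches(imagined)           # human feedback
        reward_model = train_reward_model(imagined, sketches, init=reward_model)
        dynamics = train_dynamics_model(safe_trajs, init=dynamics)  # keep refining both
    return dynamics, reward_model
```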

Next, let's look at each of these two models.

This time, the dynamics model used by DeepMind and OpenAI relies on an LSTM to predict future image observations from the action sequence and past image observations.

The model is similar to ReQueST's, with a somewhat larger encoder network and deconvolutional decoder network, and is trained with a mean-squared-error loss between the real and predicted image observations.

Most importantly, this loss is computed over multi-step predictions of the future from each step, so that the dynamics model stays consistent over long rollouts.
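As a rough illustration only, the PyTorch sketch below shows a dynamics model of this kind: a convolutional encoder, an action-conditioned LSTM, and a deconvolutional decoder trained with MSE between predicted and real frames. The layer sizes and the 72x96 frame layout are assumptions for this sketch, not the paper's exact architecture, and the multi-step open-loop loss is only noted in a comment:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMDynamicsModel(nn.Module):
    """Sketch of an LSTM dynamics model that predicts future image observations
    from past observations and an action sequence. Layer sizes are illustrative
    guesses, not the architecture reported in the paper."""

    def __init__(self, action_dim, latent_dim=256):
        super().__init__()
        # Encoder: 72x96 RGB frame -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 32 x 36 x 48
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 64 x 18 x 24
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 128 x 9 x 12
            nn.Flatten(),
            nn.Linear(128 * 9 * 12, latent_dim),
        )
        # Recurrent core conditioned on the action taken at each step
        self.lstm = nn.LSTM(latent_dim + action_dim, latent_dim, batch_first=True)
        # Deconvolutional decoder: hidden state -> predicted next frame
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 9 * 12), nn.ReLU(),
            nn.Unflatten(1, (128, 9, 12)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),       # -> 3 x 72 x 96
        )

    def forward(self, obs, actions):
        # obs: (B, T, 3, 72, 96), actions: (B, T, action_dim)
        B, T = obs.shape[:2]
        z = self.encoder(obs.flatten(0, 1)).view(B, T, -1)           # per-frame latents
        h, _ = self.lstm(torch.cat([z, actions], dim=-1))            # (B, T, latent_dim)
        pred = self.decoder(h.flatten(0, 1)).view(B, T, 3, 72, 96)   # predicted next frames
        return pred

def dynamics_loss(model, obs, actions):
    """Mean-squared error between predicted and real next observations.
    (Per the article, the real loss also scores multi-step open-loop
    predictions so the model stays consistent over long rollouts;
    that part is omitted here for brevity.)"""
    pred = model(obs[:, :-1], actions[:, :-1])
    return F.mse_loss(pred, obs[:, 1:])
```

A one-step teacher-forced loss like `dynamics_loss` above is the simplest variant; as the article notes, the actual model is scored on predictions several steps into the future from each step.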

The resulting training curves are shown below: the horizontal axis is the number of training steps, the vertical axis is the loss, and curves of different colors correspond to different orders of magnitude of trajectory counts:

(Figure: dynamics model training curves)

In addition, for the reward model, DeepMind and OpenAI trained an 11-layer residual convolutional network with 2.2 million parameters.

The input is a 96x72 RGB image, the output is a scalar reward prediction, and the loss is again mean squared error.
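A minimal sketch of such a reward network follows, assuming a standard residual-block layout; the channel widths and block count are guesses and will not reproduce the reported 2.2-million-parameter figure exactly:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic residual block with two 3x3 convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        h = self.relu(self.conv1(x))
        return self.relu(x + self.conv2(h))

class RewardModel(nn.Module):
    """Sketch of a residual convolutional reward model: one 96x72 RGB
    observation in, one scalar reward prediction out. Roughly 11 conv
    layers (1 stem + 5 residual blocks); sizes are illustrative only."""
    def __init__(self, channels=128):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU())
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(5)])
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1))

    def forward(self, obs):
        # obs: (B, 3, 72, 96) -> (B,) scalar reward prediction per frame
        return self.head(self.blocks(self.stem(obs))).squeeze(-1)
```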

For this network, the reward sketches from human feedback play a very important role.

A reward sketch is simply a manual annotation of the reward value over time.

As shown below, the upper part of the figure is a sketch drawn by a person: in the second half of the predicted observations an apple appears and the reward value is 1; once the apple fades out of sight, the reward drops to -1.

(Figure: an example human reward sketch over a predicted trajectory)

These sketches are then used to fit the reward model network.
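As a toy illustration of how a sketch might be consumed (the annotation format and loss here are assumptions, reusing the hypothetical `RewardModel` sketch above): the human-drawn curve is sampled into per-timestep reward targets, and the reward model is regressed onto them with mean squared error over the corresponding predicted frames.

```python
import torch
import torch.nn.functional as F

# Toy reward sketch for a 40-step imagined trajectory: 0 before the apple
# appears, +1 while it is in view (steps 20-29), -1 after it fades from sight.
# The exact values and timing are purely illustrative.
sketch = torch.cat([torch.zeros(20), torch.ones(10), -torch.ones(10)])  # (T,)

def reward_sketch_loss(reward_model, frames, sketch):
    """Regress per-frame reward predictions onto the human sketch.
    frames: (T, 3, 72, 96) predicted observations from the dynamics model."""
    pred = reward_model(frames)        # (T,) one scalar prediction per frame
    return F.mse_loss(pred, sketch)
```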

How well does the 3D version of the safe reinforcement learning model work

Next, let's see how the new model compares with other models and the baseline.

The results are shown in the figure below; different difficulty levels correspond to different scene sizes.

The left side of the figure shows how many times the agent fell off the cliff, and the right side shows how many apples it ate.

(Figure: cliff falls and apples eaten for each method, per difficulty level)

Note that in the legend, ReQueST (ours) denotes results trained on a set that also includes human-provided mistake trajectories.

ReQueST (safe-only) denotes results trained using only the safe trajectories.

In addition, ReQueST (sparse) is the result of training without reward sketches, using sparse labels instead.

As the figure shows, although the model-free baseline ate all the apples, it did so at the cost of a great deal of safety.

A ReQueST agent, on average, eats two of the three apples, and its number of cliff falls is only one-tenth of the baseline's, an outstanding result.

Comparing the reward models, ReQueST trained with reward sketches and ReQueST trained with sparse labels differ greatly in performance.

ReQueST trained with sparse labels cannot, on average, eat even one apple.

It appears that DeepMind and OpenAI have indeed delivered improvements on both fronts.

Reference links:

[1]https://www.arxiv-vanity.com/papers/2201.08102/
[2]https://deepmind.com/blog/article/learning-human-objectives-by-evaluating-hypothetical-behaviours

Original site

Copyright notice
This article was created by [QbitAI]; when reposting, please include a link to the original. Thank you.
https://yzsam.com/2022/02/202202141039062817.html