Particle filter learning record
2022-06-12 15:02:00 【From spring to winter】
1. Overview
The particle filter is a nonparametric implementation of the Bayes filter. Its starting point is to represent the posterior probability with a set of particles sampled from the posterior distribution. The advantage is that it can represent all kinds of irregular distributions and can also cope with nonlinear transformations.
Parametric vs. nonparametric estimation: consider the following example. Suppose we are given some distribution. A parametric filter first decides that it is, say, a Gaussian, and then obtains its probability density function through parameters such as the mean and variance. A particle filter does not bother with the specific functional form of the distribution; instead, it draws a large number of samples from the distribution and uses the samples to describe it. So why can the particle filter do this?

Monte Carlo sampling: intuitively, suppose we can draw a series of samples from a target probability distribution; then we can use these samples to estimate certain properties of that distribution. This recalls the law of large numbers, and the relationship between frequency and probability in coin tossing. In a coin-toss experiment, if you only toss a few times, the counts of heads and tails show no regularity; but if you toss enough times, the observed frequencies of heads and tails get closer and closer to the true probabilities. The figure above (in the original post) gives an example: sampling from a one-dimensional Gaussian, samples are more likely to appear near the mean, so they are dense there; and the farther from the mean, the sparser the samples. If, for every sample collected, we add 1 at the corresponding position and then plot the frequencies, the resulting shape approaches the shape of the distribution once the sample size is large enough.
Conversely, even though we do not know the specific shape of the distribution, the denser the samples are within some interval, the higher the probability mass of that interval, and the more likely it is to contain a peak. Figuratively speaking, the samples let us deduce, in reverse, what the distribution looks like.
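As a quick sanity check of this frequency-approaches-probability idea, here is a minimal sketch (assuming NumPy; the unit Gaussian is just an illustrative choice): the fraction of samples that land within one standard deviation of the mean converges to the true probability of that interval as the sample count grows.

```python
# Minimal Monte Carlo sketch: estimate an interval probability of a
# 1-D Gaussian from sample frequencies (illustrative parameters).
import numpy as np

mu, sigma = 0.0, 1.0
rng = np.random.default_rng(0)

for n in (10, 1_000, 100_000):          # more samples -> better estimate
    samples = rng.normal(mu, sigma, size=n)
    # frequency of samples falling within one standard deviation of the mean
    freq = np.mean(np.abs(samples - mu) < sigma)
    print(f"n={n:>7}: P(|x - mu| < sigma) ~ {freq:.3f} (true value ~ 0.683)")
```

With n = 10 the estimate jumps around; by n = 100,000 it settles near 0.683, which is exactly the coin-toss intuition above.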
2. Particle filter algorithm
Algorithm flow
In the particle filter, the particle set is represented as follows:

X_t := { x_t^[1], x_t^[2], ..., x_t^[M] }

Here the particles are sampled from the posterior distribution, and each particle x_t^[m] is one possible hypothesis of the state at time t. The idea of the particle filter is thus to approximate the posterior distribution with the particle set. The probability that a state hypothesis is selected to join the particle set is proportional to its posterior probability:

x_t^[m] ~ p(x_t | z_{1:t}, u_{1:t})
Since particles near a peak (corresponding to the true value) are easier to collect, the more particles fall into a given range, the more likely the true state falls into that range. For the standard particle filter algorithm above, this property holds exactly only as the number of particles tends to infinity; for finite M, the particles are effectively drawn from a slightly different distribution. In practice, if the number of particles is not too small (say, at least 100), the difference is negligible.
The posterior at time t is described by the particle set at time t, and the posterior at time t-1 by the particle set at time t-1. Following the Bayes filter, the particle filter constructs the particle set recursively: the input is the particle set of the previous time step together with the control and the observation of the current time step, and the output is the current particle set.
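For reference, the Bayes filter recursion being approximated can be written as follows (this standard form is not displayed in the original post; eta denotes a normalizing constant):

```latex
% Bayes filter recursion: prediction with the motion model,
% then correction with the measurement model.
\overline{\mathrm{bel}}(x_t) = \int p(x_t \mid u_t, x_{t-1})\,\mathrm{bel}(x_{t-1})\,\mathrm{d}x_{t-1}
\mathrm{bel}(x_t) = \eta\, p(z_t \mid x_t)\,\overline{\mathrm{bel}}(x_t)
```

The particle filter realizes the prediction step by pushing particles through the motion model, and the correction step by weighting and resampling.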
(The original post shows the pseudocode of the standard particle filter algorithm here.)
Let's analyze this pseudocode step by step:
1. Inputs: the particle set from the previous time step, plus the current control and observation.
2. Define two empty particle sets.
3. Process the M particles of time t-1 one by one.
4. Apply the control (including noise) to each particle, sampling a new particle from the state-transition distribution. All M new particles form the prior set.
5. Compute the importance factor (i.e., the weight) for each new particle: the likelihood that the particle, in its current state, would produce the current observation. This will come up again later; it is very important.
6. Pair each particle with its importance factor.
7. End of the loop.
Resampling section:
8~11. Draw from the particle set M times to generate a new particle set of equal size. Which particle is selected depends on its weight: heavy particles are more likely to be drawn (possibly repeatedly). In this process the particles gradually gather in certain regions. When resampling ends, the new particle set forms the posterior set.
12. Return the particle set.
We see that the particles are not sampled directly from the posterior distribution. The trick of the algorithm lies precisely in resampling: even though the particles are not drawn directly from the posterior, after resampling the transformed particles obey the posterior distribution.
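To make the steps above concrete, here is a minimal sketch of one filter iteration for a 1-D state, assuming NumPy and simple additive-Gaussian motion and measurement models (these models, and all the parameter values, are illustrative assumptions, not part of the original post):

```python
# One iteration of the particle filter analysed above, for a 1-D state.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, u, z, motion_noise=0.5, meas_noise=1.0):
    M = len(particles)
    # Step 4: apply the control (with noise) to every particle, i.e. sample
    # from the state-transition distribution p(x_t | u_t, x_{t-1}).
    predicted = particles + u + rng.normal(0.0, motion_noise, size=M)
    # Step 5: importance factor = likelihood of the current observation,
    # p(z_t | x_t), here an unnormalized Gaussian measurement model.
    weights = np.exp(-0.5 * ((z - predicted) / meas_noise) ** 2)
    weights /= weights.sum()
    # Steps 8-11: resample M times with probability proportional to weight;
    # heavy particles may be drawn repeatedly.
    idx = rng.choice(M, size=M, p=weights)
    return predicted[idx]                 # step 12: the posterior set

# Usage: particles spread over [-10, 10], control u = 1, observation z = 5.
particles = rng.uniform(-10.0, 10.0, size=1000)
particles = particle_filter_step(particles, u=1.0, z=5.0)
print("posterior mean estimate:", particles.mean())
```

After a single step the particles already concentrate near the observation, which is the clustering behaviour described in steps 8~11.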
A further look at resampling
Suppose f is an unknown distribution, called the target distribution, and g is a known distribution, called the proposal distribution. The question now is: how can a particle set obtained from the proposal distribution g be made to obey the target distribution f? The figure in the original post makes this intuitive: first, a batch of particles is sampled from g, indicated there by blue vertical lines.
Then compute the degree of mismatch between f and g at each particle, i.e. the weight factor w:

w^[m] = f(x^[m]) / g(x^[m])
Then resample according to the weights. Because particles with tall vertical lines (corresponding to large weights) are easier to select, the resampled particles cluster around those lines; the distribution of the particle set changes and comes to obey the target distribution f.
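A minimal numerical sketch of this idea, with an assumed proposal g = N(0, 2) and target f = N(1, 0.5) (both distributions are illustrative choices, not from the original post): after weighting by f/g and resampling, the particle statistics match the target.

```python
# Importance resampling: turn samples from a proposal g into samples
# that follow a target f, using weights w = f/g.
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Sample a batch of particles from the known proposal g = N(0, 2).
particles = rng.normal(0.0, 2.0, size=50_000)

# Weight factor w = f/g: the "degree of mismatch" between the target
# f = N(1, 0.5) and the proposal g at each particle.
w = gauss_pdf(particles, 1.0, 0.5) / gauss_pdf(particles, 0.0, 2.0)
w /= w.sum()

# Resample according to the weights: high-weight particles are drawn
# more often, so the resampled set follows the target f.
resampled = rng.choice(particles, size=50_000, p=w)
print("target mean/std:    1.00 0.50")
print(f"resampled mean/std: {resampled.mean():.2f} {resampled.std():.2f}")
```

The printed statistics of the resampled set land close to the target's mean 1 and standard deviation 0.5, even though every particle was originally drawn from g.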


3. Summary
Why do others understand this so thoroughly…
Have a good weekend ~
Article reposted from: https://blog.csdn.net/setella/article/details/82912604