Notes on probability theory
2022-07-01 06:33:00 【Keep--Silent】
Contents
- Chapter 1 Random events and probability
- Chapter 2 Random variables and their probability distributions
- Chapter 3 Multidimensional random variables and their distributions
  - Section 1 Two-dimensional random variables and their distributions
  - Section 2 Independence of random variables
  - Section 3 Two-dimensional uniform and two-dimensional normal distributions
- Chapter 4 Numerical characteristics of random variables
- Chapter 5 The law of large numbers and the central limit theorem
- Chapter 6 Basic concepts of mathematical statistics
- Chapter 7 Parameter estimation
- Chapter 8
- Common formulas
probability theory
Chapter 1 Random events and probability
Section 1 Relations and operations on events
$A\subset B \Rightarrow A-B=A\bar{B}=\varnothing$
De Morgan's laws: $\overline{\mathop{\cap}\limits_{i=1}^{n} A_i}=\mathop{\cup}\limits_{i=1}^{n}\overline{A_i},\quad \overline{\mathop{\cup}\limits_{i=1}^{n} A_i}=\mathop{\cap}\limits_{i=1}^{n}\overline{A_i}$
Section 2 Probability formulas
Conditional probability
$P(B|A)=\cfrac{P(AB)}{P(A)}$
Independence
$P(AB)=P(A)P(B)$, equivalently $P(A|B)=P(A|\bar{B})$
Addition formula
$P(A\cup B)=P(A)+P(B)-P(AB)$
$P(A\cup B\cup C)=P(A)+P(B)+P(C)-P(AB)-P(AC)-P(BC)+P(ABC)$
Multiplication formula
$P(AB)=P(A)P(B|A)$
$P(A_1A_2\dots A_n)=P(A_1)P(A_2|A_1)\dots P(A_n|A_1A_2\dots A_{n-1})$
Law of total probability
$P(A)=\displaystyle\sum_{i=1}^n P(B_i)P(A|B_i)$
Bayes' formula
$P(B_k|A)=\cfrac{P(B_k)P(A|B_k)}{\sum\limits_{i=1}^n P(B_i)P(A|B_i)}$
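As a quick sanity check, the law of total probability and Bayes' formula can be evaluated exactly with rational arithmetic. The three-machine setup below (production shares and defect rates) is a hypothetical example, not from the notes:

```python
from fractions import Fraction

# Hypothetical: machines B1..B3 produce 50%, 30%, 20% of all parts,
# with defect rates 1%, 2%, 3%.  Let A = "a randomly chosen part is defective".
prior = [Fraction(1, 2), Fraction(3, 10), Fraction(1, 5)]   # P(Bi)
p_def = [Fraction(1, 100), Fraction(2, 100), Fraction(3, 100)]  # P(A|Bi)

# Total probability: P(A) = sum_i P(Bi) P(A|Bi)
p_a = sum(p * q for p, q in zip(prior, p_def))

# Bayes: P(Bk|A) = P(Bk) P(A|Bk) / P(A)
posterior = [p * q / p_a for p, q in zip(prior, p_def)]

print(p_a)          # 17/1000
print(posterior)    # [5/17, 6/17, 6/17]
```

Note that the posteriors sum to 1, as the partition $B_1,\dots,B_n$ requires.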
Classical probability and Bernoulli trials
$P(A)=\cfrac{n_A}{n}=\cfrac{\text{number of sample points in }A}{\text{total number of sample points}}$
A trial whose only outcomes are $A,\overline{A}$ is called a Bernoulli trial; repeating it $n$ times gives an $n$-fold Bernoulli experiment. If $P(A)=p$, the probability that $A$ occurs exactly $k$ times is $P=C_n^kp^k(1-p)^{n-k}$.
Supplement
$C_{n+1}^k=C_n^k+C_n^{k-1}$
Choosing $k$ of $n+1$ items = leaving out the current item (choose $k$ from the rest) + taking the current item (choose $k-1$ from the rest)
$C_{n+m}^k=\displaystyle\sum_{i=0}^kC_n^i\,C_m^{k-i}$
Choosing $k$ of $n+m$ items = choosing $i$ from the $n$ items and $k-i$ from the $m$ items, summed over $i$
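Both identities are easy to spot-check exhaustively with `math.comb`; the test ranges below are arbitrary:

```python
from math import comb

# Pascal's rule: C(n+1, k) = C(n, k) + C(n, k-1)
pascal_ok = all(comb(n + 1, k) == comb(n, k) + comb(n, k - 1)
                for n in range(1, 10) for k in range(1, n + 1))

# Vandermonde: C(n+m, k) = sum_{i=0}^{k} C(n, i) * C(m, k-i)
vandermonde_ok = all(
    comb(n + m, k) == sum(comb(n, i) * comb(m, k - i) for i in range(k + 1))
    for n, m, k in [(5, 4, 3), (6, 6, 5), (10, 3, 7)])

print(pascal_ok, vandermonde_ok)   # True True
```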
Chapter 2 Random variables and their probability distributions
Section 1 Random variables and their distribution functions
$F(x)=P(X\leq x)$
If there exists a non-negative integrable function $f(x)$ such that for every $x$, $F(x)=\int^x_{-\infty}f(t)\,\mathrm{d}t,\ -\infty<x<+\infty$, then $X$ is called a continuous random variable and $f(x)$ is its probability density function.
Section 2 Common distributions
$0$-$1$ distribution
$E(X)=p,\ D(X)=p(1-p)$
Binomial distribution (Bernoulli)
$X\sim B(n,p)$
$P(X=k)=C_n^kp^k(1-p)^{n-k}$
$E(X)=np,\ D(X)=np(1-p)$
Hypergeometric distribution
$P(X=k)=\cfrac{C_N^kC_M^{n-k}}{C_{N+M}^n}$
Geometric distribution
$P(X=k)=(1-p)^{k-1}p$
$E(X)=\cfrac{1}{p},\ D(X)=\cfrac{1-p}{p^2}$
Poisson distribution
$X\sim P(\lambda),\ \lambda>0$
$P(X=k)=\cfrac{\lambda^k}{k!}e^{-\lambda},\ k=0,1,2,3,\dots$
$E(X)=\lambda,\ D(X)=\lambda$
Poisson's theorem: in Bernoulli trials let $p_n$ denote the probability that $A$ occurs in a single trial; it depends on the total number of trials $n$ and decreases as $n$ grows. If $\lim\limits_{n\rightarrow\infty}np_n=\lambda$, then $\lim\limits_{n\rightarrow\infty}C_n^kp_n^k(1-p_n)^{n-k}=\cfrac{\lambda^k}{k!}e^{-\lambda}$.
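The convergence in Poisson's theorem can be watched numerically. This sketch fixes $np_n=\lambda=2$ and $k=3$ (hypothetical values) and compares the binomial probability with its Poisson limit as $n$ grows:

```python
from math import comb, exp, factorial

lam, k = 2.0, 3
poisson = lam**k / factorial(k) * exp(-lam)     # Poisson limit, ≈ 0.1804

approx = {}
for n in (10, 100, 10_000):
    p = lam / n                                 # chosen so that n * p_n = lam
    approx[n] = comb(n, k) * p**k * (1 - p)**(n - k)

print(poisson, approx)
```

The binomial value with $n=10$ is already close, and by $n=10000$ the two agree to several decimal places.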
Uniform distribution
$X\sim [a,b]$ or $X\sim (a,b)$
$f(x)=\left\{\begin{aligned} &\cfrac{1}{b-a},&a<x<b\\ &0,&\text{otherwise}\\ \end{aligned}\right.$
$F(x)=\left\{\begin{aligned} &0,&x<a\\ &\cfrac{x-a}{b-a},&a\leq x<b\\ &1,&b\leq x\\ \end{aligned}\right.$
$E(X)=\cfrac{a+b}{2},\ D(X)=\cfrac{(b-a)^2}{12}$
Exponential distribution
$X\sim E(\lambda)$, where $\lambda>0$
$f(x)=\left\{\begin{aligned} &\lambda e^{-\lambda x},&x>0\\ &0,&x\leq 0\\ \end{aligned}\right.$
$F(x)=\left\{\begin{aligned} &1-e^{-\lambda x},&x>0\\ &0,&x\leq 0\\ \end{aligned}\right.$
$E(X)=\cfrac{1}{\lambda},\ D(X)=\cfrac{1}{\lambda^2}$
Normal distribution
$X\sim N(\mu,\sigma^2)$
$f(x)=\cfrac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
$F(x)=\cfrac{1}{\sqrt{2\pi}\,\sigma}\displaystyle\int_{-\infty}^x e^{-\frac{(t-\mu)^2}{2\sigma^2}}\,\mathrm{d}t$
$E(X)=\mu,\ D(X)=\sigma^2$
Standard normal distribution
Take $\mu=0,\ \sigma=1$:
$\varphi(x)=\cfrac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}$
$\varPhi(x)=\cfrac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^x e^{-\frac{t^2}{2}}\,\mathrm{d}t$
$E(X)=0,\ D(X)=1$
Section 3 Distribution of a function of a random variable
$Y=g(X)$
If $X$ is a discrete random variable, $Y$ is also discrete.
If $X$ is a continuous random variable, $Y$ is (for the functions $g$ considered here) also continuous.
- Formula method
If $y=g(x)$ is monotone with inverse function $h(y)$, then
$f_Y(y)=\left\{\begin{aligned} &|h'(y)|f_X(h(y)),&a<y<b\\ &0,&\text{otherwise}\\ \end{aligned}\right.$
- Definition method
$F_Y(y)=P(Y\leq y)=P(g(X)\leq y)=\displaystyle\int\limits_{g(x)\leq y} f_X(x)\,\mathrm{d}x$
then $f_Y(y)=F'_Y(y)$
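The two methods must agree. A minimal numeric check, assuming $X\sim E(1)$ and the hypothetical monotone transform $Y=2X$:

```python
import math

# X ~ E(1): f_X(x) = e^{-x} for x > 0.  Take Y = g(X) = 2X, whose inverse is
# h(y) = y/2 with h'(y) = 1/2.
# Formula method: f_Y(y) = |h'(y)| f_X(h(y)) = 0.5 * exp(-y/2) for y > 0.
def f_Y(y):
    return 0.5 * math.exp(-y / 2)

# Definition method: F_Y(y) = P(2X <= y) = 1 - exp(-y/2) for y > 0,
# and f_Y should be its derivative.
def F_Y(y):
    return 1 - math.exp(-y / 2)

y = 1.7
numeric = (F_Y(y + 1e-6) - F_Y(y - 1e-6)) / 2e-6   # central difference
print(f_Y(y), numeric)                             # the two agree
```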
Chapter 3 Multidimensional random variables and their distributions
Section 1 Two-dimensional random variables and their distributions
Two-dimensional continuous random variable
$F(x,y)=\displaystyle\int_{-\infty}^x\int_{-\infty}^y f(u,v)\,\mathrm{d}u\,\mathrm{d}v$
$F_X(x)=F(x,+\infty)=\displaystyle\int_{-\infty}^x\int_{-\infty}^{+\infty} f(u,v)\,\mathrm{d}v\,\mathrm{d}u$
$f_{X|Y}(x|y)=\cfrac{f(x,y)}{f_Y(y)},\ f_Y(y)>0$
Section 2 Independence of random variables
Definition:
$X$ and $Y$ are mutually independent $\Leftrightarrow$ for all $x,y$: $P\{X\leq x,Y\leq y\}=P\{X\leq x\}P\{Y\leq y\}$, i.e. $F(x,y)=F_X(x)F_Y(y)$
Discrete random variables: $X,Y$ mutually independent $\Leftrightarrow P\{X=x_i,Y=y_j\}=P\{X=x_i\}P\{Y=y_j\}$, i.e. $p_{ij}=p_{i\cdot}\,p_{\cdot j}$
Continuous random variables: $X,Y$ mutually independent $\Leftrightarrow f(x,y)=f_X(x)f_Y(y)$
Section 3 Two-dimensional uniform and two-dimensional normal distributions
Two-dimensional uniform distribution
Let the region $G$ have area $A$:
$f(x,y)=\left\{\begin{aligned} &\cfrac{1}{A},&(x,y)\in G\\ &0,&\text{otherwise}\\ \end{aligned}\right.$
Two-dimensional normal distribution
$X\sim N(\mu_1,\sigma_1^2),\ Y\sim N(\mu_2,\sigma_2^2)$
$(X,Y)\sim N(\mu_1,\mu_2,\sigma_1^2,\sigma_2^2,\rho)$
$f(x,y)=\cfrac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp\left[-\cfrac{1}{2(1-\rho^2)}\left(\cfrac{(x-\mu_1)^2}{\sigma_1^2}-\cfrac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2}+\cfrac{(y-\mu_2)^2}{\sigma_2^2}\right)\right]$
where $\exp(t)$ denotes $e^t$
$X$ and $Y$ are mutually independent $\Leftrightarrow\rho=0$
Section 4 Distribution of a function $Z=g(X,Y)$ of two random variables
Core formula
$\begin{aligned} F_Z(z)&=P\{Z\leq z\}\\ &=P\{g(X,Y)\leq z\}\\ &=\iint\limits_{g(x,y)\leq z}f(x,y)\,\mathrm{d}x\,\mathrm{d}y \end{aligned}$
Distribution of $Z=X+Y$
$\begin{aligned} F_Z(z)&=\iint\limits_{x+y\leq z}f(x,y)\,\mathrm{d}x\,\mathrm{d}y\\ &=\int_{-\infty}^{+\infty}\mathrm{d}x\int_{-\infty}^{z-x}f(x,y)\,\mathrm{d}y=\int_{-\infty}^{+\infty}\mathrm{d}y\int_{-\infty}^{z-y}f(x,y)\,\mathrm{d}x\\ f_Z(z)&=\int_{-\infty}^{+\infty}f(x,z-x)\,\mathrm{d}x=\int_{-\infty}^{+\infty}f(z-y,y)\,\mathrm{d}y \end{aligned}$
When $X$ and $Y$ are mutually independent,
$\displaystyle f_Z(z)=\int_{-\infty}^{+\infty}f_X(x)f_Y(z-x)\,\mathrm{d}x=\int_{-\infty}^{+\infty}f_X(z-y)f_Y(y)\,\mathrm{d}y$
This is called the convolution formula, written $f_Z=f_X\ast f_Y$.
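For independent $X,Y\sim U(0,1)$ the convolution formula gives the triangular density $f_Z(z)=z$ on $(0,1)$ and $2-z$ on $(1,2)$. A quick numeric sketch, with a midpoint Riemann sum standing in for the integral:

```python
# X, Y independent ~ U(0,1); check f_Z(z) = ∫ f_X(x) f_Y(z-x) dx numerically.
def f_U(x):
    # density of U(0,1)
    return 1.0 if 0 < x < 1 else 0.0

def f_Z(z, n=20_000):
    # midpoint Riemann sum over the support [0, 1] of f_X
    h = 1.0 / n
    return h * sum(f_U((i + 0.5) * h) * f_U(z - (i + 0.5) * h) for i in range(n))

print(f_Z(0.3), f_Z(1.5))   # ≈ 0.3 and ≈ 0.5, matching the triangular density
```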
Distribution of $Z=\max(X,Y)$ and $Z=\min(X,Y)$ (for independent $X,Y$)
$Z=\max(X,Y)\leq z \Leftrightarrow \{X\leq z\}\cap\{Y\leq z\}$
$\begin{aligned} F_Z(z)&=P\{Z\leq z\}\\ &=P\{X\leq z\}P\{Y\leq z\}\\ &=F_X(z)F_Y(z) \end{aligned}$
$Z=\min(X,Y)\leq z \Leftrightarrow \{X\leq z\}\cup\{Y\leq z\}$
$\begin{aligned} F_Z(z)&=P\{Z\leq z\}\\ &=F_X(z)+F_Y(z)-F_X(z)F_Y(z) \end{aligned}$
Alternatively, since $\{X\leq z\}\cup\{Y\leq z\}$ is the complement of $\{X>z\}\cap\{Y>z\}$,
$\begin{aligned} F_Z(z)&=P\{Z\leq z\}\\ &=1-P\{X>z\}P\{Y>z\}\\ &=1-(1-F_X(z))(1-F_Y(z)), \end{aligned}$
which is the same as above.
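For independent $X,Y\sim U(0,1)$ these formulas give $F_{\max}(z)=z^2$ and $F_{\min}(z)=1-(1-z)^2$; a Monte Carlo sketch (the seed and sample size are arbitrary):

```python
import random

# Simulate independent X, Y ~ U(0,1) and compare empirical CDFs of max and min
# with F_max(z) = z^2 and F_min(z) = 1 - (1-z)^2.
random.seed(0)
n = 100_000
pairs = [(random.random(), random.random()) for _ in range(n)]

z = 0.6
emp_max = sum(max(x, y) <= z for x, y in pairs) / n
emp_min = sum(min(x, y) <= z for x, y in pairs) / n
print(emp_max, z**2)             # both ≈ 0.36
print(emp_min, 1 - (1 - z)**2)   # both ≈ 0.84
```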
Chapter 4 Numerical characteristics of random variables
Section 1 Mathematical expectation and variance of random variables
Expectation
Definition
Continuous case:
$E(X)=\displaystyle\int_{-\infty}^{+\infty}xf(x)\,\mathrm{d}x$
$E(Y)=E[g(X)]=\displaystyle\int_{-\infty}^{+\infty}g(x)f(x)\,\mathrm{d}x$
Properties
$E(aX+b)=aE(X)+b$
$E(X\pm Y)=E(X)\pm E(Y)$
$X,Y$ independent $\Rightarrow E(XY)=E(X)E(Y)$
Variance
Definition
$D(X)=E\{[X-E(X)]^2\}$
Standard deviation (mean square deviation): $\sigma(X)=\sqrt{D(X)}$
Properties
$D(aX+b)=a^2D(X)$
$X,Y$ independent $\Rightarrow D(X\pm Y)=D(X)+D(Y)$
$\begin{aligned} D(X)&=E\{[X-E(X)]^2\}\\ &=E[X^2-2XE(X)+E^2(X)]\\ &=E(X^2)-2E[XE(X)]+E^2(X)\\ &=E(X^2)-2E^2(X)+E^2(X)\\ &=E(X^2)-E^2(X) \end{aligned}$
Hence $E(X^2)-E^2(X)\geq 0$.
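The identity $D(X)=E(X^2)-E^2(X)$ can be verified exactly for a small discrete distribution; the three-point distribution below is a hypothetical example:

```python
from fractions import Fraction

# X takes values 0, 1, 2 with probabilities 1/4, 1/2, 1/4.
vals = [0, 1, 2]
probs = [Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)]

EX  = sum(p * v for v, p in zip(vals, probs))            # E(X)   = 1
EX2 = sum(p * v * v for v, p in zip(vals, probs))        # E(X^2) = 3/2
var_def = sum(p * (v - EX)**2 for v, p in zip(vals, probs))  # definition

print(var_def, EX2 - EX**2)   # 1/2 1/2
```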
Moments, covariance, and the correlation coefficient
Definitions
$k$-th origin moment of $X$: $E(X^k)$
$k$-th central moment of $X$: $E\{[X-E(X)]^k\}$
$(k+l)$-th mixed moment of $X$ and $Y$: $E(X^kY^l)$
$(k+l)$-th mixed central moment of $X$ and $Y$: $E\{[X-E(X)]^k[Y-E(Y)]^l\}$
Covariance: $Cov(X,Y)=E\{[X-E(X)][Y-E(Y)]\}$
Correlation coefficient: $\rho_{XY}=\cfrac{Cov(X,Y)}{\sqrt{D(X)}\sqrt{D(Y)}}$
Properties
$Cov(X,Y)=E(XY)-E(X)E(Y)$
$D(X\pm Y)=D(X)+D(Y)\pm 2Cov(X,Y)$
$Cov(aX,bY)=abCov(X,Y)$
$Cov(X_1+X_2,Y)=Cov(X_1,Y)+Cov(X_2,Y)$
$Cov(X,X)=E(X^2)-E(X)E(X)=D(X)$
$D(X)D(Y)=0 \Rightarrow \rho_{XY}=0$
$\rho_{XY}=0 \Rightarrow X,Y$ are uncorrelated
$|\rho_{XY}|=1 \Rightarrow Y=aX+b$ (with probability 1)
Independence vs. uncorrelatedness
- $X,Y$ mutually independent $\Rightarrow X,Y$ uncorrelated
- Two-dimensional normal random variable $(X,Y)$: $X,Y$ mutually independent $\Leftrightarrow\rho=0$
- Two-dimensional normal random variable $(X,Y)$: $X,Y$ mutually independent $\Leftrightarrow X,Y$ uncorrelated
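Outside the normal case, the converse of the first bullet fails. A standard counterexample, checked exactly with rational arithmetic: $X$ uniform on $\{-1,0,1\}$ and $Y=X^2$ are uncorrelated but clearly dependent:

```python
from fractions import Fraction

# X uniform on {-1, 0, 1}, Y = X^2: each (x, y) pair has probability 1/3.
p = Fraction(1, 3)
support = [(-1, 1), (0, 0), (1, 1)]

EX  = sum(x for x, _ in support) * p
EY  = sum(y for _, y in support) * p
EXY = sum(x * y for x, y in support) * p
cov = EXY - EX * EY                  # Cov(X,Y) = E(XY) - E(X)E(Y)

joint   = p                          # P(X=1, Y=1) = 1/3
product = p * Fraction(2, 3)         # P(X=1) P(Y=1) = 1/3 * 2/3 = 2/9
print(cov, joint == product)         # 0 False
```

So $\rho_{XY}=0$, yet the joint probability does not factor, which rules out independence.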
Chapter 5 The law of large numbers and the central limit theorem
Chebyshev's inequality
If $E(X)$ and $D(X)$ exist, then for any $\varepsilon>0$: $P\{|X-E(X)|\geq\varepsilon\}\leq\cfrac{D(X)}{\varepsilon^2}$
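Chebyshev's bound is usually loose. For $X\sim U(0,1)$, where $E(X)=1/2$, $D(X)=1/12$, the exact tail $P\{|X-1/2|\geq\varepsilon\}=1-2\varepsilon$ (for $\varepsilon\leq 1/2$) can be compared with the bound directly (a minimal sketch):

```python
# X ~ U(0,1): exact tail vs Chebyshev bound D(X)/eps^2 for several eps values.
for eps in (0.1, 0.25, 0.4, 0.5):
    exact = 1 - 2 * eps            # P{|X - 1/2| >= eps}, valid for eps <= 1/2
    bound = (1 / 12) / eps**2      # Chebyshev's upper bound
    print(eps, exact, bound)
    assert exact <= bound          # the bound always dominates the exact tail
```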
Convergence in probability
Let $X_1,X_2,\dots,X_n,\dots$ be a sequence of random variables and $A$ a constant. If for any $\varepsilon>0$
$\displaystyle\lim_{n\to+\infty}P\{|X_n-A|<\varepsilon\}=1$, then $X_1,X_2,\dots,X_n,\dots$ is said to converge in probability to the constant $A$, written $X_n\stackrel{P}{\longrightarrow}A$.
Chebyshev's law of large numbers
Let $X_1,X_2,\dots,X_n,\dots$ be a sequence of pairwise uncorrelated random variables with $D(X_i)\leq C$ ($C$ a constant, $i=1,2,\dots$). Then for any $\varepsilon>0$,
$\displaystyle\lim_{n\to+\infty}P\left\{\left|\cfrac{1}{n}\sum^n_{i=1}X_i-\cfrac{1}{n}\sum^n_{i=1}E(X_i)\right|<\varepsilon\right\}=1$
Bernoulli's law of large numbers
Let $X_n\sim B(n,p),\ n=1,2,\dots$. Then for any $\varepsilon>0$, $\displaystyle\lim_{n\to+\infty}P\left\{\left|\cfrac{X_n}{n}-p\right|<\varepsilon\right\}=1$
Khinchin's law of large numbers
Let $X_1,X_2,\dots,X_n,\dots$ be independent and identically distributed with $E(X_i)=\mu$ ($i=1,2,\dots$). Then for any $\varepsilon>0$,
$\displaystyle\lim_{n\to+\infty}P\left\{\left|\cfrac{1}{n}\sum^n_{i=1}X_i-\mu\right|<\varepsilon\right\}=1$
De Moivre–Laplace central limit theorem
If $X_n\sim B(n,p)$, then
$\displaystyle\lim_{n\to+\infty}P\left\{\cfrac{X_n-np}{\sqrt{np(1-p)}}\leq x\right\}=\Phi(x)$, where $\Phi(x)$ is the standard normal distribution function.
Lévy–Lindeberg central limit theorem
Let $X_1,X_2,\dots,X_n,\dots$ be independent and identically distributed with $E(X_i)=\mu,\ D(X_i)=\sigma^2$ ($i=1,2,\dots$). Then for any $x$,
$\displaystyle\lim_{n\to+\infty}P\left\{\cfrac{\displaystyle\sum^n_{i=1}X_i-n\mu}{\sqrt{n}\,\sigma}\leq x\right\}=\Phi(x)$, where $\Phi(x)$ is the standard normal distribution function.
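A simulation sketch of the Lévy–Lindeberg theorem, assuming $X_i\sim U(0,1)$ (so $\mu=1/2$, $\sigma^2=1/12$); the seed, $n$, and trial count are arbitrary:

```python
import math
import random

# Standardize sums of n i.i.d. U(0,1) variables and compare their empirical
# CDF with the standard normal CDF Phi at a couple of points.
random.seed(1)
n, trials = 50, 20_000

def standardized_sum():
    s = sum(random.random() for _ in range(n))
    return (s - n * 0.5) / math.sqrt(n / 12)

zs = [standardized_sum() for _ in range(trials)]

def Phi(x):
    # standard normal distribution function via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

for x in (0.0, 1.0):
    emp = sum(v <= x for v in zs) / trials
    print(x, emp, Phi(x))    # empirical and limiting values nearly agree
```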
Chapter 6 Basic concepts of mathematical statistics
Section 1 Population, sample, statistics, and sample numerical characteristics
$f_n(x_1,x_2,...,x_n)=\displaystyle\prod_{i=1}^n f(x_i)$
Sample statistics
- Sample mean $\overline{X}=\cfrac{1}{n}\displaystyle\sum_{i=1}^n X_i$
- Sample variance $S^2=\cfrac{1}{n-1}\displaystyle\sum_{i=1}^n(X_i-\overline{X})^2$
$D(\overline{X})=\cfrac{1}{n}D(X)$
$\begin{aligned} E\left[\sum_{i=1}^n(X_i-\overline{X})^2\right] &=E\left[\sum_{i=1}^n X_i^2-n\overline{X}^2\right]\\ &=\sum_{i=1}^n E(X_i^2)-nE(\overline{X}^2)\\ &=n[D(X)+E^2(X)]-n[D(\overline{X})+E^2(\overline{X})]\\ &=nD(X)-nD(\overline{X})\\ &=nD(X)-D(X)=(n-1)D(X) \end{aligned}$
That is, $E(S^2)=E\left[\cfrac{\displaystyle\sum_{i=1}^n(X_i-\overline{X})^2}{n-1}\right]=D(X)$, so the sample variance $S^2$ is an unbiased estimator of the population variance.
- Sample $k$-th origin moment $A_k=\cfrac{1}{n}\displaystyle\sum_{i=1}^n X_i^k,\ A_1=\overline{X}$
- Sample $k$-th central moment $B_k=\cfrac{1}{n}\displaystyle\sum_{i=1}^n(X_i-\overline{X})^k,\ B_2=\cfrac{n-1}{n}S^2$
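Unbiasedness of $S^2$ can also be checked exactly by brute-force enumeration. Here the population is a hypothetical 0-1 distribution with $p=1/2$ (so $D(X)=1/4$) and the sample size is $n=3$:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 2^n equally likely samples of size n from a fair 0-1
# population and average S^2 exactly.
n = 3
sigma2 = Fraction(1, 4)                      # D(X) for this population

ES2 = Fraction(0)
for sample in product([0, 1], repeat=n):
    xbar = Fraction(sum(sample), n)
    S2 = sum((Fraction(x) - xbar)**2 for x in sample) / (n - 1)
    ES2 += S2 * Fraction(1, 2**n)            # each outcome has probability 1/2^n

print(ES2, sigma2)   # 1/4 1/4
```

By contrast, averaging $B_2=\frac{n-1}{n}S^2$ the same way would give $\frac{n-1}{n}\sigma^2$, which is why $B_2$ is biased.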
Section 2 Sampling distributions of common statistics
$\chi^2$ distribution
Definition
$\chi^2$ distribution with $n$ degrees of freedom:
$\chi^2\sim\chi^2(n)$
$\chi^2=\displaystyle\sum_{i=1}^n X^2_i$, where the $X_i\sim N(0,1)$ are mutually independent
Properties
- $P\{\chi^2>\chi^2_\alpha(n)\}=\displaystyle\int_{\chi^2_\alpha(n)}^{+\infty}f(x)\,\mathrm{d}x=\alpha$
- $E(\chi^2)=n,\ D(\chi^2)=2n$
$t$ distribution
Definition
$t$ distribution with $n$ degrees of freedom:
$T\sim t(n)$
$X\sim N(0,1),\ Y\sim\chi^2(n)$, with $X$ and $Y$ independent
$T=\cfrac{X}{\sqrt{Y/n}}$
Properties
- $f(x)=f(-x)$; when $n$ is large enough, the distribution approaches $N(0,1)$
- $P\{T>t_\alpha(n)\}=\displaystyle\int_{t_\alpha(n)}^{+\infty}f(x)\,\mathrm{d}x=\alpha$
- $t_\alpha(n)=-t_{1-\alpha}(n)$
- $P\{|T|>t_{\frac{\alpha}{2}}(n)\}=\alpha$
$F$ distribution
Definition
$F$ distribution with degrees of freedom $(n_1,n_2)$:
$F\sim F(n_1,n_2)$
$X\sim\chi^2(n_1),\ Y\sim\chi^2(n_2)$, with $X$ and $Y$ independent
$F=\cfrac{X/n_1}{Y/n_2}$
Properties
- $P\{F>F_\alpha(n_1,n_2)\}=\displaystyle\int_{F_\alpha(n_1,n_2)}^{+\infty}f(x)\,\mathrm{d}x=\alpha$
- $\cfrac{1}{F}\sim F(n_2,n_1)$
- $F_\alpha(n_1,n_2)=\cfrac{1}{F_{1-\alpha}(n_2,n_1)}$
Sampling distributions from a normal population
One normal population
$X_i\sim N(\mu,\sigma^2)$
Sample mean $\overline{X}$, sample variance $S^2$:
- $\overline{X}\sim N(\mu,\frac{\sigma^2}{n}),\ U=\cfrac{\overline{X}-\mu}{\sigma/\sqrt{n}}\sim N(0,1)$
- $\overline{X}$ and $S^2$ are mutually independent, and $\chi^2=\cfrac{(n-1)S^2}{\sigma^2}=\cfrac{\displaystyle\sum_{i=1}^n(X_i-\overline{X})^2}{\sigma^2}\sim\chi^2(n-1)$
- $T=\cfrac{\overline{X}-\mu}{S/\sqrt{n}}\sim t(n-1)$
- $\chi^2=\cfrac{\displaystyle\sum_{i=1}^n(X_i-\mu)^2}{\sigma^2}\sim\chi^2(n)$
Two normal populations
$X_i\sim N(\mu_1,\sigma_1^2),\ Y_j\sim N(\mu_2,\sigma_2^2),\ 1\leq i\leq n_1,\ 1\leq j\leq n_2$
Sample means $\overline{X}$ and $\overline{Y}$, sample variances $S_1^2$ and $S_2^2$:
- $\overline{X}-\overline{Y}\sim N(\mu_1-\mu_2,\cfrac{\sigma_1^2}{n_1}+\cfrac{\sigma_2^2}{n_2})$, so $U=\cfrac{(\overline{X}-\overline{Y})-(\mu_1-\mu_2)}{\sqrt{\frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}}}\sim N(0,1)$
- If $\sigma_1^2=\sigma_2^2$, then
$\left\{\begin{aligned} &T=\cfrac{(\overline{X}-\overline{Y})-(\mu_1-\mu_2)}{S_w\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}\sim t(n_1+n_2-2)\\ &S_w^2=\cfrac{(n_1-1)S_1^2+(n_2-1)S_2^2}{n_1+n_2-2}\\ \end{aligned}\right.$
- $F=\cfrac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2}\sim F(n_1-1,n_2-1)$
Chapter 7 Parameter estimation
Section 1 Point estimation
For an unknown parameter $\theta$, $\widehat{\theta}(X_1,X_2,\dots,X_n)$ is called an estimator.
If $E(\widehat{\theta})=\theta$, then $\widehat{\theta}$ is an unbiased estimator of $\theta$.
$\left\{\begin{aligned} &E(\widehat{\theta}_1)=E(\widehat{\theta}_2)=\theta\\ &D(\widehat{\theta}_1)\leq D(\widehat{\theta}_2)\\ \end{aligned}\right.\Rightarrow\widehat{\theta}_1$ is more efficient than $\widehat{\theta}_2$
If $\widehat{\theta}(X_1,X_2,\dots,X_n)$ converges in probability to $\theta$, then $\widehat{\theta}$ is a consistent estimator of $\theta$.
Section 2 Estimation methods and interval estimation
Method of moments
Definition: use sample moments to estimate the corresponding population moments, and functions of sample moments to estimate the corresponding functions of population moments, then solve for the parameters to be estimated.
Steps: suppose the distribution of the population $X$ contains unknown parameters $\theta_1,\theta_2,\dots,\theta_k$ and that $\alpha_l=E(X^l)$ exists; $\alpha_l$ is a function of $\theta_1,\theta_2,\dots,\theta_k$, written $\alpha_l(\theta_1,\theta_2,\dots,\theta_k),\ l=1,2,\dots,k$. The sample $l$-th origin moment is $A_l=\cfrac{\displaystyle\sum_{i=1}^n X_i^l}{n}$. Setting $\alpha_l(\theta_1,\theta_2,\dots,\theta_k)=A_l,\ l=1,2,\dots,k$ and solving gives estimates of $\theta_1,\theta_2,\dots,\theta_k$.
If $g(a_1,a_2)$ is a function of the first moment $a_1$ and the second moment $a_2$, and $\widehat{a}_1,\widehat{a}_2$ are the moment estimates of $a_1,a_2$, then $g(\widehat{a}_1,\widehat{a}_2)$ is the moment estimate of $g(a_1,a_2)$.
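A minimal sketch of the steps for $X\sim E(\lambda)$: since $\alpha_1=E(X)=1/\lambda$, setting $\alpha_1=A_1=\bar{x}$ gives $\widehat{\lambda}=1/\bar{x}$. The data values below are hypothetical:

```python
# Method of moments for X ~ E(lambda): alpha_1 = E(X) = 1/lambda, so solving
# alpha_1 = A_1 = x̄ yields lambda_hat = 1 / x̄.
data = [0.5, 1.2, 0.3, 2.0, 0.8, 1.1, 0.6, 1.5]   # hypothetical observations

xbar = sum(data) / len(data)    # sample first origin moment A_1
lam_hat = 1 / xbar              # moment estimate of lambda
print(lam_hat)
```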
Maximum likelihood estimation
$X_i$ is a sample of $X$, $x_i$ is the sample value, and $\theta$ is the parameter to be estimated
Likelihood function of the parameter $\theta$
Discrete: $L(\theta)=L(x_1,x_2,\dots,x_n;\theta)=\displaystyle\prod_{i=1}^n p(x_i;\theta)$
Continuous: $L(\theta)=L(x_1,x_2,\dots,x_n;\theta)=\displaystyle\prod_{i=1}^n f(x_i;\theta)$
Maximum likelihood estimation
Definition
For a given sample $(x_1,x_2,\dots,x_n)$, the parameter value $\widehat{\theta }=\widehat{\theta } (x_1,x_2,\dots,x_n)$ at which the likelihood function $L(x_1,x_2,\dots,x_n;\theta)$ attains its maximum is called the maximum likelihood estimate of the unknown parameter $\theta$; the parameter value $\widehat{\theta } =\widehat{\theta }(X_1,X_2,\dots,X_n)$ at which the corresponding likelihood function $L(X_1,X_2,\dots,X_n;\theta)$ attains its maximum is called the maximum likelihood estimator of $\theta$. Both are collectively referred to as the maximum likelihood estimation of $\theta$
Steps
To find $\widehat{\theta }$, solve the likelihood equation $\cfrac{\mathrm{d}L(\theta )}{\mathrm{d}\theta }=0$ or $\cfrac{\mathrm{d}\ln L(\theta )}{\mathrm{d}\theta }=0$
Suppose the parameters to be estimated are $\theta_1$ and $\theta_2$; then the likelihood equations are
$\left\{\begin{aligned} &\cfrac{\partial L(\theta )}{\partial \theta_1 }=0\\ &\cfrac{\partial L(\theta )}{\partial \theta_2 }=0\\ \end{aligned}\right.$ or $\left\{\begin{aligned} &\cfrac{\partial \ln L(\theta )}{\partial \theta_1 }=0\\ &\cfrac{\partial \ln L(\theta )}{\partial \theta_2 }=0\\ \end{aligned}\right.$
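For a normal population $N(\mu,\sigma^2)$, solving these two likelihood equations in closed form gives $\widehat{\mu}=\overline{X}$ and $\widehat{\sigma^2}=\frac{1}{n}\sum_{i=1}^n(X_i-\overline{X})^2$. A minimal sketch (the seeded sample below is an illustrative assumption):

```python
import random

def mle_normal(sample):
    # Setting d(ln L)/d(mu) = 0 and d(ln L)/d(sigma^2) = 0 for N(mu, sigma^2)
    # yields these closed-form maximum likelihood estimates.
    n = len(sample)
    mu_hat = sum(sample) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in sample) / n  # note: /n, not /(n-1)
    return mu_hat, sigma2_hat

random.seed(0)
sample = [random.gauss(5.0, 2.0) for _ in range(100000)]
mu_hat, sigma2_hat = mle_normal(sample)  # close to (5.0, 4.0)
```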
Interval estimation
Confidence interval
Definition of confidence interval: let $\theta$ be an unknown parameter of the population $X$, and let $X_1,X_2,...,X_n$ be a sample from the population. For a given $\alpha\ (0<\alpha<1)$, if two statistics $\widehat{\theta }_1,\widehat{\theta }_2$ satisfy $P(\widehat{\theta } _1<\theta <\widehat{\theta } _2)=1-\alpha$, then the random interval $(\widehat{\theta } _1,\widehat{\theta } _2)$ is called a confidence interval for the parameter $\theta$ at confidence level $1-\alpha$
Interval estimation of the parameters of a single normal population

Interval estimation of the parameters of two normal populations

Chapter viii.
Common formula
$\displaystyle \int_{0}^{+\infty} x^n e^{-x} dx=n!$
$\displaystyle \int_{0}^{+\infty} e^{-\frac{x^2}{2}}dx =\cfrac{\sqrt{\pi}}{\sqrt{2}}, \qquad \displaystyle \int_{0}^{+\infty}xe^{-\frac{x^2}{2}}dx =1$
$\displaystyle \int_{0}^{+\infty} txe^{-tx} dx =\cfrac{1}{t}\ (t>0)$
$\displaystyle\sum_{i=1}^n (X_i- \overline{X})^2=\sum_{i=1}^n X_i^2 -\sum_{i=1}^n \overline{X}^2=\sum_{i=1}^n X_i^2 -n \overline{X}^2$
$\displaystyle\sum_{i=1}^n (X_i- \overline{X})^2=\sum_{i=1}^n (X_i-\mu)^2- n(\overline{X}-\mu)^2$
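The integral identities above can be sanity-checked numerically; a small sketch using composite Simpson integration (truncating the infinite upper limit at $x=60$, where the integrands are negligible):

```python
import math

def simpson(f, a, b, steps=20000):
    # Composite Simpson's rule; steps must be even.
    h = (b - a) / steps
    total = f(a) + f(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Check: integral_0^inf x^n e^{-x} dx = n!, here with n = 3 (expect 6)
approx = simpson(lambda x: x ** 3 * math.exp(-x), 0.0, 60.0)

# Check: integral_0^inf t x e^{-tx} dx = 1/t, here with t = 3 (expect 1/3)
t = 3.0
approx_t = simpson(lambda x: t * x * math.exp(-t * x), 0.0, 60.0)
```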