Notes on numerical calculation - iterative solution of linear equations
2022-06-30 10:23:00 【Yangxiangrui】
Three classical iterative methods, with Python examples
1. Matrix limit: let $\{A^{(k)}\}$ be a sequence of matrices and $A = (a_{ij}) \in R^{n \times n}$. If

$$\lim_{k \rightarrow \infty} a_{ij}^{(k)} = a_{ij}, \quad (i, j = 1, \cdots, n)$$

then $\{A^{(k)}\}$ converges to $A$, written $\lim_{k \rightarrow \infty} A^{(k)} = A$.
Equivalence of operator norms: there exist $c_1, c_2 > 0$ such that for every $A \in R^{n \times n}$ and any operator norm $||\cdot||_i$,

$$c_1 ||A||_\infty \leq ||A||_i \leq c_2 ||A||_\infty$$
Norms and limits (where $||\cdot||$ denotes any operator norm):

$$\lim_{k \rightarrow \infty} A^{(k)} = A \Leftrightarrow \lim_{k \rightarrow \infty} ||A^{(k)} - A|| = 0$$

Limits and matrix-vector products: $\lim_{k \rightarrow \infty} A^{(k)} = A$ if and only if $\lim_{k \rightarrow \infty} A^{(k)} \vec{x} = A \vec{x}$ for every $\vec{x}$.
2. Spectral radius (the largest eigenvalue in absolute value): let $A \in R^{n \times n}$ have eigenvalues $\lambda_i \ (i = 1, \cdots, n)$; then $\rho(A) = \max_{1 \leq i \leq n} |\lambda_i|$ is called the spectral radius of $A$.
Spectral radius and operator norms:
(1) $\rho(A) \leq ||A||$, i.e. the spectral radius is no greater than any operator norm
(2) $||A||_2 = \sqrt{\rho(A^TA)}$; in particular, $||A||_2 = \rho(A)$ when $A$ is symmetric
Spectral radius and limits: for $B = (b_{ij}) \in R^{n \times n}$, $\lim_{k \rightarrow \infty} B^k = 0 \Leftrightarrow \rho(B) < 1$.
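This criterion is easy to check numerically. A minimal NumPy sketch (the matrix $B$ below is an arbitrary illustration, not taken from these notes):

```python
import numpy as np

# An arbitrary 2x2 matrix whose eigenvalues are 0.6 and 0.3, so rho(B) < 1
B = np.array([[0.5, 0.2],
              [0.1, 0.4]])

rho = max(abs(np.linalg.eigvals(B)))   # spectral radius rho(B)
B50 = np.linalg.matrix_power(B, 50)    # B^50 should be close to the zero matrix

print(rho)                             # ~0.6
print(np.linalg.norm(B50, np.inf))     # tiny: the powers of B vanish
```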
3. Basic form of the iterative method: for the original system $A\vec{x} = \vec{b}$, iterate $\vec{x}^{(k + 1)} = B \vec{x}^{(k)} + \vec{f}$.

Algorithm: first-order stationary iteration
Input: initial value $\vec{x}_0$, $B$, $\vec{f}$; Output: $\vec{x}$

$$\begin{aligned} &\vec{x} = \vec{x}_0\\ &while(\text{stopping criterion not met})\\ &\quad \vec{x} = B\vec{x} + \vec{f}\\ &End \end{aligned}$$
4. Necessary and sufficient condition for global convergence: consider the iteration $\vec{x}_{k + 1} = B \vec{x}_k + \vec{f}$ with $I - B$ nonsingular. The sequence $\{\vec{x}_k\}$ converges for every initial value $\vec{x}_0$ $\Leftrightarrow$ $\rho(B) < 1$. Under this condition the limit $\vec{x}^*$ of $\{\vec{x}_k\}$ is the unique solution of $\vec{x} = B \vec{x} + \vec{f}$.
Error bounds: for the iteration $\vec{x}_{k + 1} = B \vec{x}_k + \vec{f}$, if $||B|| = q < 1$ in some operator norm, then
(1) the iteration converges globally
(2) $||\vec{x}_k - \vec{x}^*|| \leq q^k ||\vec{x}_0 - \vec{x}^*||$
(3) $||\vec{x}_k - \vec{x}^*|| \leq \frac{q}{1 - q} ||\vec{x}_k - \vec{x}_{k - 1}||$
(4) $||\vec{x}_k - \vec{x}^*|| \leq \frac{q^k}{1 - q} ||\vec{x}_1 - \vec{x}_0||$
Order of convergence: let the sequence $\vec{x}_k \in R^n$ converge to $\vec{x}^*$ and write $\vec{e}_k = \vec{x}_k - \vec{x}^*$. If

$$\lim_{k \rightarrow \infty} \frac{||\vec{e}_{k + 1}||}{||\vec{e}_k||^p} = c \neq 0$$

then the iteration converges with order $p$. For the first-order stationary iteration, $\lim_{k \rightarrow \infty} \frac{||\vec{e}_{k + 1}||}{||\vec{e}_k||} = \rho(B)$.

Convergence rate: $R = -\log_{10} \rho(B)$.
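For a concrete system the asymptotic rate can be read directly off the iteration matrix. A minimal sketch for the Jacobi splitting $B = D^{-1}(L + U)$ (the matrix $A$ below is an illustrative choice):

```python
import numpy as np

A = np.array([[10.0,  1.0,  2.0],
              [ 1.0, 10.0,  2.0],
              [ 1.0,  1.0, 10.0]])

D = np.diag(np.diag(A))         # diagonal part of A
L = -np.tril(A, -1)             # splitting A = D - L - U
U = -np.triu(A, 1)

B = np.linalg.inv(D) @ (L + U)  # Jacobi iteration matrix
rho = max(abs(np.linalg.eigvals(B)))
R = -np.log10(rho)              # convergence rate

print(rho)                      # < 1, so the Jacobi iteration converges
print(R)
```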
5. Jacobi iteration
Idea: the $k$-th iterate is the $n$-dimensional vector $\vec{x}_k = (x_1^{(k)}, \cdots, x_n^{(k)})$. Taking a $3 \times 3$ matrix $A$ as an example, solve each equation for its diagonal unknown:

$$\begin{cases} a_{11} x_1 + a_{12}x_2 + a_{13}x_3 = b_1\\ a_{21} x_1 + a_{22}x_2 + a_{23}x_3 = b_2\\ a_{31} x_1 + a_{32}x_2 + a_{33}x_3 = b_3 \end{cases} \Rightarrow \begin{cases} x_1 = -\frac{1}{a_{11}}(a_{12}x_2 + a_{13}x_3) + \frac{b_1}{a_{11}}\\ x_2 = -\frac{1}{a_{22}}(a_{21}x_1 + a_{23}x_3) + \frac{b_2}{a_{22}}\\ x_3 = -\frac{1}{a_{33}}(a_{31}x_1 + a_{32}x_2) + \frac{b_3}{a_{33}} \end{cases}$$
Iterative formula:

$$\begin{aligned} &\begin{cases} A\vec{x} = \vec{b}\\ A = D - L - U \end{cases} \\ & \Rightarrow \vec{x}_{k + 1} = D^{-1}(L + U)\vec{x}_k + D^{-1}\vec{b}\\ & \Rightarrow \begin{cases} \vec{x}_{k + 1} = B\vec{x}_k + \vec{f}\\ B = D^{-1}(L + U)\\ \vec{f} = D^{-1}\vec{b} \end{cases} \end{aligned}$$

where $D$ is the diagonal matrix formed by the diagonal entries of $A$, and $-L$, $-U$ are the strictly lower and strictly upper triangular parts of $A$ (both with zero diagonal).
Algorithm: Input: initial value $\vec{x}$, $A$, $\vec{b}$; Output: $\vec{x}$

$$\begin{aligned} &while(\text{stopping criterion not met})\\ & \quad \vec{y} = \vec{x}\\ & \quad for \ i = 1 \rightarrow n\\ & \quad \quad x_i = (b_i - \sum_{j = 1}^{i - 1}{a_{ij} y_j} - \sum_{j = i + 1}^{n} a_{ij} y_{j}) / a_{ii}\\ & \quad End\\ &End \end{aligned}$$
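A minimal NumPy sketch of the algorithm above; the function name, test system, tolerance, and iteration cap are illustrative choices:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=1000):
    """Jacobi iteration: x_{k+1} = D^{-1}((L + U) x_k + b)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    d = np.diag(A)                 # diagonal entries of A
    R = A - np.diagflat(d)         # off-diagonal part, i.e. -(L + U)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d    # every component uses the previous iterate
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant system, so Jacobi converges; exact solution (1, 1, 1)
A = [[10.0, 1.0, 2.0], [1.0, 10.0, 2.0], [1.0, 1.0, 10.0]]
b = [13.0, 13.0, 12.0]
x = jacobi(A, b, [0.0, 0.0, 0.0])
```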
6. Gauss-Seidel iteration (G-S iteration)
Idea: use the components already updated in the current sweep:

$$\begin{cases} x_1^{(k+1)} = -\frac{1}{a_{11}}(a_{12}x_2^{(k)} + a_{13}x_3^{(k)}) + \frac{b_1}{a_{11}}\\ x_2^{(k + 1)} = -\frac{1}{a_{22}}(a_{21} x_1^{(k + 1)} + a_{23}x_3^{(k)}) + \frac{b_2}{a_{22}}\\ x_3^{(k + 1)} = -\frac{1}{a_{33}}(a_{31}x_1^{(k + 1)} + a_{32}x_2^{(k + 1)}) + \frac{b_3}{a_{33}} \end{cases}$$
Iterative formula:

$$\begin{aligned} &\begin{cases} A\vec{x} = \vec{b}\\ A = D - L - U \end{cases} \\ & \Rightarrow \vec{x}_{k + 1} = D^{-1}(L\vec{x}_{k + 1} + U\vec{x}_k) + D^{-1}\vec{b}\\ & \Rightarrow \vec{x}_{k + 1} = (D - L)^{-1}U\vec{x}_k + (D - L)^{-1}\vec{b}\\ & \Rightarrow \begin{cases} \vec{x}_{k + 1} = B\vec{x}_k + \vec{f}\\ B = (D - L)^{-1}U\\ \vec{f} = (D - L)^{-1}\vec{b} \end{cases} \end{aligned}$$
Algorithm: Input: initial value $\vec{x}$, $A$, $\vec{b}$; Output: approximate solution $\vec{x}$

$$\begin{aligned} &while(\text{stopping criterion not met})\\ & \quad for \ i = 1 \rightarrow n\\ & \quad \quad x_i = (b_i - \sum_{j = 1}^{i - 1}{a_{ij} x_j} - \sum_{j = i + 1}^{n} a_{ij} x_{j}) / a_{ii}\\ & \quad End\\ &End \end{aligned}$$
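A minimal NumPy sketch of the G-S sweep; overwriting $\vec{x}$ in place is what lets each update see the newest components (function name and test system are illustrative choices):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-10, max_iter=1000):
    """Gauss-Seidel iteration: update x in place, component by component."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s_lower = A[i, :i] @ x[:i]          # components already updated this sweep
            s_upper = A[i, i + 1:] @ x[i + 1:]  # components still from the previous sweep
            x[i] = (b[i] - s_lower - s_upper) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Same strictly diagonally dominant system as before; exact solution (1, 1, 1)
A = [[10.0, 1.0, 2.0], [1.0, 10.0, 2.0], [1.0, 1.0, 10.0]]
b = [13.0, 13.0, 12.0]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0])
```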
7. Successive over-relaxation method (SOR iteration)
Idea: let $\vec{x}_{k + 1}'$ be the iterate produced by the G-S method; the SOR iterate is the weighted average $\vec{x}_{k + 1} = (1 - w)\vec{x}_k + w\vec{x}_{k + 1}'$.
Iterative formula:

$$\begin{aligned} &\begin{cases} A\vec{x} = \vec{b}\\ A = D - L - U \end{cases} \\ & \Rightarrow \vec{x}_{k + 1} = (1 - w)\vec{x}_k + w[D^{-1}(L\vec{x}_{k + 1} + U\vec{x}_k) + D^{-1}\vec{b}]\\ & \Rightarrow \vec{x}_{k + 1} = (D - wL)^{-1}((1-w)D + wU)\vec{x}_k + (D - wL)^{-1}w\vec{b}\\ & \Rightarrow \begin{cases} \vec{x}_{k + 1} = B\vec{x}_k + \vec{f}\\ B = (D - wL)^{-1}((1-w)D + wU)\\ \vec{f} = (D - wL)^{-1}w\vec{b} \end{cases} \end{aligned}$$
Algorithm: Input: initial value $\vec{x}$, $A$, $\vec{b}$, parameter $w$; Output: approximate solution $\vec{x}$

$$\begin{aligned} &while(\text{stopping criterion not met})\\ & \quad for \ i = 1 \rightarrow n\\ & \quad \quad x_i = (1 - w)x_i + w(b_i - \sum_{j = 1}^{i - 1}{a_{ij} x_j} - \sum_{j = i + 1}^{n} a_{ij} x_{j}) / a_{ii}\\ & \quad End\\ &End \end{aligned}$$
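A minimal NumPy sketch of SOR; the symmetric positive definite test matrix and the choice $w = 1.1$ are illustrative (for SPD systems any $0 < w < 2$ converges):

```python
import numpy as np

def sor(A, b, x0, w=1.1, tol=1e-10, max_iter=1000):
    """SOR: each component is a weighted average of its old value and the G-S update."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] = (1 - w) * x[i] + w * gs   # relaxation step
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Symmetric positive definite system; exact solution (1, 1, 1)
A = [[4.0, 1.0, 1.0], [1.0, 4.0, 1.0], [1.0, 1.0, 4.0]]
b = [6.0, 6.0, 6.0]
x = sor(A, b, [0.0, 0.0, 0.0], w=1.1)
```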
8. Necessary and sufficient condition for Jacobi convergence: let $A$ be symmetric with positive diagonal entries $a_{ii} > 0 \ (i = 1, \cdots, n)$. The Jacobi method for $A\vec{x} = \vec{b}$ converges $\Leftrightarrow$ both $A$ and $2D - A$ are positive definite, where $D$ is the diagonal matrix formed by the diagonal entries of $A$.
Corollary: let $B$ be the Jacobi iteration matrix. If $||B||_\infty < 1$ or $||B||_1 < 1$, then the G-S iteration converges.
9. Reducible matrix: let $A = (a_{ij}) \in R^{n \times n}$. If there exists a permutation matrix $P$ (a product of elementary permutation matrices) such that

$$P^TAP = \left[ \begin{matrix} A_{11}& A_{12}\\ O& A_{22} \end{matrix} \right]$$

where $A_{11}$ and $A_{22}$ are square blocks of order at least 1, then $A$ is a reducible matrix; otherwise $A$ is irreducible.
Diagonal dominance: if $A$ is strictly diagonally dominant, or irreducibly weakly diagonally dominant, then $A$ is nonsingular.
10. Convergence conditions for the G-S/SOR iterations:
(1) If $A$ is strictly diagonally dominant, or irreducibly weakly diagonally dominant, then both the Jacobi and G-S iterations converge
(2) Under the same conditions as (1), the SOR iteration converges for relaxation factor $0 < w \leq 1$
Convergence properties for positive definite matrices:
(1) if $A$ is symmetric positive definite, then the G-S iteration converges, as does SOR for $0 < w < 2$
(2) if the SOR iteration converges, then $w \in (0, 2)$
11. Steepest descent method (gradient descent)
Derivation:
(1) By the variational principle, for symmetric positive definite $A$, solving $A\vec{x} = \vec{b}$ is equivalent to finding the minimum point of $\phi(\vec{x}) = \frac{1}{2} \vec{x}^TA\vec{x} - \vec{b}^T\vec{x}$
(2) Iteration: $\vec{x}_{k + 1} = \vec{x}_k + \alpha_k \vec{p}_k$, where $\alpha_k$ is the step size and $\vec{p}_k$ the search direction
(3) Direction: the direction of fastest decrease is the negative gradient, $-\nabla \phi(\vec{x}) = -(\frac{\partial \phi}{\partial x_1}, \cdots, \frac{\partial \phi}{\partial x_n})^T = \vec{b} - A\vec{x}$
(4) Step size:

$$\begin{aligned} \alpha_k &= \arg \min_\alpha \frac{1}{2} (\vec{x}_k + \alpha \vec{p}_k)^T A (\vec{x}_k + \alpha \vec{p}_k) - \vec{b}^T (\vec{x}_k + \alpha \vec{p}_k)\\ &= \arg \min_\alpha \frac{1}{2} \alpha^2 \vec{p}_k^TA\vec{p}_k - \alpha\vec{r}_k^T\vec{p}_k + \phi(\vec{x}_k) \qquad (\vec{r}_k = \vec{b} - A\vec{x}_k)\\ &\Rightarrow \alpha_k = \frac{\vec{r}_k^T \vec{p}_k}{\vec{p}_k^T A\vec{p}_k} \end{aligned}$$
Algorithm: Input: initial value $\vec{x}$, $A$, $\vec{b}$; Output: approximate solution $\vec{x}$ (here $\vec{p}_k = \vec{r}_k$)

$$\begin{aligned} &\vec{r} = \vec{b} - A\vec{x}\\ &while(\text{stopping criterion not met})\\ &\quad \alpha = \vec{r}^T\vec{r} / (\vec{r}^T A\vec{r})\\ &\quad \vec{x} = \vec{x} + \alpha \vec{r}\\ &\quad \vec{r} = \vec{r} - \alpha A\vec{r}\\ &End \end{aligned}$$
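A minimal NumPy sketch of the algorithm above, assuming a symmetric positive definite $A$ (the 2x2 test system and tolerance are illustrative choices):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10000):
    """Steepest descent for SPD A: search along the residual r = b - Ax."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    r = b - A @ x
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)  # exact line search along r
        x = x + alpha * r
        r = r - alpha * Ar          # update residual without recomputing b - Ax
    return x

# Symmetric positive definite system; exact solution (1/11, 7/11)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = steepest_descent(A, b, [0.0, 0.0])
```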
12. Conjugate gradient method
Idea: the first step is the same as in steepest descent; after that, each search direction is generated from a linear combination of two vectors, i.e. $\vec{x}_{k + 1} = \vec{x}_k + \alpha (\xi \vec{r}_k + \eta \vec{p}_{k - 1})$
Deriving the search direction: let $f(\xi, \eta) = \phi(\vec{x}_k + \xi \vec{r}_k + \eta \vec{p}_{k - 1})$ and set $\frac{\partial f}{\partial \xi} = \frac{\partial f}{\partial \eta} = 0$, which gives

$$\begin{cases} \xi \vec{r}_k^T A \vec{r}_k + \eta \vec{r}_k^TA \vec{p}_{k - 1} = \vec{r}_k^T\vec{r}_k\\ \xi \vec{r}_k^T A \vec{p}_{k - 1} + \eta \vec{p}_{k - 1}^T A \vec{p}_{k - 1} = 0 \end{cases}$$

$$\Rightarrow \vec{p}_k = \frac{1}{\xi} (\vec{x}_{k + 1} - \vec{x}_k) = \vec{r}_k + \frac{\eta}{\xi} \vec{p}_{k - 1} = \vec{r}_{k} + \beta_{k - 1} \vec{p}_{k - 1}, \qquad \beta_{k - 1} = \frac{\eta}{\xi} = -\frac{\vec{r}^T_k A \vec{p}_{k - 1}}{\vec{p}_{k - 1}^T A \vec{p}_{k - 1}}$$

The step-size derivation is unchanged, giving $\vec{x}_{k + 1} = \vec{x}_k + \alpha_k \vec{p}_k$ with $\alpha_k = \frac{\vec{r}_k^T \vec{p}_k}{\vec{p}_k^T A \vec{p}_k}$
Algorithm: Input: initial value $\vec{x}_0$, $A$, $\vec{b}$; Output: $\vec{x}_k$

$$\begin{aligned} &\vec{r}_0 = \vec{b} - A\vec{x}_0\\ &\vec{p}_0 = \vec{r}_0\\ &k = 0\\ &while(\text{stopping criterion not met})\\ &\quad \alpha_k = \vec{r}^T_k\vec{p}_k / (\vec{p}^T_k A \vec{p}_k)\\ &\quad \vec{x}_{k + 1} = \vec{x}_k + \alpha_k \vec{p}_k\\ &\quad \vec{r}_{k + 1} = \vec{r}_k - \alpha_k A\vec{p}_k \qquad // \text{next, compute the step size and direction for the next iteration}\\ &\quad \beta_k = -\vec{r}_{k + 1}^T A \vec{p}_k / (\vec{p}_k^T A \vec{p}_k) \\ &\quad \vec{p}_{k + 1} = \vec{r}_{k + 1} + \beta_k \vec{p}_k\\ &\quad k = k + 1\\ &End \end{aligned}$$
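A minimal NumPy sketch following the update formulas above, assuming a symmetric positive definite $A$ (the test system is illustrative; in exact arithmetic CG terminates in at most $n$ steps):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """Conjugate gradient using the alpha and beta formulas from the notes."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    r = b - A @ x
    p = r.copy()
    if max_iter is None:
        max_iter = len(b)              # at most n steps in exact arithmetic
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = (r @ p) / (p @ Ap)     # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        beta = -(r @ Ap) / (p @ Ap)    # makes p_{k+1} A-conjugate to p_k
        p = r + beta * p
    return x

# Same SPD test system as for steepest descent; exact solution (1/11, 7/11)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])
```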