[Linear Algebra] Understanding eigenvalues and eigenvectors
2022-06-09 09:41:00 【Poor and poor to an annual salary of millions】
1 Popular explanation
Definition: for an $n \times n$ matrix $M$, there may exist nonzero $n$-dimensional vectors $V_1, V_2, \ldots, V_n$ whose direction is unchanged when multiplied by the matrix; only their size changes. That is, such vectors satisfy

$$M V_i = \lambda_i V_i$$

where the scalar $\lambda_i$ is called an eigenvalue of the matrix $M$, and the vector $V_i$ is called the eigenvector corresponding to that eigenvalue.

A square matrix thus comes with a set of eigenvalues and eigenvectors. Dropping the subscripts, the formula above is usually written as

$$A x = \lambda x$$

where $A$ is a matrix, $x$ is an eigenvector, and $\lambda$ is an eigenvalue. An eigenvalue is just a number, and multiplying a vector by a number is essentially a scaling of the vector. For example, if $\lambda = 2$ and $x = [2, 3]^T$, then $\lambda x = [4, 6]^T$: compared with the original $x$, the transformed vector has doubled in size without changing direction. Since the two sides of the formula are equal, the effect of multiplying the matrix by such a vector is to stretch the vector while keeping its direction. [Reference 1]
So the popular explanation of eigenvalues and eigenvectors is (made concrete in the sketch after this list):

- A matrix is a transformation acting on vectors.
- An eigenvector is a vector whose direction is unchanged by the matrix transformation.
- The eigenvalue $\lambda$ is the scaling factor.
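To make these three statements concrete, here is a minimal sketch (not from the original article; it uses NumPy, my choice of library) that computes the eigenvalues and eigenvectors of a small matrix and checks the defining equation $Av = \lambda v$ for each pair. The matrix is the one used in section 4 below:

```python
import numpy as np

# The matrix from section 4 below; its eigenvalues are 2 and 1.
A = np.array([[4.0, -2.0],
              [3.0, -1.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (unit-length) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(len(eigenvalues)):
    lam = eigenvalues[i]
    v = eigenvectors[:, i]
    # The defining equation: A v = lambda v.
    assert np.allclose(A @ v, lam * v)
    print(f"lambda = {lam:.1f}, v = {v}")
```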
Here we add two properties of eigenvalues and eigenvectors.

Eigenvalues: if $A$ is an $n \times n$ matrix and $\lambda_1, \lambda_2, \ldots, \lambda_n$ are its $n$ eigenvalues, then

$$\sum_{i=1}^{n} \lambda_i = \lambda_1 + \lambda_2 + \cdots + \lambda_n = a_{11} + a_{22} + \cdots + a_{nn} = \mathrm{tr}(A)$$

$$\prod_{i=1}^{n} \lambda_i = \lambda_1 \lambda_2 \cdots \lambda_n = |A|$$

Eigenvectors: the eigenvectors $x_1, x_2, \ldots, x_n$ of an $n \times n$ matrix $A$ corresponding to distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ are linearly independent. Note: for a symmetric matrix, eigenvectors corresponding to distinct eigenvalues are orthogonal.
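These properties are easy to check numerically. A minimal sketch, again assuming NumPy (the symmetric matrix $S$ below is an example of my own, not from the article):

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [3.0, -1.0]])
eigenvalues, _ = np.linalg.eig(A)

# Sum of eigenvalues equals the trace; product equals the determinant.
assert np.isclose(eigenvalues.sum(), np.trace(A))
assert np.isclose(eigenvalues.prod(), np.linalg.det(A))

# For a symmetric matrix, eigenvectors belonging to distinct
# eigenvalues are orthogonal.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])
_, vecs = np.linalg.eigh(S)  # eigh is specialized for symmetric matrices
assert np.isclose(vecs[:, 0] @ vecs[:, 1], 0.0)
```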
2 Understanding the matrix from the perspective of motion
If you have read the Matrix (1)(2)(3) series of articles, you can also understand eigenvalues and eigenvectors from the perspective of transformations. Take $Ma = b$ as an example to introduce the meaning of the matrix $M$:

- From the transformation perspective, the matrix $M$ can be understood as a transformation applied to the vector $a$ that yields $b$.
- From the coordinate-system perspective, $M$ can be understood as a coordinate system (the common one being Cartesian coordinates, i.e. the identity $I$): the vector $a$ is a coordinate in the coordinate system $M$, and the corresponding coordinate of $a$ in the coordinate system $I$ is the vector $b$.
What do eigenvalues and eigenvectors mean?
Suppose the matrix $A$ has an eigenvalue $m_1$ with corresponding eigenvector $x_1$. From the definition and the interpretation of matrices above, $x_1$ is a coordinate vector in the coordinate system $A$; transforming it into the coordinate system $I$ yields a coordinate vector that always differs from the original one by a scaling factor of $m_1$.
For ease of understanding, here is a simple example. If the matrix $A$ is as follows, it has $2$ eigenvalues, namely $1$ and $100$, with $2$ particularly simple corresponding eigenvectors, $[1, 0]^T$ and $[0, 1]^T$:

$$A = \begin{bmatrix} 1 & 0 \\ 0 & 100 \end{bmatrix}$$

So left-multiplying any vector $x$ by the matrix $A$ can be understood as stretching $x$ along the directions of these $2$ eigenvectors, with the scaling ratios given by the corresponding eigenvalues. Note that the $2$ eigenvalues differ greatly: the smallest is $1$ and the largest is $100$.
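A quick numerical check of this picture (a sketch added for illustration, with an arbitrary test vector of my choosing): multiplying by $A$ leaves the $x$-component alone and stretches the $y$-component $100$ times:

```python
import numpy as np

A = np.array([[1.0,   0.0],
              [0.0, 100.0]])
x = np.array([3.0, 2.0])  # an arbitrary test vector

# The [1, 0]^T component (eigenvalue 1) is unchanged; the
# [0, 1]^T component (eigenvalue 100) is stretched 100 times.
print(A @ x)  # [  3. 200.]
```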
(Figure omitted; image from [Reference 3].)
3 The meaning of eigenvalues and eigenvectors
The point is that, if we know the magnitudes of the eigenvalues, we can sometimes reduce computation by keeping only the directions with large eigenvalues. In the example above, the transformed vector is unchanged along the $x$ axis but stretched $100$ times along the $y$ axis, so to implement a compression algorithm we can keep only the $y$-axis direction. The high-dimensional case is similar: a multidimensional matrix stretches vectors in many directions, some very little and some a lot, and we only need to keep the directions with large stretch to achieve compression. [Reference 3]
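As a hedged illustration of this compression idea (not from the original article; the threshold of $10$ is an arbitrary choice of mine), the sketch below writes a vector in the eigenvector basis and keeps only the component whose eigenvalue is large:

```python
import numpy as np

A = np.array([[1.0,   0.0],
              [0.0, 100.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

x = np.array([3.0, 2.0])

# Coefficients of x in the eigenvector basis.
coeffs = np.linalg.solve(eigenvectors, x)

# Keep only the directions whose eigenvalue is large (here, 100);
# the small-eigenvalue direction contributes little after the transform.
keep = np.abs(eigenvalues) >= 10.0
approx = eigenvectors[:, keep] @ (eigenvalues[keep] * coeffs[keep])

print(A @ x)    # exact:      [  3. 200.]
print(approx)   # compressed: [  0. 200.]
```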
4 Understanding from a computational point of view
For instance: the matrix $A$ below has eigenvalues $2$ and $1$, with corresponding eigenvectors $[1, 1]^T$ and $[2, 3]^T$.

$$A = \begin{bmatrix} 4 & -2 \\ 3 & -1 \end{bmatrix}$$

Suppose there is a vector $x = [1, 2]^T$; then $y = Ax = [0, 1]^T$. Let us compute this another way. First, express $x$ as a linear combination of the eigenvectors:
$$x = \begin{pmatrix} 1 \\ 2 \end{pmatrix} = -1 \cdot \begin{pmatrix} 1 \\ 1 \end{pmatrix} + 1 \cdot \begin{pmatrix} 2 \\ 3 \end{pmatrix}$$

Then multiply each coefficient by the corresponding eigenvalue to obtain:

$$y = -1 \cdot 2 \cdot \begin{pmatrix} 1 \\ 1 \end{pmatrix} + 1 \cdot 1 \cdot \begin{pmatrix} 2 \\ 3 \end{pmatrix} = -2 \cdot \begin{pmatrix} 1 \\ 1 \end{pmatrix} + 1 \cdot \begin{pmatrix} 2 \\ 3 \end{pmatrix}$$

Clearly $y = [0, 1]^T$. (Take a moment to digest this.)
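The same computation as a small NumPy sketch (added for illustration): decompose $x$ in the eigenvector basis, scale each coefficient by its eigenvalue, and recombine:

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [3.0, -1.0]])
x = np.array([1.0, 2.0])

# Eigenvectors of A as columns: [1, 1] (lambda = 2) and [2, 3] (lambda = 1).
P = np.array([[1.0, 2.0],
              [1.0, 3.0]])
lams = np.array([2.0, 1.0])

c = np.linalg.solve(P, x)   # coefficients of x in the eigenbasis: [-1, 1]
y = P @ (lams * c)          # scale each component, then recombine

print(c)      # [-1.  1.]
print(y)      # [0. 1.]
print(A @ x)  # [0. 1.]  -- matches the direct product
```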
So far, let us restate the earlier conclusions:

- Matrix multiplication can be understood as a transformation of the coordinate system of the corresponding vector (understanding from the coordinate-system perspective).
- By the property of eigenvectors, the eigenvectors of a matrix corresponding to distinct eigenvalues are linearly independent, so they can serve as a basis.

Here comes the key point. The calculation above shows that the result of left-multiplying a vector by a matrix is equivalent to scaling the terms of the vector's representation as a linear combination of the matrix's eigenvectors (ponder this sentence carefully). That is, the vector is stretched within the coordinate system whose basis is the matrix's eigenvectors. Put another way, the mapping performed by the matrix is just a scaling of the eigenvectors, and the scaling factor of each eigenvector is its eigenvalue. [Reference 5] In this sense, eigenvalues and eigenvectors can be understood as attributes of the matrix itself.
5 Understanding other conclusions
5.1 Diagonalization decomposition

See [Reference 5]. I have not yet understood this thoroughly; I will expand this section once I do.
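In the meantime, here is a minimal sketch of what diagonalization means in code (my illustration, not the article's treatment), assuming $A$ has $n$ linearly independent eigenvectors: stacking the eigenvectors into a matrix $P$ and the eigenvalues into a diagonal matrix $\Lambda$ gives $A = P \Lambda P^{-1}$:

```python
import numpy as np

A = np.array([[4.0, -2.0],
              [3.0, -1.0]])
lams, P = np.linalg.eig(A)  # columns of P are eigenvectors

Lambda = np.diag(lams)

# Diagonalization: A = P @ Lambda @ inv(P).
assert np.allclose(A, P @ Lambda @ np.linalg.inv(P))

# A useful consequence: powers of A reduce to powers of the eigenvalues.
assert np.allclose(np.linalg.matrix_power(A, 3),
                   P @ np.diag(lams**3) @ np.linalg.inv(P))
```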
6 References
[1] A popular explanation of eigenvalues and eigenvectors
[2] Computing eigenvalues and eigenvectors in Python
[3] What are eigenvalues and eigenvectors? What are they useful for?
[4] How to understand the eigenvalues and eigenvectors of a matrix?
[5] An understanding of eigenvalues and eigenvectors (simple and rewarding)
[6] The eigenvectors of a symmetric matrix are orthogonal