
Linear, integer, nonlinear, and dynamic programming


Linear programming

  • Linear programming (abbreviated LP) is an important branch of mathematical programming.

  • Since G. B. Dantzig proposed the simplex method for solving linear programs in 1947, the theory of linear programming has become well developed.

    • Objective function
    • Constraints, written as s.t. (i.e., "subject to")
    • Both the objective function and the constraints are linear functions, hence the name linear programming problem.
    • Matlab standard form of linear programming

      • min c'x   s.t.   A·x ≤ b,   Aeq·x = beq,   lb ≤ x ≤ ub

      • where c and x are n-dimensional column vectors, A and Aeq are matrices of appropriate dimensions, and b and beq are column vectors of appropriate dimensions (a small linprog sketch is given at the end of this section).

        • General form of a linear program
          • max (or min) z = c(1)x(1) + … + c(n)x(n), subject to linear constraints a(i1)x(1) + … + a(in)x(n) ≤ (=, ≥) b(i), i = 1,…,m
        • Matlab standard form
          • min c'x   s.t.   A·x ≤ b,   Aeq·x = beq,   lb ≤ x ≤ ub
  • Feasible solution: a solution that satisfies all the constraints.

  • Feasible region: the set of all feasible solutions, denoted R.
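
  • As a quick illustration of the Matlab standard form above, the sketch below solves a small two-variable LP with linprog; the coefficient values are invented for demonstration and are not taken from the original text.

```matlab
% Maximize 4*x1 + 3*x2  subject to  2*x1 + x2 <= 10,  x1 + 3*x2 <= 15,  x >= 0.
% linprog minimizes, so the objective vector is negated.
c  = [-4; -3];          % objective coefficients (negated for maximization)
A  = [2 1; 1 3];        % inequality constraints A*x <= b
b  = [10; 15];
lb = [0; 0];            % lower bounds x >= 0
[x, fval] = linprog(c, A, b, [], [], lb, []);
% For this data the optimum is x = (3, 4) with objective value 24.
fprintf('x = (%.4f, %.4f), maximum objective = %.4f\n', x(1), x(2), -fval);
```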

Integer programming

  • When some or all of the variables in a program are restricted to integer values, the problem is called integer programming.

  • If the variables of a linear programming model are restricted to integers, the problem is called integer linear programming (a small Matlab sketch is given at the end of this section).

    • Classification of solution methods:

      • Branch and bound: applicable to pure or mixed integer linear programs.

        • The feasible solution space is repeatedly partitioned into smaller and smaller subsets; this is called branching.

        • For the solutions in each subset, a bound on the objective value is computed (a lower bound for a minimization problem); this is called bounding.

        • After each branching step, any subset whose bound is worse than the objective value of a known feasible solution is not branched further, so many subsets can be discarded; this is called pruning.

          • It is used to solve production scheduling, traveling salesman, plant location, knapsack, and assignment problems.
        • The steps for solving an integer programming (maximization) problem by branch and bound are:

          • Call the integer programming problem to be solved problem A, and the corresponding linear programming (relaxation) problem B.
          • Solving problem B gives one of the following cases:

            • B has no feasible solution; then A has no feasible solution either, and we stop.
            • B has an optimal solution that satisfies the integrality condition of A; then it is also the optimal solution of A, and we stop.
            • B has an optimal solution that does not satisfy the integrality condition of A; denote its objective value by z(up).
          • Find an integer feasible solution of problem A by inspection (one can usually try x(j) = 0, j = 1,…,n), compute its objective value, and denote it z(down). Let z* denote the optimal objective value of problem A; then z(down) ≤ z* ≤ z(up), and we iterate as follows.

            • Step 1 (branching): in the optimal solution of B, select a variable x(j) that does not satisfy the integrality condition, say with value b(j); add the constraints x(j) ≤ [b(j)] and x(j) ≥ [b(j)] + 1 (where [·] denotes the integer part) to B to form two successor problems, and solve them without the integrality condition.
            • Step 2 (bounding): treat each successor problem as a new branch and record its solution alongside the other remaining problems; take the largest optimal objective value as the new upper bound z(up). Among the branches whose solutions satisfy the integrality condition, take the largest objective value as the new lower bound z(down); if there is none, leave z(down) unchanged.
            • Step 3 (compare and prune): any branch whose optimal objective value is less than z(down) is cut off, i.e., never considered again. Any branch whose value exceeds z(down) but does not satisfy the integrality condition is branched again from Step 1. Repeat until z* = z(down), which yields the optimal integer solution x(j), j = 1,…, n.
      • Cutting plane method: applicable to pure or mixed integer linear programs.

      • Implicit enumeration: for solving 0-1 integer programs.

        • Filtering implicit enumeration;
        • Branching implicit enumeration.
      • Hungarian method: for solving assignment problems (a special case of 0-1 programming).

        • The Hungarian algorithm is mainly used to solve problems related to bipartite graph matching.
          • A bipartite graph is a special kind of graph whose vertices can be split into two parts such that no two vertices within the same part are adjacent.
          • The Hungarian algorithm is mainly used to solve two problems: finding a maximum matching and a minimum vertex cover of a bipartite graph.
            • The size of a maximum matching in a bipartite graph equals the size of a minimum vertex cover of that graph (König's theorem).
      • Monte Carlo method: for various types of programming problems.

        • The Monte Carlo method is a computational method whose principle is to learn about a system through a large number of random samples and thereby obtain the quantity to be computed.
        • It is a numerical method guided by probability and statistics, using random numbers (or, more commonly, pseudo-random numbers) to solve many kinds of computational problems (see the random-search sketch at the end of this section).
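
  • To connect the branch-and-bound discussion above with practice, here is a minimal sketch that solves a small knapsack-style 0-1 linear program with Matlab's intlinprog (Optimization Toolbox), which internally combines LP relaxation, cutting planes and branch and bound; the data are made up for illustration.

```matlab
% 0-1 knapsack: maximize value 5*x1 + 4*x2 + 3*x3
% subject to weight 2*x1 + 3*x2 + x3 <= 4, with x binary.
f      = [-5; -4; -3];      % intlinprog minimizes, so the values are negated
intcon = 1:3;               % indices of the variables required to be integer
A      = [2 3 1];           % weight constraint A*x <= b
b      = 4;
lb     = zeros(3,1);        % 0 <= x <= 1 plus integrality gives 0-1 variables
ub     = ones(3,1);
[x, fval] = intlinprog(f, intcon, A, b, [], [], lb, ub);
% For this data the optimum is x = (1, 0, 1) with total value 8.
fprintf('x = [%d %d %d], best value = %g\n', round(x), -fval);
```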
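
  • The Monte Carlo idea can likewise be sketched as a brute-force random search: sample many integer points, keep the feasible ones, and remember the best objective value found. The toy problem below is invented for illustration, and in general random search only gives an approximate answer.

```matlab
% Monte Carlo (random sampling) search for a small integer program:
% maximize x1^2 + x2^2 - 3*x1*x2  s.t.  x1 + 2*x2 <= 20,  0 <= xi <= 10, xi integer.
rng(1);                              % fix the seed so the run is reproducible
N = 1e5;                             % number of random trial points
best = -inf;  xbest = [0 0];
for k = 1:N
    x = randi([0 10], 1, 2);         % random integer point in the box
    if x(1) + 2*x(2) <= 20           % keep only feasible points
        v = x(1)^2 + x(2)^2 - 3*x(1)*x(2);
        if v > best
            best = v;  xbest = x;
        end
    end
end
fprintf('best point found: (%d, %d), objective = %d\n', xbest(1), xbest(2), best);
```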

Nonlinear programming

  • If the objective function or the constraints contain nonlinear functions, the problem is called a nonlinear programming problem.

  • If a linear program has an optimal solution, that solution can only be attained on the boundary of the feasible region (in particular at a vertex of the feasible region); the optimal solution of a nonlinear program (if one exists) may be attained at any point of its feasible region.

  • Because the objective function of a linear program is linear and its feasible region is a convex set, the optimal solution is a global optimum over the whole feasible region. This is not true for nonlinear programming: a solution may be a local extremum on part of the feasible region and still not be the global optimum over the whole feasible region.

  • When a practical problem is reduced to a nonlinear programming problem, the following points generally deserve attention:

    • Identify the alternatives: first collect the information and data related to the problem; on the basis of becoming fully familiar with it, identify the alternatives of the problem and represent them with a set of variables.
    • State the objective: after analyzing the data, and according to actual needs and possibilities, state the objective to be minimized or maximized, and express it as a mathematical relation using the relevant scientific and technical principles.
    • Give a value criterion: after the objective has been stated, establish a value standard for judging whether the objective under consideration is "good" or "bad", and describe it in some quantitative form.
    • Find the constraints: since the objective is generally to be minimized or maximized under certain conditions, all the constraining conditions of the problem must also be found; these conditions are usually expressed as inequalities or equations among the variables.
  • Convex set

    • A convex set is a subset of an affine space that is closed under convex combinations. In Euclidean space, a set is convex if, for every pair of points in the set, every point on the line segment joining them also lies in the set.
    • Let X be a linear space. If for all x and y in a subset S of X and for all t in the interval [0,1] the point (1-t)x + ty also belongs to S, then S is called a convex set.
    • The boundary of a convex set is always a convex curve.
    • Convex minimization is a subfield of optimization that studies the minimization of convex functions over convex sets. The branch of mathematics devoted to the properties of convex sets and convex functions is called convex analysis.
    • Properties (original figure not recoverable)
    • Convex hull: every subset A of a vector space is contained in a smallest convex set (called the convex hull of A), namely the intersection of all convex sets containing A.
  • Convex combination

    • A convex combination is a special kind of linear combination: a nonnegative linear combination of points whose coefficients sum to 1.
    • Given vectors x(i), i = 1,2,…,n, if there exist real numbers λ(i) ≥ 0 with λ(1)+…+λ(n) = 1, then λ(1)x(1)+…+λ(n)x(n) is called a convex combination (convex linear combination) of the vectors x(i).
  • Convex function

    • Let f(x) be a function defined on a convex set R in n-dimensional Euclidean space E(n). If for every real number α (0 < α < 1) and any two points x(1) and x(2) in R we always have f(αx(1) + (1−α)x(2)) ≤ αf(x(1)) + (1−α)f(x(2)), then f(x) is called a convex function on R.
    • If for every α (0 < α < 1) and every pair of points x(1) ≠ x(2) in R we always have f(αx(1) + (1−α)x(2)) < αf(x(1)) + (1−α)f(x(2)), then f(x) is called a strictly convex function on R.
  • Convex programming

    • If the objective function f(x) is a convex function and the constraint functions g(j)(x) (j = 1,2,…,l) are convex functions, such a nonlinear program is called a convex program.
    • The feasible region of a convex program is a convex set, every local optimum is a global optimum, and the set of optimal solutions is itself a convex set.
    • When the objective function f(x) of a convex program is strictly convex, the optimal solution (assuming it exists) is unique.
    • Convex programming is a comparatively simple class of nonlinear programming with important theoretical significance.
  • One-dimensional search methods

    • Trial methods (the "success-failure" method, the Fibonacci method, the 0.618 (golden section) method, etc.)
    • Interpolation methods (parabolic interpolation, cubic interpolation, etc.)
    • Root-finding methods from calculus (the tangent method, bisection, etc.)
  • Quadratic interpolation:

    • The basic idea is: within the search interval, repeatedly approximate the objective function by a low-degree polynomial (usually of degree at most three), and use the minimum point of the interpolating polynomial to approach the optimal solution step by step.
  • Analytic method

    • Gradient method (steepest descent)
      • x(k+1) = x(k) + t(k)·p(k): in each round, starting from the point x(k), a one-dimensional search is carried out along the steepest descent direction −∇f(x(k)); the method for solving unconstrained extremum problems built in this way is called the steepest descent method.
      • Calculus tells us that the negative gradient direction p(k) = −∇f(x(k)) is the direction, starting from the point x(k), in which f decreases fastest.
      • The negative gradient direction −∇f(x(k)) is the steepest descent direction of f at the point x(k).
    • Newton's method
    • Variable metric (quasi-Newton) method
      • It is not only a very effective algorithm for solving unconstrained extremum problems; it has also been extended to solving constrained extremum problems.
      • It avoids computing the second-derivative (Hessian) matrix and its inverse, converges faster than the gradient method, and is particularly advantageous for high-dimensional problems.
  • In the Matlab toolbox, the functions for solving unconstrained extremum problems are fminunc and fminsearch.
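
    • As a quick illustration of these unconstrained solvers, the sketch below minimizes the Rosenbrock test function (chosen here only as an example, not taken from the original text) with fminunc; fminsearch accepts the same function handle and starting point.

```matlab
% Minimize the Rosenbrock function f(x) = 100*(x2 - x1^2)^2 + (1 - x1)^2,
% whose minimum is at (1, 1) with value 0.
f  = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
x0 = [-1.2, 1];                        % a common starting point for this test function
[x, fval] = fminunc(f, x0);            % quasi-Newton with finite-difference gradients
% fminsearch (derivative-free Nelder-Mead) can be used the same way:
% [x, fval] = fminsearch(f, x0);
fprintf('minimizer approx. (%.4f, %.4f), f = %.2e\n', x(1), x(2), fval);
```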

  • Constrained extremum problem

    • Extremum problems with constraints are called constrained extremum problems, also known as programming problems.
    • Quadratic programming
      • If the objective function of a nonlinear program is a quadratic function of the variables x and all of its constraints are linear, the program is called a quadratic program.
      • min (1/2)·x'·H·x + f'·x   s.t.   A·x ≤ b,   Aeq·x = beq,   lb ≤ x ≤ ub
      • Here H is a real symmetric matrix, f and b are column vectors, and A is a matrix of the corresponding dimension.
      • The Matlab command for solving a quadratic program is [X,FVAL] = QUADPROG(H,f,A,b,Aeq,beq,LB,UB,X0,OPTIONS).
        • The return value X is the value of the decision vector x, and FVAL is the value of the objective function at x.
    • Penalty function method
      • With the penalty function method, a nonlinear programming problem is transformed into a sequence of unconstrained extremum problems; the method is therefore also called the sequential unconstrained minimization technique, abbreviated SUMT (Sequential Unconstrained Minimization Technique).
      • The idea of the penalty function method is to use the constraint functions of the problem to construct a suitable penalty function, and thereby an augmented objective function containing a parameter, which turns the problem into an unconstrained nonlinear program (a small sketch follows below).
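      • A minimal sketch of the exterior penalty (SUMT) idea, under the assumption that each penalized subproblem is solved with fminunc: the constraint violation is added to the objective with a growing weight. The example problem and the penalty schedule are chosen here only for illustration.

```matlab
% Minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2  subject to  g(x) = x1 + x2 - 2 <= 0.
% Exact solution: (1.5, 0.5).
f = @(x) (x(1) - 2)^2 + (x(2) - 1)^2;
g = @(x) x(1) + x(2) - 2;                 % constraint in the form g(x) <= 0
x = [0, 0];                               % starting point
sigma = 1;                                % initial penalty weight
for k = 1:8
    % penalized, unconstrained objective: f + sigma * (constraint violation)^2
    P = @(x) f(x) + sigma * max(0, g(x))^2;
    x = fminunc(P, x);                    % solve the k-th unconstrained subproblem
    sigma = 10 * sigma;                   % increase the penalty weight each round
end
fprintf('approximate constrained minimizer: (%.4f, %.4f)\n', x(1), x(2));
```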
  • In the Matlab Optimization Toolbox, the functions for solving constrained optimization problems are: fminbnd, fmincon, quadprog, fseminf, and fminimax.

    • The fminbnd function finds the minimum of a single-variable nonlinear function on an interval.
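      • For example (the cubic below is a toy function chosen here for illustration):

```matlab
% Minimize f(x) = x^3 - 2*x - 5 on the interval [0, 2]; the minimum is at x = sqrt(2/3).
[x, fval] = fminbnd(@(x) x.^3 - 2*x - 5, 0, 2);
fprintf('minimizer x = %.4f, f(x) = %.4f\n', x, fval);
```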

Dynamic programming

  • Dynamic programming is a branch of operations research; it is a mathematical method for optimizing decision processes.

  • Based on the principle of optimality, a multi-stage process is turned into a series of single-stage problems that are solved one by one; this established a new method, dynamic programming, for solving this kind of process optimization problem.

  • A dynamic programming model of a multi-stage decision process optimization problem usually contains the following elements:

    • Stage: the stages are usually divided according to temporal or spatial order, so that the optimization problem can be solved stage by stage.
    • State
      • The state describes the natural condition of the process at the beginning of each stage.
      • It should describe the process in a way that has no aftereffect: once the state variable of a stage is given, the evolution of the process after that stage does not depend on the states of earlier stages.
      • It is also usually required that the state be directly or indirectly observable.
      • Depending on how the process evolves, state variables may be discrete or continuous.
      • For computational convenience, continuous variables are sometimes discretized; for analytical convenience, discrete variables are sometimes treated as continuous.
    • Decision: once the state of a stage is determined, various choices can be made about which state of the next stage to evolve to; this means of choice is called a decision, and in optimal control problems it is also called a control.
    • Policy: the sequence of decisions is called a policy.
    • State transition equation: in a deterministic process, once the state and decision of a stage are known, the state of the next stage is fully determined; this law of evolution is described by the state transition equation.
    • Index function and optimal value function
      • The index (objective) function is a quantitative measure of the quality of the process; it is a function defined on the whole process and on all subsequent subprocesses.
      • For a given x(k), the optimal value of the index function V(k,n) over the subpolicies p(k,n) is called the optimal value function.
    • Optimal policy and optimal trajectory
    • Recursive equation
      • The recursive equation of dynamic programming rests on the principle of optimality: the sub-policies of an optimal policy are themselves optimal sub-policies.
  • To establish the mathematical model of a dynamic program:

    1. Divide the process into appropriate stages.
    2. Choose the state variable x(k) correctly, so that it describes the state of the process and satisfies the no-aftereffect requirement, and determine the allowed state set X(k).
    3. Choose the decision variable u(k) and determine the allowed decision set U(k)(x(k)).
    4. Write down the state transition equation.
    5. Determine the stage indices v(k)(x(k), u(k)) and the form of the index function V(k,n) (the sum of the stage indices, their product, their maximum or minimum, etc.).
    6. Write down the basic equation, i.e., the recursive equation satisfied by the optimal value function, together with the endpoint (boundary) conditions.
  • The relationship between dynamic programming and static programming

    • Dynamic programming and static programming (linear and nonlinear programming) study essentially the same object: function extremum problems under constraints.

    • As long as stage variables, states, and decisions are introduced appropriately, a static programming problem can be solved by dynamic programming.

    • Dynamic programming can be viewed as the extremum problem of choosing decisions u(1), u(2), …, u(n) so that the index function V(1,n)(x(1), u(1), u(2), …, u(n)) attains its optimum (maximum or minimum), with the state transition equation, the endpoint conditions, the allowed state sets, and the allowed decision sets as constraints; in principle it can be solved by nonlinear programming methods.

    • Compared with static programming, the advantages of dynamic programming are:

      • It can obtain the global optimal solution. The feasible set determined by the constraints is often very complicated, and even when the index function is simple it is difficult to obtain the global optimum by nonlinear programming methods.
      • It can obtain a family of optimal solutions. Unlike nonlinear programming, which yields only a single optimal solution for the whole process, dynamic programming yields a family of optimal solutions, one for every state of the whole process and of all subsequent subprocesses. When the optimal policy cannot be carried out for some reason, this family of solutions can be used to find a suboptimal policy.
      • Experience can be used to improve solution efficiency. Because dynamic programming reflects the step-by-step evolution of the process and its dynamic character, practical knowledge and experience can be exploited in the computation to improve efficiency.
    • The main drawbacks of dynamic programming are:

      • There is no unified standard model and no general way of constructing one; there is not even a criterion for judging whether a problem admits a dynamic programming model. For more complicated problems, choosing the states and decisions and determining the state transition law require rich imagination and flexible skill, which limits the method's applicability.
      • Numerical solution suffers from the curse of dimensionality. If a one-dimensional state variable takes m values, then for an n-dimensional problem the state x(k) takes m^n values; computing and storing the function f(k)(x(k)) for every state value is often impractical when n is large.
  • Dynamic programming models of typical problems:

    • The shortest route problem (see the sketch below)
    • Production planning problems
    • Resource allocation problems
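
  • As an illustration of the backward recursion underlying these models, the sketch below solves a small staged shortest-route problem; the 4-stage network and its arc lengths are invented for demonstration (the code assumes a Matlab version with implicit expansion, R2016b or later).

```matlab
% Staged shortest-route problem solved by backward recursion (dynamic programming).
% d{k}(i,j) = length of the arc from node i in stage k to node j in stage k+1.
% Toy 4-stage network with 1, 2, 2 and 1 nodes per stage.
d = { [2 5], [3 6; 8 2], [4; 3] };

n = numel(d) + 1;                      % number of stages
f = cell(1, n);   u = cell(1, n);      % optimal value function and optimal decision
f{n} = 0;                              % terminal stage: a single end node, cost 0
for k = n-1:-1:1                       % backward recursion over the stages
    % f_k(i) = min_j [ d_k(i,j) + f_{k+1}(j) ]
    [f{k}, u{k}] = min(d{k} + f{k+1}(:).', [], 2);
end

% Recover the optimal route by following the stored decisions forward.
route = 1;                             % start at the single node of stage 1
for k = 1:n-1
    route(end+1) = u{k}(route(end));   %#ok<SAGROW>
end
fprintf('shortest length = %d, route (node index per stage): %s\n', f{1}, mat2str(route));
```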
