Understanding LP Simplex

  • LP (Linear Programming)

    Linear programming refers to optimization problems in which both the objective function and the constraints are linear.

    • Constraint

      In mathematics, a constraint is a condition that a solution to an optimization problem must satisfy. Constraints are divided into equality constraints and inequality constraints.

      The set of solutions that satisfies all the constraints is called the feasible set (its members are also called candidate solutions).

    • Linearity

      Consider a mathematical function $L(x)$.

      Intuitively, if the graph of a function or of a quantitative relationship is a straight line or a line segment, the relationship is linear.

      Stated precisely, $L(x)$ is a first-degree polynomial in a single variable, i.e. it can be written as $L(x) = kx + b$, where $k, b$ are constants.

      This is the elementary-mathematics definition: a quantitative relationship whose graph is a straight line is called a linear relationship.

      In algebra and mathematical analysis, an operation is called linear if it satisfies both additivity and homogeneity.

      Stated precisely, $L(x)$ has the following two properties (a quick check follows the list):

      1. Additivity: $L(x+t) = L(x) + L(t)$
      2. Homogeneity of degree 1: $L(mx) = mL(x)$

      This is the higher-mathematics definition.
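
      For instance, taking $L(x) = kx$ (a check added to these notes), both properties hold:

      $$L(x+t) = k(x+t) = kx + kt = L(x) + L(t), \qquad L(mx) = k(mx) = m \cdot kx = mL(x)$$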

      • Homogeneous function

        A homogeneous function is a function with a multiplicative scaling property: if its variables are multiplied by a factor, the new function value equals the original value multiplied by some power of that factor.

        Linear function: $f(\alpha v) = \alpha f(v)$ (homogeneous of degree 1)

        Multilinear function: $f(\alpha v_1, \dots, \alpha v_n) = \alpha^n f(v_1, \dots, v_n)$ (homogeneous of degree $n$)

        Homogeneous polynomial: $x^5 + 2x^3y^2 + 9xy^4$, a polynomial formed by summing monomials of the same degree
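
        For example (a check added here), scaling both variables of this polynomial by $\alpha$ gives

        $$f(\alpha x, \alpha y) = (\alpha x)^5 + 2(\alpha x)^3(\alpha y)^2 + 9(\alpha x)(\alpha y)^4 = \alpha^5 (x^5 + 2x^3y^2 + 9xy^4) = \alpha^5 f(x, y)$$

        so it is homogeneous of degree 5.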

    • Optimization

      A branch of applied mathematics that studies how to maximize or minimize a particular function or variable under given conditions.

  • Simplex

    The simplex method is commonly used in mathematical optimization for the numerical solution of linear programming problems; it was invented by George Bernard Dantzig.

    A simplex is the convex hull of N+1 vertices in N-dimensional space; it is a polytope:

    • a line segment on a line (1+1 vertices)
    • a triangle in a plane (2+1 vertices)
    • a tetrahedron in three-dimensional space (3+1 vertices)

    All of the above are simplices.

  • Simplex algorithm

    The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function.

    polytope

    In elementary geometry, a polytope is a geometric object with “flat” sides. It is a generalization in any number of dimensions of the three-dimensional polyhedron.

    Polytopes may exist in any number of dimensions n, as an n-dimensional polytope or n-polytope.

    Flat sides mean that the sides of a (k+1)-polytope consist of k-polytopes, which may have (k-1)-polytopes in common.

  • Canonical form

    canonical form

    In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression.

    Often it is one which provides the simplest representation of an object and which allows it to be identified in a unique way.

    $$\text{maximize} \;\; c^T x \\ \text{subject to} \;\; Ax \leq b \;\; \text{and} \;\; x \geq 0$$

    $c^T x$ : objective function

    $c = (c_1, \dots, c_n)$ : coefficients of the objective function

    $(\cdot)^T$ : matrix transpose

    $x = (x_1, \dots, x_n)$ : variables of the problem

    $A$ : $p \times n$ matrix

    $b = (b_1, \dots, b_p)$ : nonnegative constants ($\forall j,\ b_j \geq 0$)

    The feasible region, defined by all values of $x$ such that $Ax \leq b$ and $\forall i,\ x_i \geq 0$, is a (possibly unbounded) convex polytope. An extreme point or vertex of this polytope is known as a basic feasible solution (BFS).
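
    As a quick illustration (added here, not part of the original notes), a canonical-form LP can be handed to an off-the-shelf solver. The sketch below uses SciPy's `linprog` with made-up values for $c$, $A$, and $b$; since `linprog` minimizes, the objective is negated.

```python
# A minimal sketch: solve a small canonical-form LP with SciPy (illustrative data).
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])            # objective coefficients: maximize c^T x
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])          # p x n constraint matrix
b = np.array([4.0, 5.0])            # nonnegative right-hand side

# linprog minimizes, so negate c; the default bounds already enforce x >= 0.
res = linprog(c=-c, A_ub=A, b_ub=b, method="highs")
print("optimal x:", res.x)          # a vertex (BFS) of the feasible polytope
print("optimal value:", -res.fun)   # undo the sign flip
```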

    Convex polytope

    A convex polytope is a special case of a polytope, having the additional property that it is also a convex set contained in the n-dimensional Euclidean space.

    Most texts use the term “polytope” for a bounded convex polytope, and the word “polyhedron” for the more general, possibly unbounded object.

    Convex set

    In geometry, a subset of a Euclidean space, or more generally an affine space over the reals, is convex if, given any two points, it contains the whole line segment that joins them.
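
    In symbols, a subset $C$ of such a space is convex if the segment between any two of its points stays inside it:

    $$\forall x, y \in C,\ \forall \lambda \in [0, 1]: \quad \lambda x + (1 - \lambda) y \in C$$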

    BFS (Basic Feasible Solution)

    In the theory of linear programming, a basic feasible solution (BFS) is a solution with a minimal set of non-zero variables.

    Geometrically, each BFS corresponds to a corner of the polyhedron of feasible solutions.

    If there exists an optimal solution, then there exists an optimal BFS. Hence, to find an optimal solution, it is sufficient to consider the BFSs.

    This fact is used by the simplex algorithm, which essentially travels from some BFS to another until an optimal one is found.

    The simplex algorithm always terminates because the number of vertices in the polytope is finite; moreover, since we always jump between vertices in the same direction (that of the objective function), we hope that the number of vertices visited will be small.

  • Solution of a linear program

    The solution of a linear program is accomplished in two steps.

    • Phase I

      In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial, but in general it can be solved by applying the simplex algorithm to a modified version of the original program. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty (in which case the linear program is called infeasible).

    • Phase II

      The simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded above. (A small sketch of the two-phase idea follows this list.)
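
    The sketch below illustrates the two-phase idea with made-up data (an addition to these notes). Phase I adds artificial variables $a \geq 0$ and minimizes their sum subject to $Ax + a = b$; if the minimum is zero, the $x$-part is a feasible starting point for Phase II. For brevity the auxiliary problem is solved with SciPy's `linprog` rather than a hand-rolled simplex, and $b$ is assumed nonnegative.

```python
# A sketch of the two-phase scheme on an illustrative standard-form system Ax = b, x >= 0.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0]])      # made-up equality constraints
b = np.array([4.0, 6.0])             # assumed nonnegative
p, n = A.shape

# Phase I: add artificial variables a >= 0 and minimize sum(a) subject to Ax + a = b.
c_aux = np.concatenate([np.zeros(n), np.ones(p)])
A_aux = np.hstack([A, np.eye(p)])
phase1 = linprog(c=c_aux, A_eq=A_aux, b_eq=b, method="highs")

if phase1.fun > 1e-9:
    print("infeasible: the Phase I optimum is positive")
else:
    x0 = phase1.x[:n]                # a feasible starting point for the original system
    print("Phase I starting point:", x0)

    # Phase II: optimize the actual (made-up) objective over the same feasible region.
    c = np.array([-1.0, -2.0, 0.0])
    phase2 = linprog(c=c, A_eq=A, b_eq=b, method="highs")
    print("Phase II optimum:", phase2.x, "value:", phase2.fun)
```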

  • History of LP Simplex

    George Dantzig worked on planning methods for the US Army Air Force during WWII. Early on, he did not include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, and therefore to find the “best” feasible solution, military-specified “ground rules” must be used that describe how goals can be achieved as opposed to specifying a goal itself.

    Dantzig’s core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized.

  • Standard Form

    The transformation of a linear program to one in standard form may be accomplished as follows.

    First, for each variable with a lower bound other than 0, a new variable is introduced representing the difference between the variable and bound. The original variable can then be eliminated by substitution. For example, given the constraint:
    $$x_1 \geq 5$$
    a new variable $y_1$ is introduced with
    $$y_1 = x_1 - 5 \\ x_1 = y_1 + 5$$
    The second equation may be used to eliminate $x_1$ from the linear program. In this way, all lower-bound constraints may be changed to non-negativity restrictions.

    Second, for each remaining inequality constraint, a new variable, called a slack variable, is introduced to change the constraint to an equality constraint. This variable represents the difference between the two sides of the inequality and is assumed to be non-negative. For example, the inequalities:
    $$x_2 + 2x_3 \leq 3 \\ -x_4 + 3x_5 \geq 2$$
    are replaced with:
    $$x_2 + 2x_3 + s_1 = 3 \\ -x_4 + 3x_5 - s_2 = 2 \\ s_1, s_2 \geq 0$$
    It is much easier to perform algebraic manipulation on inequalities in this form. In inequalities where $\geq$ appears, such as the second one, some authors refer to the variable introduced as a surplus variable.

    Slack variable

    In an optimization problem, a slack variable is a variable that is added to an inequality constraint to transform it into an equality.

    Introducing a slack variable replaces an inequality constraint with an equality constraint and a non-negativity constraint on the slack variable.

    Slack variables are used in particular in linear programming.

    Third, each unrestricted variable is eliminated from the linear program. This can be done in two ways: one is to solve for the variable in one of the equations in which it appears and then eliminate it by substitution; the other is to replace the variable with the difference of two restricted variables.

    When this process is complete the feasible region will be in the form
    $$Ax = b, \;\;\; \forall i \; x_i \geq 0$$
    It is also useful to assume that the rank of A is the number of rows. This results in no loss of generality since otherwise either the system Ax=bAx=b has redundant equations which can be dropped, or the system is inconsistent and the linear program has no solution.
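
    The following is a small numeric sketch (an addition, assuming the variable ordering $x_2, x_3, x_4, x_5$ from the example above) of how the slack/surplus step produces the equality system:

```python
# Turn the example inequalities into equalities by appending slack/surplus columns:
#   x2 + 2*x3 <= 3   ->   x2 + 2*x3 + s1 = 3
#  -x4 + 3*x5 >= 2   ->  -x4 + 3*x5 - s2 = 2,   with s1, s2 >= 0
import numpy as np

# Columns: x2, x3, x4, x5 (all assumed non-negative after the lower-bound step).
A_ineq = np.array([[ 1.0, 2.0,  0.0, 0.0],    # "<=" row
                   [ 0.0, 0.0, -1.0, 3.0]])   # ">=" row
b = np.array([3.0, 2.0])
signs = np.array([1.0, -1.0])                 # +1 adds a slack, -1 subtracts a surplus

# Equality system A_eq @ [x2, x3, x4, x5, s1, s2] = b with every variable >= 0.
A_eq = np.hstack([A_ineq, np.diag(signs)])
print(A_eq)   # row 1: x2 + 2*x3 + s1 = 3 ; row 2: -x4 + 3*x5 - s2 = 2
print(b)
```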

  • Simplex tableau

    A linear program in standard form can be represented as a tableau of the form:
    $$\left[ \begin{matrix} 1 & -c^T & 0 \\ 0 & A & b \end{matrix} \right]$$
    The first row defines the objective function and the remaining rows specify the constraints.

    The zero in the first column represents the zero vector of the same dimension as vector b.

    If the columns of A can be rearranged so that it contains the identity matrix of order p (the number of rows in A), then the tableau is said to be in canonical form.

    identity matrix

    In linear algebra, the identity matrix, or sometimes ambiguously called a unit matrix, of size n is the $n \times n$ square matrix with ones on the main diagonal and zeros elsewhere.

    It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. Less frequently, some mathematics books use $U$ or $E$ to represent the identity matrix.

    basic variables

    The variables corresponding to the columns of the identity matrix are called basic variables while the remaining variables are called nonbasic or free variables.
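
    A short sketch (with made-up numbers, added to these notes) of assembling this tableau: the last two columns of A are the slack columns and form the identity block, so the corresponding variables are the initial basic variables.

```python
# Build the tableau [[1, -c^T, 0], [0, A, b]] for an illustrative standard-form LP.
import numpy as np

c = np.array([3.0, 2.0, 0.0, 0.0])           # objective, with zero cost on the slacks
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])         # last two columns form I_2
b = np.array([4.0, 5.0])

top = np.concatenate([[1.0], -c, [0.0]])
bottom = np.hstack([np.zeros((A.shape[0], 1)), A, b.reshape(-1, 1)])
tableau = np.vstack([top, bottom])
print(tableau)
# The identity columns of A (indices 2 and 3) mark the basic variables x3, x4.
```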

  • Pivot operations

    The geometrical operation of moving from a basic feasible solution to an adjacent basic feasible solution is implemented as a pivot operation.

    First, a nonzero pivot element is selected in a nonbasic column. The row containing this element is multiplied by its reciprocal to change this element to 1, and then multiples of the row are added to the other rows to change the other entries in the column to 0. The result is that, if the pivot element is in row r, then the column becomes the r-th column of the identity matrix. The variable for this column is now a basic variable, replacing the variable which corresponded to the r-th column of the identity matrix before the operation.

    Entering variable

    The variable corresponding to the pivot column enters the set of basic variables and is called the entering variable.

    Leaving variable

    The variable being replaced leaves the set of basic variables and is called the leaving variable.
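
    The following is a minimal pivot sketch (an addition to these notes), using the common textbook rules for a maximization tableau: the entering column is the one with the most negative entry in the objective row, and the leaving row is chosen by the minimum-ratio test. It performs one step on the tableau from the previous sketch.

```python
# One pivot step on a simplex tableau of the form [[1, -c^T, 0], [0, A, b]].
import numpy as np

def pivot(tableau, row, col):
    """Scale the pivot row so tableau[row, col] == 1, then clear the rest of the column."""
    t = tableau.astype(float).copy()
    t[row] /= t[row, col]
    for r in range(t.shape[0]):
        if r != row:
            t[r] -= t[r, col] * t[row]
    return t

def choose_pivot(tableau):
    """Entering column: most negative objective-row entry (skipping the leading 1 and the
    right-hand side).  Leaving row: minimum-ratio test over positive column entries."""
    obj = tableau[0, 1:-1]
    col = int(np.argmin(obj)) + 1
    if obj[col - 1] >= 0:
        return None                          # no negative reduced cost: already optimal
    rhs, column = tableau[1:, -1], tableau[1:, col]
    ratios = np.where(column > 0, rhs / np.where(column > 0, column, 1.0), np.inf)
    row = int(np.argmin(ratios)) + 1         # all-infinite ratios would mean an unbounded LP
    return row, col

# The tableau built in the previous sketch:
tableau = np.array([[1.0, -3.0, -2.0, 0.0, 0.0, 0.0],
                    [0.0,  1.0,  1.0, 1.0, 0.0, 4.0],
                    [0.0,  2.0,  1.0, 0.0, 1.0, 5.0]])
row, col = choose_pivot(tableau)   # x1 enters (column 1); the second constraint row leaves
print(pivot(tableau, row, col))
```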
