Understanding LP Simplex

  • LP(Linear Programming)

    Linear programming (LP) refers to optimization problems in which both the objective function and the constraints are linear.

    • Constraint

      In mathematics, a constraint is a condition that a solution to an optimization problem must satisfy. Constraints are divided into equality constraints and inequality constraints.

      The set of solutions that satisfy all constraints is called the feasible set, and its members are candidate solutions.

    • Linearity

      Consider a mathematical function L(x).

      Intuitively, if the graph of a function or of a quantitative relation is a straight line or line segment, the relation is linear.

      Stated precisely, L(x) is a first-degree polynomial function of a single variable, i.e. it can be written as L(x) = kx + b, where k and b are constants.

      This is the elementary-mathematics definition: a quantitative relation whose graph is a straight line is called a linear relation.

      In algebra and mathematical analysis, an operation is called linear if it satisfies both additivity and homogeneity.

      Stated precisely, L(x) has the following two properties (checked numerically in the sketch below):

      1. Additivity: L(x + t) = L(x) + L(t)
      2. First-order homogeneity: L(mx) = mL(x)
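
      A quick numerical check of these two properties, as a minimal Python sketch (the choice L(x) = 3x and the test values are hypothetical). Note that L(x) = kx + b with b ≠ 0 satisfies the graphical definition above but fails both properties, which is why the algebraic definition is stricter.

      ```python
      # Minimal sketch: check additivity and first-order homogeneity for L(x) = k*x.
      import math

      k = 3.0
      L = lambda x: k * x

      x, t, m = 1.5, -2.0, 4.0
      print(math.isclose(L(x + t), L(x) + L(t)))   # additivity
      print(math.isclose(L(m * x), m * L(x)))      # first-order homogeneity
      ```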

      This is the definition used in higher mathematics.

      • Homogeneous function

        A homogeneous function is a function with a scaling property: if the variables are multiplied by a factor, the new value is the original value multiplied by some power of that factor.

        Linear function: f(\alpha v) = \alpha f(v) (homogeneous of degree 1)

        Multilinear function: f(\alpha v_1, ..., \alpha v_n) = \alpha^n f(v_1, ..., v_n) (homogeneous of degree n)

        Homogeneous polynomial: x^5 + 2x^3y^2 + 9xy^4, a polynomial composed of monomials that all have the same degree
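
        As a small illustration (a sketch with arbitrary sample values), the polynomial above is homogeneous of degree 5, so scaling both variables by \alpha multiplies its value by \alpha^5:

        ```python
        # Sketch: verify that x^5 + 2x^3y^2 + 9xy^4 is homogeneous of degree 5.
        def f(x, y):
            return x**5 + 2 * x**3 * y**2 + 9 * x * y**4

        alpha, x, y = 2.0, 1.0, 3.0
        print(f(alpha * x, alpha * y) == alpha**5 * f(x, y))   # True
        ```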

    • Optimization

      A branch of applied mathematics that studies how to maximize or minimize a particular function or variable under given conditions.

  • Simplex

    The simplex method, invented by George Bernard Dantzig, is commonly used in mathematical optimization for the numerical solution of linear programming problems.

    A simplex is the convex hull of N + 1 vertices in N-dimensional space; it is a polytope:

    • a line segment on a line (1 + 1 vertices)
    • a triangle in a plane (2 + 1 vertices)
    • a tetrahedron in three-dimensional space (3 + 1 vertices)

    All of the above are simplices.

  • Simplex algorithm

    The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function.

    polytope

    In elementary geometry, a polytope is a geometric object with “flat” sides. It is a generalization in any number of dimensions of the three-dimensional polyhedron.

    Polytopes may exist in any number of dimensions n as an n-dimensional polytope, or n-polytope.

    Flat sides mean that the sides of a (k+1)-polytope consist of k-polytopes that may have (k-1)-polytopes in common.

  • Canonical form

    canonical form

    In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression.

    Often it is one which provides the simplest representation of an object and which allows it to be identified in a unique way.

    maximize \;\; c^T x \\ subject \; to \;\; Ax \leq b \;\; and \;\; x \geq 0

    c^T x : objective function

    c = (c_1, ..., c_n) : coefficients of the objective function

    (\cdot)^T : matrix transpose

    x = (x_1, ..., x_n) : variables of the problem

    A : p × n matrix

    b = (b_1, ..., b_p) : nonnegative constants (\forall j, b_j \geq 0)

    The feasible region, defined by all values of x such that Ax \leq b and \forall i, x_i \geq 0, is a (possibly unbounded) convex polytope. An extreme point or vertex of this polytope is known as a basic feasible solution (BFS).
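
    For a concrete picture, the following sketch solves a tiny canonical-form program with SciPy (the data c, A, b are made up for illustration; scipy.optimize.linprog minimizes, so the objective is negated):

    ```python
    # Sketch: maximize c^T x subject to Ax <= b, x >= 0 via scipy.optimize.linprog.
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([3.0, 2.0])              # objective coefficients (sample data)
    A = np.array([[1.0, 1.0],
                  [1.0, 0.0]])            # p x n constraint matrix
    b = np.array([4.0, 2.0])              # nonnegative right-hand sides

    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c))
    print(res.x)      # the optimal vertex (a basic feasible solution)
    print(-res.fun)   # the optimal value of the original maximization
    ```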

    Convex polytope

    A convex polytope is a special case of a polytope, having the additional property that it is also a convex set contained in the n-dimensional Euclidean space.

    Most texts use the term “polytope” for a bounded convex polytope, and the word “polyhedron” for the more general, possibly unbounded object.

    Convex set

    In geometry, a subset of a Euclidean space, or more generally an affine space over the reals, is convex if, given any two points, it contains the whole line segment that joins them.

    BFS(Basic Feasible Solution)

    In the theory of linear programming, a basic feasible solution (BFS) is a solution with a minimal set of non-zero variables.

    Geometrically, each BFS corresponds to a corner of the polyhedron of feasible solutions.

    If there exists an optimal solution, then there exists an optimal BFS. Hence, to find an optimal solution, it is sufficient to consider the BFSs.

    This fact is used by the simplex algorithm, which essentially travels from some BFS to another until an optimal one is found.

    The simplex algorithm always terminates because the number of vertices in the polytope is finite; moreover, since we always jump between vertices in the same direction (that of the objective function), we expect the number of vertices visited to be small.
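
    The fact that an optimal BFS exists can be checked by brute force on a small instance. The sketch below (sample data, standard form with slack variables already added) enumerates every choice of basis columns, keeps the feasible basic solutions, and reports the best one; the simplex algorithm reaches the same vertex without enumerating them all.

    ```python
    # Sketch: enumerate all basic feasible solutions of a tiny program Ax = b, x >= 0.
    import itertools
    import numpy as np

    A = np.array([[1.0, 1.0, 1.0, 0.0],    # x1 + x2 + s1 = 4
                  [1.0, 0.0, 0.0, 1.0]])   # x1      + s2 = 2
    b = np.array([4.0, 2.0])
    c = np.array([3.0, 2.0, 0.0, 0.0])     # maximize 3*x1 + 2*x2

    best = None
    for cols in itertools.combinations(range(A.shape[1]), A.shape[0]):
        cols = list(cols)
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue                        # these columns do not form a basis
        x = np.zeros(A.shape[1])
        x[cols] = np.linalg.solve(B, b)
        if np.all(x >= -1e-12):             # a basic feasible solution (vertex)
            value = c @ x
            if best is None or value > best[0]:
                best = (value, x)

    print(best)                             # optimum 10.0 at x1 = x2 = 2
    ```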

  • Solution of a linear program

    The solution of a linear program is accomplished in two steps.

    • Phase I

      In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial, but in general it can be solved by applying the simplex algorithm to a modified version of the original program. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty (the linear program is then called infeasible).

    • Phase II

      The simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded above.
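
    With SciPy these outcomes can be read off the solver's result status (a sketch; the status codes 0, 2, and 3 follow SciPy's documented convention for optimal, infeasible, and unbounded, and the tiny program is only sample data):

    ```python
    # Sketch: map the Phase I / Phase II outcomes onto linprog's result status.
    from scipy.optimize import linprog

    res = linprog(c=[-1.0, -1.0], A_ub=[[1.0, 1.0]], b_ub=[4.0])
    if res.status == 0:
        print("optimal BFS:", res.x, "objective:", -res.fun)
    elif res.status == 2:
        print("infeasible: the feasible region is empty")
    elif res.status == 3:
        print("unbounded: the objective increases without bound along an edge")
    ```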

  • History of LP Simplex

    George Dantzig worked on planning methods for the US Army Air Force during WWII. At first, he did not include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, and therefore, to find the “best” feasible solution, military-specified “ground rules” must be used that describe how goals can be achieved, as opposed to specifying a goal itself.

    Dantzig’s core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized.

  • Standard Form

    The transformation of a linear program to one in standard form may be accomplished as follows.

    First, for each variable with a lower bound other than 0, a new variable is introduced representing the difference between the variable and bound. The original variable can then be eliminated by substitution. For example, given the constraint:
    x_1 \geq 5
    a new variable y_1 is introduced with
    y_1 = x_1 - 5 \\ x_1 = y_1 + 5
    The second equation may be used to eliminate x_1 from the linear program. In this way, all lower bound constraints may be changed to non-negativity restrictions.

    Second, for each remaining inequality constraint, a new variable, called a slack variable, is introduced to change the constraint to an equality constraint. This variable represents the difference between the two sides of the inequality and is assumed to be non-negative. For example, the inequalities:
    x_2 + 2x_3 \leq 3 \\ -x_4 + 3x_5 \geq 2
    are replaced with:
    x_2 + 2x_3 + s_1 = 3 \\ -x_4 + 3x_5 - s_2 = 2 \\ s_1, s_2 \geq 0
    It is much easier to perform algebraic manipulation on inequalities in this form. In inequalities where \geq appears, such as the second one, some authors refer to the variable introduced as a surplus variable.
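
    The two inequalities above, rewritten as equalities, can be stored directly as a matrix system (a NumPy sketch; the variable order (x2, x3, x4, x5, s1, s2) and the sample feasible point are chosen only for illustration):

    ```python
    # Sketch: the equality system after adding slack s1 and surplus s2.
    import numpy as np

    # columns: x2, x3, x4, x5, s1, s2
    A_eq = np.array([[1.0, 2.0,  0.0, 0.0, 1.0,  0.0],   #  x2 + 2x3 + s1 = 3
                     [0.0, 0.0, -1.0, 3.0, 0.0, -1.0]])  # -x4 + 3x5 - s2 = 2
    b_eq = np.array([3.0, 2.0])

    x = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])   # one feasible assignment
    print(np.allclose(A_eq @ x, b_eq))             # True
    ```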

    Slack variable

    In an optimization problem, a slack variable is a variable that is added to an inequality constraint to transform it into an equality.

    Introducing a slack variable replaces an inequality constraint with an equality constraint and a non-negativity constraint on the slack variable.

    Slack variables are used in particular in linear programming.

    Third, each unrestricted variable is eliminated from the linear program. This can be done in two ways: one is by solving for the variable in one of the equations in which it appears and then eliminating the variable by substitution; the other is to replace the variable with the difference of two restricted variables.

    When this process is complete, the feasible region will be in the form
    Ax = b, \;\;\; \forall i \; x_i \geq 0
    It is also useful to assume that the rank of A is the number of rows. This results in no loss of generality since otherwise either the system Ax=bAx=b has redundant equations which can be dropped, or the system is inconsistent and the linear program has no solution.
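
    This rank assumption is easy to check numerically; in the sketch below (sample data) the third equation is the sum of the first two, so it can be dropped without changing the feasible region:

    ```python
    # Sketch: detect and drop a redundant equation so that rank(A) = number of rows.
    import numpy as np

    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0, 1.0],
                  [2.0, 1.0, 1.0, 1.0]])   # row 3 = row 1 + row 2 (redundant)
    b = np.array([4.0, 2.0, 6.0])

    print(np.linalg.matrix_rank(A))        # 2, one less than the 3 rows
    A, b = A[:2], b[:2]                    # drop the dependent equation
    ```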

  • Simplex tableau

    A linear program in standard form can be represented as a tableau of the form:
    \left[ \begin{matrix} 1 & -c^T & 0 \\ 0 & A & b \end{matrix} \right]
    The first row defines the objective function and the remaining rows specify the constraints.

    The zero in the first column represents the zero vector of the same dimension as vector b.

    If the columns of A can be rearranged so that it contains the identity matrix of order p (the number of rows in A), then the tableau is said to be in canonical form.

    identity matrix

    In linear algebra, the identity matrix, or sometimes ambiguously called a unit matrix, of size n is the n × n square matrix with ones on the main diagonal and zeros elsewhere.

    It is denoted by I_n, or simply by I if the size is immaterial or can be trivially determined by the context. Less frequently, some mathematics books use U or E to represent the identity matrix.

    basic variables

    The variables corresponding to the columns of the identity matrix are called basic variables while the remaining variables are called nonbasic or free variables.
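
    Putting the pieces together, the tableau for the small standard-form program used earlier can be assembled directly (a sketch with sample data; the slack columns of A already form an identity matrix, so the tableau is in canonical form with the slacks as basic variables):

    ```python
    # Sketch: build the tableau [[1, -c^T, 0], [0, A, b]] as a NumPy array.
    import numpy as np

    c = np.array([3.0, 2.0, 0.0, 0.0])        # objective coefficients (with slacks)
    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0, 1.0]])
    b = np.array([4.0, 2.0])

    top = np.concatenate(([1.0], -c, [0.0]))
    bottom = np.hstack([np.zeros((2, 1)), A, b[:, None]])
    tableau = np.vstack([top, bottom])
    print(tableau)
    # The last two columns of A form the 2x2 identity, so the corresponding
    # slack variables are basic and x1, x2 are nonbasic (free) variables.
    ```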

  • Pivot operations

    The geometric operation of moving from a basic feasible solution to an adjacent basic feasible solution is implemented as a pivot operation.

    First, a nonzero pivot element is selected in a nonbasic column. The row containing this element is multiplied by its reciprocal to change this element to 1, and then multiples of the row are added to the other rows to change the other entries in the column to 0. The result is that, if the pivot element is in row r, then the column becomes the r-th column of the identity matrix. The variable for this column is now a basic variable, replacing the variable which corresponded to the r-th column of the identity matrix before the operation.

    entering variable

    The variable corresponding to the pivot column enters the set of basic variables and is called the entering variable.

    Leaving variable

    The variable being replaced leaves the set of basic variables and is called the leaving variable.
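
    A minimal pivot on a NumPy tableau looks like this (a sketch; the sample tableau is the one built above, the entering variable x1 is chosen for its negative objective-row entry, and the leaving variable comes from the ratio test, giving the pivot element at row 2, column 1):

    ```python
    # Sketch: one pivot operation on a simplex tableau.
    import numpy as np

    def pivot(tableau, row, col):
        t = tableau.astype(float).copy()
        t[row] /= t[row, col]               # scale so the pivot element becomes 1
        for r in range(t.shape[0]):
            if r != row:
                t[r] -= t[r, col] * t[row]  # zero out the rest of the pivot column
        return t

    tableau = np.array([[1.0, -3.0, -2.0, 0.0, 0.0, 0.0],
                        [0.0,  1.0,  1.0, 1.0, 0.0, 4.0],
                        [0.0,  1.0,  0.0, 0.0, 1.0, 2.0]])

    # x1 (column 1) enters; the ratio test (4/1 vs 2/1) makes the variable
    # basic in the last row leave, so we pivot on element (2, 1).
    print(pivot(tableau, row=2, col=1))
    ```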
