
Basic feasible solution

In the theory of linear programming, a basic feasible solution (BFS) is a solution with a minimal set of non-zero variables. Geometrically, each BFS corresponds to a corner of the polyhedron of feasible solutions. If there exists an optimal solution, then there exists an optimal BFS. Hence, to find an optimal solution, it is sufficient to consider the BFS-s. This fact is used by the simplex algorithm, which essentially travels from some BFS to another until an optimal one is found.[1]

Definition

To define a BFS, we first present the linear program in the so-called equational form:

maximize    c^T x
subject to  Ax = b  and  x ≥ 0

where:

  * A is an m-by-n matrix of constraint coefficients,
  * b is a vector of length m,
  * c is a vector of length n (the objective coefficients), and
  * x is a vector of n variables.

Any linear program can be converted into an equational form by adding slack variables.

As a preliminary clean-up step, we verify that:

  * the rows of A are linearly independent (otherwise, redundant rows can be removed), so the rank of A is m;
  * the system Ax = b has at least one solution (otherwise, the LP has no feasible solution at all).

Typically m < n, so the system Ax = b has infinitely many solutions; each solution that also satisfies x ≥ 0 is called a feasible solution of the LP.

Let B be a subset of m indices from {1,...,n}. Denote by A_B the square m-by-m matrix made of the m columns of A indexed by B. If A_B is nonsingular, the columns indexed by B are a basis of the column space of A. In this case, we call B a basis of the LP. Since the rank of A is m, it has at least one basis; since A has n columns, it has at most (n choose m) bases.

Given a basis B, we say that a feasible solution x is a basic feasible solution with basis B if all its non-zero variables are indexed by B, i.e., x_j = 0 for all j ∉ B.

Properties

1. A BFS is determined only by the constraints of the LP (the matrix A and the vector b); it does not depend on the optimization objective.

2. By definition, a BFS has at most m non-zero variables and at least n−m zero variables. A BFS can have fewer than m non-zero variables; in that case, it can have many different bases, all of which contain the indices of its non-zero variables.

3. A feasible solution x is basic if-and-only-if the columns of the matrix A_K are linearly independent, where K is the set of indices of the non-zero elements of x.[1]: 45
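Property 3 gives a concrete test for basicness: collect the columns of A at the non-zero positions of x and check their linear independence. A minimal pure-Python sketch of that rank check, using exact rational arithmetic (column indices are 0-based here; the matrix is the one from the Examples section below):

```python
from fractions import Fraction

def columns_independent(A, cols):
    """Return True iff the chosen columns of A are linearly independent,
    decided by Gaussian elimination over exact rationals."""
    # Build the m-by-|cols| submatrix A_K, one row of Fractions per row of A.
    M = [[Fraction(A[i][j]) for j in cols] for i in range(len(A))]
    rank, m, k = 0, len(M), len(cols)
    for col in range(k):
        # Find a pivot row at or below the current rank.
        pivot = next((r for r in range(rank, m) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rank + 1, m):
            f = M[r][col] / M[rank][col]
            for c in range(col, k):
                M[r][c] -= f * M[rank][c]
        rank += 1
    return rank == k  # independent iff the submatrix has full column rank

A = [[1, 5, 3, 4, 6],
     [0, 1, 3, 5, 6]]
print(columns_independent(A, [2, 4]))  # columns 3 and 5 (1-based): False
print(columns_independent(A, [1, 3]))  # columns 2 and 4 (1-based): True
```

So the support of a feasible solution certifies basicness on its own, without naming a particular basis B.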

4. A BFS is uniquely determined by the basis B: for each basis B of m indices, there is at most one BFS x_B with basis B. This is because x_B must satisfy the constraint A_B x_B = b, and by definition of a basis the matrix A_B is non-singular, so the constraint has a unique solution. The opposite is not true: each BFS can come from many different bases.

If the unique solution of A_B x_B = b satisfies the non-negativity constraints, then the basis is called a feasible basis.
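For m = 2, solving A_B x_B = b and testing feasibility can be done by hand with Cramer's rule. A small sketch over exact rationals, applied to the basis B = {2,4} of the Examples section below:

```python
from fractions import Fraction

def bfs_for_2x2_basis(AB, b):
    """For an m=2 basis, solve A_B x_B = b by Cramer's rule and report
    whether the basis is feasible (x_B >= 0). Returns None when A_B is
    singular, i.e. when B is not a basis at all."""
    (a, p), (c, d) = AB
    det = a * d - p * c
    if det == 0:
        return None
    x1 = Fraction(b[0] * d - p * b[1], det)
    x2 = Fraction(a * b[1] - b[0] * c, det)
    return (x1, x2), (x1 >= 0 and x2 >= 0)

# Basis B = {2,4}: A_B = [[5, 4], [1, 5]], b = (14, 7).
print(bfs_for_2x2_basis([[5, 4], [1, 5]], [14, 7]))  # x_B = (2, 1), feasible
# Columns 3 and 5 give a singular A_B, so {3,5} is not a basis:
print(bfs_for_2x2_basis([[3, 6], [3, 6]], [14, 7]))  # None
```

Because A_B is non-singular, the solution (2, 1) is unique, which is exactly why the basis determines at most one BFS.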

5. If a linear program has an optimal solution (in particular, if it has a feasible solution and the set of feasible solutions is bounded), then it has an optimal BFS. This is a consequence of the Bauer maximum principle: the objective of a linear program is convex; the set of feasible solutions is convex (it is an intersection of half-spaces); therefore the objective attains its maximum at an extreme point of the set of feasible solutions.

Since the number of BFS-s is finite and bounded by (n choose m), an optimal solution to any LP can be found in finite time by just evaluating the objective function at all (n choose m) BFS-s. This is not the most efficient way to solve an LP; the simplex algorithm examines the BFS-s in a much more efficient way.
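The finite-time argument can be sketched directly: enumerate all m-subsets of columns, keep the ones that yield a feasible basis, and take the best objective value. A pure-Python illustration using the constraints from the Examples section below together with an assumed objective c = (1,1,1,1,1), chosen here only for demonstration:

```python
from fractions import Fraction
from itertools import combinations

def solve_square(M, rhs):
    """Solve the square system M x = rhs over rationals; None if singular."""
    m = len(M)
    aug = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(m):
        piv = next((r for r in range(col, m) if aug[r][col] != 0), None)
        if piv is None:
            return None
        aug[col], aug[piv] = aug[piv], aug[col]
        d = aug[col][col]
        aug[col] = [v / d for v in aug[col]]
        for r in range(m):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [v - f * p for v, p in zip(aug[r], aug[col])]
    return [aug[i][m] for i in range(m)]

def brute_force_lp(A, b, c):
    """Maximize c^T x s.t. A x = b, x >= 0 by trying every candidate basis."""
    m, n = len(A), len(A[0])
    best = None
    for B in combinations(range(n), m):
        xB = solve_square([[A[i][j] for j in B] for i in range(m)], b)
        if xB is None or any(v < 0 for v in xB):
            continue                     # not a basis, or not a feasible basis
        x = [Fraction(0)] * n
        for i, j in enumerate(B):
            x[j] = xB[i]
        val = sum(ci * xi for ci, xi in zip(c, x))
        if best is None or val > best[0]:
            best = (val, x)
    return best

A = [[1, 5, 3, 4, 6], [0, 1, 3, 5, 6]]
b = [14, 7]
c = [1, 1, 1, 1, 1]                      # assumed objective, for illustration
val, x = brute_force_lp(A, b, c)
print(val)                               # 49/5, attained at x = (42/5, 0, 0, 7/5, 0)
```

This visits all 10 subsets, which is fine for n = 5 but grows as (n choose m); the simplex algorithm avoids that blow-up in practice.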

Examples

Consider a linear program with the following constraints:

x_1 + 5x_2 + 3x_3 + 4x_4 + 6x_5 = 14
      x_2 + 3x_3 + 5x_4 + 6x_5 = 7
x_i ≥ 0 for all i in {1,...,5}

The matrix A and the vector b are:

A = | 1  5  3  4  6 |        b = | 14 |
    | 0  1  3  5  6 |            |  7 |

Here, m = 2 and there are 10 subsets of 2 indices; however, not all of them are bases: the set {3,5} is not a basis, since columns 3 and 5 of A are linearly dependent.

The set B = {2,4} is a basis, since the matrix

A_B = | 5  4 |
      | 1  5 |

is non-singular.

The unique BFS corresponding to this basis is x_B = (0, 2, 0, 1, 0).

Geometric interpretation

[Image: an elongated pentagonal orthocupolarotunda, an example of a convex polytope]

The set of all feasible solutions is an intersection of half-spaces. Therefore, it is a convex polyhedron. If it is bounded, then it is a convex polytope.

Each BFS corresponds to a vertex of this polytope.[1]: 53–56

Application in the Simplex algorithm

The simplex algorithm keeps, at each point of its execution, a "current basis" B (a subset of m out of n variables), a "current BFS", and a "current tableau". The tableau is a representation of the linear program where the basic variables are expressed in terms of the non-basic ones: [1] : 65

x_B = p + Q x_N
z = z_0 + r^T x_N

where x_B is the vector of m basic variables, x_N is the vector of n−m non-basic variables, and z is the maximization objective. Since non-basic variables equal 0, the current BFS is p, and the current maximization objective is z_0.

If all coefficients in r are non-positive, then the current BFS is optimal, since all variables (including all non-basic variables) must be at least 0, so the second line implies z ≤ z_0.
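This optimality test, plus the choice of an entering variable when it fails, is a one-liner in code. A sketch using the common "largest coefficient" pivoting rule (one of several possible rules, assumed here for illustration):

```python
def entering_variable(r):
    """Optimality test on the reduced-cost vector r of the current tableau:
    if every coefficient is <= 0, the current BFS is optimal (return None);
    otherwise return the index of the non-basic variable with the largest
    positive coefficient, a common choice of entering variable."""
    best = max(range(len(r)), key=lambda j: r[j])
    return None if r[best] <= 0 else best

print(entering_variable([-2, -1, -3]))  # None: all coefficients <= 0, optimal
print(entering_variable([-2, 4, 1]))    # 1: x at index 1 can enter the basis
```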

If some coefficients in r are positive, then it may be possible to increase the maximization target. For example, if x_5 is non-basic and its coefficient in r is positive, then increasing it above 0 may make z larger. If it is possible to do so without violating other constraints, then the increased variable becomes basic (it "enters the basis"), while some basic variable is decreased to 0 to keep the equality constraints and thus becomes non-basic (it "exits the basis").
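How far the entering variable can grow, and which basic variable exits, is decided by the minimum-ratio test on the tableau x_B = p + Q x_N. A small sketch over exact rationals; the numeric inputs below are hypothetical tableau data, for illustration only:

```python
from fractions import Fraction

def ratio_test(p, q):
    """Given the current values p of the basic variables and the column q of Q
    for the entering variable, return (t, i): t is the largest allowed increase
    of the entering variable, and i is the row of the basic variable that hits
    0 first and exits the basis. Returns (None, None) when no entry of q is
    negative, in which case t is unbounded and so is the LP."""
    t, row = None, None
    for i, (pi, qi) in enumerate(zip(p, q)):
        if qi < 0:                       # this basic variable decreases with t
            bound = Fraction(-pi, qi)    # it reaches 0 at t = -p_i / q_i
            if t is None or bound < t:
                t, row = bound, i
    return t, row

print(ratio_test([4, 6], [-2, -3]))      # entering variable can rise to t = 2
print(ratio_test([1, 2], [3, 0]))        # (None, None): unbounded direction
```

Performing the pivot at (row, entering variable) then yields the tableau of the next BFS.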

If this process is done carefully, then it is possible to guarantee that z increases until an optimal BFS is reached.

References

  1. Gärtner, Bernd; Matoušek, Jiří (2006). Understanding and Using Linear Programming. Berlin: Springer. ISBN 3-540-30697-8.


Source: https://en.wikipedia.org/wiki/Basic_feasible_solution