MATP6640/DSES6770 Linear Programming, Homework 5.

Due: 11.59pm on Friday, April 1, 2022 on LMS.
10% penalty for each day late.

  1. Let a stochastic program have a finite set S of scenarios ξ^s, each with probability p_s. Let the first stage decision variables be x ∈ R_+^n and assume the problem has complete recourse for any first stage decision. Let 0 < α < 1. For any feasible first stage decision x, the Conditional Value at Risk ρ(x) can be calculated by solving the problem
    ρ(x) :=  min_{y,η,v}  η + (1/α) ∑_{s∈S} p_s v_s
         subject to  W y_s  =  h^s − T^s x   ∀ s ∈ S
                    q^T y_s ≤  η + v_s       ∀ s ∈ S
                    y_s ≥ 0,  v_s ≥ 0       ∀ s ∈ S

    Prove that ρ(x) is a convex function of x.


    Let x^(1) and x^(2) be feasible first stage decisions. Since the problem has complete recourse, there are optimal second stage solutions (y_s^(i), η^(i), v_s^(i)) for the first stage decisions x^(i), i = 1, 2. Let λ ∈ [0, 1] and define x = λx^(1) + (1 − λ)x^(2). Define y_s = λy_s^(1) + (1 − λ)y_s^(2), η = λη^(1) + (1 − λ)η^(2), and v_s = λv_s^(1) + (1 − λ)v_s^(2). Since all the constraints are linear, (y_s, η, v_s) is a feasible second stage decision for the first stage decision x. Thus,

    ρ(x) ≤ η + (1/α) ∑_{s∈S} p_s v_s = λρ(x^(1)) + (1 − λ)ρ(x^(2)),

    so ρ(x) is convex.
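    The argument can also be checked numerically. Below is a small illustrative sketch (not part of the assignment) in which the recourse is trivial, so the scenario loss q^T y_s is a linear function a_s x + b_s of a scalar first stage decision x; ρ(x) is then computed by minimizing over η, which for a discrete distribution is attained at one of the loss values, and midpoint-style convexity is checked along a segment. All data values here are made up.

    ```python
    # Toy illustration (invented data): CVaR of linear scenario losses
    # is a convex function of the first stage decision x.

    # Four scenarios with probabilities p_s and losses L_s(x) = a_s*x + b_s.
    p = [0.25, 0.25, 0.25, 0.25]
    a = [1.0, -2.0, 0.5, 3.0]
    b = [0.0, 4.0, -1.0, 2.0]
    alpha = 0.1

    def rho(x):
        """rho(x) = min_eta  eta + (1/alpha) * sum_s p_s * max(0, L_s(x) - eta).
        The objective is piecewise linear in eta, so for a discrete
        distribution the minimum is attained at one of the loss values."""
        losses = [ai * x + bi for ai, bi in zip(a, b)]
        return min(eta + sum(ps * max(0.0, L - eta)
                             for ps, L in zip(p, losses)) / alpha
                   for eta in losses)

    # Convexity check along a segment of first stage decisions.
    x1, x2 = -1.0, 2.0
    for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
        xm = lam * x1 + (1 - lam) * x2
        assert rho(xm) <= lam * rho(x1) + (1 - lam) * rho(x2) + 1e-9
    ```

    Note that with α = 0.1 smaller than every p_s, the CVaR here coincides with the worst-case loss.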

  2. Assume we have a stochastic program with a finite number of scenarios and complete recourse. A weighted combination of the standard stochastic programming expectation objective with CVaR can be written
    min_{x,v,η}  c^T x + ∑_s p_s Q(x, ξ^s) + μ( η + (1/α) ∑_s p_s v_s )      (1)
    subject to   Ax = b
                 Q(x, ξ^s) − η ≤ v_s   ∀ s
                 v_s ≥ 0               ∀ s
                 x ≥ 0

    Here, we’ve weighted the CVaR by μ to trade it off with the expected return. We saw the L-shaped method for solving stochastic programming problems in class. At iteration K, a disaggregated Master Problem for problem (1) has the form

    min_{x,t,η,v}  c^T x + ∑_{s=1}^S p_s t_s + μ( η + (1/α) ∑_s p_s v_s )      (2)
    subject to  Ax = b
                (ζ_s^k)^T (x − x^k) + t_s ≥ Q(x^k, ξ^s)        for s = 1,…,S, k = 1, 2,…,K − 1
                (ζ_s^k)^T (x − x^k) + η + v_s ≥ Q(x^k, ξ^s)    for s = 1,…,S, k = 1, 2,…,K − 1
                x ≥ 0
                v_s ≥ 0                                        for s = 1,…,S

    where Q(x^k, ξ^s) is the optimal value of the second stage problem with first stage decision x^k and scenario ξ^s, and the subgradient vector ζ_s^k has the form

    ζ_s^k = T(ξ^s)^T π_k^s

    1. Exploit the relationships between η + v_s, Q(x, ξ^s), and t_s to derive an equivalent formulation with fewer constraints. (Hint: formulation (2) has 2S(K − 1) subgradient inequalities. Can you construct an equivalent formulation with approximately SK constraints to represent the subgradient information?)
    2. Assume (x̄, t̄, η̄, v̄) is optimal in (2). How would you find upper and lower bounds on the optimal value of (1)?


    1. For any feasible choice of x and η in (2), the best choice of t_s and v_s for each s is:

       t_s  =  max_k { Q(x^k, ξ^s) − (ζ_s^k)^T (x − x^k) }
       v_s  =  max { 0, max_k { Q(x^k, ξ^s) − (ζ_s^k)^T (x − x^k) } − η }  =  max { 0, t_s − η }

      Hence, we can use the constraint

      η + v_s ≥ t_s   for s = 1,…,S.      (3)

      These S constraints can replace the constraints

      (ζ_s^k)^T (x − x^k) + η + v_s ≥ Q(x^k, ξ^s)   for s = 1,…,S, k = 1, 2,…,K − 1.

      It is clear from the construction that any optimal solution to (2) must satisfy (3).

      Conversely, given x and t, if η and v satisfy (3) then

      η + v_s ≥ t_s ≥ Q(x^k, ξ^s) − (ζ_s^k)^T (x − x^k)   for s = 1,…,S, k = 1, 2,…,K − 1,

      so the solution is feasible in (2).
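      The equivalence can be sanity-checked numerically with made-up cut values: setting t_s to the largest cut value and v_s = max{0, t_s − η} satisfies the aggregated constraint (η + v_s ≥ t_s) and, through it, every one of the original 2S(K − 1) cuts. A small sketch (all data invented):

      ```python
      # Illustrative check (invented data): the single aggregated constraint
      # eta + v_s >= t_s carries the same information as the S(K-1) CVaR cuts.
      import random

      random.seed(0)
      S, K = 5, 4
      # cut[s][k] stands in for Q(x^k, xi^s) - (zeta_s^k)^T (x - x^k),
      # evaluated at some fixed x.
      cut = [[random.uniform(-5, 5) for _ in range(K - 1)] for _ in range(S)]

      eta = 0.7
      t = [max(cut[s]) for s in range(S)]            # best choice of t_s
      v = [max(0.0, t[s] - eta) for s in range(S)]   # best choice of v_s

      for s in range(S):
          # aggregated constraint holds ...
          assert eta + v[s] >= t[s]
          # ... and implies every original cut  eta + v_s >= cut[s][k]
          for k in range(K - 1):
              assert eta + v[s] >= cut[s][k]
      ```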

    2. Problem (2) is a relaxation of (1), so the optimal value of (2) gives a lower bound on the optimal value of (1).

      To find an upper bound, we need to find a feasible solution to (1). We take x = x̄ and then choose η and v to solve the linear program

      min_{v,η}   μ( η + (1/α) ∑_s p_s v_s )
      subject to  v_s + η ≥ Q(x̄, ξ^s)   ∀ s
                  v_s ≥ 0               ∀ s

      The resulting solution (x̄, η, v) is feasible in (1), so its objective value gives an upper bound.

      Note that the LP can be solved very easily. For example, we can set up the dual problem and exploit complementary slackness, and all that is required is to order the values of Q(x̄, ξ^s).
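      As a hedged illustration of this sorting shortcut, the sketch below uses made-up values Q_s in place of Q(x̄, ξ^s): the optimal η is found by walking the sorted values until the tail probability reaches α (i.e., η is the α-quantile of the Q_s), after which v_s = max{0, Q_s − η}. The result is checked against brute-force minimization over the breakpoints.

      ```python
      # Illustration (invented values Q_s standing in for Q(xbar, xi^s)):
      # solve  min_{eta,v}  eta + (1/alpha) * sum_s p_s v_s
      #        s.t.  v_s >= Q_s - eta,  v_s >= 0
      # by ordering the Q_s values instead of calling an LP solver.
      Q = [3.0, 7.0, 1.0, 5.0, 9.0, 2.0]
      p = [1.0 / len(Q)] * len(Q)
      alpha = 0.3

      def solve_by_sorting(Q, p, alpha):
          # The objective is piecewise linear in eta with breakpoints at the
          # Q_s, so walk the sorted values until the tail probability
          # accumulated so far reaches alpha.
          tail = 0.0
          for q, ps in sorted(zip(Q, p), reverse=True):
              tail += ps
              if tail >= alpha:
                  eta = q
                  break
          v = [max(0.0, qs - eta) for qs in Q]
          return eta, v

      def objective(eta, Q, p, alpha):
          return eta + sum(ps * max(0.0, qs - eta)
                           for qs, ps in zip(Q, p)) / alpha

      eta, v = solve_by_sorting(Q, p, alpha)
      # Brute force over breakpoints confirms the sorted choice is optimal.
      best = min(objective(q, Q, p, alpha) for q in Q)
      assert abs(objective(eta, Q, p, alpha) - best) < 1e-9
      ```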

  3. Consider the capacity expansion problem considered in class, but with 30 scenarios. The ampl data file is here:
    1. Solve the formulation (1) using a direct formulation using ampl, with α = μ = 0.1. Hand in the optimal value and the optimal first stage decisions. A model file and a data file with the data specific to the CVaR formulation are available:
      1. model file:
      2. data file:

      You will need to load in both data files.

    2. Solve the formulation (1) using the L-shaped formulation you derived in Question 2(b). Hand in the optimal value and the optimal first stage decisions. Also hand in something to convince me you did solve the problem using an L-shaped method! For example, highlight the changes you made to the model and run files, and show some of the output reflecting the progress of the algorithm.

      Model and run files for minimizing the expectation using ampl are available:

      1. run file:
      2. model file:

      You will need to modify these files to handle the formulation in Question 2.


    The same problem is solved in both parts, so they give the same solution with value 15623 with xrt = 2, xru = xrv = 12, xtu = xtv = 0. In the notation of Question 2, the optimal solution has η = 156, with unmet demand in scenarios 2 and 30.

    Here is the output for part (a).

    Here is the modified model file for part (b).

    Here is the modified run file for part (b).

    Here is the output for part (b).

  4. Consider the LP:
    min   2x_1 + 6x_2 + 3x_3
    s.t.   x_1 +  x_2 +  x_3  =  6
           x_1 − 2x_2 + 4x_3  =  9
                         x_i  ≥  0,   i = 1,…,3

    Show that x = (3, 1, 2) is on the central path for this problem.


    The point y = (0, 0), s = (2, 6, 3) is dual feasible, and then x_i s_i = 6 =: μ for i = 1, 2, 3. Since x is primal feasible and strictly positive, s is strictly positive, and all the complementarity products are equal, the point x is on the central path with parameter μ = 6.
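    This can be verified directly; a short sketch checking primal feasibility, dual feasibility, and the equal complementarity products:

    ```python
    # Verify x = (3, 1, 2) is on the central path: primal feasible, the
    # stated (y, s) dual feasible, and all products x_j s_j equal mu = 6.
    A = [[1.0,  1.0, 1.0],
         [1.0, -2.0, 4.0]]
    b = [6.0, 9.0]
    c = [2.0, 6.0, 3.0]

    x = [3.0, 1.0, 2.0]
    y = [0.0, 0.0]
    s = [c[j] - sum(A[i][j] * y[i] for i in range(2)) for j in range(3)]

    # Primal feasibility: Ax = b and x > 0.
    assert all(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) < 1e-12
               for i in range(2))
    assert all(xj > 0 for xj in x)
    # Dual feasibility: s = c - A^T y >= 0 (here s = c > 0).
    assert all(sj > 0 for sj in s)
    # Equal complementarity products: x_j s_j = 6 for every j.
    assert all(abs(x[j] * s[j] - 6.0) < 1e-12 for j in range(3))
    ```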

  5. Pick three problems from the netlib LP test suite, including the problem boeing1. Solve each problem using both a simplex algorithm and an interior point algorithm. Do you get the same primal solution with each algorithm? What if you don’t crossover to get a BFS? (Differences less than 10^{-8} can be ignored.)

    (Most of the problems are also available in AMPL format, MATLAB format, and uncompressed MPS format, with an AMPL model file available.

    Alternatively, you can run cplex directly, without using ampl, in which case you would read in the MPS file. Useful commands in cplex include read, optimize, and display. You can choose the solver in cplex by issuing the command set lpmethod at the prompt. You can change options in the barrier solver (including the use of crossover) with the command set barrier. You can use these options in ampl with the syntax option cplex_options ’...’;)

    Hand in the primal solution for each problem, and discuss whether the three solutions differ for each of the models you chose. If the solutions have too many variables it is fine to select a few variables to display.


    The problem boeing1 should give three different solutions. For example, you can look at the variable PLAXSEA8 in these solutions.

  6. Hand in a progress report of your work to date on the course project.
    John Mitchell
    Amos Eaton 325
    mitchj at rpi dot edu
    Office hours: Thursday 11am–1pm, on Webex