JuMPChance — JuMP extensions for chance constraints

JuMPChance is an extension for JuMP. We assume that the reader is familiar with the syntax of JuMP, which is described in its documentation.

JuMPChance supports a particular class of chance constraints, namely those involving affine combinations of jointly normal random variables. Distributionally robust chance constraints are supported in the form of intervals on the mean and variance of the normally distributed random variables.

JuMPChance is research code and not officially supported as part of JuMP. JuMPChance is released under the terms of the LGPL license version 3:

JuMPChance is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License,
Version 3, as published by the Free Software Foundation.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.


Installation Guide

JuMPChance requires that Julia, JuMP, and a solver already be installed; for instructions, see the corresponding JuMP installation guide. After these are set up, follow the instructions below to install JuMPChance.

Getting JuMPChance

JuMPChance is available in the Julia package manager. To install it, run:

julia> Pkg.add("JuMPChance")

This command will install JuMPChance with ECOS as the default solver.

To start using JuMPChance, it should be imported together with JuMP into the local scope:

julia> using JuMP, JuMPChance

Quick Start Guide

This quick start guide will introduce the syntax of JuMPChance, again assuming familiarity with JuMP.

Creating a model

JuMPChance models should be created by using the following constructor:

m = ChanceModel()

All variables and constraints are associated with this model object. As in JuMP, solvers can be specified by using the solver= argument to the constructor. For example:

using CPLEX
m = ChanceModel(solver=CplexSolver())

will set the solver to CPLEX, assuming that both CPLEX and the corresponding Julia package are properly installed.

By default, JuMPChance will use ECOS, a lightweight open-source solver which supports the conic constraints needed for the reformulation method for solving chance-constrained problems.
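
If you want to construct the default solver explicitly, for example to pass solver options, you can do so directly. A minimal sketch, assuming the ECOS.jl package is installed and exports ECOSSolver with a verbose option (as in the MathProgBase-era solver interface):

using JuMP, JuMPChance, ECOS
# equivalent to the default choice, but with solver output suppressed
m = ChanceModel(solver=ECOSSolver(verbose=false))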

Defining variables

In JuMPChance, you can mix decision variables and random variables in expressions. Decision variables are declared by using JuMP’s @defVar syntax. Random variables are declared by using a similar syntax:

@defIndepNormal(m, x, mean=0, var=1)

creates a single independent normal random variable with the specified mean and variance. The mean and var arguments are always required. Variables indexed over a given set are supported, and the means and variances may depend on the given indices. For example:

@defIndepNormal(m, x[i=1:N,j=1:M], mean=i*j, var=2j^2)

creates an N by M matrix of independent normally distributed random variables where the variable in index (i,j) has mean i*j and variance 2j^2.
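
For concreteness, a short sketch with assumed dimensions (the values here are purely illustrative):

N, M = 3, 2
@defIndepNormal(m, x[i=1:N,j=1:M], mean=i*j, var=2j^2)
# x[2,2] is a normal random variable with mean 4 and variance 8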

Index sets do not need to be ranges; they may be arbitrary Julia lists:

S = [:cat, :dog]
@defIndepNormal(m, x[S], mean=0, var=1)

defines two variables x[:cat] and x[:dog].

Chance constraints

A JuMPChance model may contain a combination of standard JuMP constraints (linear and quadratic) and chance constraints.

Chance constraints are constraints which contain a mix of decision variables and random variables. Products between a decision variable and a random variable are allowed, but products of two decision variables or of two random variables are not currently supported. This restriction ensures that the resulting chance constraint is a convex constraint on the decision variables.

Mathematically, the types of constraints supported are

\[P\left(\sum_{i=1}^k \left(c_i^Tx +d_i\right)z_i \geq b\right) \geq 1- \epsilon\]

and

\[P\left(\sum_{i=1}^k \left(c_i^Tx +d_i\right)z_i \leq b\right) \geq 1-\epsilon\]

where \(x\) are the decision variables, \(c_i\) are coefficient vectors, \(d_i\) and \(b\) are scalars, \(z_i\) are independent jointly normal random variables with provided means and variances for \(i=1,\ldots,k\), and \(\epsilon \in (0,1)\).

Chance constraints of the above form are added by using the @addConstraint macro. For example:

@defIndepNormal(m, x, mean=0, var=1)
@defVar(m, z)

@addConstraint(m, z*x >= -1, with_probability=0.95)

This adds the constraint \(P(zx \geq -1) \geq 0.95\). Note that the with_probability argument specifies the minimum probability \(1-\epsilon\) with which the constraint must hold, and so should be a number close to 1.
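
Putting the pieces together, a minimal end-to-end sketch (the bounds, objective, and data are illustrative assumptions, and the default ECOS solver is assumed to be installed):

using JuMP, JuMPChance

m = ChanceModel()
@defVar(m, 0 <= z <= 10)                     # decision variable
@defIndepNormal(m, omega, mean=1, var=0.25)  # random coefficient
@setObjective(m, Max, z)
# require z*omega <= 3 to hold with probability at least 0.95
@addConstraint(m, z*omega <= 3, with_probability=0.95)
status = solve(m, method=:Reformulate)
println(status, ": z = ", getValue(z))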

Distributionally robust chance constraints

One may also specify normally distributed random variables whose parameters (mean and variance) are uncertain, that is, known to fall within a certain interval. These random variables with uncertain distribution are declared as follows:

@defIndepNormal(m, x, mean=(-1,1), var=(20,30))

Any combination of the mean and variance may be uncertain. When these variables appear in constraints, the constraint is interpreted as robust, meaning that the chance constraint must hold for all possible distributions, where the set of possible distributions will be defined more precisely below. Mathematically, this is

\[P\left(\sum_{i=1}^k \left(c_i^Tx +d_i\right)z_i \leq b\right) \geq 1-\epsilon \quad \forall \text{ distributions of } z_1,\ldots,z_k\]

Using the above notation, let the uncertainty interval on the mean of \(z_i\) be \([\hat\mu_i - \alpha_i,\hat\mu_i + \alpha_i]\) and on the variance \([\hat\sigma_i^2 - \beta_i, \hat\sigma_i^2 + \beta_i]\) where \(\alpha_i \geq 0\) and \(\beta_i \geq 0\).

Currently JuMPChance supports only the following uncertainty sets on the means and variances:

\[M = \left\{ (\mu_1,\ldots,\mu_k) : \exists (s_1,\ldots,s_k) \text{ such that } \mu_i = \hat\mu_i + s_i, |s_i| \leq \alpha_i, \sum_{i=1}^k \frac{|s_i|}{\alpha_i} \leq \Gamma_\mu \right\}\]

\[V = \left\{ (v_1,\ldots,v_k) : \exists (s_1,\ldots,s_k) \text{ such that } v_i = \hat\sigma_i^2 + s_i, |s_i| \leq \beta_i, \sum_{i=1}^k \frac{|s_i|}{\beta_i} \leq \Gamma_\sigma \right\}\]

where \(\Gamma_\mu\) and \(\Gamma_\sigma\) are given (integer) constants, known as the uncertainty budgets. The interpretation of these sets is that at most \(\Gamma_\mu\) (respectively, \(\Gamma_\sigma\)) of the \(k\) uncertain means (respectively, variances) are allowed to vary from their nominal values \(\hat\mu_i\) and \(\hat\sigma_i^2\). This is the uncertainty set proposed by Bertsimas and Sim (2004). Note that the means and variances are allowed to vary independently.

The uncertainty budgets \(\Gamma_\mu\) and \(\Gamma_\sigma\) are specified as parameters to @addConstraint as follows:

@addConstraint(m, z*x >= -1, with_probability=0.95,
    uncertainty_budget_mean=1, uncertainty_budget_variance=1)
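
Combining the uncertain-parameter declaration with a budgeted constraint, a hedged sketch of a complete distributionally robust model (solved with the cutting-plane method, which handles the robust constraints as described in the solution methods section):

m = ChanceModel()
@defIndepNormal(m, x, mean=(-1,1), var=(20,30))  # uncertain mean and variance
@defVar(m, z >= 0)
@setObjective(m, Max, z)
@addConstraint(m, z*x >= -1, with_probability=0.95,
    uncertainty_budget_mean=1, uncertainty_budget_variance=1)
status = solve(m, method=:Cuts)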

Solving the model

After the model m has been created and all constraints added, calling:

solve(m, method=:Cuts)

or:

solve(m, method=:Reformulate)

will tell JuMPChance to solve the model. The available solution methods are described in the following section.

The solve function also returns a solution status. This should be checked to confirm that the model was successfully solved to optimality, for example:

status = solve(m)
if status == :Optimal
    println("Solved to optimality")
else
    println("Not optimal, termination status $status")
end

Optimal values of the decision variables are available by using getValue, as with JuMP.
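
For example, continuing the status-checking snippet above (z is assumed to be a decision variable in the model):

if status == :Optimal
    println("z = ", getValue(z))   # optimal value of the decision variable z
end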

Solution methods and parameters

Standard reformulation

Consider the chance constraint

\[P\left(\sum_{i=1}^k \left(c_i^Tx +d_i\right)z_i \leq b\right) \geq 1-\epsilon\]

where \(z \sim \mathcal{N}(\mu,\Sigma)\) is a vector in \(\mathbb{R}^n\) of jointly normal random variables with mean \(\mu\) and covariance matrix \(\Sigma\). JuMPChance currently only supports a diagonal covariance matrix \(\Sigma\), i.e., all variables are independent, but we present the more general case here. For simplicity, we can introduce a new set of variables \(y_i = c_i^Tx + d_i\) and reduce the constraint to:

\[P\left(y^Tz \leq b\right) \geq 1-\epsilon\]

Recall that \(y^Tz\) is normally distributed with mean \(y^T\mu\) and variance \(y^T\Sigma y\). Then

\[P\left(y^Tz \leq b\right) = P\left(y^Tz - y^T\mu \leq b - y^T\mu\right) = P\left( \frac{y^Tz - y^T\mu}{\sqrt{y^T\Sigma y}} \leq \frac{b - y^T\mu}{\sqrt{y^T\Sigma y}}\right) = \Phi\left(\frac{b - y^T\mu}{\sqrt{y^T\Sigma y}}\right)\]

where \(\Phi\) is the standard normal cumulative distribution function.

Therefore the chance constraint is satisfied if and only if

\[\Phi\left(\frac{b - y^T\mu}{\sqrt{y^T\Sigma y}}\right) \geq 1- \epsilon\]

or, since \(\Phi^{-1}\) is monotonically increasing,

\[\frac{b - y^T\mu}{\sqrt{y^T\Sigma y}} \geq \Phi^{-1}(1-\epsilon)\]

which is

\[y^T\mu + \Phi^{-1}(1-\epsilon)\sqrt{y^T\Sigma y} \leq b.\]

For \(\epsilon < \frac{1}{2}\), \(\Phi^{-1}(1-\epsilon) > 0\), so the above constraint is convex and equivalent to

\[||\Sigma^{\frac{1}{2}}y|| \leq (b-\mu^Ty)/\Phi^{-1}(1-\epsilon)\]

which is a second-order conic constraint, where \(\Sigma^{\frac{1}{2}}\) is the square root of \(\Sigma\).
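
As a quick numerical sanity check of this deterministic equivalent in the scalar case, \(\Phi^{-1}(1-\epsilon)\) can be evaluated with the Distributions package (used here purely for illustration; it is not part of JuMPChance, and the data are made up):

using Distributions

epsilon = 0.05
phi_inv = quantile(Normal(0, 1), 1 - epsilon)   # Phi^{-1}(0.95) ≈ 1.645
mu, sigma2, b = 0.0, 1.0, 3.0                   # mean, variance, right-hand side
y = 1.5                                         # a candidate decision value
# deterministic equivalent of P(y*z <= b) >= 1-epsilon for z ~ N(mu, sigma2):
lhs = y*mu + phi_inv*sqrt(sigma2*y^2)
println(lhs <= b)   # prints true, so the chance constraint holds at this y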

Methods for distributionally robust constraints

Following the notation in the quick start guide, a distributionally robust chance constraint can be formulated as

\[||\Sigma^{\frac{1}{2}}y|| \leq (b-\mu^Ty)/\Phi^{-1}(1-\epsilon)\quad \forall\, \mu \in M, \Sigma \in V\]

This is a convex constraint because it is the intersection of a large (possibly infinite) set of convex constraints. These are challenging to reformulate into an explicit conic form. Instead, we approximate the constraint by a sequence of linear tangents, i.e., given a point \(y\), we detect if the constraint is violated for any choice of \(\mu\) or \(\Sigma\), and if so we add a separating hyperplane which is simple to compute.
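
For concreteness, one standard form of such a tangent (a sketch of the idea, not necessarily the exact cut JuMPChance generates): since \(f(y) = \sqrt{y^T\Sigma y}\) is convex with gradient \(\nabla f(y) = \Sigma y/\sqrt{y^T\Sigma y}\), its first-order expansion at the current point \(\hat y\) underestimates it, so for the worst-case \((\mu, \Sigma)\) found at \(\hat y\) the linear inequality

\[\sqrt{\hat y^T\Sigma \hat y} + \frac{(\Sigma \hat y)^T(y - \hat y)}{\sqrt{\hat y^T\Sigma \hat y}} \leq (b - \mu^Ty)/\Phi^{-1}(1-\epsilon)\]

is valid for the robust constraint and cuts off \(\hat y\) whenever the constraint is violated there.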

solve parameters

The solve method has the following optional keyword parameters when invoked on a JuMPChance model (an example invocation follows the list):

  • method::Symbol, either :Reformulate to use the second-order conic formulation or :Cuts to approximate the constraints by a sequence of linear outer-approximations. Defaults to :Reformulate.
  • linearize_objective::Bool, either true or false indicating whether to provide a convex quadratic objective directly to the solver or to use linear outer approximations. Defaults to false.
  • probability_tolerance::Float64, chance constraints are considered satisfied if they actually hold with probability \(1-\epsilon\) minus the given tolerance. Defaults to 0.001.
  • debug::Bool, enables debugging output for the outer approximation algorithm. Defaults to false.
  • iteration_limit::Int, limits the number of iterations performed by the outer approximation algorithm. (In each iteration, a single linearization is added for each violated constraint.) Defaults to 60.
  • objective_linearization_tolerance::Float64, absolute term-wise tolerance used when linearizing quadratic objectives. Defaults to 1e-6.
  • reformulate_quadobj_to_conic::Bool, if true, automatically reformulates a quadratic objective into second-order conic form. This is necessary for some solvers like Mosek or ECOS which don’t support mixing quadratic and conic constraints. Defaults to false except if the solver is ECOS.
  • lazy_constraints::Bool, if true and method==:Cuts, use lazy constraints instead of re-solving the mixed-integer relaxation in a loop. This option is experimental and not yet implemented for distributionally robust problems. Defaults to false.
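
For example, a sketch combining several of these options (the values are illustrative):

status = solve(m, method=:Cuts,
               linearize_objective=true,
               probability_tolerance=1e-4,
               iteration_limit=100,
               debug=true)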

Two-sided chance constraints

JuMPChance supports two-sided chance constraints of the form

\[P(l \leq x^Tz \leq u) \geq 1- \epsilon\]

where \(l\), \(x\), and \(u\) are decision variables or affine functions of decision variables, \(z\) is a vector of independent jointly normal random variables with provided means and variances, and \(0 < \epsilon < \frac{1}{2}\). This constraint holds iff there exists \(t\) such that \(t \geq \sqrt{\sum_i (\sigma_ix_i)^2}\) and \((l-\mu^Tx,u-\mu^Tx,t) \in \bar S_\epsilon\) where

\[\bar S_\epsilon = \operatorname{closure} \{ (l,u,t) : \Phi(u/t) - \Phi(l/t) \geq 1-\epsilon, t > 0 \}\]

Note that \(\bar S_\epsilon\) is the closure of the conic hull of \(S_\epsilon\) where

\[S_\epsilon = \{ (l,u) : \Phi(u) - \Phi(l) \geq 1-\epsilon \}\]

A report outlining these results is available on arXiv.

Using two-sided constraints

The syntax for two-sided constraints is as follows:

@addConstraint(m, l <= x*z <= u, with_probability = 0.95, approx="1.25")
@addConstraint(m, l <= x*z <= u, with_probability = 0.95, approx="2.0")

Any affine expression of the decision variables can appear as lower or upper bounds. Random variables may only appear in the expression in the middle. You can use sum{} as in standard JuMP constraints, e.g.:

@addConstraint(m, l <= sum{ x[i]*z[i], i=1:n } + sum{ c[j], j=1:m } <= u, with_probability=0.95)

Given a chance constraint with probability \(1-\epsilon\), the current implementation provides two different formulations, indicated by the approx parameter, which may be set to "1.25" or "2.0". The formulation guarantees that the constraint will be satisfied with probability \(1 - \text{approx} \cdot \epsilon\); this is not a conservative approximation. After a model is solved, you can check the probability level at which a constraint holds as follows:

constraint_ref = @addConstraint(m, l <= x*z <= u, with_probability = 1-0.05, approx="1.25")
solve(m, method=:Reformulate)
satisfied_with = JuMPChance.satisfied_with_probability(constraint_ref)
println("The chance constraint is satisfied with probability $satisfied_with.")

References

One of the first appearances of the convex reformulation discussed in the previous section was in 1958 by Charnes, Cooper, and Symonds, subsequently studied by Prékopa (1970) and others. It now appears in standard texts on stochastic programming. The field of chance constraints is quite broad, and a discussion of formulations which are not implemented here is beyond the scope of this document.

The class of distributionally robust chance constraints considered is a special case of so-called robust second-order conic optimization. Tractability is discussed by Ben-Tal, El Ghaoui, and Nemirovski (2009). The particular model was proposed by Bienstock, Chertkov, and Harnett (2014) with application to control of power systems under uncertainty.

JuMPChance is being used in ongoing research projects and reports which are currently in preparation.

See also JuMPeR, a JuMP extension for robust optimization.