Discrete minimax algorithm for equality and inequality constraints
In this chapter, we consider a sequential quadratic programming algorithm for the discrete minimax problem. As described in Chapter 6, the constraints are used to formulate an augmented Lagrangian. The algorithm involves a sequential quadratic programming subproblem, an adaptive penalty parameter selection rule to regulate the emphasis on constraint satisfaction, and an Armijo-type stepsize strategy, convergent to unit steps, that ensures progress towards optimality and feasibility of the constraints.
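The Armijo-type stepsize strategy mentioned above can be sketched as a backtracking rule: the unit step is tried first, and the step is halved until a sufficient-decrease condition on a merit function holds. The merit function phi, the constants c and beta, and the test problem below are illustrative assumptions, not the chapter's own choices.

```python
import numpy as np

def armijo_step(phi, grad_phi, x, d, c=1e-4, beta=0.5, max_halvings=30):
    """Return a stepsize alpha in (0, 1] satisfying the Armijo condition
    phi(x + alpha*d) <= phi(x) + c * alpha * <grad_phi(x), d>."""
    phi0 = phi(x)
    slope = np.dot(grad_phi(x), d)   # directional derivative; must be < 0
    alpha = 1.0                      # the unit step is tried first
    for _ in range(max_halvings):
        if phi(x + alpha * d) <= phi0 + c * alpha * slope:
            return alpha
        alpha *= beta                # backtrack by halving
    return alpha

# Illustrative usage: quadratic merit phi(x) = ||x||^2 with the
# steepest-descent direction d = -grad phi(x).
phi = lambda x: float(np.dot(x, x))
grad = lambda x: 2.0 * x
x = np.array([1.0, -2.0])
alpha = armijo_step(phi, grad, x, -grad(x))
```

Near a solution, directions generated by the quadratic subproblem typically satisfy the sufficient-decrease test at alpha = 1, which is the sense in which the stepsize converges to unity.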
The algorithm is formulated for general nonlinear constrained problems in which the objective and the constraints are twice continuously differentiable. In the case of linear constraints, the algorithm simplifies considerably. The global convergence of the general algorithm is shown, along with the convergence of the stepsize to unity. It is also shown that the penalty parameter does not grow indefinitely. The local Q-superlinear convergence of the algorithm is demonstrated. The algorithm for linear constraints is also discussed and the convergence properties of this special case are deduced.
In this chapter, we consider an algorithm for solving the discrete minimax problem introduced in Chapter 6. As we have already discussed, this formulation applies to decision models with a discrete set of scenarios. Throughout this chapter, the discussion relies on the equivalence of the above discrete minimax problem with a continuous minimax problem and with a nonlinear programming problem. Thus, using the terminology introduced in Chapter 6, it is established in Lemma 6.2.1 that (1.1) is equivalent to a nonlinear programming problem.
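The equivalence invoked here can be recorded in the standard epigraph form; the symbols below (scenario objectives $f^i$, feasible set $X$) are generic notation and not necessarily those of Chapter 6:

```latex
\min_{x \in X} \; \max_{i = 1, \dots, m} f^{i}(x)
\quad \Longleftrightarrow \quad
\min_{x \in X, \; v \in \mathbb{R}} \; v
\quad \text{subject to} \quad f^{i}(x) \le v, \quad i = 1, \dots, m,
```

where the auxiliary variable $v$ bounds every scenario objective from above, so that minimizing $v$ minimizes the worst-case objective.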