A quasi-Newton algorithm for continuous minimax
In this chapter, we build on the algorithm models considered in Chapter 2 to develop a fast algorithm for the continuous minimax problem. This extends the steepest-descent approach of Panin (1981) and the convex combination rule for subgradients of Kiwiel (1987) to quasi-Newton search directions, conditional on an approximate maximizer.
In effect, we evaluate the choice between two alternative search directions. The first is relatively easy to compute and is based on an augmented maximization to ensure that the multiplicity of maximizers does not result in an inferior search direction. The second involves a quadratic subproblem to determine the minimum-norm subgradient. In Chapter 5, we discuss the relative merits of an algorithm based only on the former, simpler direction.
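To make the second direction concrete, the sketch below solves the quadratic subproblem for the minimum-norm point of the convex hull of the subgradients. It is a minimal illustration, not the chapter's formulation: the matrix G collecting the subgradients as rows, and the use of scipy's SLSQP solver, are assumptions made for the example.

```python
# Minimal sketch of the minimum-norm-subgradient subproblem: given
# subgradients g_1, ..., g_k of f(x, .) at the (approximate) maximizers,
# find the convex combination of smallest Euclidean norm.  The dense
# matrix G and the SLSQP solver are illustrative choices.
import numpy as np
from scipy.optimize import minimize

def min_norm_subgradient(G):
    """G: (k, n) array whose rows are the subgradients g_i.
    Returns the minimum-norm point of conv{g_1, ..., g_k}."""
    k = G.shape[0]
    Q = G @ G.T                        # Gram matrix of the subgradients
    obj = lambda lam: lam @ Q @ lam    # ||sum_i lam_i g_i||^2
    cons = [{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}]
    lam0 = np.full(k, 1.0 / k)         # start from the uniform combination
    res = minimize(obj, lam0, method="SLSQP",
                   bounds=[(0.0, None)] * k, constraints=cons)
    return res.x @ G                   # the combined subgradient
```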
We establish the descent property of the direction chosen by the algorithm. A step is taken using an Armijo-type stepsize strategy consistent with the direction. This ensures the monotonic decrease of the max-function and the convergence of the algorithm. We show that the stepsize selected by the algorithm converges to unity and that the local convergence rate is Q-superlinear.
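As an illustration of an Armijo-type rule applied to the max-function, the following sketch backtracks from a unit trial step until a sufficient-decrease test holds. The names phi and grad_phi_d and the constants c and beta are assumptions for the example; the stepsize strategy actually used by the algorithm is specified later in the chapter.

```python
# Hedged sketch of an Armijo-type backtracking rule on the max-function
# Phi.  phi evaluates Phi, d is the search direction, and grad_phi_d is
# an estimate of the directional derivative <g, d> (negative for a
# descent direction).
def armijo_step(phi, grad_phi_d, x, d, c=1e-4, beta=0.5, max_halvings=30):
    """Backtrack alpha = 1, beta, beta^2, ... until the test
    Phi(x + alpha d) <= Phi(x) + c * alpha * <g, d> holds."""
    alpha, phi_x = 1.0, phi(x)
    for _ in range(max_halvings):
        if phi(x + alpha * d) <= phi_x + c * alpha * grad_phi_d:
            return alpha               # accept: monotonic decrease in Phi
        alpha *= beta                  # otherwise shrink the step
    return alpha                       # fall back to the last trial step
```

Starting the backtracking at a unit trial step is the natural counterpart of the result cited above: near the solution the sufficient-decrease test is satisfied at alpha = 1, so the accepted stepsize converges to unity.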
As in Chapters 1 and 2, we consider the problem
\[
\min_{x \in \mathbb{R}^n}\, \max_{y \in Y} f(x, y) \tag{1.1}
\]
where Y ⊂ ℝᵐ and f : ℝⁿ × Y ↦ ℝ¹. Let
\[
\Phi(x) = \max_{y \in Y} f(x, y), \quad \forall x \in \mathbb{R}^n. \tag{1.2}
\]
We call Φ(x) the max-function. Hence, (1.1) can be written as
\[
\min_{x \in \mathbb{R}^n} \Phi(x). \tag{1.3}
\]
In this chapter, we discuss a quasi-Newton algorithm to solve (1.3). The salient features of the algorithm are the generation of a descent direction based on a subgradient of f(x, ·) and an approximate Hessian, in the presence of multiple maximizers.
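The sketch below illustrates these two ingredients under simplifying assumptions that are not part of the chapter's algorithm: Y is replaced by a finite grid so the inner maximization reduces to an argmax, the callables f and grad_f_x are hypothetical placeholders, and H stands in for a quasi-Newton approximation of the inverse Hessian.

```python
# Minimal sketch of the max-function Phi and a quasi-Newton direction,
# assuming Y is discretized to a finite grid Y_grid.  f, grad_f_x, and
# the inverse-Hessian approximation H are illustrative placeholders.
import numpy as np

def max_function(f, x, Y_grid):
    """Phi(x) = max_{y in Y} f(x, y), approximated over a grid of Y."""
    vals = np.array([f(x, y) for y in Y_grid])
    j = int(np.argmax(vals))
    return vals[j], Y_grid[j]          # Phi(x) and an approximate maximizer

def quasi_newton_direction(grad_f_x, x, y_max, H):
    """d = -H g, where g is a subgradient of Phi at x (here, the gradient
    of f(x, .) at the approximate maximizer y_max) and H approximates
    the inverse Hessian."""
    g = grad_f_x(x, y_max)
    return -H @ g
```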