A survey of continuous minimax algorithms
We consider several continuous minimax algorithm models. All of these base their progress on gradient information. While some are implementable, others require substantial further development to be of practical use. In Chapter 4, we introduce and analyze in detail a quasi-Newton algorithm that builds upon some of the models introduced in the present chapter. In Chapter 5, we consider numerical experiments with a number of algorithms to justify empirically a simplified quasi-Newton algorithm.
Under the special assumption that f(x, y) is convex in x and concave in y, the continuous minimax problem can be formulated as a saddle point problem. This is an interesting special case of minimax, and we discuss algorithms for computing saddle point solutions in Chapter 3. Another special case is discrete minimax, for which a superlinearly convergent quasi-Newton algorithm is discussed in Chapter 7.
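To illustrate the convex-concave case, consider a minimal sketch (our own illustrative example, not taken from the text): for f(x, y) = x^2 - y^2 + 2xy, which is convex in x and concave in y, the saddle point at the origin can be approached by simultaneous gradient descent in x and ascent in y.

```python
# Illustrative sketch: saddle point of f(x, y) = x^2 - y^2 + 2*x*y,
# which is convex in x and concave in y. The (hypothetical) example
# function and step size are our assumptions, not from the text.

def grad_x(x, y):
    return 2.0 * x + 2.0 * y   # df/dx

def grad_y(x, y):
    return -2.0 * y + 2.0 * x  # df/dy

def descent_ascent(x0, y0, step=0.1, iters=2000):
    """Simultaneous gradient descent in x and ascent in y."""
    x, y = x0, y0
    for _ in range(iters):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - step * gx, y + step * gy  # descend in x, ascend in y
    return x, y

x_star, y_star = descent_ascent(1.0, 1.0)
# iterates spiral in toward the saddle point (x, y) = (0, 0)
```

The quadratic terms damp the rotation induced by the bilinear coupling term; for a purely bilinear f(x, y) = xy the same scheme would cycle rather than converge, which is one reason dedicated saddle-point algorithms are discussed in Chapter 3.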
In this chapter, we survey algorithms for solving the continuous minimax problem introduced in Section 1.2. A new quasi-Newton algorithm for this problem is developed and analyzed in detail in Chapter 4, and numerical experiments with a number of algorithms are discussed in Chapter 5.
Continuous minimax belongs to the general class of nonsmooth problems. The main reason for this nonsmooth character is the possible multiplicity of maximizers at any given point: the objective function has a different gradient with respect to x corresponding to each maximizer. As such, continuous minimax problems may be solved using nonsmooth optimization methods such as subgradient and bundle methods. Subgradient methods require at least one subgradient to be evaluated at each iteration to find a direction of descent (see CN 1). Bundle methods use subgradient information from successive iterations, within a ball of radius r > 0. These methods have been developed to solve either the general class or particular types of nonsmooth problems.
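The source of nonsmoothness, and the resulting subgradient step, can be sketched on a small example of our own (hypothetical, not from the text): for f(x, y) = xy + 0.5 x^2 with y restricted to [-1, 1], the max-function Phi(x) = max_y f(x, y) = |x| + 0.5 x^2 is nonsmooth at x = 0, where every y in [-1, 1] is a maximizer; a subgradient of Phi at x is the gradient of f with respect to x at any maximizer y*, here y* + x.

```python
# Illustrative sketch of a subgradient method for Phi(x) = max_y f(x, y)
# with f(x, y) = x*y + 0.5*x**2 and y in [-1, 1]. The example function
# and step-size rule are assumptions for illustration only.

def maximizer(x):
    # argmax over y in [-1, 1] of x*y; at x = 0 every y is a maximizer,
    # and we may pick any of them (here y* = 1)
    return 1.0 if x >= 0 else -1.0

def subgradient_method(x0, iters=500, c=1.0):
    """Subgradient descent with diminishing step sizes c/(k+1)."""
    x = x0
    for k in range(iters):
        g = maximizer(x) + x       # subgradient of Phi at x: grad_x f(x, y*)
        x = x - (c / (k + 1)) * g  # diminishing step guarantees convergence
    return x

x_star = subgradient_method(2.0)
# iterates approach the minimizer x = 0, where Phi is nondifferentiable
```

Because a subgradient need not be a descent direction at x, the diminishing step-size rule, rather than a line search, is what drives convergence; bundle methods improve on this by aggregating the subgradients collected at nearby iterates.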