Algorithms for computing saddle points
Consider the basic continuous minimax problem in which the underlying function f(x, y) is convex in x and concave in y. The minimax problem then has a unique solution, which can be computed with specialized algorithms. Such saddle point solutions allow the decision maker to assess the worst-case strategy of an opponent and to compute the optimal response to that worst case. In the present context, the opponent can be interpreted as nature choosing the worst-case value of the uncertainty. The solution is an equilibrium strategy that ensures an optimal response to the worst case: neither the decision maker nor nature would benefit by deviating from this equilibrium.
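The equilibrium property can be checked numerically on a toy example (our own illustration, not one from the text): for f(x, y) = x^2 - y^2, the saddle point is (0, 0), and any unilateral deviation leaves the deviating player worse off.

```python
# Hypothetical example: f(x, y) = x^2 - y^2 is convex in x, concave in y,
# with saddle point (x*, y*) = (0, 0).
def f(x, y):
    return x**2 - y**2

x_star, y_star = 0.0, 0.0
for d in [0.5, 1.0, 2.0]:
    # The decision maker (minimizer) cannot improve by moving x away from x*.
    assert f(x_star + d, y_star) >= f(x_star, y_star)
    # Nature (maximizer) cannot improve by moving y away from y*.
    assert f(x_star, y_star + d) <= f(x_star, y_star)
print("saddle inequalities f(x*, y) <= f(x*, y*) <= f(x, y*) hold")
```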
We consider the computation of saddle points with gradient-based algorithms such as steepest descent and Newton-type algorithms. These aim to satisfy the optimality conditions for the decision maker and for nature.
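As a minimal sketch of a steepest-descent-type scheme (an illustration under our own assumptions, not the book's algorithm): gradient descent in x combined with gradient ascent in y drives both players' first-order conditions toward zero. The function f(x, y) = x^2 + x*y - y^2 used here is an assumed toy example, convex in x and concave in y with saddle point (0, 0).

```python
# Simultaneous gradient descent (in x) / ascent (in y) on an assumed
# toy function f(x, y) = x^2 + x*y - y^2.
def grad_x(x, y):
    return 2*x + y          # decision maker's optimality condition: = 0

def grad_y(x, y):
    return x - 2*y          # nature's optimality condition: = 0

x, y, alpha = 1.0, -1.0, 0.1
for _ in range(500):
    # Descent step for the minimizer, ascent step for the maximizer.
    x, y = x - alpha*grad_x(x, y), y + alpha*grad_y(x, y)

print(f"approximate saddle point: ({x:.4f}, {y:.4f})")
```

For this strongly convex-concave quadratic, the coupled iteration contracts toward the saddle point; for merely bilinear couplings a plain simultaneous scheme can cycle, which is one motivation for the Newton-type methods below.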
Gradient-based algorithms approximate the Jacobian of the simultaneous system that characterizes the equilibrium condition, evaluating it numerically at periodic intervals. Between evaluations, the approximation is improved with the Broyden rank-one update. Global convergence is maintained by an Armijo-type stepsize strategy.
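A hedged sketch of such a quasi-Newton scheme, on an assumed toy function rather than anything from the text: the Jacobian of the system G(z) = (grad_x f, grad_y f) is first evaluated numerically by finite differences, then improved with Broyden updates, and a simple backtracking rule on the residual norm stands in for the book's Armijo-type strategy.

```python
import numpy as np

def G(z):
    # First-order conditions for the assumed toy function
    # f(x, y) = x^2 + x*y - y^2 + 0.1*x^4 (convex in x, concave in y).
    x, y = z
    return np.array([2*x + y + 0.4*x**3,   # grad_x f
                     x - 2*y])             # grad_y f

def fd_jacobian(z, h=1e-6):
    # Periodic numerical evaluation of the Jacobian by finite differences.
    n = z.size
    J = np.empty((n, n))
    g0 = G(z)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        J[:, i] = (G(z + e) - g0) / h
    return J

def quasi_newton(z, tol=1e-10, max_iter=50):
    B = fd_jacobian(z)                     # initial Jacobian approximation
    g = G(z)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(B, -g)         # quasi-Newton direction
        t = 1.0
        # Armijo-type backtracking on the residual norm of G.
        while (np.linalg.norm(G(z + t*d)) > (1 - 1e-4*t)*np.linalg.norm(g)
               and t > 1e-8):
            t *= 0.5
        s = t*d
        z_new, g_new = z + s, G(z_new := z + s)
        # Broyden rank-one update: make B consistent with the latest step.
        B += np.outer(g_new - g - B @ s, s) / (s @ s)
        z, g = z_new, g_new
    return z

z_star = quasi_newton(np.array([1.0, -1.0]))
print(np.linalg.norm(G(z_star)))   # near zero: the equilibrium is reached
```

Periodic re-evaluation of the Jacobian (here only at the start) keeps the approximation from drifting, while the rank-one updates avoid the cost of a full numerical Jacobian at every iteration.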
The global and local convergence of the quasi-Newton algorithm is discussed. Convergence under relaxed descent directions is established. The achievement of unit stepsizes is related to the accuracy of the Jacobian approximation. Furthermore, a simple derivation of the Dennis-Moré characterization of the Q-superlinear convergence rate is given.
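For reference, the standard form of the Dennis-Moré characterization can be stated as follows (with z* the solution, J the Jacobian of the equilibrium system, B_k the k-th Jacobian approximation, and s_k = z_{k+1} - z_k the step):

```latex
% Dennis-Moré condition: the quasi-Newton iteration converges
% Q-superlinearly if and only if
\lim_{k \to \infty} \frac{\left\| \bigl( B_k - J(z^*) \bigr) s_k \right\|}{\left\| s_k \right\|} = 0 .
```

That is, the Jacobian approximation need only become accurate along the step directions, not uniformly, which is what makes Broyden-type updates sufficient for superlinear convergence.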
We extend the discussion in Chapter 2 to a special case of minimax characterized by the saddle point condition introduced in Section 1.4. We are specifically concerned with the equilibrium condition under which the decision maker and nature each optimize their respective part of the objective: the decision maker minimizes over x while nature maximizes over y. We thus have the problem of simultaneously computing the solutions of