Computational Statistics and Optimization Theory at UCLA


Computational statistics is both growing in importance and evolving in nature. Graduate courses in computational statistics need to incorporate recent advances in high-dimensional optimization and integration. These advances are being driven by applications in data mining, bioinformatics, and imaging. Modern algorithms for optimization and integration can only be fully understood and extended by statisticians with considerable mathematical sophistication. Thus, graduate courses in computational statistics should stress those principles of mathematical and numerical analysis most pertinent to algorithm design and evaluation.

KEY WORDS: Algorithms; Graduate curricula; Numerical analysis.


Everyone would agree that computational statistics is in flux. The last 50 years have brought enormous strides in hardware, software, modeling, inference, and numerical methods. We are both blessed and confused by these advances. Trying to guess where the field is headed is important because such prognostication will drive the education of the next generation of statisticians. Unfortunately, most of our crystal balls resemble the glass snow domes that I used to see in my youth. The discipline has been tipped upside down, and the obscuring snowflakes are descending quickly.

The two mathematical pillars upon which computational statistics rests are optimization and integration. Optimization propels maximum likelihood and least squares, the primary tools of frequentist inference. Markov chain Monte Carlo (MCMC) propels the sampling of posterior distributions, the staple diet of Bayesian inference. Our ability to optimize and integrate in high-dimensional spaces is what distinguishes the modern era. Despite the myriad directions statistics is taking, it is doubtful that either optimization or integration will be dislodged from their positions of primacy. The real issue is the right combination of theory and practice we should impart to students. My own bias is that we spend too much time on specific numerical techniques and too little time on general principles.
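The two pillars can be illustrated in a few lines of code. The sketch below is mine, not the article's: it fits a normal mean by maximum likelihood (the optimization pillar, trivial here because the score equation is linear) and then samples the posterior for the same mean under a weak normal prior with a random-walk Metropolis algorithm (the integration pillar). The data, prior, and tuning constants are illustrative assumptions.

```python
import math
import random

random.seed(42)

# Hypothetical data: 200 draws from N(mu = 3, sigma = 1), sigma known.
data = [random.gauss(3.0, 1.0) for _ in range(200)]
n = len(data)

# Optimization pillar: the normal log-likelihood in mu is quadratic,
# so the maximum likelihood estimate is simply the sample mean.
mle = sum(data) / n

# Integration pillar: log posterior for mu under a diffuse N(0, 10^2)
# prior, dropping constants that cancel in the Metropolis ratio.
def log_post(mu):
    log_prior = -mu * mu / (2.0 * 10.0 ** 2)
    log_lik = -sum((x - mu) ** 2 for x in data) / 2.0
    return log_prior + log_lik

# Random-walk Metropolis: propose mu' ~ N(mu, 0.2^2) and accept with
# probability min(1, post(mu') / post(mu)).
samples = []
mu = 0.0
for _ in range(20000):
    proposal = mu + random.gauss(0.0, 0.2)
    if math.log(random.random()) < log_post(proposal) - log_post(mu):
        mu = proposal
    samples.append(mu)

# Discard burn-in; the posterior mean should sit close to the MLE
# because the prior is weak relative to 200 observations.
kept = samples[5000:]
post_mean = sum(kept) / len(kept)
print(round(mle, 2), round(post_mean, 2))
```

With a weak prior and a moderate sample size, the posterior mean and the MLE nearly coincide, which is exactly why frequentist optimization and Bayesian integration can be taught as two routes to the same high-dimensional summits.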

The ability to write computer code implementing statistical algorithms stands somewhere between practice and theory. Widely applied algorithms are available in commercial software. Research-level algorithms are not. Most students in the statistical sciences master SAS, S-Plus, or R. Despite their overall comfort with computers, surprisingly few students are facile in a lower-level language. Such ignorance is blissful until advanced students undertake time-consuming simulation studies in support of their doctoral dissertations. At this point, many students are forced to learn a lower-level language such as C or Fortran for the first time. Perhaps waiting this long is just as well; the learning curve always seems less steep with proper motivation. While no one would argue that learning to program is wasted effort, most departments in the statistical sciences do little to foster programming directly. We all await the day when higher-level systems such as MATLAB achieve speeds comparable to compiled languages.


The flux in computational statistics is certainly reflected in the curriculum at UCLA, my own institution. The Departments of Biomathematics, Biostatistics, and Statistics have introduced a raft of courses that would have been unrecognizable 20 years ago. At the same time, these departments have maintained a traditional graduate course on computational statistics. This course was first taught by Robert Jennrich, one of the pioneers of the subject, and is now taught by Yingnian Wu, one of its current stars. Under Jennrich, the course revolved around regression analysis and variance-component models. Roughly two-thirds of the course still stresses these topics. The biggest change has been the introduction of MCMC methods.

None of the departments crosslisting the course require it in their doctoral programs. …