Legal scholarship has been silent about a phenomenon with profound implications for governance: the automation of compliance with laws mandating risk management. Regulations, from bank-capitalization rules, to Sarbanes-Oxley's provisions on financial fraud and misrepresentation, to laws governing information-privacy protection, frequently require regulated firms to develop internal processes to identify, assess, and mitigate risk. To comply, firms have turned wholesale to technology systems and computational analytics that measure and predict corporate risk levels and "force" decisions accordingly. The third-party market for compliance-technology products alone, known generally as "governance, risk, and compliance" (GRC) software, systems, and services, grew to $52 billion last year and is poised to grow exponentially.
While these technology systems offer powerful compliance tools, they also pose real perils. They permit computer programmers to interpret legal requirements; they mask the uncertainty of the very hazards with which policy makers are concerned; they skew decisionmaking through an "automation bias" that privileges computer-generated output over sound human judgment; and their lack of transparency thwarts oversight and accountability. These phenomena played a critical role in the recent financial crisis.
This Article explores these developments and the failure of risk regulation to address them. While regulators have lauded the turn to technology, they have ignored its perils. This Article, by contrast, investigates the accountability challenges posed by these and other technologies of control, and suggests specific reform measures for policy makers revisiting the governance of risk. It argues for more activist regulatory oversight, backed by sanctions, before disaster occurs. But it also emphasizes collaboration in developing risk-management systems, drawing both on the granular expertise of firms and the broader vantage of administrative agencies. Most importantly, it seeks to better reflect the human element of decisionmaking at both levels: to recognize the ways in which technology can hinder good judgment, to reintroduce human inputs into the decision process, and to account for the limits of both human and computer reasoning.
In December 2006, executives at financial-services firm Goldman Sachs quickly convened a meeting of senior risk managers and traders. After three hours examining the breadth of its trading positions, the firm decided to limit its exposure to a housing-market downturn by selling some of its mortgage-backed securities and diversifying its holdings to hedge the risk of others.1 While Goldman suffered losses in 2007, they reached nowhere near the scale of those suffered by its contemporaries.2 The firm avoided the fate of now-defunct competitors such as Bear Stearns, Lehman Brothers, and Merrill Lynch,3 and went on to earn record profits in 2009.4
The meeting's fortuitous timing was no coincidence. Since the 1980s, Goldman had invested heavily in risk-modeling technology.5 Unlike some of its competitors, Goldman had incorporated into its system's monitoring capacity daily trend reporting based on sophisticated, quantitative risk-prediction programs.6 In December 2006, Goldman's system indicated a problem: the firm's daily profit and loss reports showed that its mortgage business had posted a loss for ten straight days.7 The generation of those ten daily reports triggered the meeting, and the evaluation of firm-wide exposure measures generated by its risk-assessment technologies, in turn, prompted the subsequent realignment.8
Goldman's experience underscores a phenomenon about which legal scholarship has been remarkably quiet: the increasingly pervasive reliance on technology, in the form of information-technology and decision-automation software and analytics, in assessing and controlling risk, and in complying with government regulation mandating its management. …