Academic journal article Research-Technology Management

Forecasting Innovation: Lessons from IARPA's Research Programs: IARPA's Forecasting Experiments Can Provide the Tools for Industrial R&D to Discern Technology Trends and Forecast Innovation Success

Article excerpt

The Intelligence Advanced Research Projects Activity (IARPA) is a research arm of the US intelligence community. The popular press likes to compare us to Q Branch from the James Bond movies. In the movies, all of the gadgets--the shoe phones, the jet packs, the pen cameras--are created by Q and his team of scientists. In reality, the challenges for national intelligence are too numerous and too complicated to solve in a single laboratory. Instead, IARPA funds the best and brightest scientists and engineers, working in academic and industrial research labs around the world, to take on these challenges. Our job, in other words, is to crowdsource Q Branch.

The IARPA Method

IARPA works for the Director of National Intelligence, who coordinates intelligence activities across the 16 agencies that make up the US intelligence community. We conduct research and develop technologies for all of those agencies, which means we cover everything from A to Z, or from artificial intelligence to Zika. We have research programs in biosecurity, cyber defense, neuroscience, computing, political science, and psychology. IARPA's research is diverse, but every program must have two things: a clearly defined technical problem and a way to measure progress against it. For instance, the technical problem could be to build a superconducting computer, and we might set criteria for how many operations per second the computer should be able to perform per unit of energy. We don't know how to solve the problem in advance. We may have a hypothesis about one possible technical approach, but we remain technically agnostic and leave it open to the researchers to innovate. We frame the problem, we fund several possible solutions, and then we serve as a referee, measuring how well the various approaches satisfy the success criteria we've established.
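The referee role described above can be illustrated with a minimal sketch: several competing approaches are measured against one pre-announced success criterion, here energy efficiency expressed as operations per joule (operations per second divided by power draw). The threshold, team names, and numbers are all hypothetical, invented for illustration; they are not actual IARPA program data.

```python
# Hypothetical sketch of refereeing competing approaches against a single,
# pre-announced success criterion. All names and figures are illustrative.

THRESHOLD_OPS_PER_JOULE = 1e12  # illustrative bar, not a real program metric


def ops_per_joule(ops_per_second: float, watts: float) -> float:
    """Energy efficiency: operations per second divided by power draw.

    One watt is one joule per second, so ops/s divided by watts
    yields operations per joule.
    """
    return ops_per_second / watts


def meets_criterion(ops_per_second: float, watts: float) -> bool:
    """True if an approach clears the announced efficiency threshold."""
    return ops_per_joule(ops_per_second, watts) >= THRESHOLD_OPS_PER_JOULE


# Measure several hypothetical team results against the same bar.
results = {
    "team_a": (5e14, 200.0),  # 5e14 ops/s at 200 W -> 2.5e12 ops/J
    "team_b": (1e14, 500.0),  # 1e14 ops/s at 500 W -> 2.0e11 ops/J
}
verdicts = {team: meets_criterion(*r) for team, r in results.items()}
print(verdicts)  # team_a clears the bar; team_b does not
```

The key design point mirrors the text: the metric and threshold are fixed before any approach is evaluated, so every team is scored by the same rule rather than by the referee's preference for a particular technique.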

IARPA awards its research contracts competitively, and the spirit of competition runs through our research programs from start to finish. We develop priorities from program managers' interests and from national security concerns around science and technology. Most program managers come into the organization with a problem they want to explore; I had a set of interests around improving human judgment under radical uncertainty. Like most program managers, I wanted to explore these problems because I thought they were important to society at large, not just to national intelligence. IARPA leadership and the Office of the Director of National Intelligence decide which of these align with national intelligence concerns. Right now, I'm concerned about things like gene editing, security in the Internet of Things, the increasing sophistication and availability of cyber weaponry, advances in neurotechnologies, and safety in machine learning and autonomy. We also take input from other organizations about what might be just over the horizon.

Usually, no single organization has the combination of resources needed to work on our problems, so teams self-organize into a mix of industry and academic researchers from multiple organizations and disciplines. (As one example, we had one program that required expertise in computer science, statistics, sociology, political science, epidemiology, and economics.) The teams submit proposals, and we fund the strongest technical approaches. We're comfortable with risk. Our mandate is to go after hard problems, which means we don't focus on low-hanging fruit. We expect that only about half our efforts will succeed. If we're succeeding much more than half the time, then we're picking problems that are too easy. We celebrate technical failure when it reflects ambition and honest science--and we push to publish negative results so that we can learn from them. Science is hard enough without forgetting what you've done.

Because we don't know which approach will work, we fund multiple teams in parallel. At the end of a program, we have, in the case of a successful program, a new technology or scientific result that delivers a solution to the intelligence community. …
