Foresight and futures discussions--whether they take the form of a written article, a conference presentation, or an in-person consultation--always involve a "speaker for the future" and a listener, the speaker's dialogue partner. Dialogue on futures and foresight is thus essentially a search for communication and understanding between parties and viewpoints. The ideal foresight process involves asking and answering questions and exchanging expertise within a working relationship.
This article explores the range of foresight tools and techniques needed to address our changing world effectively. I will offer a few "operational rules of the road" for foresight practice, based on my experience and observation.
Rule of the Road: Revisiting assumptions about research and analysis is always useful, even if only to reconfirm their validity. But in a dynamic environment, little remains static, and old rules may become outdated.
The ability to capture, process, and report data is expanding dynamically, but these are largely quantitative capabilities. Discovering meaning and gaining insight from data trends continue to challenge us. We need to cross the bridge between quantitative and qualitative analysis (that is, creativity) in order to integrate multiple insights into a holistic and useful set of conclusions. One of the most intriguing yet elusive subjects of inquiry in this arena is what are called weak signals, and we will pursue that quarry with enthusiasm.
First, let us consider basic methodology, starting with the functional aspects of the extremely large data sets that have become commonplace in global modeling. The size of these sets is driven by the exponential growth of the Internet, global communications, and surveillance technology, which deliver huge volumes of incoming data at all times and from all places. Our ability to crunch massive amounts of data is also rapidly expanding. But are we paying equal attention to the expansion of risk? Are there standards that can help us avoid driving our model off the road at excessive speed?
Some of the models built to interpret this data flood are highly opaque, and a major concern is how many operators of large data models are struggling with these analytical tools or are inattentive to the risks. One of the most useful risk-reduction tactics is a more complete understanding of the fundamentals of model construction and management. I have relied below on the excellent work of Adam Gordon.
We should always understand the shape of the data relationships involved--the basic underlying relations (usually mathematical), such as direct or inverse, at work in the data set. Connected with this is data interaction--the relations among multiple factors that affect outcomes, such as reinforcing loops (positive or negative feedback), balancing loops (change-dampers, such as thermostats), and causal loops (mixes of the two). One way to think about tipping points, for example, is as activations of feedback loops, as in climate-change dynamics. These points of activation are thresholds or discontinuities within data relationships where the rules can change (e.g., producing a catastrophe scenario).
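To make the loop dynamics concrete, here is a minimal sketch--my own toy model, not drawn from the article's sources--in which a balancing loop damps a variable until it crosses a hypothetical threshold, after which a reinforcing loop takes over and the rules of the data relationship change:

```python
# Illustrative toy model only; every parameter value here is hypothetical.

def step(temp, threshold=2.0, damping=0.5, runaway=0.3, forcing=0.1):
    """Advance a toy 'temperature anomaly' one time step.

    Below the threshold, a balancing loop (damping) pulls the anomaly back
    toward equilibrium. Above it, a reinforcing loop (runaway) amplifies it:
    the discontinuity changes the rules, as at a tipping point.
    """
    if temp < threshold:
        return temp + forcing - damping * temp                # balancing loop
    return temp + forcing + runaway * (temp - threshold)      # reinforcing loop

def simulate(steps, temp=0.0, **kwargs):
    """Run the toy model and return the full trajectory."""
    history = [temp]
    for _ in range(steps):
        temp = step(temp, **kwargs)
        history.append(temp)
    return history

# With weak forcing the system settles near forcing/damping = 0.2;
# with stronger forcing it crosses the threshold and runs away.
stable = simulate(200)
tipped = simulate(200, forcing=1.1)
```

The same trajectory looks placid right up to the threshold, which is one reason tipping points are so hard to read out of historical data alone.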
Data potholes can also include stale data: social and economic data, for example, has a short shelf life, but ongoing and continuous updates can be expensive, so using smaller data snapshots is a common compromise. Then there is data lag--a delay between cause and effect that can range from minutes to years (e.g., birth defects triggered by a genetic disease or a toxic exposure decades before)--which complicates the accuracy of any causal analysis.
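A brief sketch of the lag problem, using entirely synthetic data and a hypothetical 12-step delay of my own choosing: comparing cause and effect at the same time step understates the relationship, while shifting the series by the true lag recovers it.

```python
# Synthetic illustration only: an 'effect' series responds to a 'cause'
# series LAG steps later, plus noise.
import random

random.seed(42)
LAG = 12  # hypothetical delay between cause and effect, in time steps

cause = [random.gauss(0, 1) for _ in range(500)]
# effect at time t is driven by the cause at time t - LAG, plus noise
effect = [0.0] * LAG + [cause[t - LAG] + random.gauss(0, 0.2)
                        for t in range(LAG, 500)]

def corr(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

naive = corr(cause, effect)                # same-time comparison: near zero
lagged = corr(cause[:-LAG], effect[LAG:])  # shifted by the true lag: strong
```

An analyst who only checks same-time correlations would miss this relationship entirely; and in real data the true lag is unknown, which is exactly what makes causal analysis across lagged series so treacherous.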
Finally, there is the challenge of data translation, where cross analysis between domains (e.g., between social and economic data) is complicated by disconnects across language, concepts, or assumptions. …