Educational Technology & Society

Computer-Mediated Counter-Arguments and Individual Learning

Introduction

A decision support system (DSS) is a platform for learning in at least two ways. First, a DSS aids the user in extending his or her knowledge about the subject matter of the decision domain. For example, a person using a stock-selecting DSS may explore the relationships between interest rates and stock prices and, while exploring, acquire knowledge about the markets. Second, a DSS may present the user with an opportunity for adjusting his or her knowledge about the decision-making process itself. In this case, the system aids a person in altering the decision-making process in an attempt to improve the outcome. An example of this second way of learning is a user of a stock-selecting DSS who, having been warned of an error-producing cognitive bias known as the gambler's fallacy, reacts to this warning by performing additional steps in the decision-making process, which in this case might include reflecting upon possible market outcomes under the assumption that future prices do not depend upon historical prices. This second way of learning, an adjustment of the decision-making process, serves the decision maker by reconciling the many interlocking and incompatible beliefs about the problem domain, some of which may be based upon cognitive biases (Tversky & Kahneman, 1974). Increasingly, attention is being focused upon cognitive theories of learning and how they can be understood and applied in the context of DSS, a trajectory of research that is distinct from much of DSS research, which focuses largely upon the end result of decision making: decision quality (Santana, 1995). Along these lines, our focus is upon the second type of learning and how a DSS might help to adjust a user's decision-making process for the purpose of overcoming a cognitive bias known as confirmation bias.
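To make this reflection step concrete, the following is a minimal sketch, not taken from the article, of how a stock-selecting DSS might respond to a gambler's-fallacy warning by simulating outcomes that do not depend on the observed price streak; the function names, parameters, and price figures are illustrative assumptions.

```python
# Illustrative sketch only: a hypothetical step in a stock-selecting DSS that reacts to a
# gambler's-fallacy warning by simulating market outcomes that ignore the recent streak.
# Names and figures are assumptions, not taken from the article.
import random

def simulate_independent_outcomes(current_price, n_scenarios=1000, daily_change=0.02):
    """Draw next-day prices independently of any past streak (a simple random step)."""
    return [current_price * (1 + random.uniform(-daily_change, daily_change))
            for _ in range(n_scenarios)]

def reflect_on_streak(recent_prices):
    """The added reflection step: compare streak-based intuition with independent draws."""
    outcomes = simulate_independent_outcomes(recent_prices[-1])
    share_up = sum(p > recent_prices[-1] for p in outcomes) / len(outcomes)
    print(f"After a streak of {len(recent_prices)} observed prices, "
          f"{share_up:.0%} of independent scenarios still move up on the next day.")

reflect_on_streak([100, 101, 102, 103, 104])  # a rising streak does not imply a reversal
```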

During DSS use, cognitive biases present a perilous downside. Users, enabled by a DSS, may make poor decisions and enact seemingly limited decision-making processes, often while confident that all is well. Several examples illustrate cognitive biases arising during the use of a DSS. A cognitive bias called the illusion of control occurs when a user of a DSS performs a what-if analysis and displays increased confidence, yet achieves no significant performance gain (Davis & Kottemann, 1994). The cognitive bias called confirmation bias occurs when a person who is gathering information restricts attention to only the data that support a favored hypothesis. The cognitive bias known as the illusion of knowledge occurs when a user is overconfident about having access to a greater amount of information, as is easily the case with a DSS, but makes poor decisions despite the additional information. This last cognitive bias was detected when investors who had switched from a phone-based investing method to an online trading platform became more confident and yet recorded poor performance despite having access to more information (Barber & Odean, 2002). Attempting to counter these cognitive biases is called de-biasing (Fischhoff, 1982). In general, de-biasing techniques, such as educating users about biases, are believed to beneficially impact decision quality (Arnott, 2006), and de-biasing techniques applied to DSS have been successfully demonstrated (Bhandari, Deaves, & Hassanein, 2008). To investigate the embedding of a de-biasing technique into the design of a DSS, we adopt mental-model theory.
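As a concrete illustration of one such de-biasing technique, the sketch below shows how a system might mediate counter-arguments by surfacing evidence that disconfirms a user's favored hypothesis rather than letting a confirmation-biased filter discard it. This is a hypothetical example, not the authors' system; the data structure and field names are assumptions.

```python
# Illustrative sketch only, not the authors' system: a minimal de-biasing aid that, given
# evidence a user gathered for a favored hypothesis, also surfaces items that contradict
# it (a computer-mediated counter-argument). Data structure and field names are assumed.
def present_with_counterarguments(evidence, favored_hypothesis):
    """Show supporting items and force disconfirming items into view."""
    supporting = [e for e in evidence if e["supports"] == favored_hypothesis]
    disconfirming = [e for e in evidence if e["supports"] != favored_hypothesis]
    print(f"Favored hypothesis: {favored_hypothesis}")
    print("Supporting evidence:", [e["text"] for e in supporting])
    # The counter-argument step: disconfirming evidence is presented rather than being
    # silently discarded by the user's confirmation-biased filter.
    print("Counter-arguments to consider:", [e["text"] for e in disconfirming])

evidence = [
    {"text": "Earnings beat analyst estimates", "supports": "buy"},
    {"text": "Rising interest rates pressure valuations", "supports": "sell"},
    {"text": "Insider selling increased last quarter", "supports": "sell"},
]
present_with_counterarguments(evidence, "buy")
```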

The theory of mental models explains how people perform certain activities, for example, information processing. When a decision maker confronts a large amount of information, a mental model aids in the filtering of that information. The mental model fulfills a gatekeeper role for the mind by preventing unrelated and unimportant information from being consciously considered. However, along with the beneficial effects that are attributed to mental models, there are also error-producing effects. Inadequate mental models may block important information. …
