The Influence of Decision Commitment and Decision Guidance on Directing Decision Aid Recommendations
Woolley, Darryl J., Academy of Information and Management Sciences Journal
This study considers factors that influence whether decision-aid users direct decision aid recommendations toward their prior beliefs. Desiring to confirm their beliefs, decision aid users may seek to direct decision aid recommendations toward their prior opinions. Cognitive dissonance theory suggests that this is more likely to occur when users are strongly committed to their opinions. However, if decision-makers receive guidance from a decision aid, they may be less likely to direct decision aid recommendations toward their prior belief. In the first of two experiments, professional auditors were more likely to direct decision aid recommendations if they were committed to a decision before using the aid. In the second experiment, graduate business students were more likely to accept a decision aid's initial recommendation when the decision aid provided guidance. However, the decision aid guidance did not stop users from directing the decision aid recommendation toward their prior belief; rather, it appeared to influence users who otherwise would have shifted away from both the decision aid recommendation and their own prior belief. This study contributes to research on decision aid use by finding that both professional and non-professional decision aid users direct decision aid recommendations toward their prior belief, and that they are influenced by the degree of their decision commitment and the guidance they receive from the decision aid.
In March 1942, the Japanese naval staff began to plan a campaign designed to draw the United States Navy into a decisive battle near the Pacific island of Midway. The campaign had been proposed by Admiral Matome Ugaki in January. As part of the planning, the naval staff conducted a simulation to determine the feasibility of the plan. Admiral Ugaki refereed the simulation. As a strong supporter of the campaign, "... he allowed nothing to happen which would seriously inconvenience the smooth development of the war games to their predestined conclusion. He did not scruple to override unfavorable rulings of other umpires" (Prange et al., 1982, p. 31).
The battle simulation used to plan the battle of Midway is an example of a decision aid, a tool used to assist its user in improving a decision. Decision aids are used in a variety of contexts, including medical diagnosis, bankruptcy prediction, and audit tasks. Research on decision aids has investigated whether they yield superior judgments (Dawes, Faust and Meehl, 1989) and, especially in the accounting domain, whether and why decision makers rely on decision aid recommendations.
As Admiral Ugaki's war game suggests, a user may face more choices than simply whether to rely on an aid's recommendation. A user may control the aid's result so that it is consistent with his or her own opinion. Research in accounting suggests that auditors control decision aid results to concur with their personal opinions (Kachelmeier and Messier, 1990; Messier, Kachelmeier and Jensen, 2001). Organizations implement decision aids to guide or direct a decision process (Silver, 1990); understanding why people control decision aid recommendations and how this behavior affects decisions will help an organization match its decision support technology to its strategy. For an aid that is designed to guide decisions to add value to an organization, the aid should produce a different decision than an unaided judgment. By controlling a decision aid's recommendation to confirm a prior belief, users obtain the same decision as they would without the aid and diminish the aid's value. Decision aids are expensive to develop and often fall into disuse (Gill, 1995). Whenever an organization implements a decision aid, it faces the threat that the aid's users will circumvent the aid's recommendation to rely on their own judgment.
This paper builds upon Kachelmeier and Messier (1990) to investigate why people control decision aid results and the effectiveness of strategies to reduce such controlling. An experiment with CPA participants tests whether decision commitment, a construct drawn from social psychology (Festinger, 1957), increases how much users control decision aid recommendations, and an experiment with MBA student participants tests whether decision aid explanations reduce the extent to which users control decision aid recommendations. The results partially support the prediction that decision commitment influences controlling of decision aid recommendations and support the prediction that decision aid explanations reduce it.
THEORY AND HYPOTHESES
Decision aids usually improve decision results (Dawes, Faust and Meehl, 1989), but decision makers often choose to rely upon their own judgment rather than upon an aid's recommendation. Incentives, feedback, and justification requirements decrease reliance on decision aids (Ashton, 1990). Experienced or confident decision makers believe that their ability is adequate and are less likely to rely upon decision aids than less experienced users (Arkes, Dawes and Christensen, 1986; Whitecotton, 1996). Decision makers who are informed about the decision aid algorithm or who are able to interact with the aid have greater confidence in the aid and feel more in control of the decision process, and are more likely to rely upon the aid (Davis, 1998; Eining, Jones and Loebbecke, 1997).
The decision aid reliance research cited above examines decision aids with inputs provided by the researcher and measures reliance as the difference between decision aid recommendations and decision aid users' decisions. In comparisons of human judgment to decision aid effectiveness, however, people are found to be better at measuring variables, whereas decision aids are better at combining those measurements into an overall judgment (Kleinmuntz, 1990). In practice, decision aids often take subjective inputs based on users' judgment that the aid combines to arrive at a recommended course of action (e.g., Shelton, Whittington and Landsittel, 2001). Allowing users to decide the inputs of a decision aid may substantially alter how the aid is used. When the aid's inputs are set externally to the user, the user faces only the choice of accepting or rejecting the aid's recommendation. An aid that allows users to decide the inputs allows them, by setting the inputs, to use the decision aid recommendation to confirm their prior belief or to reach a strategic recommendation.
Researchers in social psychology have long noted that people manage evidence based on their prior beliefs or social pressure (Festinger, 1957). Theories developed from these observations provide a framework that describes a lack of reliance on decision aids, including managing an aid to get an expected result. These theories predict that people will use evidence to confirm beliefs to which they are strongly committed or to which they feel social pressure to conform. People search for or interpret evidence to support committed beliefs, opinions, or decisions, rather than rationally combining the evidence in the decision or belief-formation process. If we regard a decision aid's recommendation as a piece of evidence for a decision-maker to consider, then we can investigate how the recommendation is treated just as any other type of evidence is.
The theory of cognitive dissonance proposes that a disagreement between beliefs, or between a decision and new evidence, may cause psychological (Elliot and Devine, 1994) or possibly physical discomfort (Kiesler and Pallak, 1976). Impression management theory predicts that people desire to appear consistent because of others' expectations. Both theories predict that the more committed a person is to a decision, either internally or externally, the more they will manage evidence to agree with their prior belief. They create a balance between their former belief and new evidence by interpreting new evidence in a manner consistent with their former belief. "Rational" decisions based on an unbiased evaluation of all evidence could cause discomfort by forcing decision makers to become aware of their incorrect judgments or beliefs. Whether people are willing to change their beliefs based on new evidence depends upon the relative strength of their commitment to their prior belief and the persuasiveness of the new evidence (Aronson, 1968). If decision makers are strongly committed to their prior belief, the path of least resistance in coping with disconfirming evidence is to avoid or discredit the new information. For example, they may seek out evidence consistent with their prior belief or avoid disconfirming evidence (Frey, 1986). If new information cannot be avoided, people discredit the new information (Lord, Ross, and Lepper, 1979).
Commitment occurs when a decision is irreversible (Wicklund et al., 1976), publicly communicated (Jecker, 1963), linked to a subsequent action (Kiesler and Sakumura, 1966), or based on a strong belief (Brock et al., 1967; Sweeney and Gruber, 1984). If decision makers are not committed to their prior decision or belief, they simply reverse their decision or belief when confronted with contrary information. For example, inexperienced decision makers over-rely on decision aids, using them when not appropriate because they have no committed opinion (Glover, Prawitt, and Spilker, 1997). People who are strongly committed to their decision defend their decision and are biased in their information evaluation or in their future decisions (Staw, 1976). Social psychology research usually manipulates decision commitment as a proxy for the presence of discomfort caused by inconsistent new information (Brehm and Cohen, 1962).
These findings also apply in professional contexts. Auditors, for example, indicate that the ability to justify or document a decision is more important than decision consensus, compromise, or finding a single best answer (Gibbins and Emby, 1984). They often make judgments early in the decision process and then gather evidence to support that judgment. They also point out the importance of protecting themselves from potential negative consequences of their decisions and that they often use audit working papers to justify decisions rather than to record audit procedures (Gibbins, 1984).
Kachelmeier and Messier (1990) and Messier, Kachelmeier and Jensen (2001) found an apparent example of auditors evaluating evidence based on an existing opinion in an audit sample size task. They based their research on the AICPA's suggested sample size selection aid (Audit Sampling, 1983, 1999), and found that auditors select smaller sample sizes than the aid recommends. When using the AICPA's decision aid, auditors make audit judgments concerning audit risk, tolerable misstatement, and the role of other audit procedures. The decision aid combines these subjective judgments to calculate a recommended sample size. The authors compared sample sizes obtained from two groups of participants. The first group calculated a sample size by inputting audit judgments into the aid; the second group only supplied audit judgments without using the aid, and the authors then calculated sample sizes by inputting those judgments into the aid. Participants who used the aid arrived at a lower sample size than the aid calculated for participants who only provided the audit judgments. Apparently the auditors who participated in their study adjusted their audit judgments to obtain an acceptable sample size.
Auditors routinely make sample size decisions, and may have boundaries on what they believe are reasonable sample sizes. Strong belief is a form of commitment, and if Kachelmeier and Messier's (1990) auditors found that the sample size recommended by the decision aid was outside of their accepted range, they may have attempted to reconcile the aid to their opinion by controlling the aid's recommendation through changing their audit judgments. The effect of decision commitment can be tested by changing the level of commitment. One way to manipulate commitment is to require people to communicate their belief before asking for a decision.
H1: When users' prior opinions differ from a decision aid recommendation, users committed to a decision before using a decision aid
A: will direct the decision aid recommendation toward a sample size consistent with their prior belief more often than other auditors and
B: will indicate less agreement with the decision aid than other auditors.
Various factors may influence the level of commitment to a belief. More experienced auditors are expected to have stronger beliefs about the range of appropriate sample sizes and to be more committed to their prior belief. More experienced decision-makers tend to be more confident, and confidence has been found to be associated with lower decision aid reliance (Whitecotton, 1996). Experienced and more confident auditors are therefore expected to react more strongly to the decision aid recommendation. Their reaction may consist of both directing the decision aid toward a lower sample size more than their less experienced or confident colleagues and of evaluating the decision aid more critically.
H2: More experienced and more confident users are expected to
A: direct the decision aid toward a sample size more consistent with their prior belief more than less experienced auditors, and
B: indicate less agreement with the decision aid than less experienced auditors.
We suppose that people act to maintain balance between new evidence and their prior beliefs. Thus, the participants in Kachelmeier and Messier (1990) controlled the decision aid recommendation to agree with their beliefs. Balance can also be reached if new evidence is strong enough to convince a person to change their belief to be consistent with the new evidence. A decision aid may include characteristics that increase user confidence in the aid's recommendation; by doing so, the aid's design may lower how much users control decision aid recommendations by adjusting inputs. One suggested characteristic for increasing user reliance on a decision aid is guiding explanations. Just as a user guides a decision aid's recommendations, a decision aid may guide the user in the decision-making process by suggesting decision steps and explaining judgments (Silver, 1990). Decision aid guidance has been effective in encouraging information processing strategies (Todd and Benbasat, 1994), and when this guidance consists of an argument in favor of a decision aid recommendation, decision aid users are more likely to adopt that recommendation (Eining, Jones, and Loebbecke, 1997).
Silver (1990) suggests that guidance may consist either of promoting a course of action (active understanding) or of supplying information about the decision process (passive understanding), and may be used to direct the decision process toward a desired conclusion, through a desired decision strategy, or to a more effective or efficient decision. This is especially relevant to auditing practice, where decision consistency is a desired decision characteristic (Ashton, 1974). Explanation of decision aid recommendations has been found to increase agreement with a decision aid's outcome (Eining et al., 1997) and to increase belief change in the direction of the decision aid recommendation (Ye and Johnson, 1995). Simply informing users that not using an aid would result in poor performance increases reliance on the aid (Arkes et al., 1986), but providing feedback on an aid's underperformance decreases reliance (Kaplan, Reneau, and Whitecotton, 2000). Information about an aid's rules that appear invalid to users also decreases reliance (Ashton, 1990).
System guidance may consist of a listing of the rules by which a decision aid reaches its conclusions or a justification of the aid's conclusions (Ye and Johnson, 1995). The rule listing, sometimes known as a trace, originally developed as a list of rules applied in an expert system session, informs the user of the processes used by the decision aid to reach its conclusion. For example, a decision aid based on a formula may let users know the formula's structure and weights. A rule listing promotes reliance on a decision aid if the rules appear valid or inspire confidence in the aid's user, but it does not argue in favor of the decision aid's recommendation. It is informative rather than persuasive. If the rule listing increases the aid's validity in the user's perception, it increases the effectiveness of the decision aid's recommendation in influencing the user's decision, and may reduce dissonance by persuading the user that the aid's method produces an accurate or acceptable recommendation. However, if the rule listing does not increase the aid's validity in the user's perception, it may reduce the user's reliance on the aid and increase directing of the aid's recommendation.
Justification guidance, in contrast to a rule listing, attempts to persuade users that the aid's conclusion is valid. It consists of an explanation of why the rule applies, and therefore of why the conclusion is valid. For example, a formula-based decision aid may explain the reasoning behind the formula's weights or why the formula itself is useful. Because it actively promotes the aid's conclusion, it should be more persuasive than a rule listing. Again, justification guidance may reduce user direction of the decision aid's recommendation by increasing the user's perception of the aid's validity.
Whereas decision commitment increases the strength of prior beliefs, decision aid guidance strengthens the new evidence presented by the decision aid recommendation, giving it more weight to overcome the user's prior belief and persuade the user to adopt the decision aid's unbiased recommendation.
H3: Participants who are guided when receiving a decision aid's recommendation will
A: direct the decision aid's recommendation toward the decision aid's initial recommendation more often than unguided participants, and
B: direct the decision aid recommendation toward their own prior decision less often than unguided participants.
These three hypotheses were tested in two experiments. The first experiment tested whether belief commitment increases the amount of directing decision aid results; if so, then the users likely use the aids to confirm prior beliefs. The second experiment tested whether structuring the decision aid to guide the users would reduce the amount of directing decision aid results.
Experiment 1 tested whether auditors direct decision aid outcomes to confirm decisions, testing Hypotheses 1 and 2. Thirty-three auditors from several Big 5 firms participated in the experiment. The experiment was conducted on a Web page for the convenience of the participants and their employers. The average age of participants was 27, with an average of 31 months of audit experience. Experience (t = 1.06, p = .16) and age (t = .393, p = .91) did not differ across experimental conditions.
The participants completed the same task as those in Kachelmeier and Messier (1990), except that the decision aid was computerized to enable data collection (Cook and Swain, 1993). Pilot testing found that the decision aid provides a larger sample size than auditor sample size judgments without the aid. The task consisted of reviewing a case and using a decision aid to indicate a sample size for substantive audit tests. Experimental steps are shown in Table 1. The participants used the decision aid after reviewing the instructions and reading the case.
To use the decision aid, the participants input three subjective judgments: tolerable misstatement, an assessment of combined control and inherent risk, and a reliance on other audit procedures. Tolerable misstatement is "a planning concept [that is] related to the auditor's preliminary judgments about materiality levels in such a way that tolerable misstatement, combined for the entire audit plan, does not exceed those estimates" (Audit Sampling, 1983). It is the dollar amount of misstatement the subject judges can be present before the financial statements are not reasonably presented. To enter the risk assessment and reliance on other audit procedures, the participants selected from a menu of items ranging, for the risk assessment, from Low to Maximum, and for the reliance on other procedures, from None to Substantial. Both menus consisted of four items. After entering the inputs the aid users clicked on a button to compute the sample size. They could repeat the process as often as they wished. They clicked on a finish button to signal that they were finished with the decision aid. When they clicked on the finish button, they were asked if they had finished calculating the sample size. After using the decision aid, the participants completed a questionnaire that gathered demographic information and information about their satisfaction with the aid.
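The aid's basic shape — three subjective audit judgments combined into a recommended sample size — can be sketched as follows. This is a hypothetical illustration only: the menu choices mirror those described above, but the reliability factors and the formula are illustrative placeholders, not the AICPA's actual sampling tables.

```python
# Hypothetical sketch of a sample-size decision aid of the kind described
# above: three subjective judgments in, a recommended sample size out.
# The factor values below are invented for illustration; they are NOT the
# AICPA's published reliability factors.

RISK_FACTOR = {"Low": 1.2, "Moderate": 1.6, "High": 2.3, "Maximum": 3.0}
RELIANCE_ADJUSTMENT = {"None": 1.0, "Little": 0.9, "Some": 0.75, "Substantial": 0.6}

def recommend_sample_size(population_value, tolerable_misstatement, risk, reliance):
    """Combine tolerable misstatement, a combined risk assessment, and
    reliance on other audit procedures into a recommended sample size."""
    factor = RISK_FACTOR[risk] * RELIANCE_ADJUSTMENT[reliance]
    # Smaller tolerable misstatement and higher assessed risk both push the
    # recommended sample size up; reliance on other procedures pushes it down.
    return max(1, round(population_value / tolerable_misstatement * factor))
```

Because all three inputs are judgments rather than measured facts, a user who dislikes the output can revisit any input and recompute until the recommendation lands in an acceptable range — which is precisely the directing behavior the experiment measures.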
Kachelmeier and Messier (1990) compared sample sizes calculated from participants' audit judgments in the control group with sample sizes from participants' use of the decision aid in the experimental group. The average sample size in the experimental group was significantly lower than in the control group. To test whether commitment is responsible for lowering the calculated sample size, we split Kachelmeier and Messier's experimental condition into two groups and varied the commitment. The participating auditors were randomly assigned to one of the two groups. Our control group is equivalent to Kachelmeier and Messier's experimental group. The final decision aid recommendation served as the control group's sample size decision. In Kachelmeier and Messier, auditors in this condition directed decision aid outcomes toward results that were, on average, between the auditors' unaided sample size judgments and the sample sizes derived from auditor judgments of decision aid parameters that the researchers input into the decision aid without the opportunity for user manipulation of results.
The participants in the second group, hereafter called the decision-first group, provided an unaided sample size after reading the case but before using the decision aid. They then used the sample size decision aid. Because they communicated their decision before using the aid, auditors in the decision-first group were expected to be more committed to a lower sample size than those in the control group, and therefore to direct the decision aid recommendation to a lower value.
Unlike Kachelmeier and Messier (1990), who required their participants to calculate the recommended sample size using a calculator and could therefore gather only the participants' final decisions, we gathered the participants' sample size results from first to last calculation through a computerized decision aid. Because we recorded the decision aid recommendation from the first time participants put their audit judgments into the aid and each time they changed those judgments to obtain a different sample size, we could measure the change from the first to the last use of the aid. We found that participants often used the aid to recalculate a sample size after first use. Participants iterated through the decision aid an average of 4.2 times, with a minimum of 1 and a maximum of 20. The number of iterations did not differ significantly between experimental groups. On average, aid users in both groups decreased the sample size from their first iteration to their last iteration using the decision aid. The first time they used the aid, the decision aid's recommended mean sample size was 391; the last decision aid iteration produced an average recommendation of 65. The Wilcoxon signed-ranks test, used because of non-normality, was significant, Z = 2.782, p < .01. With two outliers removed, the test was still significant, Z = 2.396, p < .05. Results did not qualitatively differ in any other test between the full sample and the sample with outliers removed. Therefore, the participants in this task performed in a similar manner to the experimental groups in the prior research (Kachelmeier and Messier, 1990; Messier, Kachelmeier, and Jensen, 2001).
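As a concrete illustration of the test statistic used above, the normal-approximation Wilcoxon signed-ranks Z can be computed from paired first/last recommendations as sketched below. This is a bare-bones sketch — a statistics package would add continuity corrections and exact small-sample tables — and the sample data in the usage test are invented, not the study's data.

```python
import math

def wilcoxon_signed_rank_z(first, last):
    """Normal-approximation z statistic for the Wilcoxon signed-ranks test
    on paired observations (e.g., first vs. last aid recommendations)."""
    # Drop zero differences, as the classic procedure does.
    diffs = [f - l for f, l in zip(first, last) if f != l]
    n = len(diffs)
    # Rank the absolute differences, averaging ranks for ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    # Sum of ranks for positive differences, standardized.
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean) / sd
```

A consistent decline from first to last recommendation across participants produces a large positive Z; if increases and decreases balance out, Z stays near zero.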
Because the participants could use the decision aid multiple times to recalculate a sample size, we were able to measure both the aid's first recommendation and its last. Hypothesis 1A was investigated with three tests: the decision aid sample size recommendation means, the ratio of the first-use decision aid recommendation to the last-use recommendation, and the proportion of participants who changed their inputs to the aid so that the aid's recommendation became a more acceptable value. If the decision-first group directed the decision aid toward their original belief more than the control group, the decision-first group should arrive at a lower final sample size. The means of both the first and last recommendations do not differ between the control and decision-first groups, however (Table 2). To further test Hypothesis 1A, the ratio of the initial decision aid iteration to the last decision aid iteration was compared between groups. The decision-first group was expected to decrease the recommended sample size more than the control group through use of the decision aid and therefore to have a larger ratio of initial recommendation to last recommendation. As shown in Table 2, the ratio is higher in the decision-first group than in the control group.
The proportion of participants who obtained a decreased decision aid recommendation as they iterated through the decision aid is significantly greater in the decision-first group than in the control group, as expected (Table 3). Any participant who accepted the initial decision aid recommendation (total iterations = 1) was excluded to avoid including participants who did not pursue the task seriously. The proportion that accepted the initial recommendation did not significantly differ between groups. In all but one case, the decision-first participants who increased or decreased the sample size moved the decision aid recommendation toward the sample size decision they had indicated before using the aid. Because of the small expected cell sizes of the sample in Table 3, the proportion was also tested using Fisher's exact test. The three-by-two matrix in Table 3 was transformed into a two-by-two matrix by combining the Increase and Even rows. The results were marginally significant (p < .07).
The second part of Hypothesis 1 regarded the users' evaluation of the decision aid. Participants answered a question on a seven-point scale indicating agreement with the decision aid, with higher values indicating greater agreement. As expected, the decision-first group (4.56) agreed less with the decision aid than the control group (5.47) (Mann-Whitney U = 86.5, p < .05).
Hypothesis 2 states the expectation that experienced and confident auditors would direct the decision aid to a lower sample size and agree with the aid less than inexperienced and unconfident auditors. Experience was measured by the number of months of audit experience indicated by participants. Confidence was measured by asking how capable the auditors believed themselves to be on a one-to-seven scale. Experience and confidence were significantly correlated with each other, but neither was correlated with the ratio of change between first and last decision aid recommendations or with agreement with the decision aid; therefore, Hypothesis 2 was not supported.
Despite the lack of linear correlation between experience and ratio change, experienced participants behaved differently than inexperienced participants. The auditors whose decision aid recommendations remained even had an average of 49 months of experience and stepped through the decision aid an average of 6.33 times. Participants who increased or decreased the decision aid recommendation had average experience of 23 and 27 months, respectively, and stepped through the decision aid an average of 3.88 and 4.25 times. More experienced auditors were more likely to experiment with the aid, but were probably more efficient in calibrating the aid's inputs with its output.
We also found that participants who directed the recommended sample size either up or down were more uneasy about their decision than those who kept the decision aid recommendation constant. Participants who maintained the decision aid's recommendation reported less unease (6.3 on a seven-point scale, with uneasiness increasing as scores decrease) than those who either increased (5.2) or decreased (5.0) the decision aid recommendation from first to last iteration.
The second experiment tested whether decision aid guidance decreases the extent to which people direct decision aid outcomes to confirm beliefs. The experiment used a between-subjects design with three groups: a) control, b) rule explanation, and c) justification explanation. Students in an MBA course were offered extra credit to complete the task and were randomly assigned to the three groups. Sixty-six students began the experiment; 21 were eliminated because they did not complete the web-based task. The average age of participants was 28 and the average work experience was 6 years. Age and experience were not significantly different across experimental conditions. Exactly one-third of participants were female; the same ratio held across all experimental conditions.
The task consisted of two separate stages. In the first stage, the participants read a case about a business purchasing an information system. They were given fictional information about four systems and instructed to pick one. The second stage was administered between one and five days after the first stage, and consisted of reading a second case. The participants were informed that the business in the second case had heard about the decision made in the first case and requested their help in selecting an information system. The first case was used to commit them to a decision; the second case tested subjects' belief confirmation.
After reading the second case, the subjects used a decision aid that recommended an information system based on scores supposedly assigned to the information systems by an independent consultant. The web page containing the decision aid reminded them of their original decision. The decision aid made an initial recommendation based on three out of the possible ten information attributes. The participants were free to change which three attributes the system considered and to calculate a new recommendation as often as they wanted. The final decision aid recommendation was the users' recommendation to the second company. In contrast to the first experiment, participants were able to direct the aid's recommendation by selecting which attributes were used by the aid rather than by changing the values of those attributes. When they were finished, they clicked on a "Finish" button and were asked to confirm that the last decision aid recommendation was their choice for the second case. They then advanced to a debriefing questionnaire.
The control group used a decision aid that initially selected three attributes and recommended an information system. The values assigned to the attributes were concealed. The users could change the attributes, as long as three remained selected. The decision aid in the rule explanation group was the same as that used by the control group, with two additions. First, the formula used by the decision aid was revealed to the users (the formula was a simple addition of the assigned scores of each of the three selected attributes). Second, the decision aid displayed the attribute scores. The justification explanation group used the same decision aid as the control group except that it provided an explanation of why the decision aid selected the initial three attributes used to calculate a recommendation. For example, a message at the top of the decision aid said, "The chosen attributes are associated with the cost effectiveness of a system. Systems that have a long-term positive return on investment tend to be more successful. Cost effectiveness includes both system cost and system efficiency. In other words, cost effectiveness leads to system success."
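The scoring rule revealed to the rule explanation group — a simple sum of the three selected attributes' scores, with the highest-scoring system recommended — can be sketched as follows. This is a hypothetical reconstruction for illustration; the system names, attribute names, and scores in the usage test are invented, not the experiment's materials.

```python
# Illustrative sketch of the experiment 2 decision aid's scoring rule:
# the recommendation is the system with the highest total score across
# the three attributes the user has selected.

def recommend_system(scores, selected_attributes):
    """Return the system with the highest total score on the selected attributes.

    scores: {system_name: {attribute_name: score}}
    selected_attributes: exactly three attribute names
    """
    if len(selected_attributes) != 3:
        raise ValueError("the aid requires exactly three selected attributes")
    # The revealed formula: a simple sum of the selected attributes' scores.
    def total(system):
        return sum(scores[system][a] for a in selected_attributes)
    return max(scores, key=total)
```

Directing the aid here means changing which three attributes enter the sum and recomputing until the preferred system comes out on top, mirroring how experiment 1's auditors adjusted judgments rather than attribute selections.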
The decision aid recommendation of the participants' final iteration using the decision aid was compared to both (a) their recommendation made after reading the first case (hereafter called "prior"), and (b) the decision aid's initial recommendation for the second case (hereafter called "DA"). Because the initial decision aid recommendation was randomly assigned, in eight cases it was the same as the participants' first-case recommendation. These cases were removed from the analysis.
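The classification described above can be sketched as a small piece of analysis code. This is an assumed reconstruction for illustration only; the labels, tuple format, and sample data are invented, not taken from the study.

```python
# Hypothetical sketch of classifying each participant's final aid
# recommendation against (a) their prior first-case recommendation and
# (b) the aid's randomly assigned initial recommendation ("DA").
# Cases where prior and DA coincide are dropped, as in the analysis.

def classify(prior, da_initial, final):
    if prior == da_initial:
        return "excluded"           # randomization happened to match the prior
    if final == prior:
        return "agrees with prior"  # user directed the aid back to their belief
    if final == da_initial:
        return "agrees with DA"     # user accepted the aid's recommendation
    return "different from both"

# Invented responses as (prior, DA, final) triples:
responses = [
    ("A", "B", "A"),  # directed toward prior
    ("A", "B", "B"),  # accepted the aid
    ("A", "B", "C"),  # settled on a third option
    ("A", "A", "A"),  # excluded: prior == initial DA
]
counts = {}
for prior, da, final in responses:
    label = classify(prior, da, final)
    counts[label] = counts.get(label, 0) + 1
print(counts)
```

The three retained categories correspond to the proportions compared across groups in Table 4.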
When a decision aid explains its recommendation, the aid's persuasiveness is expected to be higher than that of a decision aid without explanations, and users should be more willing to accept the decision aid outcome without directing the recommendation toward their prior decision. Indeed, the proportion of the explanation groups whose final decision aid recommendation agreed with the aid's initial recommendation was significantly higher than for the control group (Table 4, Panel C). The decision aid's explanation of its recommendation appears to have convinced some users to adopt the decision aid recommendation. The proportion whose final recommendation was consistent with their initial decision did not differ from the control group, however (Table 4, Panel B). Thus, Hypothesis 3A was supported, but Hypothesis 3B was not. A higher proportion of participants in the control group obtained a response different from both the participant's prior decision and the initial decision aid recommendation (Table 4, Panel A).
Most decision aid reliance research has used aids that rely upon inputs from the experimenter and require experiment participants to either accept or reject the decision aid recommendation. The experiments conducted in this study, in contrast, expanded upon research by Kachelmeier and Messier (1990) by using aids with subjective inputs that allowed users to direct the decision aid recommendation. They found evidence that user direction of decision aids can be explained by the theory of cognitive dissonance, and that decision aid guidance partially offsets direction of decision aids. Both experiments found that many decision aid users, when given the opportunity, will direct the decision aid, and participants in both experiments directed the decision aid outcome toward their prior belief. In experiment 1, two of the three measures tested indicated that simply stating a conclusion before using the decision aid intensified participants' tendency to direct the aid. That the third measure did not is unsurprising given the small sample size and the large variance within the sample.
Given prior research and the assumption that higher confidence is associated with greater commitment to a decision, the finding that more experienced and more confident participants did not direct decision aids more than their less-experienced and less-confident peers is somewhat surprising. Because less-experienced auditors are more likely to use a decision aid, it is notable that they directed the decision aid used in this study in the same manner as more experienced auditors. It is possible that the auditors participating in the study form their commitment to what constitutes a reasonable sample size early in their careers. As would be expected, participants who directed the decision aid toward their stated opinion tended to trust the decision aid less and to be less satisfied with their own decision process.
The experimental instructions may have induced the decision-first group to apply more effort to the decision and thereby to obtain different results than the control group. For example, making a decision with consequences increases the effort expended relative to making a judgment without consequences (Bukszar, 2003). However, both groups applied the same effort to using the decision aid, as shown by the number of iterations they ran through the aid. If their extra effort occurred outside the aid, in attending more closely to the aid's outputs, then the question becomes what motivated the decision-first group to expend that effort. The motivation would have to arise from asserting a sample size before using the aid, which suggests an increase in decision commitment as the underlying factor.
The results provide evidence that providing decision guidance, whether a simple description of the method used or an explanation arguing in favor of the decision aid recommendation, increases the extent to which users accept the decision aid's recommendation. The findings parallel prior research in finding that decision aid guidance increases acceptance of a decision aid recommendation; this increased acceptance, however, did not decrease the extent to which users directed the decision aid recommendation toward their prior decision. Rather, decision aid guidance reduced the proportion of aid users who selected an outcome different from both the decision aid's recommendation and their own prior decision.
The results also show that decision aid users use decision aids to explore alternative decisions, whether or not they accept the decision aid recommendation. This finding is similar to other research showing that decision makers prefer to evaluate what-if scenarios even when doing so does not improve performance (Kottemann, Davis, and Remus, 1994). Enabling decision makers to explore the consequences of alternative judgments may serve a valuable purpose in decision aid use; if so, decision aid implementers would not wish to limit how decision aid users direct aid recommendations. Often, however, decision aid implementers may wish to restrict aid recommendations or to direct decision aid users' decisions or processes through use of an aid (Silver, 1990). When decision aids allow user direction of recommendations, the aid is unlikely to change decisions from what a user would obtain unaided, a tendency that increases with commitment to the unaided decision. This study's findings suggest two strategies for encouraging reliance on the decision aid outcome. First, the decision process may be structured to decrease commitment to the decision. For example, the process may emphasize the parameter judgments used by the aid rather than the final decision; this emphasis on the parameters may remove some of the motivation to direct the outcome of the global decision. Organizations may even consider separating the parameter judgments from the decision itself to limit bias of the parameters. Second, organizations may implement guidance within the decision aid itself to increase the balance of evidence supporting unbiased use of the aid and thereby overcome users' commitment to an unaided decision.
The findings must be tempered by several observations. The experiment was conducted on the Internet, an environment which may have influenced the results because of a lack of experimental control, and which may have introduced self-selection bias among those who chose to participate. The results must be interpreted conservatively when applied outside of the specific decision contexts examined in this study. The two experiments themselves were not based on the same task, and their findings may not apply to each other. Further research may increase the ability to generalize the findings.
Participants in this study used the decision aid recommendation as their own decision. Further studies could examine whether people direct decision aid outcomes even when they are able to override the decision aid. If so, that finding would indicate that the decision aid recommendation is directed not only to obtain a result desirable to the user, but also to confirm users' prior beliefs to themselves or to justify the decision to others. In addition, future research may identify whether a change of focus to the input parameters of a decision aid, or some form of training, can reduce the extent of decision aid direction. Frameworks may be developed to indicate the circumstances under which user direction of decision aids is desirable. In conclusion, decision aid direction appears to be associated with users' commitment to their prior decision. Linking commitment to decision aid direction may point to remedies, such as decision guidance, that increase the impact of decision aid recommendations on users or lower users' commitment.
Arkes, H. R., Dawes, R. M. & Christensen, C. (1986). Factors influencing the use of a decision rule in a probabilistic task. Organizational Behavior and Human Decision Processes, 37: 93-110.
Aronson, E. (1968). Dissonance theory: Progress and problems. In R.P.Abelson, E. Aronson, W. J. McGuire, T. J. Newcomb, M. J. Rosenberg & P. H. Tannenbaum (Eds.), Theories of cognitive consistency: A sourcebook (pp. 5-27). Chicago: Rand McNally.
Ashton, R. H. (1974). An experimental study of internal control judgments. Journal of Accounting Research, 12: 143-157.
Ashton, R. H. (1990). Pressure and performance in accounting decision settings: Paradoxical effects of incentives, feedback and justification. Journal of Accounting Research, 28: 148-180.
Audit sampling. (1983). New York: American Institute of Certified Public Accountants.
Audit sampling. (1999). New York: American Institute of Certified Public Accountants.
Brehm, J. W. & Cohen, A. R. (1962). Explorations in cognitive dissonance. New York: John Wiley and Sons.
Brock, T. C. & Balloun, J. C. (1967). Behavioral receptivity to dissonant information. Journal of Personality and Social Psychology, 6: 413-428.
Bukszar, E. (2003). Does overconfidence lead to poor decisions? A comparison of decision making and judgment under uncertainty. Journal of Business and Management, 9: 115-136.
Cook, G. J. & Swain, M. R. (1993). A computerized approach to decision process tracing for decision support system design. Decision Sciences, 24: 931-952.
Davis, E. B. (1998). Decision aids for going-concern evaluation: Expectations of partial reliance. Advances in Accounting Behavioral Research, 1: 33-59.
Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243, 1668-1674.
Eining, M. M., Jones, D. R. & Loebbecke, J. K. (1997). Reliance on decision aids: An examination of auditors' assessment of management fraud. Auditing: A Journal of Practice and Theory, 16: 1-19.
Elliot, A. J. & Devine, P. G. (1994). On the motivational nature of cognitive dissonance: Dissonance as psychological discomfort. Journal of Personality and Social Psychology, 67: 382-394.
Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Frey, D. (1986). Recent research on selective exposure to information. Advances in Experimental Social Psychology, 19: 41-80.
Gibbins, M. (1984). Propositions about the psychology of professional judgment in public accounting. Journal of Accounting Research, 22: 103-125.
Gibbins, M. & Emby, C. (1984). Evidence on the nature of professional judgment in public accounting. Auditing Research Symposium 1984: 181-212.
Gill, T. G. (1995). Early expert systems: Where are they now? MIS Quarterly, 19: 51-81.
Glover, S. M., Prawitt, D. F. & Spilker, B. C. (1997). The influence of decision aids on user behavior: Implications for knowledge acquisition and inappropriate reliance. Organizational Behavior and Human Decision Processes, 72: 232-255.
Jecker, J. D. (1963). Conflict and dissonance: A time of decision. In R.P.Abelson, E. Aronson, W. J. McGuire, T. J. Newcomb, M. J. Rosenberg, and P. H. Tannenbaum (Eds.), Theories of cognitive consistency: A sourcebook (pp. 571-576). Chicago: Rand McNally.
Kachelmeier, S. J. & Messier, W. F. (1990). An investigation of the influence of a nonstatistical decision aid on auditor sample size decisions. The Accounting Review, 65: 209-226.
Kaplan, S. E., Reneau, J. H. & Whitecotton, S. M. (2001). The effects of predictive ability information, locus of control, and decision maker involvement on decision aid reliance. Journal of Behavioral Decision Making, 14: 35-50.
Kiesler, C. & Pallak, M. (1976). Arousal properties of dissonance manipulations. Psychological Bulletin, 83: 1014-1023.
Kiesler, C. & Sakumura, J. (1966). A test of a model for commitment. Journal of Personality and Social Psychology, 3: 349-365.
Kleinmuntz, B. (1990). Why we still use our heads instead of formulas: Toward an integrative approach. Psychological Bulletin, 107: 296-310.
Kottemann, J., Davis, F. & Remus, W. (1994). Computer-assisted decision making: Performance, beliefs, and the illusion of control. Organizational Behavior and Human Decision Processes, 57: 26-37.
Lord, C. G., Ross, L. & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37: 2098-2109.
Messier, W. F., Kachelmeier, S. J. & Jensen, K. (2001). An experimental assessment of recent professional developments in nonstatistical audit sampling guidance. Auditing: A Journal of Practice and Theory, 20: 81-96.
Prange, G. W., Goldstein, D. M. & Dillon, K. V. (1982). Miracle at Midway. New York: Penguin Books.
Shelton, S., Whittington, O. & Landsittel, D. (2001). Auditing firms' fraud risk assessment practices. Accounting Horizons, 15: 19-23.
Silver, M. S. (1990). Decision support systems: Directed and nondirected change. Information Systems Research, 1: 47-70.
Staw, B. M. (1976). Knee-deep in the big muddy: A study of escalating commitment to a chosen course of action. Organizational Behavior and Human Decision Processes, 16: 27-44.
Sweeney, P. D. & Gruber, K. L. (1984). Selective exposure: Voter information preferences and the Watergate Affair. Journal of Personality and Social Psychology, 46: 1208-1221.
Todd, P. A. & Benbasat, I. (1994). The influence of DSS on choice strategies: An experimental analysis of the role of cognitive effort. Organizational Behavior and Human Decision Processes, 60: 36-74.
Whitecotton, S. M. (1996). The effects of experience and confidence on decision aid reliance: A Causal Model. Behavioral Research in Accounting, 8: 194-216.
Wicklund, R. A. & Brehm, J. W. (1976). Perspectives on cognitive dissonance. New York: John Wiley and Sons.
Ye, L. R. & Johnson, P. E. (1995). The impact of explanation facilities on user acceptance of expert systems advice. MIS Quarterly, 19: 157-172.
Darryl J. Woolley, University of Idaho…
Publication information: Article title: The Influence of Decision Commitment and Decision Guidance on Directing Decision Aid Recommendations. Contributors: Woolley, Darryl J. - Author. Journal title: Academy of Information and Management Sciences Journal. Volume: 10. Issue: 2 Publication date: July 1, 2007. Page number: 39+. © The DreamCatchers Group, LLC 2007. Provided by ProQuest LLC. All Rights Reserved.