Camerer and Johnson review literature on expertise in decision making domains—that is, in domains in which predictions about a complicated outcome (e.g., admissions, clinical assessment, criminal recidivism, etc.) are made on the basis of a set of observable variables. They define an expert as "a person who is experienced at making predictions in a domain and has some professional or social credentials" (p. 196).
A central finding in the decision making literature is that actuarial models (i.e., regression equations) quite often predict more accurately than experts do. Moreover, professional experience does not always improve predictive accuracy. For example, in some types of clinical assessment, professionals and graduate students outperform novices, but professionals are no better than graduate students. Full-time radiologists are no better than advanced medical students at detecting lesions in abnormal lungs. In short, Camerer and Johnson conclude that in some domains training, but not professional experience, improves prediction.
Why do experts predict poorly? Process tracing methodologies (e.g., verbal protocol analysis) offer one way to answer this question. Verbal protocols indicate that, unlike an actuarial model, experts do not always use the same variables to make decisions; rather, they use subsets of variables and combine them in different ways depending on the situation. In addition, experts seem to use less information than novices, not more: they consider putatively diagnostic subsets of variables that, in reality, may not be strongly related to the outcome.
The use of configural rules in decision making is a likely explanation for experts' poor predictive performance. A configural rule states that the impact of one variable on an outcome depends on the level of another variable. The following is an example of a configural rule for graduate school admissions: if the applicant has good grades and good GRE scores, then consider for acceptance. Three questions about experts' use of configural rules are considered: 1) Why do experts use configural rules? 2) Why are configural rules inaccurate? 3) Why do experts persist in using them?
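The contrast between a configural rule and an actuarial (linear) model can be made concrete in code. The sketch below is illustrative only: the cutoffs, weights, and threshold are hypothetical, not values from Camerer and Johnson.

```python
# Sketch contrasting a configural (noncompensatory) admissions rule with a
# linear (actuarial) model. All cutoffs and weights are hypothetical.

def configural_rule(gpa: float, gre: int) -> bool:
    """The impact of GPA depends on GRE: both cutoffs must be met."""
    return gpa > 3.0 and gre > 1200

def linear_model(gpa: float, gre: int) -> bool:
    """Actuarial alternative: weight and add the cues, then apply a threshold."""
    score = 0.6 * (gpa / 4.0) + 0.4 * (gre / 1600)
    return score > 0.75

# A strong GRE cannot compensate for a weak GPA under the configural rule,
# but it can under the linear model.
print(configural_rule(2.9, 1550))  # False: fails the GPA cutoff
print(linear_model(2.9, 1550))     # True: the high GRE compensates
```

The key difference is compensation: the linear model lets a strength on one cue offset a weakness on another, whereas the configural rule rejects any applicant who misses either cutoff, no matter how strong the other cue is.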
Camerer and Johnson suggest that experts tend to rely on configural rules for at least three reasons. First, configural rules, whether conjunctive ("and" rules) or disjunctive ("or" rules), are easier to apply than linear combination because the need to weight and add cues is bypassed. Configural rules also permit crude cutoff values (e.g., the applicant has above a 3.0 GPA and above 1200 on the GRE), whereas linear combination requires more precise specification of cue weights. Second, it is easier to fit a "causal narrative" to a configural rule. For example, in a hallucination, a woman sees a raven perched next to her husband's head. The woman says the hallucination reminds her of Edgar Allan Poe's "The Raven." A clinical psychologist makes the following assessment: 'The [woman's] fantasy is that like Poe's Lenore, she will die or at least go away and leave him [the husband] alone' (p. 208). This assessment rests on a configural rule: if the vision includes a raven and the woman knows Poe's poem, then the fantasy must symbolize a certain thing. It would be much more difficult to assign cue weights to "raven" and "knowledge of Poe," and doing so would not suggest a causal narrative. Finally, configural rules offer explanatory flexibility. To illustrate, six predictor variables can be combined into 15 different two-way interactions, and interactions can be invoked to explain specific occurrences.
Why, though, are configural rules inaccurate? First, although they might explain specific occurrences, they are sometimes applied more generally.