Social Psychology and World Politics


III. Psychological Challenges to Deterrence

A. Deterrence: The Dominant Paradigm

Whereas neorealism has been the most influential theory in the academic study of world politics, deterrence has been the most influential theory among the policy elites responsible for statecraft in the second half of the twentieth century (Achen & Snidal, 1989; Jervis, 1978). And, just as it is misleading to place neorealism in conceptual opposition to psychology, it is misleading to do so for deterrence as well. Like neorealism, deterrence theory is best viewed as a particular kind of psychological theory--one that places a premium on conceptual parsimony and that emphasizes the amoral rationality of foreign policy actors. Although deterrence theory comes in many forms (from thoughtful prose to game theoretic models), there are certain recurring themes that justify the common label:

1) the world is a dangerous place. One is confronted by a power-maximizing rational opponent who will capitalize on every opportunity to expand its influence at one's expense. Whenever the option to attack becomes sufficiently attractive (i.e., has greater expected utility than other available options), the likelihood of attack rises to an unacceptably high level;

2) to deter aggression, one should issue retaliatory threats that lead one's opponent to conclude that the expected utility of aggression is lower than the expected utility of the status quo and its projected value into the foreseeable future;

3) to succeed, deterrent threats must be sufficiently potent and credible to overcome an adversary's motivation to attack. Potential aggressors must believe that the defender possesses the resolve and capability to implement the threat. Deterrence will fail if either of these conditions is not met.
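Principles (2) and (3) amount to an expected-utility comparison, which can be made concrete with a stylized sketch. Everything in the snippet below is a hypothetical illustration: the payoff numbers, the "credibility" probability that a retaliatory threat is carried out, and the function names are invented for exposition, not drawn from the deterrence literature.

```python
# Stylized expected-utility comparison behind deterrence principles (2) and (3).
# All numbers are hypothetical illustrations, not empirical estimates.

def expected_utility_of_attack(gain_if_unpunished, loss_if_punished, p_retaliation):
    """Challenger's expected utility of attacking, given the perceived
    probability that the defender's retaliatory threat is carried out."""
    return (1 - p_retaliation) * gain_if_unpunished + p_retaliation * loss_if_punished

status_quo_value = 0.0        # projected value of not attacking
gain_if_unpunished = 10.0     # payoff if the attack succeeds and goes unpunished
loss_if_punished = -100.0     # payoff if the defender retaliates (threat potency)

for p in (0.05, 0.10, 0.25):  # perceived credibility of the threat
    eu_attack = expected_utility_of_attack(gain_if_unpunished, loss_if_punished, p)
    deterred = eu_attack < status_quo_value
    print(f"credibility p={p:.2f}: EU(attack)={eu_attack:6.2f} -> "
          f"{'deterred' if deterred else 'attack expected'}")
```

Even with a severe penalty attached to retaliation, the comparison flips in favor of the status quo only once the perceived probability of retaliation is high enough, which is the sense in which potency and credibility jointly carry the argument of principle (3).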

Although most deterrence theorists accepted these principles or variants of these principles in the abstract (Kaufman, 1956; Kissinger, 1957; Wohlstetter, 1958), they often disagreed vigorously over how to operationalize them in policy, especially in a nuclear-armed world in which, for the first time in history, the “loser” in war retained the capacity to destroy the “winner”. Consider a classic debate within the deterrence camp during the Cold War. Some theorists argued that, in a MAD world (of mutually assured destruction), nuclear weapons could only deter attacks on one's own territory (Type I or basic deterrence); others argued that nuclear threats could also deter attacks on allies (Type II or extended deterrence) (Kahn, 1965). For the former camp, nuclear threats were of limited utility because, in McNamara’s words, “one cannot fashion a credible deterrent from an incredible action” (quoted in Freedman, 1981, p. 298). Why would a sane American leadership value the political independence of its allies over its own physical survival? This argument highlighted the need for a massive strengthening of conventional deterrence.

The NATO nations balked, however, at matching what they perceived to be massive Soviet spending on conventional forces (Thies, 1991). Deterrence theorists were then assigned the task of infusing credibility into the seemingly suicidal threat of nuclear retaliation. One strategy was the rationality of irrationality. Nuclear threats may gain credibility if one can convince one's opponent that one is crazy enough to follow through on them (Schelling, 1966; Mandel, 1987). Used judiciously, "irrational" threats are effective because "a bluff taken seriously is more useful than a serious threat taken as a bluff" (Kissinger, quoted in Gaddis, 1982, p. 300). One danger is, of course, that if the threatener does not appear crazy enough, the bluff will be called. The strategy can also be dangerous by working too well. For example, during the border skirmishes of the late 1960s, Soviet leaders concluded that Mao was so irrational that he might use nuclear weapons. To preclude this possibility, the Soviets seriously considered a pre-emptive attack against Chinese nuclear facilities (Whiting, 1991).

A second strategy--the threat that leaves something to chance--emphasizes the uncertainties inherent in military confrontations (the fog of war). Even if both sides want to limit a conflict, once hostilities begin, the conflict can escalate far beyond the worst case expectations of the antagonists. Threats that appear incredible become plausible when the two sides find themselves on the slippery slope of military engagements in which neither side completely controls the escalation process (Schelling, 1966). From this perspective, American forces in Europe did not need to be sufficient to halt a Soviet invasion; they functioned as a tripwire that raised the likelihood of eventual American nuclear involvement to an unacceptable level. The essence of this strategy is that potential aggressors will be induced to behave cautiously by the non-zero probability that conflicts, once initiated, will lead to mutual assured destruction (MAD).

Other deterrence theorists denounced the MAD strategy as morally and intellectually bankrupt. They advocated a war-fighting or "countervailing" strategy. Even defensive states need to develop conventional and nuclear capabilities that will give them a wide array of options when confronted by a challenger. The stated goal was to "prevail" in war with any potential aggressor at any step in the ladder of escalation (Kahn, 1965). The reasoning was straightforward. If the aggressors know they have nothing to gain by initiating a conflict or moving up the ladder of escalation, they will refrain from doing so (see Jervis, 1984, for a critique of this strategy).

In brief, MAD theorists emphasized the existence of secure second-strike forces as the best guarantee of peace in a world with two or more nuclear powers. The goal was to prevent war by stressing the risk of mutual annihilation. By contrast, war-fighting theorists were more concerned with what happens should deterrence fail. When challenged, states need the capability to respond in a controlled manner to contain the damage and yet force opponents to back down (Gray and Payne, 1980).

Critics of deterrence theory and its diverse doctrinal offshoots have raised numerous objections. George and Smoke (1974) noted that deterrence theory lacks motivational diagnostics (cf. Herrmann, 1988; Jervis, 1979; Mercer, 1996). It assumes an expansionist adversary, takes conflict for granted, and underestimates the variety of interpretations that can be placed on supposedly unambiguous, reputation-building acts. It also says little about: (1) how risk-seeking or risk-averse one's opponent might be in sizing up options (joining deterrence theory to prospect theory can be helpful here--Huth and Russett, 1993); (2) how one might change an opponent's motives and transform a competitive relationship into a cooperative one (cf. Lindskold, 1978). Critics have also complained about the emphasis of deterrence theory on threats and its concomitant neglect of the role that rewards, concessions and integrative problem-solving can play in mitigating conflicts (Jervis, 1979). Threats are not only sometimes ineffective; they sometimes backfire (Lebow, 1981). Finally, critics have objected to the notion that decision-makers in highly stressful crises are as coolly rational as many deterrence theorists, especially the "war fighters", imply (Jervis, Lebow, & Stein, 1985; Holsti, 1989). From the critics’ perspective, it is necessary to replace a narrow focus on deterrence with a broader focus on international influence by building psychological and political moderators into our analysis of when, where, and how threats -- alone or in combination with other tactics -- work.

B. Testing, Clarifying and Qualifying Deterrence Theory

Any serious evaluation of deterrence theory must grapple with the methodological problems of determining whether deterrence worked or failed from the historical record. To be sure, dramatic failures of deterrence as policy are easy to identify. Country x wanted to stop country y from attacking it or a third country, but failed to do so. The historical data are, however, sufficiently ambiguous to allow seemingly endless arguments on whether individual cases also represent failures of deterrence theory (see Orme, 1987; Lebow & Stein, 1987). An equally imposing obstacle is presented by cases of deterrence success; no one knows how to identify them (George and Smoke, 1974; Achen and Snidal, 1989). When crises do not occur, is it due to the credibility of threats (successes for deterrence theory) or to the fact that the other states never intended to attack? Causal inference requires assumptions about what would have happened in the missing counterfactual cells in the contingency table in which the defender issued no threats.

These issues are not just of academic interest. The events that transpired between 1945 and 1991 in American-Soviet relations underscore both the logical problems in determining who is right and the magnitude of the political stakes in such debates. Although very few predicted when, how, and why the Cold War would come to an end (Gaddis, 1993), neither conservative deterrence theorists nor their liberal conflict spiral opponents were at a loss for retrospective explanations. Conservative observers argued that the collapse of the Soviet Union vindicated the policies of containment and deterrence that the United States pursued, in one form or another, since World War II. Partisans of the Reagan administration argued, more specifically, that the new Soviet thinking was a direct response to the hard-line initiatives of the 1980s and to the technological threat posed by the Strategic Defense Initiative. By contrast, liberal critics of deterrence argued that the policies of the 1980s (and, for many, earlier policies as well) were a massive exercise in overkill, with much blood spilled in unnecessary Third World wars and much treasure wasted in defense expenditures. The Cold War ended as a result of the internal failures of communist societies. If anything, Gorbachev and his policies emerged despite, not because of, the Reagan administration (Lebow and Stein, 1994).

Perhaps historians will someday adjudicate this dispute--although historians' lack of success in resolving debates over the abundantly documented origins of World War I should constrain optimism here. What is most remarkable for current purposes is how easily the disputants could have explained the opposite outcome. If the Soviet Union had moved in a neo-Stalinist direction in the mid-1980s (massive internal repression and confrontational policies abroad), conservative deterrence theorists could have argued and were indeed prepared to argue that the adversary had merely revealed its true nature (Pipes, 1986), and liberal spiral theorists could have argued and were indeed prepared to argue that "hard-liners beget hard-liners" in the escalatory dynamic (White, 1984). In short, we find ourselves in an epistemological quagmire--an example of what Einhorn and Hogarth (1981) aptly termed an "outcome-irrelevant learning situation."

Assessing the efficacy of deterrence is obviously deeply problematic (indeed, the game theorist Barry Weingast (1996) has shown that if deterrence does indeed work, then there will be many contexts in which the correlation between implementing deterrence and war or peace will be zero). Suffice it to say here that, contrary to White (1984), there is no evidentiary warrant for concluding that deterrence theory is wrong in general or even most of the time. The literature does, however, highlight important gaps in deterrence theory. From a social psychological standpoint, deterrence is but one of many instruments of social influence and the analytic task is to clarify the conditions under which these diverse strategies elicit desired responses from other states (George and Smoke, 1974, 1989; Jervis, Lebow, & Stein, 1985). Excellent reviews of work on bargaining and negotiation exist elsewhere (Druckman and Hopman, 1990; Pruitt, Handbook Chapter). This chapter offers a condensed summary of research that bears most directly on international influence, with special attention to hypotheses that have passed multi-method tests.
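Weingast's zero-correlation point, and the missing-counterfactual problem raised earlier, can be illustrated with a toy simulation. The data-generating process below is entirely hypothetical and invented for illustration: deterrent threats are assumed to be issued exactly where a challenger is motivated, and deterrence is assumed to work whenever it is applied.

```python
# Toy illustration of the selection problem in evaluating deterrence
# (cf. Weingast's zero-correlation point).  The data-generating process
# is hypothetical and chosen purely for illustration.
import random

random.seed(0)

def simulate(n=100_000, deterrence_works=True):
    rows = []
    for _ in range(n):
        challenger_motivated = random.random() < 0.3  # hypothetical base rate
        # defenders issue threats precisely where they sense a motivated challenger
        threat_issued = challenger_motivated
        if not challenger_motivated:
            war = False                               # nothing to deter
        elif threat_issued and deterrence_works:
            war = False                               # deterrence succeeds
        else:
            war = True
        rows.append((threat_issued, war))
    return rows

rows = simulate(deterrence_works=True)
print("cases with deterrent threats:", sum(t for t, w in rows))
print("wars when threats issued:    ", sum(w for t, w in rows if t))
print("wars when no threats issued: ", sum(w for t, w in rows if not t))
# If deterrence works and is applied wherever it is needed, the outcome never
# varies with the threat variable: the observed correlation is zero even though
# the threats are doing all the causal work.
```

In this toy world the missing counterfactual (what motivated challengers would have done absent a threat) carries all of the causal information, which is exactly the inferential problem George and Smoke (1974) and Achen and Snidal (1989) describe.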

C. Influence Strategies

(1) Pure threat strategies. Threats sometimes work (McClintock et al., 1987; Patchen, 1987). Laboratory studies of bargaining have shown, for example, that: (a) threats of defection can lead to beneficial joint outcomes when interests do not conflict (Stech et al., 1984); (b) the mere possession of threat capabilities can reduce defection and increase mutual outcomes in games that prevent communication between the parties (Smith and Anderson, 1975). The evidence is, however, mixed. Other studies have found that threats impede cooperation and lower joint outcomes (Deutsch and Krauss, 1960; Kelley, 1965). Threats have also interfered with cooperation when interests were in conflict (Friedland, 1976) and when communication between bargainers was possible (Smith and Anderson, 1975). Brehm's (1972) reactance theory suggests that threats may backfire by provoking counter-efforts to assert one's freedom to do what was forbidden.

Evidence on international conflict is equally mixed. Although threats may be essential against some opponents, this strategy is counterproductive when directed at nations with limited goals (Kaplowitz, 1984). Several studies of interstate disputes have discovered that even though threats occasionally yield diplomatic victories, they can also lead to unwanted escalation of severe crises (Leng, 1988; Leng and Wheeler, 1979; Leng and Gochman, 1982). Case studies of American foreign policy have drawn a similar conclusion. A strategy of coercive diplomacy emphasizing military threats is appropriate only when restrictive preconditions are met; for example, when the coercing power is perceived to be more motivated than the target of coercion to achieve its objectives, when adequate domestic support can be generated for the policy, when there are usable military options, and when the opponent fears escalation more than the consequences of appearing to back down (George et al., 1971).

(2) Positive inducements. Since Munich gave appeasement a bad name, international relations scholars have largely neglected the role of positive inducements in foreign affairs (for exceptions, see Baldwin, 1971; Milburn and Christie, 1989). The primary advocates of positive inducements have been conflict-spiral theorists who emphasize the debilitating consequences of action-reaction cycles in international conflict (Deutsch, 1983; White, 1984). Although these theorists stress conciliatory gestures, few advocate total unilateral disarmament. And, for good reason: experimental evidence indicates that in mixed-motive games, such as Prisoner's Dilemma, unconditional cooperators are ruthlessly exploited (e.g., Stech et al., 1984). In their study of international disputes, Leng and Wheeler (1979) found that nations adopting an appeasement strategy managed to avoid war but almost always suffered a diplomatic defeat. Positive inducements such as financial rewards for compliance can also be very expensive if the other side complies (particularly if it quickly becomes satiated and ups its demands for compensation), and they can foster unwanted dependency and a sense of entitlement (Leng, 1993). Finally, just as deterrence theorists face difficulties in operationalizing threats, so reward theorists encounter problems in operationalizing positive inducements, which may be perceived as overbearing, presumptuous, manipulative, or insultingly small or large (Milburn and Christie, 1989).

The picture is not, however, uniformly bleak. Komorita (1973), for example, showed that unilateral conciliatory acts by one party in experimental bargaining games resulted in increased communication, perceptions of cooperative intent, and mutually beneficial outcomes. In reviewing studies of American-Soviet arms control negotiations, Druckman and Hopmann (1989) found that concessions by one side were generally met by counter-concessions by the opponent, whereas retractions provoked counter-retractions.

For the most part, conflict-spiral theorists have advocated combining conciliatory policy initiatives with adequate military strength and nonprovocative threats. The next section turns to these "mixed" strategies.

(3) Mixed-influence strategies. Spurred by Robert Axelrod's (1984) "The Evolution of Cooperation," a great deal of attention has been directed to firm-but-fair approaches to resolving conflict. This chapter focuses on Axelrod's (1984) tit-for-tat strategy (TFT), Osgood's (1961) strategy of "graduated and reciprocated initiatives in tension reduction" (GRIT) and the Nixon-Kissinger strategy of détente (George, 1983).

(i) Tit-for-tat. TFT is straightforward. One begins by cooperating and thereafter simply repeats one's opponent's previous move. Considerable research demonstrates that TFT is as effective as it is simple. In Axelrod's (1984) round-robin PD computer tournaments in which expert-nominated strategies were pitted against one another, TFT--the simplest entrant--earned the highest average number of points. Axelrod (1984) argued that TFT works because it is nice (never defects first), perceptive (quickly discerns the other's intent), clear (easy to recognize), provocable (quickly retaliates), forgiving (willing to abandon defection immediately after the other side's first cooperative act), and patient (willing to persevere).
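TFT is simple enough to state in a few lines of code. The sketch below, with a conventional payoff matrix in the standard Prisoner's Dilemma ordering (temptation > reward > punishment > sucker), is meant only to make the strategy's mechanics concrete; it is not a reconstruction of Axelrod's tournament, and the payoff numbers and function names are illustrative assumptions.

```python
# Minimal iterated Prisoner's Dilemma with a tit-for-tat player.
# The payoff numbers (5, 3, 1, 0) are conventional illustrations.

PAYOFF = {  # (my_move, other_move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, other_history):
    """Cooperate on the first move, then echo the opponent's previous move."""
    return 'C' if not other_history else other_history[-1]

def always_defect(my_history, other_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("TFT vs TFT:          ", play(tit_for_tat, tit_for_tat))
print("TFT vs always-defect:", play(tit_for_tat, always_defect))
```

Against itself TFT locks into mutual cooperation; against unconditional defection it loses only the opening round, which is the sense in which it is provocable yet hard to exploit.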

Although numerous experiments (Pilisuk & Skolnick, 1968), case studies (Snyder & Diesing, 1977) and event-analytic studies (Leng, 1993) have shown TFT-like strategies to be more effective than either pure threat or appeasement strategies in averting both war and diplomatic defeat, an equally sizable body of work has highlighted serious drawbacks to TFT in the international arena:

(1) Two parties can easily get caught up in a never-ending series of mutual defections. One solution is to be less provocable and more forgiving: to respond to defection with a smaller defection or to refrain altogether from retaliating to the first defection and respond in kind only to the second defection. These kinder, gentler variants of TFT outperform simple TFT in computer simulations that permit even low levels of “noise” in which players occasionally misclassify cooperation as defection and defection as cooperation (Bendor, Kramer, & Stout, 1991; Downs, 1991; Molander, 1985; Signorini, 1996). But the price of preventing conflict spirals from escalating out of control may be steep in environments in which predatory powers stand ready to exploit signs of weakness or generosity. A possible corrective here is to couple a slow-to-retaliate rule with a slow-to-forgive rule, thereby perhaps simultaneously averting spirals and deterring opportunists (Pruitt, Handbook Chapter).

(2) TFT applies primarily to Prisoner's Dilemma games in which both sides prefer mutual cooperation to mutual defection. Many conflicts, however, may best be described as games of "Deadlock" in which at least one party prefers unilateral defection or mutual defection to cooperation (Oye, 1985). In such games, TFT will not induce an opponent to cooperate. In arms races, for example, one or both nations might prefer a mutual build-up to an arms control treaty, especially if trust is low or if there is an opportunity to benefit from the race (a charge often leveled at the "military-industrial complexes" within major powers that are an important part of two-level games).

(3) TFT implies perfect perception and control--the ability to identify cooperation and defection correctly and to respond to an opponent in ways that will not be misconstrued. In PD games, moves are unambiguous and this condition can be satisfied; in international politics, policy makers must interpret actions that are ambiguous and, therefore, often controversial. Ambiguity may arise, in part, because there are both temptations and opportunities for nations to disguise defection as cooperation (e.g., by secretly developing chemical weapons or by surreptitiously deploying new missiles or by turning a blind eye to patent pirates). Ambiguity may also arise because nations are complex stimuli that lend themselves to multiple conflicting interpretations. It is often difficult to say whether a given foreign policy act merits a direct dispositional attribution or should be written off as domestic political posturing in a two-level game or perhaps even as a “normal accident” of complex institutional functioning. Arguments of this sort are commonplace in foreign policy. In the mid-1980s, American observers were deeply divided over the significance of the unilateral suspension of nuclear testing by the Soviet Union. Some treated it as sincere, others dismissed it as a propaganda ploy, and still others took it as a sign of weakness. In October 1962, Soviet leaders confronted some puzzling mixed signals from the United States, including back-channel diplomatic assurances that the U.S. Navy would avoid provocative confrontations with Soviet vessels in the course of implementing the blockade of Cuba, alongside reports from the Soviet Navy that Soviet submarines were being compelled to surface by aggressive interdiction tactics of which the White House was not fully aware but that were part of standard operating procedure for the U.S. Navy. Whatever the diverse causes of misperception, mistakes can be consequential. Even low levels of “noise” (random misperception) can affect the relative performance of influence strategies in simulations. And misperception with a consistent bias toward encoding cooperative and ambiguous acts as defections can prove devastating to TFT. The literature on cognitive biases warns us to expect exactly this latter form of misperception whenever mutual suspicion has hardened into hostile stereotypes.
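The effect of noise, and of noise biased toward seeing defection, can be illustrated by extending the earlier sketch. The noise model, the "generosity" parameter, and the payoff numbers below are hypothetical constructions in the spirit of the simulations cited above, not a replication of any of them.

```python
# Illustrative effect of misperception ("noise") on tit-for-tat and on a more
# forgiving variant.  All parameters are invented for illustration.
import random

PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tft(perceived_other):
    return 'C' if not perceived_other else perceived_other[-1]

def generous_tft(perceived_other, forgiveness=0.3):
    """Like TFT, but sometimes cooperates even after a perceived defection."""
    if not perceived_other or perceived_other[-1] == 'C':
        return 'C'
    return 'C' if random.random() < forgiveness else 'D'

def perceive(actual, noise, defection_bias):
    """Return the move as (mis)perceived by the other side."""
    p_flip = noise + (defection_bias if actual == 'C' else 0.0)
    if random.random() < p_flip:
        return 'D' if actual == 'C' else 'C'
    return actual

def noisy_selfplay(strategy, noise=0.05, defection_bias=0.0, rounds=200, trials=200):
    """Average joint payoff per round when two copies of `strategy` play each
    other and each side misreads the other's move with the given probabilities."""
    total = 0
    for _ in range(trials):
        seen_by_a, seen_by_b = [], []   # each side's (possibly wrong) record
        for _ in range(rounds):
            a = strategy(seen_by_a)
            b = strategy(seen_by_b)
            seen_by_a.append(perceive(b, noise, defection_bias))
            seen_by_b.append(perceive(a, noise, defection_bias))
            total += PAYOFF[(a, b)] + PAYOFF[(b, a)]
    return total / (trials * rounds)

random.seed(1)
print("plain TFT, 5% noise:                ", round(noisy_selfplay(tft), 2))
print("generous TFT, 5% noise:             ", round(noisy_selfplay(generous_tft), 2))
print("plain TFT, noise biased to 'defect':", round(noisy_selfplay(tft, defection_bias=0.10), 2))
```

Under these particular parameters, random misreadings push plain TFT into echo spirals of alternating defection, the forgiving variant recovers more of the cooperative surplus, and a bias toward reading ambiguous acts as defections makes matters worse still, which is the qualitative pattern the text describes.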

(ii) GRIT. Like TFT, GRIT is designed both to resist exploitation and to shift the interaction onto a mutually beneficial, cooperative plane. Unlike TFT, however, GRIT does not assume that the interaction starts with a clean slate. Rather, GRIT assumes that the parties are already trapped in a costly conflict spiral. To unwind the spiral, Osgood proposed that one side should announce its intention to reduce tensions and then back up its talk with unilateral conciliatory gestures such as troop reductions and dismantling missiles. These actions are designed to convince the opponent of the initiator's peaceful intentions, but not to weaken the military position of the initiator. The opponent is then invited to respond with conciliatory gestures, but warned that attempts to exploit the situation will force the initiator to return to a hard-line posture. In contrast to TFT, GRIT is nicer (it cooperates in the face of defection) and less provocable (it continues to cooperate even when the opponent ignores what one has done).
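Osgood's proposal can be caricatured as a simple decision rule. The sketch below is a loose, hypothetical operationalization (the number of unilateral gestures and the exploitation threshold are invented), intended only to contrast GRIT's nicer, less provocable character with TFT.

```python
# A loose, hypothetical operationalization of GRIT for an iterated interaction:
# announce intent (outside the model), open with unilateral conciliatory moves,
# keep cooperating despite defections, and revert to a hard line only after
# persistent exploitation.  Thresholds are invented for illustration.

def grit_factory(unilateral_moves=3, exploitation_limit=4):
    consecutive_exploitations = 0
    def grit(my_history, other_history):
        nonlocal consecutive_exploitations
        if len(my_history) < unilateral_moves:
            return 'C'                       # opening unilateral gestures
        if other_history and other_history[-1] == 'D':
            consecutive_exploitations += 1
        else:
            consecutive_exploitations = 0
        return 'D' if consecutive_exploitations >= exploitation_limit else 'C'
    return grit

def tit_for_tat(my_history, other_history):
    return 'C' if not other_history else other_history[-1]

def always_defect(my_history, other_history):
    return 'D'

def trace(strategy_a, strategy_b, rounds=12):
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return ''.join(hist_a), ''.join(hist_b)

# Against an exploiter, GRIT absorbs several defections before turning tough;
# TFT retaliates immediately.
print("GRIT vs always-defect:", trace(grit_factory(), always_defect))
print("TFT  vs always-defect:", trace(tit_for_tat, always_defect))
```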

Several experiments suggest that GRIT stimulates cooperation. The most impressively cumulative evidence comes from Lindskold’s (1978) research program. The paradigm involves a PD game in which subjects face an opponent (actually a preprogrammed strategy) who is initially competitive (to produce a climate of hostility) but then practices GRIT. In the final phase, the simulated other returns to a neutral strategy to test the persistence of GRIT's effects. Key findings include: a) GRIT leads to more integrative agreements than do competitive and no-message strategies; b) GRIT elicits more cooperation when initiated from a position of strength than weakness (a finding that could be invoked as support for major defense build-ups as a necessary prelude to GRIT); c) GRIT's general statement of cooperative intent is more effective than both promises of conditional cooperation and no statements at all, and GRIT statements are particularly effective when repeated and rephrased; d) GRIT elicits more cooperation than TFT and 50% cooperative strategies; e) GRIT produces more cooperation than a 50%-cooperative strategy, regardless of whether the subject responds before, after, or during the simulated other's response.

Some historical evidence can also be interpreted as consistent with GRIT. In the previous Handbook chapter on international relations, Etzioni argued that a quasi-GRIT strategy adopted by President Kennedy in 1963 promoted a short-lived period of cooperation between the United States and Soviet Union. Larson (1987) credited GRIT with producing the Austrian State Treaty of 1955. And some Sovietologists believe that Western thinking about conflict management influenced the policy strategies of Gorbachev in the late 1980s (Legvold, 1991). Gorbachevian initiatives such as the withdrawal from Afghanistan, the nuclear test moratorium, and unilateral troop reductions were all in the spirit of GRIT. Indeed, Gorbachev responded to American claims that the Soviet initiatives were merely "propaganda" by demonstrating an intuitive awareness of the logic of GRIT:

If all that we are doing is indeed viewed as mere propaganda, why not respond to it according to the principle of "an eye for an eye, and a tooth for a tooth"? We have stopped nuclear explosions. Then you Americans could take revenge by doing likewise. You could deal us another propaganda blow, say, by suspending the development of one of your strategic missiles. And we would respond with the same kind of "propaganda." And so forth and so on. Would anyone be harmed by competition in such propaganda? (Time, September 9, 1985, p. 23).

GRIT can be criticized for being too soft ("surrender on the installment plan") or too tough (insufficiently sensitive to the psychological obstacles to resolving protracted and bitter conflicts). In the spirit of toughening GRIT, some researchers have argued that a combination of TFT and GRIT is the best strategy of conflict management in many contexts (Downs, 1991). Initial use of a TFT strategy would demonstrate one's willingness to endure a painful stalemate. Conciliatory offers could then be extended with a diminished fear that they will be interpreted as a sign of weakness (Snyder and Diesing, 1977). Others, however, argue that early competitiveness can too easily escalate into all-out war or poison the atmosphere so that later conciliatory initiatives will be ignored or discounted (Kelman & Bloom, 1973; Kriesberg & Thorson, 1991). Indeed, the now extensive literature on conflict resolution workshops suggests that, in emotionally and politically polarized disputes with long histories of violence (Northern Ireland, Cyprus, Israelis and Palestinians, ...) even GRIT is too insensitive to the difficulties of breaking down psychological barriers to peace (Azar, 1990; Burton, 1987; Fisher, 1990; Kelman, 1993; Rouhana & Kelman, 1994). It may be necessary to bring high-ranking disputants together in nonofficial workshops in which they are encouraged to understand each other’s needs and to engage in joint problem-solving exercises that gradually build up trust as each side acquires the ability to state the other’s position to the other’s satisfaction and acquires the willingness to consider and even generate integrative proposals that concede some legitimacy to the other’s concerns (Kelman & Cohen, 1976). Third-party mediation can also prove helpful in encouraging disputants: (a) to see the conflict as a disinterested but thoughtful and fair observer might; (b) to consider compromise packages that they would have categorically rejected if simply proposed by the adversary (Rubin, 1981). But the moderators of mediational success appear to be numerous and subtle, including the "ripeness" of the conflict (the parties perceive a mutually debilitating stalemate to exist), the types of issues (the conflict does not focus on territory or rights that the parties endow with sacred or transcendental significance) and the perceived impartiality of the mediator and of the mediation process (Kleiboer, 1996; Vasquez, 1993; Zartman & Touval, 1985).

Evaluating the efficacy of both workshop and third-party interventions raises unsolved methodological issues (from selection biases to the counterfactual vagaries of inferring what would have happened absent any intervention). And all of these approaches assume that there is hidden integrative potential and that both sides can be induced to prefer "jawing" to "warring" -- assumptions that do not hold when one side has so completely dehumanized the other that a dialogue of equals is impossible (e.g., Nazi attitudes toward Jews; Khmer Rouge attitudes toward class enemies).

(iii) The Nixon-Kissinger strategy of détente. Shifting from unofficial to official diplomacy, some scholars argue that the Nixon-Kissinger policies of détente in the early 1970s constituted a carefully crafted mixed-influence strategy, albeit with more emphasis on deterrence than in either GRIT or TFT (George, 1983). In this view, Nixon and Kissinger sought to shift the superpower relationship from “confrontational competition” to “collaborative competition” in which the United States and Soviet Union would both show restraint in the Third World and in weapons programs. The American strategy relied on both carrots (enhanced trade and credits, reduced military competition, and access to advanced technology) and sticks (a renewed arms race that would strain the Soviet economy and a suspension of trade that would deny access to American goods). For reasons still vigorously debated, the Nixon-Kissinger policy failed, competition in the Third World heated up, and arms control sputtered and eventually stalled with the SALT II treaty (Gaddis, 1982). Some suggest that the Nixon-Kissinger policy was ill-conceived, poorly implemented, or undermined by Congressional opponents who insisted on linking improved relations to human rights issues. Others blame the Soviet Union for exploiting détente by intervening in Angola, Ethiopia, and Afghanistan. As usual, we discover conflicting policy postmortems, each resting on distinctive counterfactual claims and each linked to different assessments of the adversary.

D. Reprise

Research on social influence points to a number of policy-relevant conclusions. At a minimum, the findings demonstrate that the simplistic remedies for complex conflicts are untenable. An exclusive emphasis on threats can provoke otherwise avoidable conflicts (Leng, 1993); so can calls for unilateral disarmament, albeit via a different mechanism -- by tempting aggressors. Encouraging, though, is the multi-method convergence suggesting that in many situations a firm-but-fair reciprocating bargaining strategy works reasonably well by both protecting vital interests and preventing conflicts from getting out of control. On a more pessimistic note, current findings are incomplete and poorly integrated. Although we know more than we once did about when alternative strategies are likely to be successful, our contingent generalizations are still crude (George, 1993). The more specific the policy question -- for example, will economic sanctions work against this adversary in this time frame? -- the more equivocal the answer we can justifiably derive from the literature. There remains a yawning gap between the idiographic and nomothetic -- the particular concerns of the policy community and the theoretical abstractions of academia.

II. Psychological Challenges to Neorealist Rationality

A. Neorealism: The Dominant Paradigm

Neorealism is often misleadingly portrayed as an anti-psychological theory of world politics (Waltz, 1959, 1979). Neorealism could, however, just as justifiably be viewed as an unusually parsimonious psychological theory that:

(a) characterizes the environment within which states must survive as ruthlessly competitive -- a world in which the strong do what they will and the weak accept what they must (Thucydides, 400 B.C./1972). Each state is only as safe as it can make itself by its own efforts or by entering into protective alliances. Moreover, there is no value in appealing to shared norms of fairness or to the enforcement power of a "sovereign" because there are no shared norms and no world government to hold norm violators accountable;

(b) assumes that, to survive in this anarchic or self-help environment, decision-makers must act like egoistic rationalists (in clinical language, calculating psychopaths). They must be clear-sighted in appraising threats, methodical in evaluating options, and unsentimental in forming and abandoning "friendships." Altruists and fuzzy thinkers are speedily selected out of the system, thereby minimizing variation, at least in security policy, among states;

(c) makes no claims to predicting specific policy initiatives of states but does claim to explain long-term historical patterns of "balancing" among states designed to prevent the emergence of global hegemons (hence the embracing of such ideological odd couples as Churchill and Stalin in World War II or Nixon and Mao in the 1970's). Some neorealists also claim to identify the conditions under which war is especially likely (e.g., power transitions in multipolar systems in which regional hegemons are in relative decline vis-a-vis emerging challengers -- Gilpin, 1981).

Although neoliberal and social-constructivist critics denounce neorealism as theoretically simplistic and sometimes even ethically unsavory (Katzenstein, 1996; Keohane, 1984; Kratochwil, 1989; Wendt, 1992), many scholars find it useful as a crude first-order approximation of world politics. Neorealism calls our attention to the structural incentives for pursuing "balance-of-power" policies. From Thucydides to Bismarck to Kissinger, the analytical challenge has been to anticipate geopolitical threats and opportunities. The logic of the "security dilemma" implies that we should expect a high baseline of competition. In an anarchic system, each state is responsible for its own security which it attempts to achieve by building up its military capabilities, its economic infrastructure, and its alliances with other states. Often other states cannot easily differentiate defensive from offensive strategies and respond by making their own preparations, which can also be mistaken for offensive preparations, triggering further "defensive activity" and so the cycle goes.

Most neorealists and game theorists do, however, allow for the possibility of cooperation. They expect cooperation when that response is indeed prudent--specifically, when the penalties for non-cooperation are steep (e.g., violating an arms-control agreement would motivate the other side to develop destabilizing first-strike weapons), the rewards of cooperation are high (e.g., the economic and security benefits of some form of détente), and the "shadow of the future" looms large (it does not pay to cross an adversary with whom one expects to deal over a prolonged period). Some neorealists argue that all three conditions were clearly satisfied in the American-Soviet relationship of the late 1980s. The Reagan defense build-up made clear to the Soviets that the penalties for non-cooperation were steep (sharp Western responses to the Soviet build-up of ICBMs, to SS-20s in Eastern Europe, and to Soviet interventions in Afghanistan, Ethiopia, and Angola); the prospect of revitalizing the moribund Soviet economy by redirecting scarce resources from defense to economic restructuring enhanced the rewards of cooperation; and the shadow of the future looked ominous (it seems rash to antagonize adversaries who have decisive technological and economic advantages in long-term competition). Gorbachev, in short, did not have to be a nice guy, and political psychologists who made such trait attributions succumbed to one of their own favorite biases--the "fundamental attribution error." Why invoke altruism when enlightened self-interest is up to the explanatory task? The Soviet empire -- like the Roman, Byzantine, and Venetian empires before it -- simply tried to slow its relative decline by strategically abandoning initially peripheral and then increasingly core commitments (Gilpin, 1981).
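The "shadow of the future" argument has a standard formalization in repeated games: cooperation enforced by the threat of permanent reversion to mutual defection is sustainable only if the continuation probability (or discount factor) is large enough. The sketch below works through that textbook condition with invented payoff numbers; it is offered as a gloss on the phrase, not as the neorealists' own calculation.

```python
# Textbook "shadow of the future" condition in a repeated Prisoner's Dilemma:
# with temptation T, reward R, punishment P, sucker payoff S (T > R > P > S),
# cooperation sustained by reverting to mutual defection after any defection
# pays off only if the continuation probability w satisfies
#     R / (1 - w) >= T + w * P / (1 - w),   i.e.   w >= (T - R) / (T - P).
# The payoff numbers are conventional illustrations.

T, R, P, S = 5, 3, 1, 0

def cooperation_sustainable(w):
    value_of_cooperating = R / (1 - w)
    value_of_defecting = T + w * P / (1 - w)
    return value_of_cooperating >= value_of_defecting

threshold = (T - R) / (T - P)
print(f"critical continuation probability: w >= {threshold:.2f}")
for w in (0.3, 0.5, 0.7):
    status = "sustainable" if cooperation_sustainable(w) else "not sustainable"
    print(f"w = {w:.1f}: cooperation {status}")
```

On this reading, a long shadow of the future (a large w) is what makes the penalties for crossing an adversary loom large enough for cooperation to be the prudent, self-interested choice.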

Whatever neorealism's merits as an explanation for geostrategic maneuvering over the centuries, the theory is -- from a social psychological perspective -- suffocatingly restrictive. Objections can be organized under three headings. First, sociological theorists have challenged whether international politics is truly anarchic. The mere absence of effective world government does not automatically imply that world politics is a Hobbesian war of all against all (Barkdull, 1995). A powerful case can be made that shared normative understandings -- sometimes formally enforced by international institutions -- standardize expectations within security and economic communities about who is allowed to do what to whom (Deutsch, 1957; Katzenstein, 1996; Ruggie, 1986). Second, historically-minded theorists have challenged the neorealist view of national interest as something that rational actors directly deduce from the distribution of economic-military power in the international system and their own state's location in that system. Conceptions of national interest sometimes change dramatically and are linked to social and intellectual trends that are difficult, if not impossible, to reduce to standard geostrategic computations of power (Mueller, 1989). Policies deemed essential to national security in the late 19th century -- encouraging high birth rates, colonizing foreign lands -- are widely condemned in the late 20th century. And policies widely endorsed in the late 20th century -- ceding components of national sovereignty to international institutions, military intervention to save the lives of citizens of other states of negligible strategic value -- would have been regarded as extraordinarily naive a century ago. In this view, it is an error to equate security with military strength (an error famously captured by Stalin's pithy dismissal of Vatican protests: "How many divisions does the Pope have?"). Finally, cognitive theorists have challenged the notion that rationality -- high-quality cognitive functioning -- is a prerequisite for success, or even survival, in world politics. Just as one does not need to be a grandmaster to win a chess tournament (it depends on the cleverness of the competition), so one may not need to be an exemplar of rational decision-making to manage the security policies of most states most of the time (Tetlock, 1992b). It may suffice to be not too much worse than the other players.

B. Cognitivism: An Emerging Alternative Research Program

This three-pronged critique -- that neorealism exaggerates the anarchic nature of world politics, exaggerates the role of economic-military power in determining national interests, and exaggerates the urgency of the need for rationality -- makes room for psychological approaches to world politics. One implication of the critique is that if we seek explanations of why national decision-makers do what they do, we will have to supplement the insights of macro theories (which locate nation-states in the international power matrix) with micro assumptions about decision-makers' cognitive representations of the policy environment, their goals in that environment, and their perceptions of the normative and domestic political constraints on policy options.

The recent end of the Cold War provides a deeply instructive example. It is disconcertingly easy to generate post hoc geostrategic rationalizations for virtually anything the Soviet leadership might have chosen to do in the mid-1980s (Lebow, 1995). One could counter the earlier argument that the conciliatory Gorbachevian policies were dictated by systemic imperatives by invoking the same imperatives to explain the opposite outcome: namely, the emergence of a militantly neo-Stalinist leadership committed to holding on to superpower status by reasserting discipline on the domestic front and by devoting massive resources to defense programs. Indeed, some influential observers read the situation exactly this way in the mid-1980s (Pipes, 1986). Systemic theories do not answer the pressing policy question: Why did the Soviet leadership jump one way rather than the other?

From a cognitive perspective, the key failing of neorealism is its inability to specify how creatures of bounded rationality will cope with the causal ambiguity inherent in complex historical flows of events. The feedback that decision makers receive from their policies is often delayed and subject to widely varying interpretations. What appears prudent to one ideological faction at one time may appear foolish to other factions at other times. Consider two examples:

a) In the 1950s and 1960s, mostly conservative supporters of covert action cited the coup sponsored by the Central Intelligence Agency against Iranian Prime Minister Mossadegh as a good example of how to advance U.S. strategic interests in the Middle East and elsewhere; after the fundamentalist Islamic revolution of the late 1970s, mostly liberal opponents of covert action argued for a reappraisal;

b) In the 1970s, Soviet policy in the Third World seemed to bear fruit, with pro-Soviet governments sprouting up in such diverse locations as Indochina, Afghanistan, Ethiopia and Nicaragua. Conservatives in Western countries warned that the Soviets were on the march and blamed the lackadaisical liberal policies of the Carter administration. By the late 1980's, however, these recently acquired geostrategic assets looked like liabilities. Conservatives claimed credit for wearing the Soviets down whereas liberals argued that the threat had existed largely in vivid anti-communist imaginations.

If this analysis is correct--if what counts as a rewarding or punishing consequence in the international environment depends on the ideological assumptions of the beholder--it is no longer adequate to "black-box" the policy-making process and limit the study of world politics to covariations between systemic input and policy outcomes. It becomes necessary to study how policy makers think about the international system. The cognitive research program in world politics rests on a pair of simple functionalist premises:

a) world politics is not only complex but also deeply ambiguous. Whenever people draw lessons from history, they rely -- implicitly or explicitly -- on speculative reconstructions of what would have happened in possible worlds of their own mental creation;

b) people--limited capacity information processors that we are--frequently resort to simplifying strategies to deal with this otherwise overwhelming complexity and uncertainty.

Policy makers, like ordinary mortals, see the world through a glass darkly--through the simplified images that they create of the international scene. Policy makers may act rationally, but only within the context of their simplified subjective representations of reality (the classic principle of bounded rationality--Simon, 1957).

To understand foreign policy, cognitivists focus on these simplified mental representations of reality that decision makers use to interpret events and choose among courses of action (Axelrod, 1976; Cottam, 1986; Jervis, 1976; George, 1969, 1980; Herrmann, 1982; Holsti, 1989; Hudson, 1991; Larson, 1994; Sylvan et al., 1990; Thorson & Sylvan, 1992; Vertzberger, 1990). Although there is considerable disagreement on how to represent these representations (proposals include all the usual theoretical suspects: schemata, scripts, images, operational codes, belief systems, ontologies, problem representations, and associative networks), there is consensus on a key point: foreign policy belief systems have enormous cognitive utility. Belief systems provide ready answers to fundamental questions about the political world. What are the basic objectives of other states? What should our own objectives be? Can conflict be avoided and, if so, how? If not, what form is the conflict likely to take? Belief systems also facilitate decision making by providing frameworks for filling in missing counterfactual data points (if we had not done x, then disaster), for generating conditional forecasts (if we do x, then success), and for assessing the significance of the projected consequences of policies (we should count this outcome as failure and this one as success). Finally, and perhaps most importantly, belief systems often permit us to predict policy choices with a specificity that can rarely be achieved by purely systemic theories (cf. Blum, 1993; George, 1983; Herrmann, 1995; Rosati, 1984; Shimko, 1992; Tetlock, 1985; Walker, 1977; Wallace, Suedfeld, & Thachuk, 1993).

There is, however, a price to be paid for the cognitive and political benefits of a stable, internally consistent world view. Policy makers often oversimplify. Evidence has accumulated that the price of cognitive economy in world politics is--as in other domains of life--susceptibility to error and bias. The next sections consider this litany of potential errors and biases, paying special attention to those that have received sustained research attention in political contexts. The discussion then examines how motivational and social processes may amplify or attenuate hypothesized cognitive shortcomings, again with special reference to processes likely to be engaged in political contexts such as coping strategies in response to stress, time pressure and accountability demands from diverse constituencies.

(1) The Fundamental Attribution Error. People often prefer internal, dispositional explanations for others' conduct, even when plausible situational accounts exist (Gilbert and Malone, 1995; Jones, 1979; Ross, 1977). This judgmental tendency can interact dangerously with key properties of the international environment. Consider the much discussed security dilemma (Jervis, 1976, 1978). To protect themselves in an anarchic environment, states must seek security either through costly defense programs of their own or by entering into entangling alliances that oblige others to defend them. Assessing intentions in such an environment is often hard. There is usually no easy way to distinguish between defensive states that are responding to the competitive logic of the situation and expansionist states. If everyone assumes the worst, the stage is set for conflict-spiral-driven arms races that no one wanted (Downs, 1991; Kramer, 1988). The fundamental attribution error exacerbates matters by lowering the perceptual threshold for attributing hostile intentions to other states. This tendency--in conjunction with the security dilemma--can lead to an inordinate number of "Type I errors" in which decision-makers exaggerate the hostile intentions of defensively motivated powers. The security dilemma compels even peaceful states to arm; the fundamental attribution error then leads observers to draw incorrect dispositional inferences. The actor-observer divergence in attributions--the tendency for actors to see their conduct as more responsive to the situation and less reflective of dispositions than do observers (Jones & Nisbett,1971)-- can further exacerbate matters. So too can ego-defensive motives (Heradstveit & Bonham, 1996). Both sets of processes encourage leaders to attribute their own military spending to justifiable situational pressures (Jervis, 1976). These self-attributions can contribute to a self-righteous spiral of hostility in which policy makers know that they arm for defensive reasons, assume that others also know this, and then conclude that others who do not share this perception must be building their military capabilities because they have aggressive designs (cf. Swann, Pelham, & Roberts, 1987). At best, the result is a lot of unnecessary defense spending; at worst, needless bloodshed (White, 1984).

The fundamental attribution error may encourage a second form of misperception in the international arena: the tendency to perceive governments as unitary causal agents rather than as complex amalgams of bureaucratic and political subsystems, each pursuing its own missions and goals (Jervis, 1976; Vertzberger, 1990). Retrospective reconstructions of the Cuban missile crisis have revealed numerous junctures at which American and Soviet forces could easily have come into violent contact with each other, not as a result of following some carefully choreographed master plan plotted by top leaders, but rather as a result of local commanders executing standard operating procedures (Blight, 1990; Sagan & Waltz, 1995). The organizational analog of the fundamental attribution error is insensitivity to the numerous points of slippage between the official policies of collectivities and the policies that are actually implemented at the ground level.

Claims about the fundamental attribution error in world politics should however be subject to rigorous normative and empirical scrutiny. On the normative side, skeptics can challenge the presumption of "error". Deterrence theorists might note that setting a low threshold for making dispositional attributions can be adaptive. One may make more Type I errors (false alarms of malevolent intent) but fewer Type II errors (missing the threats posed by predatory powers such as Hitler’s Germany). And theorists of international institutions might note the value of pressuring states to observe the rules of the transnational trading and security regimes that they join. One way of exerting such pressure is to communicate little tolerance for justifications and excuses for norm violations, thereby increasing the reputation costs of such conduct. A balanced appraisal of the fundamental attribution “error” hinges on our probability estimates of each logically possible type of error (false alarms and misses) as well as on the political value we place on avoiding each error -- all in all, a classic signal detection problem. On the empirical side, skeptics can challenge the presumption of "fundamental" by pointing to Confucianist or, more generally, collectivist cultures in which sensitivity to contextual constraints on conduct is common (Morris & Peng, 1994). Skeptics can also raise endless definitional questions about what exactly qualifies as a "dispositional" explanation when we shift from an interpersonal to an international level of analysis and the number of causal entities expands exponentially. The domestic political system of one's adversary might be coded either as a situational constraint on leadership policy or as a reflection of the deepest dispositional aspirations of a "people".
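The signal-detection framing can be made concrete with a toy expected-cost calculation. Every number below (the base rate of genuinely predatory states, the hit and false-alarm rates associated with different attribution thresholds, and the costs attached to each kind of error) is hypothetical, chosen only to show how the balanced appraisal described above would actually be computed.

```python
# Toy signal-detection appraisal of the "fundamental attribution error" debate:
# is a hair-trigger for dispositional (hostile-intent) attributions worth it?
# Every number below is a hypothetical illustration.

BASE_RATE_AGGRESSIVE = 0.2        # prior probability the other state is predatory
COST_MISS = 100.0                 # cost of failing to flag a genuinely predatory state
COST_FALSE_ALARM = 10.0           # cost of treating a defensive state as predatory

# Each attribution "threshold" is summarized by its hit rate and false-alarm rate.
THRESHOLDS = {
    "hair-trigger (attribute hostility readily)": {"hit": 0.95, "false_alarm": 0.60},
    "moderate":                                    {"hit": 0.80, "false_alarm": 0.30},
    "high bar (require strong evidence)":          {"hit": 0.50, "false_alarm": 0.05},
}

def expected_cost(hit, false_alarm):
    miss = 1 - hit
    return (BASE_RATE_AGGRESSIVE * miss * COST_MISS
            + (1 - BASE_RATE_AGGRESSIVE) * false_alarm * COST_FALSE_ALARM)

for name, rates in THRESHOLDS.items():
    print(f"{name:45s} expected cost = {expected_cost(**rates):6.2f}")
```

With these particular numbers the low threshold looks adaptive because misses are priced as catastrophic; raise the cost of false alarms (say, because they feed conflict spirals) and the ranking flips, which is the sense in which the appraisal hinges on both the probabilities and the political value attached to each type of error.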

(2) Overconfidence. Experimental work often reveals that people are excessively confident in their factual judgments and predictions, especially for difficult problems (Einhorn & Hogarth, 1981; Fischhoff, 1991). In the foreign policy realm, such overconfidence can lead decision makers to: 1) dismiss opposing views out of hand; 2) overestimate their ability to detect subtle clues to the other side's intentions; 3) assimilate incoming information to their existing beliefs. Overconfident decision makers in defender states are likely to misapply deterrence strategies--either by failing to respond to potential challenges because they are certain that no attack will occur (e.g., Israel in 1973) or by issuing gratuitous threats because they are certain that there will be an attack, even when no attack is actually planned (Levy, 1983). Overconfident aggressors are prone to exaggerate the likelihood that defenders will yield to challenges (Lebow, 1981). In addition, overconfidence can produce flawed policies when decision makers assess military and economic capabilities. For instance, the mistaken belief that one is militarily superior to a rival may generate risky policies that can lead to costly wars that no one wanted (Levy, 1983). By contrast, a mistaken belief that one is inferior to a rival can exacerbate conflict in either of two ways: (1) such beliefs generate unnecessary arms races as the weaker side tries to catch up. The rival perceives this effort as a bid for superiority, matches it, and sets the stage for an action-reaction conflict spiral; (2) the weaker state will be too quick to yield to a rival's demands (Levy, 1989). At best, such capitulation produces a diplomatic defeat; at worst, it leads aggressors to up the ante and ultimately produces wars that might have been avoided with firmer initial policies (a widely held view of Chamberlain's appeasement policy of 1938).

In a comprehensive study of intelligence failures prior to major wars, the historian Ernest May (1984, p. 542) concluded that "if just one exhortation were to be pulled from this body of experience, it would be, to borrow Oliver Cromwell's words to the Scottish Kirk: 'I beseech you in the bowels of Christ think it possible you may be mistaken.'" This ironic call for cognitive humility (considering the source) is echoed in the contemporary literature on "de-biasing," with its emphasis on encouraging self-critical thought and imagining that the opposite of what one expected occurred (Lord et al., 1984; Tetlock and Kim, 1987). Of course, such advice can be taken too far. There is the mirror-image risk of paralysis in which self-critical thinkers dilute justifiably confident judgments by heeding irrelevant or specious arguments (Nisbett, Zukier, & Lemley, 1981; Tetlock & Boettger, 1989a) -- an especially severe threat to good judgment in environments in which the signal-to-noise ratio is unfavorable and other parties are trying to confuse or deceive the perceiver.

(3) Metaphors and Analogical Reasoning. People try to understand novel problems by reaching for familiar concepts. Frequently, these concepts take the form of metaphors and analogies that illuminate some aspects of the problem but obscure other aspects.

Lakoff and Johnson (1980) argue that metaphors pervade all forms of discourse. Discourse on deterrence is no exception. Metaphorical preferences are correlated closely with policy preferences. Consider the "ladder of escalation" and the "slippery slope" (Jervis, 1989). The former metaphor implies that just as we can easily climb up and down a ladder one step at a time, so we can control the escalation and de-escalation of conventional or even nuclear conflicts (Kahn, 1965); the latter metaphor implies that once in a conflict, leaders can easily lose control and slide helplessly into war (Schelling, 1966). In Cold War days, adherents of the "ladder" metaphor supported a war-fighting doctrine that stressed cultivating counterforce capabilities; although they did not relish the prospect, they believed that nuclear war could, in principle, be controlled. By contrast, "slippery slopers" endorsed MAD--both as a policy and as strategic reality--and they feared that, once initiated, conflicts would inevitably escalate to all-out war. They argued that nuclear powers need to avoid crises. Managing them once they break out is too risky. As President Kennedy reportedly remarked after the Cuban Missile crisis, "One can't have too many of these" (Blight, 1990).

Containment and deterrence theorists -- who put special emphasis on reputation (Mercer, 1996) -- have long been attracted to contagion and domino metaphors that imply that failure to stand firm in one sphere will undermine one's credibility in other spheres (Kissinger, 1993). In the early 19th century, Metternich believed that revolution was contagious and required quarantine-like measures; in the late 1960s, Brezhnev took a similar view of Dubcek’s reforms in Czechoslovakia; at roughly the same time, the Johnson and Nixon administrations used the domino metaphor to justify American involvement in Vietnam.

Metaphorical modes of thought continue to influence and to justify policy in the post-Cold War world. Some writers invoke communitarian metaphors and claim that international relations is undergoing an irreversible transformation that will soon invalidate rationales for weapons of mass destruction. "A community of states united by common interests, values, and perspectives is emerging because of technology and economics. Among the modernist states belonging to that community, new norms of behavior are replacing the old dictates of realpolitik: they reject not only the use of weapons of mass destruction, but even the use of military force to settle their disputes" (Blechman and Fisher, 1994, p. 97). By contrast, other writers such as Christopher Layne (1993) believe that nothing fundamental has changed and rely on the social Darwinist metaphors of self-help and survival of the fittest. We have merely moved from a bipolar system to a multipolar one ("with a brief unipolar moment" in which the United States enjoyed global dominance at the end of the 20th century). Indeed, it is only a matter of time before the major non-nuclear powers--Japan and Germany--acquire nuclear weapons now that they no longer depend on extended deterrence protection from the United States.

People also give meaning to new situations by drawing on historical precedents and analogies (Gilovich, 1981; May, 1973; Jervis, 1976; Neustadt and May, 1986; Vertzberger, 1986). Although a reasonable response by creatures with limited mental resources to a demanding environment, this cognitive strategy can be seriously abused. One mistake is to dwell on the most obvious precedent -- a pivotal event early in one's career (Barber, 1985; Goldgeier, 1994) or perhaps the most recent crisis or war (Jervis, 1976; Reiter, 1996) -- rather than survey a diverse set of precedents. Consider, for instance, the potpourri of Third World conflicts that American observers in the elite press compared to Vietnam between 1975 and 1995: Lebanon, Israel's Vietnam; Eritrea, Ethiopia's Vietnam; Chad, Libya's Vietnam; Angola, Cuba's Vietnam; Afghanistan, the Soviet Union's Vietnam; Bosnia, the European Community's Vietnam; Nicaragua, potentially a new American Vietnam; and, of course, Kampuchea, Vietnam's Vietnam. To be sure, there are points of similarity, but the differences are also marked and often slighted (Tetlock et al., 1991).

Khong (1991) reports arguably the most systematic study of analogical reasoning in foreign policy. Drawing on process-tracing of high-level deliberations in the early 1960's, he documents how American policy in Vietnam was shaped by the perceived similarity of the Vietnamese conflict to the Korean war. Once again, a Communist army from the north had attacked a pro-western regime in the south. This diagnosis led to a series of prescriptions and predictions. The United States should resist the aggression with American troops and could expect victory, albeit with considerable bloodshed. A side-constraint lesson drawn from the Korean conflict was that the United States should avoid provoking Chinese entry into the Vietnam war and hence should practice "graduated escalation."

This example illustrates a second pitfall in analogical foreign policy reasoning: the tendency to neglect differences between the present situation and the politically preferred precedent. Not only in public but also in private, policy makers rarely engage in balanced comparative assessments of historical cases (Neustadt and May, 1986). From a psychological viewpoint, this result is not surprising. Laboratory research suggests that people often overweight hypothesis-confirmatory information (Klayman & Ha, 1987). To re-invoke the Vietnam example, American policy makers concentrated on the superficial similarities between the Vietnamese and Korean conflicts while George Ball--virtually alone within Johnson's inner circle--noted the differences (e.g., the conventional versus guerrilla natures of the conflicts, the degree to which the United States could count on international support). Whereas doves complained about this analogical mismatching, hawks complained about the analogical mismatching that led decision makers to exaggerate the likelihood of Chinese intervention. China had less strategic motivation and ability to intervene in the Vietnam War in 1965 than it had to intervene in the Korean War of 1950, preoccupied as Beijing was in the mid to late 1960s by the internal turmoil of the Great Proletarian Cultural Revolution and the external threat of the Soviet Union, which had recently announced the Brezhnev Doctrine (claiming a Soviet right to intervene in socialist states that strayed from the Soviet line). From this standpoint, the U.S. could have struck deep into North Vietnam, with negligible risk of triggering Chinese intervention on behalf of the North Vietnamese (toward whom the Chinese were ambivalent on both cultural and political grounds).

A third mistake is to permit preconceptions to drive the conclusions one draws from history. In the United States, for instance, hawks and doves drew sharply divergent lessons from the Vietnam war (Holsti and Rosenau, 1979). Prominent lessons for hawks were that the Soviet Union is expansionist and that the United States should avoid graduated escalation and honor alliance commitments. Prominent lessons for doves were that the United States should avoid guerrilla wars, that the press is more truthful than the administration, and that civilian leaders should be wary of military advice. No lesson appeared on both the hawk and dove lists! Sharply divergent lessons are not confined to democracies, as a content analysis of Soviet analyses of the Vietnam war revealed (Zimmerman and Axelrod, 1981). Different constituencies in the Soviet Union drew self-serving and largely incompatible lessons from the American defeat in Asia. "Americanists" in foreign policy institutes believed that Vietnam demonstrated the need to promote détente while restraining wars of national liberation; the military press believed that the war demonstrated the implacable hostility of Western imperialism, the need to strengthen Soviet armed forces, and the feasibility and desirability of seeking further gains in the Third World. In summary, although policy makers often use analogies poorly, virtually no one would argue that they should ignore history; rather, the challenge is to employ historical analogies in a more nuanced, self-critical, and multidimensional manner (Neustadt and May, 1986).

(4) Belief Perseverance. Foreign policy beliefs often resist change (George, 1980). Cognitive mechanisms such as selective attention to confirming evidence, denial, source derogation, and biased assimilation of contradictory evidence buffer beliefs from refutation (Nisbett & Ross, 1980). Although foreign policy belief systems take many forms (for detailed typologies, see Herrmann and Fischerkeller, 1995; Holsti, 1977), the most widely studied is the inherent bad faith model of one's opponent (Holsti, 1967; Silverstein, 1989; Stuart and Starr, 1981; Blanton, 1996). A state is believed to be implacably hostile: contrary indicators that in another context might be regarded as probative are ignored, dismissed as propaganda ploys, or interpreted as signs of weakness. For example, Secretary of State John Foster Dulles tenaciously held to an inherent bad faith model of the Soviet Union (Holsti, 1967); many Israelis believed that the PLO was implacably hostile (Kelman, 1983), and some still do even after the peace accord. Although such images are occasionally on the mark, they can produce missed opportunities for conflict resolution (Spillman & Spillman, 1991). More generally, belief perseverance can prevent policy makers from shifting from less to more successful strategies. In World War I, for example, military strategists continued to launch infantry charges despite enormous losses, leading to the wry observation that men may die easily, but beliefs do not (Art and Waltz, 1983, p. 13).

Some scholars hold belief perseverance to be a powerful moderator of deterrence success or failure (Jervis, 1983, p. 24). Aggressors can get away with blatantly offensive preparations and still surprise their targets as long as the target believes that an attack is unlikely (Heuer, 1981). An example is Israel's failure to respond to numerous intelligence warnings prior to the Yom Kippur War. Israeli leaders believed that the Arabs would not attack, given Arab military inferiority, and dismissed contradictory evidence that some later acknowledged to be probative (Stein, 1985). Conversely, a nation that does not plan to attack, yet is believed to harbor such plans, will find it difficult to convince the opponent of its peaceful intentions.

It is easy, however, to overstate the applicability of the belief perseverance hypothesis to world politics. Policymakers do sometimes change their minds (Bonham, Shapiro, & Trumble, 1979; Breslauer & Tetlock, 1991; Levy, 1994; Stein, 1994). The key questions are: Who changes? Under what conditions? And what forms does change take? Converging evidence from archival studies of political elites and experimental studies of judgment and choice suggests at least four possible answers: (a) when policy-makers do change their minds, they are generally constrained by the classic cognitive-consistency principle of least resistance, which means that they show a marked preference for changing cognitions that are minimally connected to other cognitions (McGuire, 1985). Policy-makers should thus abandon beliefs about appropriate tactics before giving up on an entire strategy and abandon strategies before questioning fundamental assumptions about other states and the international system (Breslauer, 1991; Legvold, 1991; Spiegel, 1991; Tetlock, 1991); (b) timely belief change is more likely in competitive markets (Smith, 1994; Soros, 1990) that provide quick, unequivocal feedback and opportunities for repeated play on fundamentally similar problems so that base rates of experience can accumulate, thereby reducing reliance on theory-driven speculation about what would have happened if one had chosen differently. Policy-makers should thus learn more quickly in currency and bond markets than, say, in the realm of nuclear deterrence (who knows what lessons should be drawn from the nonoccurrence of a unique event such as nuclear war?); (c) timely belief change is more likely when decision-makers are accountable for bottom-line outcome indicators and have freedom to improvise solutions than when decision-makers are accountable to complex procedural-bureaucratic norms that limit latitude to improvise (Wilson, 1989); (d) timely belief change is more likely when decision-makers -- for either dispositional or situational reasons -- display self-critical styles of thinking that suggest an open-mindedness to contradictory evidence (Kruglanski, 1996; Tetlock, 1992a).

Applying these generalizations to world politics is not, however, always straightforward. For example, it is hard to determine whether certain decision-makers were truly more open-minded or were always more ambivalent toward the attitude object (Stein, 1994): was Gorbachev a faster learner than his rival, Yegor Ligachev, about the inadequacies of the Soviet system, or was he less committed all along to the fundamental correctness of that system? It is often even harder to gauge whether policymakers are being good Bayesians who adjust in a timely fashion to "diagnostic" evidence. One observer's decisive clue concerning the deteriorating state of the Soviet economy in 1983 might have led another observer to conclude "disinformation campaign" and a third observer to conclude "interesting but only moderately probative." The problem here is not just the opacity of the underlying reality; one's threshold for belief adjustment hinges on the political importance that one attaches to the twin errors of underestimating and overestimating Soviet potential. For instance, one might see the evidence as significant but opt for only minor belief system revision because one judges the error of overestimation (excessive defense spending) as the more serious. Defensible normative evaluations of belief persistence and change must ultimately rest on game-theoretic assumptions about the reliability and validity of the evidence (what does the other side want us to believe, and could they have shaped the evidence before us to achieve that goal?) and signal-detection assumptions about the relative importance of avoiding Type I versus Type II errors (which mistake do we dread more?).
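
To make this logic concrete, the following sketch (in Python, with entirely invented numbers and function names) illustrates how the same clue can warrant different degrees of belief revision, and different policy responses, depending on assumptions about the clue's diagnosticity and about the relative costs of the two errors. It is a didactic toy, not a model of any actual intelligence estimate.

    # A minimal sketch, not drawn from the chapter: how one piece of evidence
    # can license different "belief adjustments" depending on its assumed
    # diagnosticity and on which error the observer dreads more.

    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """Bayes' rule: probability of hypothesis H after observing evidence E."""
        numerator = prior * p_e_given_h
        return numerator / (numerator + (1 - prior) * p_e_given_not_h)

    def act_on_belief(p_h, cost_false_alarm, cost_miss):
        """Signal-detection style threshold: act as if H is true only when the
        expected cost of a miss exceeds the expected cost of a false alarm."""
        threshold = cost_false_alarm / (cost_false_alarm + cost_miss)
        return p_h > threshold

    # Hypothetical observer in 1983: H = "the Soviet economy is deteriorating".
    prior = 0.30

    # Observer A treats the clue as highly diagnostic; Observer B suspects
    # disinformation and treats it as only weakly diagnostic.
    p_a = posterior(prior, p_e_given_h=0.80, p_e_given_not_h=0.20)   # ~0.63
    p_b = posterior(prior, p_e_given_h=0.45, p_e_given_not_h=0.35)   # ~0.36

    # Even with identical posteriors, policy conclusions diverge when observers
    # weight the errors differently (overestimation = wasted defense spending;
    # underestimation = strategic surprise).
    print(act_on_belief(p_a, cost_false_alarm=1.0, cost_miss=3.0))   # True
    print(act_on_belief(p_a, cost_false_alarm=3.0, cost_miss=1.0))   # False

The point of the sketch is simply that "timely" belief change cannot be judged from the evidence alone; it also depends on assumptions about the source of the evidence and on which error the observer is politically prepared to risk.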

(5) Avoidance of Value Trade-offs. For an array of cognitive, emotional and social reasons, politicians find value trade-offs unpleasant and frequently define issues in ways that bypass the need for such judgments (Steinbruner, 1974; Jervis, 1976; Tetlock, 1986b). Trade-off avoidance can, however, be dangerous. Operationalizing a policy of deterrence, for example, raises trade-offs that one ignores at one's peril. On the one hand, there is a need to resist exploitation and deter aggression. On the other hand, prudent policy makers should avoid exacerbating the worst-case fears of adversaries. The first value calls for deterrence; the second calls for reassurance. National leaders also confront a conflict between their desire to avoid the devastation of all-out war and their desire to deter challengers and avoid even limited military skirmishes. Jervis (1984, p. 49) referred to this dilemma as the "great trade-off" of the nuclear age: "states may be able to increase the chance of peace only by increasing the chance that war, if it comes, will be total. To decrease the probability of enormous destruction may increase the probability of aggression and limited wars."

There may also be higher-order geopolitical trade-offs. Gilpin (1981) and Kennedy (1987) have noted how great powers over the centuries have consistently mismanaged the three-pronged trade-off among defense spending, productive investment, and consumer spending. Although policy makers must allocate resources for defense to deter adversaries and for consumption to satisfy basic needs, too much of either type of spending can cut seriously into the long-term investment required for sustained economic growth.

Research suggests that policy makers often avoid trade-offs in a host of ways: (a) holding out hope that a dominant option (one superior on all important values) can be found; (b) resorting to dissonance reduction tactics such as bolstering (Festinger, 1964) and belief system overkill (Jervis, 1976) that create the psychological illusion that one's preferred policy is superior on all relevant values to all possible alternatives; (c) engaging in the decision-deferral tactics of buckpassing and procrastination that diffuse responsibility or delay the day of reckoning (Janis and Mann, 1977; Tetlock & Boettger, 1994); (d) relying on lexicographic decision rules -- such as elimination-by-aspects (Tversky, 1972), illustrated in the sketch below -- that initially eliminate options that fail to pass some threshold on the most important value and then screen options on less important values (Mintz, 1993; Payne et al., 1992). Whichever avoidance strategy they adopt, policy makers who fail to acknowledge the trade-off structure of their environment may get into serious trouble: in some cases by provoking conflict spirals when they overemphasize deterrence; in other cases by inviting attacks when they overemphasize reassurance. Similarly, policy makers can err by over-protecting their short-term security through heavy defense spending and foreign adventures while compromising their long-term security by neglecting investment needs and consumer demands (Kennedy, 1987). This imperial overstretch argument is widely viewed as at least a partial explanation for the collapse of the Soviet Union (its advocates including Mikhail Gorbachev--Lebow and Stein, 1994).
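
For readers unfamiliar with lexicographic screening, the sketch below (in Python, with hypothetical options, values, and thresholds of our own invention) illustrates the elimination-by-aspects logic mentioned in (d): options are screened value by value in order of importance, and explicit trade-offs across values are never computed.

    # A minimal illustration of elimination-by-aspects (Tversky, 1972) as a
    # trade-off-avoiding decision rule. Options, values, and thresholds are
    # entirely hypothetical.

    def eliminate_by_aspects(options, aspects):
        """Screen options on each aspect in order of importance, dropping any
        option that falls below that aspect's threshold. Stops as soon as a
        single option survives (or the aspects run out)."""
        survivors = dict(options)
        for aspect, threshold in aspects:
            passed = {name: scores for name, scores in survivors.items()
                      if scores[aspect] >= threshold}
            if passed:                      # never eliminate every option
                survivors = passed
            if len(survivors) == 1:
                break
        return list(survivors)

    # Hypothetical policy options scored on three values (0-10 scales).
    options = {
        "deterrent buildup": {"security": 9, "cost": 3, "alliance": 6},
        "reassurance":       {"security": 5, "cost": 8, "alliance": 7},
        "status quo":        {"security": 6, "cost": 9, "alliance": 5},
    }

    # Aspects listed in descending importance with minimum acceptable scores.
    aspects = [("security", 6), ("cost", 5), ("alliance", 5)]

    print(eliminate_by_aspects(options, aspects))   # ['status quo']

Note that the surviving option need not be the one with the highest weighted sum across all values; that is precisely the sense in which the rule spares the decision maker from confronting trade-offs.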

It would be a mistake to imply that policy makers are oblivious to trade-offs. Although complex trade-off reasoning is rare in public speeches, policy makers may know more than they let on. Acknowledging trade-offs can be embarrassing. In addition, some policy makers display an awareness of trade-offs even in public pronouncements. Content analysis of the political rhetoric of Gorbachev and his political allies revealed considerable sensitivity to the multi-faceted trade-offs that had to be made by the Soviet Union if it were to survive, in Gorbachev's words, into the next century in a manner befitting a great power (Tetlock and Boettger, 1989a). Of course, as Gorbachev's career illustrates, holding a complex view of the trade-off structure of one's environment is no guarantee that one will traverse the terrain successfully.

It would also be a mistake to imply that trade-off avoidance invariably leads to disaster. It may sometimes be prudent to wait. An expanding economy or evolving international scene may eliminate the need for trade-offs that sensible people once considered unavoidable. Passing the buck may be an effective way to diffuse blame for policies that inevitably impose losses on constituencies that politicians cannot afford to antagonize. And simple lexicographic decision rules may yield decisions in many environments that are almost as good as those yielded by much more exhaustive, but also exhausting, utility-maximization algorithms.

(6) Framing effects. Prospect theory asserts that choice is influenced by how a decision problem is "framed" (Kahneman and Tversky, 1979). When a problem entails high probability of gain, people tend to be risk-averse; when it entails high probability of loss, people tend to be risk-seeking. This prediction has been supported in numerous experiments (Bazerman, 1986; Tversky and Kahneman, 1981) as well as in case studies of foreign policy decisions (Farnham, 1992; Levy, 1992).
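
The reflection of risk preferences around a reference point can be illustrated with the standard prospect-theory value function. The sketch below (in Python) uses parameter values often cited in later work by Tversky and Kahneman (alpha = beta = 0.88, lambda = 2.25); these numbers are illustrative assumptions, not estimates reported in the studies discussed here, and the probability-weighting function is omitted for brevity.

    # A minimal sketch of the S-shaped prospect-theory value function and the
    # reflection effect described above. Parameters are illustrative only.

    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        """Concave for gains, convex and steeper for losses (loss aversion)."""
        return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

    def prospect_value(outcomes):
        """Subjective value of a gamble given as a list of (probability, outcome).
        Probability weighting is omitted to keep the sketch short."""
        return sum(p * value(x) for p, x in outcomes)

    # Gain frame: a sure gain of 500 beats a 50/50 chance of 1000 or nothing.
    print(value(500) > prospect_value([(0.5, 1000), (0.5, 0)]))     # True -> risk-averse

    # Loss frame: a 50/50 chance of losing 1000 or nothing beats a sure loss of 500.
    print(prospect_value([(0.5, -1000), (0.5, 0)]) > value(-500))   # True -> risk-seeking

The concavity of the function above the reference point and its convexity below it are what generate the risk-averse/risk-seeking asymmetry; where the reference point sits is, of course, exactly what "framing" manipulates.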

Framing effects can create severe impediments in international negotiations (Bazerman, 1986; Jervis, 1989). When negotiators view their own concessions as losses and concessions by the opponent as gains, the subjective value of the former will greatly outweigh the subjective value of the latter. Both sides will therefore perceive a "fair" deal to be one in which the opponent makes many more concessions--hardly conducive to reaching agreements. "Reactive devaluation" makes matters even worse. When both sides distrust each other, concessions by the other side are often minimized for the simple (not inherently invalid) reason that the other side made them (Ross & Griffin, 1991). For instance, in 1981, President Reagan unveiled his zero-option proposal calling for the Soviet dismantling of hundreds of intermediate-range missiles (SS-20s) in eastern Europe while the United States would refrain from deploying new missiles in western Europe. The Kremlin categorically rejected this proposal. In 1986, however, the new Soviet leadership embraced the original zero-option plan and agreed to eliminate all intermediate-range nuclear missiles on both sides. Gorbachev's concessions stunned many Western observers, who now assumed that the zero-option must favor the Soviets because of their conventional superiority and urged the United States to wiggle out of the potential agreement.

As prospect theory has emerged as the leading alternative to expected utility theory for explaining decision making under risk, its influence has proliferated throughout international relations. Levy (1992) notes a host of real-world observations on bargaining, deterrence and the causes of war that are consistent with the spirit of prospect theory: 1) it is easier to defend the status quo than to defend a recent gain; 2) forcing a party to do something ("compellence") is more difficult than preventing a party from doing something (deterrence); 3) conflict is more likely when a state believes that it will suffer losses if it does not fight; 4) superpower intervention will be more likely if the client state is suffering; 5) intervention for the sake of a client's gain is not as likely as intervention to prevent loss; 6) states motivated by fear of loss are especially likely to engage in risky escalation.

Although prospect theory fits these observations nicely, much has been lost in the translation from the laboratory literature (in which researchers can manipulate the framing and likelihood of outcomes) to historical accounts of world politics (Boettcher, 1995). One critical issue for non-tautological applications of prospect theory is "renormalization"--the process of adjusting the reference point after a loss or gain has occurred. Jervis (1992a) speculates that decision makers renormalize much more quickly for gains (what they have recently acquired quickly becomes part of their endowment) than for losses (they may grieve for centuries over their setbacks, nurturing irredentist dreams of revenge and reconquest). The need remains, however, for reliable and valid research methods -- such as content analysis of group discussions (Levi & Whyte, 1996) -- for determining whether a specific actor at a specific historical juncture is in a loss or gain frame of mind. An equally critical issue concerns how the risk preferences of individuals are amplified or attenuated by group processes such as diffusion of responsibility, persuasive arguments, cultural norms, and political competition for power (Vertzberger, 1995). These considerations also remind us of the need to be alert to the ever-shifting dimensions of value on which people may be risk-averse or risk-seeking. Normally cautious military leaders may suddenly become recklessly obdurate when issues of honor and identity are at stake. Saddam Hussein, just prior to the annihilation of his armies in Kuwait, is reported to have invoked an old Arab aphorism that "it is better to be a rooster for a day than a chicken for all eternity" (Post, 1992).

C. Are Psychologists Biased to Detect Bias?

The preceding section has but skimmed the surface of the voluminous literature on cognitive shortcomings. Although most research has taken place in laboratory settings, accumulating case-study and content analysis evidence indicates that policy makers are not immune to these effects. Researchers have emphasized the role that these cognitive processes can play in creating conflicts that might have been avoided had decision makers seen the situation more accurately. It is worth noting, however, that these same judgmental biases can attenuate as well as exacerbate conflicts. Much depends on the geopolitical circumstances. The fundamental attribution error can alert us quickly to the presence of predatory powers; simplistic analogies are sometimes apt; belief perseverance can prevent us from abandoning veridical assessments in response to "disinformation" campaigns; and high-risk policies sometimes yield big pay-offs. Indeed, efforts to eliminate these “biases” through institutional checks and balances are likely to be resisted by skeptics who argue that these cognitive tendencies are often functional. Consider overconfidence. Some psychologists have made a strong case that when this “judgmental bias” takes the form of infectious “can-do” optimism, it promotes occupational success and mental health (Seligman, 1990; Taylor & Brown, 1988). And this argument strikes a resonant chord within the policy community. To paraphrase Dean Acheson's response to Richard Neustadt (both of whom advised John F. Kennedy during the Cuban missile crisis): "I know your advice, Professor. You think the President needs to be warned. But you're wrong. The President needs to be given confidence." This anecdote illustrates the dramatically different normative theories of decision making that may guide policy elites from different historical periods, cultural backgrounds, and ideological traditions. Advice that strikes some academic observers as obviously sound will strike some policy elites as equally obviously flawed. Decision analysts face an uphill battle in convincing skeptics that the benefits of their prescriptions outweigh the costs. Whether or not they acknowledge it, policy makers must decide how to decide (Payne et al., 1992) by balancing the estimated benefits of complex, self-critical analysis against the psychological and political costs. What increments in predictive accuracy and decision quality is it reasonable to expect from seeking out additional evidence and weighing counterarguments? Some observers see enormous potential improvement (e.g., Janis, 1989); others suspect that policy makers are already shrewd cognitive managers skilled at identifying when they have reached the point of diminishing analytical returns (e.g. Suedfeld, 1992b). These strong prescriptive conclusions rest, however, on weak evidentiary foundations. We know remarkably little about the actual relations between styles of reasoning and judgmental accuracy in the political arena (Tetlock, 1992b).

D. Motivational Processes

Neorealist assumptions of rationality can be challenged not only on “cold” cognitive grounds, but also on “hot” motivational grounds (Lebow & Stein, 1993). Decision-making, perhaps especially in crises, may be more driven by wishful thinking, self-justification, and the ebb and flow of human emotions than it is by dispassionate calculations of power. It is unwise, however, to dichotomize these theoretical options. Far from being mutually exclusive, cognitive and motivational processes are closely intertwined. Cognitive appraisals activate motives that, in turn, shape perceptions of the world. This sub-section organizes work on motivational bias into two categories: (a) generic processes of stress, anger, indignation, and coping hypothesized to apply to all human beings whenever the necessary activating conditions are satisfied; (b) individual differences in goals, motives, and orientations to the world hypothesized to generalize across a variety of contexts.

(1) Disruptive-Stress and Crisis Decision-Making. Policy-makers rarely have a lot of time to consider alternative courses of action. They frequently work under stressful conditions in which they must process large amounts of inconsistent information under severe time pressure, always with the knowledge that miscalculations may have serious consequences for both their own careers and vital national interests (C. Hermann, 1972; Holsti, 1972, 1989). This combination of an imperative demand for crucial decisions to be made quickly with massive information overload is a form of psychological stress likely to reduce the information processing capacity of the individuals involved (Suedfeld and Tetlock, 1977).

Both experimental and content analysis studies of archival records offer suggestive support for this hypothesis. The laboratory literature has repeatedly documented that stress -- beyond a hypothetical optimum -- impairs complex information processing (for examples see Gilbert, 1989; Kruglanski & Freund, 1983; Streufert & Streufert, 1978; Svenson & Maules, 1994). Impairment can take many forms, including a lessened likelihood of accurately discriminating among unfamiliar stimuli, an increased likelihood of relying on simple heuristics, rigid reliance on old, now inappropriate, problem-solving strategies, reduced search for new information, and intolerance for inconsistent evidence (Janis & Mann, 1977; Staw, Sandelands, & Dutton, 1981).

Archival studies reinforce these pessimistic conclusions, most notably, the work of Suedfeld and colleagues on declining integrative complexity in response to international tension (Suedfeld & Tetlock, 1977; Maoz & Shayer, 1992; Raphael, 1982; Suedfeld, 1992). These downward shifts are especially pronounced in crises that culminate in war. It is tempting here to tell a causal story in which crisis-induced stress impairs the capacity to identify viable integratively complex compromises, thereby contributing to the violent outcome. It is, however, wise to resist temptation in this case -- at least until two issues are resolved. First, falling integrative complexity may be a sign not of simplification and rigidification of mental representations but rather of a quite deliberate and self-conscious hardening of bargaining positions. Policy-makers may decide to lower their integrative complexity (closing loopholes, eliminating qualifications, denying trade-offs, and disengaging from empathic role-taking) as a means of communicating firmness of resolve to adversaries (Tetlock, 1985). Here, we need more studies that trace shifts in cognitive and integrative complexity in both private (intragovernmental) and public (intergovernmental) documents (Guttieri, Wallace, & Suedfeld, 1995; Levi & Tetlock, 1980; Walker & Watson, 1994). Second, there are numerous exceptions to the generalization that high stress produces cognitive simplification. Individual decision makers and decision making groups have sometimes risen to the challenge and responded to intensely stressful circumstances in a complex and nuanced fashion (Brecher, 1993). For instance, during the Entebbe crisis (Maoz, 1981), and the Middle East crisis of 1967 (Stein and Tanter, 1980), Israeli policy makers performed effectively under great stress. They considered numerous options, assessed the consequences of these options in a probabilistic manner, traded off values, and demonstrated an openness to new information.

The theoretical challenge is to identify when crisis-induced stress does and does not promote simplification of thought. One approach is to look for quantitative moderator variables such as intensity of stress that fit the rather complex and nonlinear pattern of cognitive performance data (Streufert & Streufert, 1978). Another approach is to look for qualitative moderator variables that activate simple or complex coping strategies. For instance, the Janis and Mann conflict model predicts simplification and rigidification of thought ("defensive avoidance") only when decision makers confront a genuine dilemma in which they must choose between two equally unpleasant alternatives and are pessimistic about finding a more palatable alternative in the time available. Under these conditions, decision makers are predicted to choose and bolster one of the options, focusing on its strengths and the other options' weaknesses (thereby spreading the alternatives). By contrast, when decision makers are more optimistic about finding an acceptable solution in the available time, but still perceive serious risks (and hence are under considerable stress), they will shift into vigilant patterns of information processing in which they balance conflicting risks in a reasonably dispassionate and thoughtful way.
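
The conditional structure of the model can be summarized schematically. The sketch below (in Python) is a loose paraphrase of the Janis and Mann conditions as described above; the function name and labels are ours, and the rendering should not be mistaken for a validated formalization of their theory.

    # A loose, schematic rendering of the Janis and Mann (1977) conflict model
    # as summarized in the text; labels and logic paraphrase the prose and are
    # not a validated formalization.

    def predicted_coping_pattern(serious_risks_in_changing,
                                 serious_risks_in_not_changing,
                                 hope_of_better_solution,
                                 adequate_time_to_search):
        if not serious_risks_in_not_changing:
            return "unconflicted adherence to current policy"
        if not serious_risks_in_changing:
            return "unconflicted change to the salient alternative"
        # Genuine dilemma: every salient option carries serious risks.
        if not hope_of_better_solution:
            return "defensive avoidance (bolstering, buck-passing, procrastination)"
        if not adequate_time_to_search:
            return "hypervigilance (panicky, narrowed search)"
        return "vigilance (thorough, relatively dispassionate search and appraisal)"

    # The two cases contrasted in the text:
    print(predicted_coping_pattern(True, True, hope_of_better_solution=False,
                                   adequate_time_to_search=True))
    print(predicted_coping_pattern(True, True, hope_of_better_solution=True,
                                   adequate_time_to_search=True))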

In a programmatic effort to apply the Janis and Mann (1977) model, Lebow (1981) and Lebow and Stein (1987, 1994) have focused on crises in which policies of deterrence apparently failed. Specifically, they propose that "aggressive" challengers to the status quo are often caught in decisional dilemmas likely to activate defensive avoidance and bias their assessments of risk. For instance, Argentina's leaders felt in 1982 that they had to do something dramatic to deflect domestic unrest, relied on the classic Shakespearian tactic of "busying giddy minds with foreign quarrels" by invading the Falklands, and then convinced themselves that Britain would protest but ultimately acquiesce. In a similar vein, in 1962, Soviet leaders reacted to an apparently otherwise intractable strategic problem -- vast American superiority in ICBMs -- by placing intermediate-range missiles in Cuba and then convincing themselves that the United States would accept the fait accompli. In Lebow and Stein's view, efforts to deter policy-makers engaging in defensive avoidance are often counter-productive, fueling the feelings of insecurity and desperation that inspired the original challenge.

(2) Justice Motive and Moral Outrage. Welch (1993) challenged the core motivational axioms of neorealism by proposing that most Great Power decisions to go to war over the last 150 years have been driven not by security and power goals but rather by a concern for "justice". To activate the justice motive, one must convince oneself that the other side threatens something -- territory, resources, status -- to which one is entitled (cf. Lerner, 1977). The resulting reaction is not cold, rational and calculating, but rather emotional, self-righteous, moralistic, and simplistic. Outrage triggered by perceived threats to entitlements provides the psychological momentum for dehumanizing adversaries, deactivating the normative constraints on killing them, and taking big risks to achieve ambitious objectives.

By demonstrating its influence on private deliberations as well as on public posturing, Welch carefully tries to show that the justice motive is not just a rhetorical ploy for arousing the masses. The inferential problems, however, run deeper than the familiar "do-leaders-really-believe-what-they-are-saying?" debate. Welch assigns causal primacy to moral sentiments, and that requires demonstrating that those sentiments are not epiphenomenal rationalizations -- perhaps sincerely believed but all the same driven by the material interests at stake. Proponents of prospect theory might be tempted to treat Welch's examples of the justice motive as special cases of loss aversion. And cognitive dissonance theory warns us not to underestimate the human capacity for self-deception and self-justification. One of social psychology's more frustrating contributions to world politics may be to highlight the futility of many debates on the reducibility of ideas to interests. The line between moral entitlement and material interest may seem logically sharp but in practice it is often psychologically blurry.

(3) Biomedical Constraints. Policy-makers are human beings (despite occasional attempts by propagandists to obscure that fact) and, as such, subject to the scourges of the flesh. There is now much evidence that cerebral-vascular and neurological illnesses have impaired policy judgment at several junctures in modern diplomatic history (Park, 1986; Post and Robbins, 1993). One can make a good case, for example, that the origins and course of World War II cannot be understood without knowledge of the health of key players at critical choice points. It is well documented that: (a) Paul von Hindenburg, the German president who handed the chancellorship to Hitler, showed serious signs of senility before passing the torch to his tyrannical successor (at a time when resistance was still an option); (b) Ramsay MacDonald (British Prime Minister) and Jozef Pilsudski (Polish head of state) showed palpable signs of mental fatigue in the early 1930's -- exactly when assertive British and Polish action might have nipped the Nazi regime in the bud; (c) during the latter part of World War II, Hitler showed symptoms of Parkinson's disease and suffered serious side-effects from the dubious concoctions of drugs that his doctors prescribed; (d) between 1943 and 1945, Franklin Roosevelt suffered from atherosclerosis and acute hypertension as well as from a clouding of consciousness known as "encephalopathy".

Claims of biomedical causation do, however, sometimes prove controversial. Consider the debate concerning Woodrow Wilson's rigidity in the wake of World War I, when he needed to be tactically flexible in piecing together Congressional support for American entry into the League of Nations. Whereas George and George (1981) offer a neo-Adlerian interpretation that depicts Wilson as a victim of narcissistic personality disorder in which threats to an idealized self-image, especially from people resembling authority figures from the past, trigger rigid ego-defensive reactions, neurologists trace the same combination of traits -- stubbornness, overconfidence, suspiciousness -- to the hypertension and cerebral-vascular disease that ultimately led to Wilson's devastating stroke and death (Weinstein et al., 1978).

(4) Personality and Policy Preferences. From a neorealist perspective, foreign policy is constrained by the logic of power within the international system. Individual differences among political elites are thus largely inconsequential--virtually everyone who matters will agree on what constitutes the "rational" response. Although some crises do produce such unanimity (e.g., the American response to the attack on Pearl Harbor), in most cases large differences of opinion arise. By combining laboratory and archival studies, researchers have built a rather convincing case for some systematic personality influences on foreign policy (Etheredge, 1980; Greenstein, 1975; M. Hermann, 1977, 1987; Runyan, 1988; Tetlock, 1981a; Walker, 1983; Winter, 1992).

(i) Interpersonal Generalization. This hypothesis depicts foreign policy preferences as extensions of how people act toward others in their everyday lives. In archival analyses of disagreements among American policy makers between 1898 and 1984, Etheredge (1980) and Shephard (1988) found that policy preferences were closely linked to personality variables. Working from biographical data on the personal relationships of political leaders, coders rated leaders on interpersonal dominance (strong need to have their way and tendency to respond angrily when thwarted) and extroversion (strong need to be in the company of others). As predicted, dominant leaders were more likely to resort to force than their less dominant colleagues, and extroverted leaders advocated more conciliatory policies than their more introverted colleagues. Laboratory work suggests causal pathways for these effects, including perceptual mediators (dominant people see high-pressure tactics as more efficacious) and motivational mediators (dominant people try to maximize their relative gains over others (like neorealists) whereas less dominant people try to maximize either absolute gains (like neoclassical economists) or joint gains (like communitarian team players)). (See Bem and Funder, 1978; Brewer & Kramer, 1985; Sternberg and Soriano, 1984).

(ii) Motivational Imagery. In a programmatic series of studies, Winter (1993) has adapted the content analysis systems for assessing motivational imagery in the semi-projective Thematic Apperception Test (TAT) to analyze the private and public statements of world leaders. He has also proposed a psychodynamic conflict-spiral model which posits that combinations of high power motivation and low affiliation motivation encourage resort to force in international relations. Winter (1993) tested this prediction against archival materials drawn from three centuries of British history, from British-German communications prior to World War I, and from American-Soviet communications during the Cuban Missile Crisis. In each case, Winter observed the predicted correlations between motives and war versus peace. In another study, Peterson, Winter, and Doty (1994) linked motivational theory to processes of misperception hypothesized to occur in conflict spirals (cf. Kelman & Bloom, 1973). In this integrative model, international conflicts escalate to violence when three conditions are satisfied: a) there are high levels of power motivation in the leadership of both countries; b) each side exaggerates the power imagery in communications from the other side; c) each side expresses more power motivation in response to its exaggerated perceptions of the power motivation of the other side. In ingenious laboratory simulations, Peterson et al. used letters exchanged in an actual crisis as stimulus materials and showed that subjects with high power motivation were especially likely to see high power motivation in communications from the other side and to recommend the use of force.

The motive-imagery explanation is parsimonious. The same content analytic method yields similar relationships across experimental and archival settings. But the interpretive difficulty is the same as that encountered in integrative complexity research: the possibility of spurious multi-method convergence. Political statements cannot be taken as face-value reflections of intrapsychic processes. Leaders use political statements both to express internalized beliefs and goals and to influence the impressions that others form of how leaders think. There are already good reasons for supposing that people can strategically raise and lower their integrative complexity (Tetlock, 1981a, 1992a) and it would be surprising if motivational imagery were not also responsive to shifts in impression management goals.

(iii) Computerized content-analytic programs. In contrast to the complex semantic and pragmatic judgments required in integrative-complexity and motive-imagery coding, Hermann (1980, 1987) has developed computerized methods of text analysis that rely on individual word and co-occurrence counts and are designed to assess an extensive array of belief system and interpersonal style variables studied in the personality literature (including nationalism, trust, cognitive complexity, locus of control, and achievement, affiliation, and power motives). Although we know less than we need to know about the interrelations among content analytic measures of similar theoretical constructs (but see Winter, Hermann, Weintraub, and Walker, 1991), we have learned a good deal about the political and behavioral correlates of Hermann's indicators. For instance, nationalism/ethnocentrism and distrust of others are two well-replicated components of authoritarianism and, within samples of national leaders, tend to be linked to hostility and negative affect toward other nations and an unwillingness to come to their aid. Hermann's measure of cognitive complexity tends to be linked to expressing positive affect toward other states and to receiving positive responses from them. These two results are strikingly compatible with Tetlock's (1981b) work on isolationism and (1985) work on American and Soviet foreign policy rhetoric. As with Winter's, Suedfeld's and Tetlock's work, there is, however, the troublesome difficulty of disentangling intrapsychic from impression management explanations. Are we measuring underlying psychological processes or public posturing?
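
The basic mechanics of such word-count systems are simple, even if the validation issues are not. The sketch below (in Python) illustrates dictionary-based scoring of a text; the category labels and word lists are invented for illustration and bear no relation to Hermann's actual coding dictionaries.

    # A bare-bones illustration of dictionary-based content analysis of the kind
    # described above. The word lists are invented and purely illustrative.

    import re
    from collections import Counter

    # Hypothetical category dictionaries (lower-case terms).
    DICTIONARIES = {
        "distrust":    {"deceive", "betray", "threat", "hostile", "plot"},
        "nationalism": {"homeland", "sovereign", "our nation", "national honor"},
    }

    def score_text(text):
        """Return the per-1000-word rate of each category's terms in a text.
        Crude substring matching is adequate for a sketch; real systems would
        need word-boundary handling, stemming, and disambiguation rules."""
        words = re.findall(r"[a-z']+", text.lower())
        total = max(len(words), 1)
        counts = Counter()
        for category, terms in DICTIONARIES.items():
            for term in terms:
                counts[category] += text.lower().count(term)
        return {cat: 1000 * n / total for cat, n in counts.items()}

    speech = ("They plot against our nation and will betray any agreement. "
              "The homeland faces a hostile threat to its sovereign rights.")
    print(score_text(speech))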

E. Placing Psychological Processes in Political Context

Thus far, we have been content to identify cognitive and motivational processes and to ask (a) how well do the hypothesized processes hold up in this or that political context? (b) what political consequences flow from the operation of the hypothesized processes? (c) does the cumulative weight of multi-method evidence give us strong grounds for doubting rational-actor models of world politics?

Policy-makers do not, however, function in a social vacuum. Most national security decisions are collective products, the result of intensive interactions among small groups, each of which represents a major bureaucratic, economic, or political constituency to whom decision-makers feel accountable in varying ways and degrees. Considerations such as “Could I justify this proposal or agreement to group x or y?” loom large in biographical, autobiographical and historical accounts of governmental decision-making. Indeed, the major function of thought in political contexts is arguably the anticipatory testing of political accounts for alternative courses of actions: If “we” did this, what would “they” say? How could we reply? Who would emerge from the resulting symbolic exchange in a more favorable light? (Cf. Farnham, 1990; Kramer, 1995; Schlenker, 1980; Tetlock, 1992a).

The key question now becomes how cognitive and motivational properties of individual decision-makers interact with the matrix of political accountability relationships within which those decision-makers live and work. The literature points to a panoply of possibilities. Certain types of political accountability magnify shortcomings of individual judgment; other types check these shortcomings. There is plenty of room for argument, however, over whether any given accountability arrangement falls in the former or latter category (where, for example, should we place democratic accountability?). And, once again, there is vigorous disagreement over exactly what counts as "improvement" in the quality of judgment and choice. (For more detailed taxonomies of foreign policy-making systems, see Hermann & Hermann, 1989; Hermann & Preston, 1994; t'Hart, Stern, & Sundelius, 1995.)

1. Accountability demands likely to amplify deviations from rational-actor standards. The experimental literature suggests at least three sets of conditions (often satisfied in foreign policy settings) in which pressures to justify one’s decisions will interfere with high-quality decision-making: (a) one is accountable to an important constituency whose judgment is deeply flawed; (b) one feels insulated from accountability to out-groups but is highly motivated to please a homogeneous and self-satisfied in-group (as in “groupthink”); (c) one is accountable for difficult-to-reverse decisions that cast doubt on one’s competence or morality.

(i) The perils of foolish audiences. Decision-makers sometimes experience intense pressure to subordinate their own preferences to those of the constituency to whom they must answer (because their jobs, sometimes even their lives, depend on doing so). Insofar as decision-makers find themselves accountable to a fickle, superficial or impulsive constituency, the result will often be a decision considerably worse than the one they would have made on their own.

This process can occur in dictatorships or in democracies. In dictatorships, the argument reminds us that enormous power might be centralized in pathological personalities (Hitlers, Stalins, Saddam Husseins, ...) whose judgment no senior advisor dares to challenge. In democracies, the argument reminds us that leaders are ultimately accountable to public opinion which, in the foreign policy domain, has historically been characterized as ill-informed, volatile and incoherent. Half a century ago, Gabriel Almond warned that the net effect of public opinion was to increase "irrationality" (1950, p. 239). And George Kennan (1951, p. 59) saw democratic accountability and rational foreign policy as inherently incompatible, comparing democracies to a "dinosaur with a body as long as this room and a brain the size of a pin: he lies there in his comfortable primeval mud and pays little attention to his environment; he is slow to wrath -- in fact, you practically have to whack his tail off to make him aware that his interests are being disturbed; but, once he grasps this, he lays about him with such determination that he not only destroys his adversary but largely wrecks his native habitat."

Although more recent analyses cast some doubt on this view of public opinion in general (Page & Shapiro, 1992; Popkin, 1991; Sniderman et al., 1991; Sniderman et al., 1996) and of foreign policy attitudes in particular (Holsti, 1992; Russett, 1990; but see Zaller, 1991), there is still a strong case against tight democratic oversight of foreign policy (Kissinger, 1993). This elitist position -- "let the professionals get on with the job" -- carries, however, its own risks. Lack of accountability can lead to lack of responsiveness to the legitimate concerns of now-excluded constituencies. For every example that the elitists can invoke (e.g., F.D.R. was far more alert to the Nazi threat than was the isolationist public in the 1930's), the proponents of democratic accountability can offer a counter example (e.g., the "best and the brightest" of the Kennedy-Johnson administration had to mobilize an apathetic public to support American intervention in Vietnam in the 1960's). The proponents of democratic accountability can also point to a remarkably robust statistical generalization: democracies virtually never go to war against each other (Russett, 1996), although they are no less prone to war with dictatorships. The mechanisms--psychological and institutional--underlying this "democratic-peace effect" remain controversial but the finding should give pause to those who argue that elites, left to their own devices, are best equipped to run foreign policy.

The foregoing analysis points to good arguments for both loose and tight democratic oversight of foreign policy. The deeper challenge is how to manage the inevitable trade-off between the demands of short-run accountability (maximize immediate public approval) and long-run accountability (craft foreign policies that may be unpopular now but yield benefits over several decades). This "accountability dilemma" (March and Olson, 1995) was the subject of intense debate two centuries ago (Burke, 1774/1965) -- a debate that continues to rage today (especially in domains thought too esoteric for public comprehension like monetary and foreign policy) and will probably rage two centuries hence.

(ii) The perils of oligarchy. With the noteworthy exceptions of totalitarian states that centralize extraordinary authority in one person whose will is law (Bullock, 1991), accountability in the political world rarely reaches the zero point. Accountability can, however, become intellectually incestuous when policy makers expect to answer only to like-minded colleagues and constituencies. This concentration of accountability to an in-group is a defining feature of groupthink (Janis, 1982). The combination of opinionated leadership, insulation from external critics, and intolerance of dissent often appears sufficient to amplify already dangerous tendencies in individual judgment. Groupthink decision-makers are more prone to jump to premature conclusions, to dismiss contradictory evidence, to deny trade-offs, to bolster preferred options, to suppress dissent within the group and to display excessive optimism. According to Janis, the result is often the undertaking of ill-conceived foreign policy projects that lead to disastrous consequences such as provoking Chinese intervention in the Korean War, the abortive Bay of Pigs invasion of Cuba, and the escalation of the Vietnam War. Janis contrasted these "fiascoes" with cases such as the Marshall Plan and the Cuban Missile Crisis in which policy-making groups adopted a much more self-critical and thoughtful style of decision-making that led to far more satisfactory outcomes (given the values of the decision-makers).

Janis’s case studies represent the best-known effort to apply work on group dynamics to elite political settings. The groupthink model does, however, have serious limitations (t’Hart, Stern, & Sundelius, 1995). First, the evidence -- from case studies to experiments -- is mixed. Close inspection of case studies underscores the ambiguity of many diagnoses of “groupthink” in foreign policy contexts. For example, comparing Berman’s (1982) and Janis’s (1982) accounts of Johnson’s decision to intervene in Vietnam, one almost needs to be a mindreader to determine whether: (a) a manipulative Johnson had made up his mind in advance and used group deliberations merely to justify a predetermined policy; (b) an uncertain Johnson leaned heavily upon a cliquish advisory group for cognitive and emotional support. Equally mixed is the content analytic and Q-sort evidence (Tetlock, 1979; Tetlock et al., 1992; Walker & Watson, 1994). These studies have supported some aspects of the model (increased rigidity and self-righteousness in hypothesized cases of groupthink) but not others (there is little evidence that cohesiveness alone or in interaction with other antecedents contributes to defective decision-making). And laboratory studies have been even less supportive of the hypothesized necessary and sufficient conditions for defective decision-making (Aldag & Fuller, 1993; Turner et al., 1993) -- although defenders of the model can always invoke the external-validity argument that experimental manipulations pale next to their dramatic real-life counterparts.

Second, the groupthink model oversimplifies process-outcome linkages in world politics and probably other spheres of life (t’Hart, 1994; Tetlock et al., 1992). It is easy to identify cases in which concurrence-seeking has been associated with outcomes that most observers now applaud (e.g., Churchill’s suppression of dissent in cabinet meetings in 1940-41 when some members of the British government favored a negotiated peace with Hitler) and cases in which vigilant decision-making has been associated with outcomes that left group members bitterly disappointed (e.g., Carter encouraged rather vigorous debate over the wisdom of the hostage-rescue mission in Iran in 1980). The correlation between quality of process and of outcome was perfectly positive in Janis’ (1982) case studies but is likely to be much lower in more comprehensive samplings of decision-making episodes (Bovens & t’Hart, 1996). We need contingency theories that identify: (a) the distinctive patterns of group decision-making that lead, under specified circumstances, to political success or failure (Stern & Sundelius, 1995; Vertzberger, 1995); (b) the diverse organizational and societal functions that leadership groups serve. Groups do not just exist to solve external problems; they provide symbolic arenas in which, among other things, bureaucratic and political conflicts can be expressed, support for shared values can be reaffirmed and potentially divisive trade-offs can be concealed (t’Hart, 1995). As with cognitive biases, patterns of group decision-making judged maladaptive within a functionalist framework that stresses scientific problem-solving appear quite reasonable from functionalist perspectives that stress other imperatives such as the needs for quick, decisive action, forging a united front and mobilizing external support.

(iii) The perils of backing people into a corner. The timing of accountability can be critical in political decision-making. Experimental work suggests (Brockner & Rubin, 1985; Staw, 1980; Tetlock, 1992a), and case studies tend to confirm, that policy makers who are accountable for decisions that they cannot easily reverse often concentrate mental effort on justifying these earlier commitments rather than finding optimal courses of action given current constraints. These exercises in retrospective rationality -- whether viewed as dissonance reduction or impression management -- will tend to be especially intense to the degree that earlier decisions cast doubt on decision-makers’ integrity or ability.

Situations of this sort -- military quagmires such as Vietnams and Afghanistans or financial quagmires such as unpromising World Bank projects with large sunk costs -- are common in political life. Indeed, the primary job of opposition parties in democracies is to find fault with the government and to refute the justifications and excuses that the government offers in its defense. The psychodynamics of justification become politically consequential when they extend beyond verbal sparring at press conferences and bias policy appraisals. After all, if one convinces oneself and perhaps others that a bad decision worked out pretty well, it starts to seem reasonable to channel even more resources into the same cause. Such sincerity can be deadly when decision-makers must choose among courses of action in international confrontations under the watchful eyes of domestic political audiences (Fearon, 1994).

2. Accountability demands likely to motivate self-critical thought. The experimental literature also identifies two sets of conditions (often satisfied in foreign policy settings) in which justification pressures will encourage high-quality decision-making: (a) one is not locked into any prior attitudinal commitments and one is accountable to an external constituency whose judgement one respects and whose own views are unknown; (b) one is accountable to multiple constituencies that make contradictory but not hopelessly irreconcilable demands.

(i) The benefits of normative ambiguity. In many institutions, there is a powerful temptation to curry the favor of those to whom one must answer. The right answer becomes whatever protects one's political identity. This incentive structure can encourage a certain superficiality and rigidity of thought. One possible corrective -- proposed by George (1972) and Janis (1982) -- is to create ambiguity within the organization concerning the nature of the right answer. The rationale is straightforward: ambiguity will motivate information search. People will try to anticipate a wider range of objections that a wider range of critics might raise to their policy proposals. Moreover, insofar as people do not feel "frozen" into previous public commitments, they will not simply attempt to refute potential objections; they will attempt to incorporate those "reasonable" objections into their own cognitive structure, resulting in a richer, dialectically complex, representation of the problem (Tetlock, 1992a). From this perspective, wise leaders keep subordinates guessing about what the "right answer" is.

(ii) The benefits of political pluralism. A long and illustrious tradition upholds the cognitive benefits of political pluralism (Dahl, 1989; George, 1980; Mill, 1857/1960). Political elites can be compelled to be more tolerant and open-minded than they otherwise would have been by holding them accountable to many constituencies whose voices cannot be easily silenced. There is considerable experimental evidence to support this view (Nemeth and Staw, 1989; Tetlock, 1992a) -- although paralysis in the form of chronic buckpassing and procrastination is always a danger (Janis & Mann, 1977; Tetlock & Boettger, 1994).

Accountability cross-pressure is arguably the defining feature of life for international negotiators who must cope with "two-level games" (Evans et al., 1992; Putnam, 1988) that require simultaneously satisfying international adversaries as well as domestic constituencies, including government bureaucracies, interest groups, legislative factions, and the general public. Negotiators who rely on the simple "acceptability" heuristic will generally fail in this environment either because they put too much weight on reaching agreement with other powers (and lose credibility with domestic constituencies) or because they concede too much veto power to domestic audiences (and lose necessary bargaining flexibility with other powers). Pruitt's (1981) model of negotiation behavior suggests that decision-makers who are subject to high role or value conflict -- who want to achieve positive outcomes for both sides -- are most likely to search vigilantly for viable integrative agreements that fall within the "win-sets" of both domestic constituencies and international negotiating partners. There is a reasonable chance, moreover, that their search will be successful, permitting agreements that hitherto seemed impossible because observers had fallen prey to the "fixed-pie" fallacy and concluded that positive-sum conflicts were zero-sum (Thompson, 1990). To take two recent momentous examples, most experts in 1988 thought a negotiated, peaceful transition to multiracial democracy in South Africa or an Israeli-P.L.O. peace treaty was either improbable or impossible in the next 10 years (Tetlock, 1992b). Unusually integratively complex leadership of the key political movements may well have played a key role in confuting the expert consensus (cf. Kelman, 1983). But the rewards of searching for integratively complex agreements will be meager when the intersection of win-sets is the null set. Instead of being short-listed for Nobel peace prizes, some integratively complex compromisers -- those who tried to reconcile American democracy and slavery in the 1850's or British security and Nazism in the 1930's -- find themselves denounced in the historical docket as unprincipled appeasers (Tetlock et al., 1994; Tetlock & Tyler, 1996).
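
The "win-set" logic invoked above can be given a stylized rendering. In the sketch below (in Python), each constituency's win-set is reduced to an interval on a single bargaining dimension; the dimension, the constituencies, and all numbers are hypothetical, and real win-sets are of course multidimensional and fuzzy. The second case illustrates the situation in which the intersection of win-sets is empty and integratively complex compromise search is futile.

    # A stylized sketch of the "two-level game" logic: an agreement is feasible
    # only if it falls inside the win-set of every constituency, domestic and
    # foreign. The single dimension and all numbers are hypothetical.

    def win_set_intersection(win_sets):
        """Each win-set is an (acceptable_minimum, acceptable_maximum) interval
        on one bargaining dimension; return their intersection, or None."""
        lower = max(lo for lo, hi in win_sets.values())
        upper = min(hi for lo, hi in win_sets.values())
        return (lower, upper) if lower <= upper else None

    # Bargaining dimension: share of concessions made by "our" side (0-100).
    win_sets = {
        "foreign negotiating partner": (40, 100),
        "domestic legislature":        (0, 55),
        "key interest group":          (30, 60),
    }

    print(win_set_intersection(win_sets))        # (40, 55): agreement possible

    # Shrink the legislature's tolerance and the intersection becomes empty.
    win_sets["domestic legislature"] = (0, 35)
    print(win_set_intersection(win_sets))        # None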

F. Reprise

The literature gives us many reasons for suspecting that the policy process deviates, sometimes dramatically, from the rational actor baseline. One could, however, make a social psychological case that rationality is not a bad first-order approximation of decision-making in world politics. The argument would stress the elaborate procedures employed in many states to screen out "irrational decision-makers", the intricate checks and balances designed to minimize the influence of deviant decision-makers who get through the screening mechanisms, the intense accountability pressures on decision-makers to make choices in a rigorous, security-maximizing fashion, and the magnitude of the decision-making stakes and the capacity of people to shift into more vigilant modes of thinking in response to situational incentives (Payne, Bettman, & Johnson, 1992). On balance though, the case for error and bias is still stronger than the case for pure rationality. Some biases and errors appear to be rooted in fundamental associative laws of memory (e.g., priming of analogies and metaphors) and psychophysical laws of perception (e.g., status quo as reference point) where individual differences are weak and incentive effects are negligible (Arkes, 1991; Camerer, 1995). Nonetheless, the choice is not either/or, and the literature itself may be biased in favor of ferreting out bloopers that enhance the prestige of psychological critics who enjoy the benefits of hindsight. As a discipline, we need to be at least as self-critical as we urge others to be.

