Ethical Leadership and the Psychology of Decision Making

Changes in today’s business environment pose vexing ethical challenges to executives. We propose that unethical business decisions may stem not from the traditionally assumed trade-off between ethics and profits or from a callous disregard of other people’s interests or welfare, but from psychological tendencies that foster poor decision making, both from an ethical and a rational perspective. Identifying and confronting these tendencies, we suggest, will increase both the ethicality and success of executive decision making.

Executives today work in a moral minefield. At any moment, a seemingly innocuous decision can explode and harm not only the decision maker but also everyone in the neighborhood. We cannot forecast the ethical landscape in coming years, nor do we think that it is our role to provide moral guidance to executives. Rather, we offer advice, based on contemporary research on the psychology of decision making, to help executives identify morally hazardous situations and improve the ethical quality of their decisions.

Psychologists have discovered systematic weaknesses in how people make decisions and process information; these new discoveries and theories are the foundation for this paper. These discoveries involve insights into errors that people make when they estimate risks and likelihoods, as well as biases in the way they seek information to improve their estimates. There are new theories about how easily our preferences can be influenced by the consequences we consider and the manner in which we consider them. Social psychologists have new information about how people divide the world into “us” and “them” that sheds new light on how discrimination operates. Finally, there has been important new research into the dimensions along which people think that they are different from other people, which helps explain why people might engage in practices that they would condemn in others.1

We focus on three types of theories that executives use in making decisions — theories about the world, theories about other people, and theories about ourselves. Theories about the world refer to the beliefs we hold about how the world works, the nature of the causal network in which we live, and the ways in which our decisions influence the world. Important aspects of our theories about the world involve our beliefs about the probabilistic (or deterministic) texture of the world and our perceptions of causation.

Theories about other people are our organized beliefs about how “we” are different from “they.” Interestingly, “they” may be competitors, employees, regulators, or foreigners, and whoever is “we” today may be “them” tomorrow. Our beliefs about others influence the ways in which we make judgments and decisions about other people, and these influences are often unconscious.

Finally, we all correctly believe that we are unique individuals. However, theories about ourselves lead us to unrealistic beliefs about ourselves that may cause us to underestimate our exposure to risk, take more than our fair share of the credit for success (or too little for failure), or be too confident that our theory of the world is the correct one. If most of the executives in an organization think that they are in the upper 10 percent of the talent distribution, there is the potential for pervasive disappointment.

Our discussion of these three theories focuses on the ways they are likely to be incorrect. Our message, however, is not that executives are poor decision makers. We focus on problem areas because they are the danger zones where errors may arise. They are the places where improvements may be achieved, areas in which executives would like to change their decision making if only they better understood their existing decision processes.

Theories about the World

Successful executives must have accurate knowledge of their world. If they lack this knowledge, they must know how to obtain it. One typical challenge is how to assess the risk of a proposed strategy or policy, which involves delineating the policy’s consequences and assessing the likelihood of various possibilities. If an executive does a poor assessment of a policy’s consequences, the policy may backfire and cause financial as well as moral embarrassment to the firm and the decision maker. There are three components to our theories of the world: the consideration of possible consequences, the judgment of risk, and the perception of causes.

The Cascade of Consequences

A principle in ecology that Hardin has called the First Law of Ecology is, simply stated, “You can never do just one thing.”2 Major decisions have a spectrum of consequences, not just one, and especially not just the intended consequence. Everyday experience as well as psychological research suggests that, in making complex choices, people often simplify the decision by ignoring possible outcomes or consequences that would otherwise complicate the choice. In other words, there is a tendency to reduce the set of possible consequences or outcomes to make the decision manageable. In extreme cases, all but one aspect of a decision will be suppressed, and the choice will be made solely on the basis of the one privileged feature. The folly of ignoring a decision’s possible consequences should be obvious to experienced decision makers, but there are several less obvious ways in which decision errors can create moral hazards. The tendency to ignore the full set of consequences in decision making leads to the following five biases: ignoring low-probability events, limiting the search for stakeholders, ignoring the possibility that the public will “find out,” discounting the future, and undervaluing collective outcomes.

  • Ignoring Low-Probability Events. If a new product has the potential for great acceptance but a possible drawback, perhaps for only a few people, there is a tendency to underestimate the importance of the risk. In the case of DES (diethylstilbestrol), a synthetic estrogen prescribed for women with problem pregnancies, there was some early indication that the drug was associated with a higher than normal rate of problems not only in pregnant women but also in their daughters. The importance of this information was insufficiently appreciated. Worrisome risks may be ignored if they threaten to impede large gains.
  • Limiting the Search for Stakeholders. DES’s most disastrous effects did not befall the consumers of the drug, namely, the women who took it; the catastrophe struck their daughters. When there is a tendency to restrict the analysis of a policy’s consequences to one or two groups of visible stakeholders, the decision may be blind-sided by unanticipated consequences to an altogether different group. A careful analysis of the interests of the stakeholders (those persons or groups whose welfare may be affected by the decision under consideration) is essential to reasonably anticipating potential problems. A basic tenet of moral theories is to treat people with respect, which can be done only if the interests of all concerned people are honestly considered. Assessing others’ interests would have required research, for instance, on the long-term effects of DES.
  • Ignoring the Possibility That the Public Will “Find Out.” The stakeholder who should always be considered is the public in general. Executives should ask, “What would the reaction be if this decision and the reasons for it were made public?” If they fear this reaction, they should reconsider the decision. One reason for the test is to alert executives that if the decision is made, they will have to conceal it to avoid adverse public response. The need to hide the decision, and the risk that the decision and its concealment might be disclosed, become other consequences to face. The outrage provoked by the revelation that a crippling disease, asbestosis, was caused by asbestos exposure was partly due to the fact that Johns Manville had known about and hidden this relationship for years while employees and customers were continuously exposed to this hazard. A decision or policy that must be hidden from public view has the additional risk that the secret might be revealed. Damage to self-respect and institutional respect of those who must implement and maintain the concealment should also be considered a consequence.
  • Discounting the Future. The consequences that we face tomorrow are more compelling than those we must address next week or next year. The consequences of decisions cascade not only over people and groups, but also over time. Figuring out how to address the entire temporal stream of outcomes is one of the most challenging tasks executives face. Policy A will earn more money this year than Policy B, but a year from now, if we get there, Policy B will probably leave us stronger than Policy A (a brief numerical sketch of this trade-off follows this list). Theories of the world that fail to cope with the temporal distribution of consequences will not only leave executives puzzled about why they are not doing better; they will also expose executives to accusations that they squandered the future to exploit the present. The tendency to discount the future partly explains the decaying urban infrastructure, the U.S. budget deficit, the collapse of fisheries, global warming, and environmental destruction. While there is much debate about the severity of these problems, in each instance the key decision makers have clearly underweighted the future when a balanced decision was called for.
  • Undervaluing Collective Outcomes. Accurate theories of the world must also be sensitive to the collective consequences of decisions. When E.F. Hutton’s managers decided to earn money by kiting checks, not only did they put the reputation of their own firm in jeopardy, they also endangered the reputation of the entire securities industry. When a chemical firm decides to discharge waste into a public lake, it pollutes two collective resources, the lake and the reputation of the chemical industry in general. There is a tendency to treat these collective costs as externalities and to ignore them in decision making. To do so, however, is to ignore a broad class of stakeholders whose response could be, “If they voluntarily ignore the collective interests, then it is in the collective interest to regulate their activity.”
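
As promised in the “Discounting the Future” item above, here is a minimal numerical sketch of that trade-off. The cash flows and discount rates are invented purely for illustration; the only point is that the rate at which an executive discounts the future can reverse the ranking of two policies like Policy A and Policy B.

    # Invented payoffs (in thousands): Policy A pays off early, Policy B pays more but later.
    policy_a = [300, 100, 100]   # this year, next year, the year after
    policy_b = [100, 250, 250]

    def present_value(cash_flows, discount_rate):
        """Discount each year's payoff back to today."""
        return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

    for rate in (0.0, 0.10, 0.40):
        pv_a = present_value(policy_a, rate)
        pv_b = present_value(policy_b, rate)
        print(f"discount rate {rate:.0%}: A = {pv_a:.0f}, B = {pv_b:.0f}")
    # 0%: A = 500, B = 600; 10%: A = 474, B = 534; 40%: A = 422, B = 406

Only an executive who discounts the future heavily enough (here, at 40 percent) prefers Policy A; lighter discounting favors Policy B, and the choice of rate is itself a statement about how much tomorrow's stakeholders count.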

Ethical decisions must be based on accurate theories about the world. That means, at a minimum, examining the full spectrum of a decision’s consequences. Our perspective suggests that a set of biases reduces the effectiveness of the search for all possible consequences. It is interesting to evaluate the infamous Pinto decision from this consequential perspective. Ford executives knew that the car had a fire risk, but the cost they associated with it was small. Their deliberations gave no consideration to their customers’ interests. They made no effort to ask car buyers if they were willing to pay an extra $10 to shield the gas tank. The Pinto decision proved a colossal embarrassment to Ford; when the documents were released, the effort to conceal the decision failed, and public opinion, already sensitized to auto safety by Ralph Nader’s Unsafe at Any Speed, ran deeply and strongly against Ford.3 The public felt that there was a collective interest in automobile safety and that Ford and, by association, the other auto manufacturers, were indifferent to that concern. From the public’s perspective, it would be foolish to permit unethical firms to police themselves.

Judgment of Risk

Theories of the world will be inaccurate if they systematically fail to account for the full spectrum of consequences associated with decisions. And they will be inaccurate if they systematically err in assessing the probabilities associated with the consequences. Let’s first consider these two scenarios:

  • A tough-minded executive wants to know if the company’s current promotion practices have caused any specific case of demonstrated discrimination against a minority employee. He explains that he is not interested in vague possibilities of discrimination but is concerned that the firm not do anything that “really” causes discrimination.
  • Edmund Muskie, a candidate in the 1972 U.S. presidential election, borrowed the words of President Harry Truman when he stated that what this country needed was a “one-armed” economist. When asked why, he responded that he was tired of economists who said “on the one hand . . . , but on the other hand. . . . ”

· Denying Uncertainty.

These decision makers are grasping for certainty in an uncertain world. They want to know what will happen or did happen, not what may happen or might have happened. They illustrate the general principle that people find it easier to act as if the world were certain and deterministic rather than uncertain and often unpredictable. The executive in the first scenario wants to know about “real” discrimination, not the possibility of discrimination. Muskie expressed frustration with incessantly hearing about “the other hand.” What people want to hear is not what might happen, but what will happen. When executives act as if the world is more certain than it is, they invite poor outcomes, for both themselves and others. It is simply foolish to ignore risk on one’s own behalf, but it is unethical to do so on behalf of others.

There are some good reasons why people underestimate the importance of chance. One is that they misperceive chance events. When the market goes up on five consecutive days, people find a reason or cause that makes the world seem deterministic (for example, a favorable economic report was published). If the market goes up four days and then down on the fifth, people say a “correction” was due. Statistical market analyses suggest that changes in indices such as the Dow Jones index are basically random. Yet each morning, we are offered an “explanation” in the financial pages of why the market went up or down.

One implication of the belief in a deterministic world is the view that evidence should and can be perfect. The fact that there is a strong statistical relationship between smoking and bad health, for instance, is insufficient to convince tobacco company executives that cigarettes are harmful, because the standard of proof they want the evidence to meet is that of perfection. Any deviation from this standard is used strategically as evidence that smoking is not harmful.

We believe in a deterministic world in some cases because we exaggerate the extent to which we can control it. This illusion of control shows up in many contexts, but it seems maximal in complex situations that play out in the future. The tendency appears in experimental contexts in which people prefer to bet on the outcome of a flip of a coin that has not yet been tossed rather than on one that has already been thrown but whose outcome is unknown to the bettor.4 The illusory sense that a bet may influence the outcome is more acute for future than for past events.

The illusion of control undoubtedly plays a large role in many business decisions. Janis has suggested that President Kennedy’s disastrous decision to invade Cuba at the Bay of Pigs was flawed by, among other things, an erroneous belief that the invasion forces, with U.S. support, could control the battle’s outcome.5 Evidently, the Russian military offered similar assurances to support their attack on Grozny.

One common response to the assertion that executives underestimate the importance of random events is that they have learned through experience how to process information about uncertainty. However, experience may not be a good teacher. In situations in which our expectations or predictions were wrong, we often misremember what our expectations, in fact, were. We commonly tend to adjust our memories of what we thought would happen to what we later came to know did happen. This phenomenon, called the “hindsight bias,” insulates us from our errors.6

We fail to appreciate the role of chance if we assume that every event that occurred was, in principle, predictable. The response “I should have known . . .” implies the belief that some future outcome was inherently knowable, a belief incompatible with the fact that essentially random events determine many outcomes. If every effort has been made to forecast the result of a future event, and the result is very different from predictions, it may be ill-advised to blame ourselves or our employees for the failure. This, of course, assumes that we made every effort to collect and appropriately process all the information relevant to the prediction.

· Risk Trade-offs.

Uncertainty and risk are facts of executive life. Many risky decisions concern ethical dilemmas involving jobs, safety, environmental risks, and organizational existence. How risky is it to build one more nuclear power plant? How risky is it to expose assembly-line employees to the chemicals for making animal flea collars? At some point, our decisions are reduced to basic questions like: What level of risk is acceptable? How much is safety worth?

One unhelpful answer to the second question is “any price.” That answer implies that we should devote all our efforts to highway improvement, cures for cancer, reducing product risks, and so on, to the exclusion of productivity. Throughout our lives, dealing with risk requires trading off benefits and costs; however, this is not a process that people find easy. It is much simpler, but completely unrealistic, to say “any price.” The notion that a riskless world can be created is a myth, and one consistent with a theory of the world that minimizes the role of chance.

If we deal irrationally or superficially with risk, costly inconsistencies can occur in the ways we make risk tradeoffs. Experts point out that U.S. laws are less tolerant of carcinogens in food than in drinking water or air. In the United Kingdom, 2,500 times more money per life saved is spent on safety measures in the pharmaceutical industry than in agriculture. Similarly, U.S. society spends about $140,000 in highway construction to save one life and $5 million to save a person from death due to radiation exposure.
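
A rough calculation using the per-life figures quoted above shows what such inconsistency costs. The fixed budget and the even split below are invented solely for illustration.

    # Hypothetical $10 million safety budget split evenly between two programs,
    # using the per-life costs cited in the text.
    cost_per_life = {"highway construction": 140_000, "radiation protection": 5_000_000}
    budget_each = 5_000_000

    lives_saved = {program: budget_each / cost for program, cost in cost_per_life.items()}
    print(lives_saved)           # about 35.7 lives via highways, 1.0 via radiation protection
    print(10_000_000 / 140_000)  # about 71.4 lives if the whole budget went to the cheaper program

In this invented split, a consistent standard would save roughly twice as many lives with the same money, which is precisely the kind of incoherence the following paragraphs question.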

A special premium seems to get attached to situations in which all risk can be eliminated. Consider the following two scenarios:

Scenario A. There is a 20 percent chance that the chemicals in your company’s plant might be causing ten cancer-related illnesses per year. Your company must decide whether to purchase a multimillion-dollar filtration system that would reduce this probability to a 10 percent chance.

Scenario B. There is a 10 percent chance that the chemicals in your company’s plant might be causing ten cancer-related illnesses per year. Your company must decide whether to purchase a multimillion-dollar filtration system that would entirely eliminate this risk.

Evidence suggests that executives would be more likely to purchase the filtration system in scenario B than in scenario A.7 It appears to be more valuable to eliminate the entire risk than to make an equivalent reduction from one uncertain level to another. Rationally, any ten-percentage-point reduction in this risk should have the same value for the decision maker. The “preference for certainty” suggests that a firm might be willing to spend more money to achieve a smaller risk reduction if that smaller reduction totally eliminated the risk. Were this the case, not only would the firm’s decision be wasteful; it would also be unethical, because it fails to accomplish the greatest good with the budget allocated for it.
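
A quick expected-value check makes the equivalence explicit. The only assumption is the one the scenarios imply: expected illnesses equal the stated probability times ten cases per year.

    cases_if_harmful = 10  # cancer-related illnesses per year if the chemicals are to blame

    # Expected illnesses averted by the filtration system in each scenario
    scenario_a = (0.20 - 0.10) * cases_if_harmful   # 20 percent chance cut to 10 percent
    scenario_b = (0.10 - 0.00) * cases_if_harmful   # 10 percent chance cut to zero

    print(scenario_a, scenario_b)  # 1.0 and 1.0 -- the expected benefit is identical

Whatever premium decision makers are willing to pay in scenario B buys the comfort of certainty, not additional protection.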

Perceptions of risk are often faulty, and the frequent result is misdirected risk-reduction efforts by public and private decision makers. Is it not a breach of ethics if incoherent policies save fewer lives at greater costs than other possible policies? Failure to deal explicitly with risk tradeoffs may have created precisely such a situation.

· Risk Framing.

Whether a glass is half-full or half-empty is a matter of risk framing. When the glass is described as half-full, it appears more attractive than when described as half-empty. Similarly, a medical therapy seems more desirable when described in terms of its cure rate than its failure rate. This finding probably occurs because the cure rate induces people to think of the cure (a good thing), whereas an equivalent description in terms of failures induces people to think of failures (not a good thing).

A less obvious effect has been found with regard to the framing of risks. Consider this example:

  • A large car manufacturer has recently been hit with a number of economic difficulties. It appears that it needs to close three plants and lay off 6,000 employees. The vice president of production, who has been exploring alternative ways to avoid the crisis, has developed two plans.

Plan A will save one of the three plants and 2,000 jobs.

Plan B has a one-third probability of saving all three plants and all 6,000 jobs, but has a two-thirds probability of saving no plants and no jobs.

Which plan would you select? There are a number of things to consider in evaluating these options. For example, how will each action affect the union? How will each plan influence the motivation and morale of the retained employees? What is the firm’s obligation to its shareholders? While all these questions are important, another important factor influences how executives respond to them. Reconsider the problem, replacing the choices provided above with the following choices.

Plan C will result in the loss of two of the three plants and 4,000 jobs.

Plan D has a two-thirds probability of resulting in the loss of all three plants and all 6,000 jobs, but has a one-third probability of losing no plants and no jobs.

Now which plan would you select? Close examination of the two sets of alternative plans finds the two sets of options to be objectively the same. For example, saving one of three plants and 2,000 of 6,000 jobs (plan A) offers the same objective outcome as losing two of three plants and 4,000 of 6,000 jobs (plan C). Likewise, plans B and D are objectively identical. Informal empirical investigation, however, demonstrates that most individuals choose plan A in the first set (more than 80 percent) and plan D in the second set (more than 80 percent).8 While the two sets of choices are objectively the same, changing the description of the outcomes from jobs and plants saved to jobs and plants lost is sufficient to shift the prototypic choice from risk-averse to risk-seeking behavior.
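
For readers who want to verify that the two sets of plans are objectively identical, here is a minimal expected-value calculation; it restates the losses in plans C and D as savings so all four plans can be compared directly.

    total_plants, total_jobs = 3, 6_000

    # Expected (plants saved, jobs saved) under each plan
    plan_a = (1, 2_000)
    plan_b = (total_plants / 3, total_jobs / 3)        # one-third chance of saving everything
    plan_c = (total_plants - 2, total_jobs - 4_000)    # losing two plants and 4,000 jobs
    plan_d = (total_plants / 3, total_jobs / 3)        # two-thirds chance of losing everything

    print(plan_a, plan_b, plan_c, plan_d)  # each is one plant and 2,000 jobs in expectation

Only the wording changes, from saved to lost, yet that wording reliably moves the typical choice from the sure outcome of plan A to the gamble of plan D.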

This shift is consistent with research showing that individuals treat risks concerning perceived gains (e.g., saving jobs and plants — plans A and B) differently from risks concerning perceived losses (e.g., losing jobs and plants — plans C and D). The way in which the problem is “framed” or presented can dramatically change how executives respond. If the problem is framed in terms of losing jobs and plants, executives tend to take the risk to avoid any loss. The negative value placed on the loss of three plants and 6,000 jobs is usually perceived as not being three times as bad as losing one plant and 2,000 jobs. In contrast, if the problem is framed in terms of saving jobs and plants (plans A and B), executives tend to avoid the risk and take the sure “gain.” They typically view the gain placed on saving three plants and 6,000 jobs as not being three times as great as saving one plant and 2,000 jobs.

This typical pattern of responses is consistent with a general tendency to be risk averse with gains and risk seeking with losses.9 This tendency has the potential for creating ethical havoc. When thinking about layoffs, for instance, most employees surely focus on their potential job loss. If executives adopt a risk-prone attitude in such situations — that is, if they are willing to risk all to attempt to avoid any losses — they may be seen as reckless and immoral by the very people whose jobs they are trying to preserve. If different stakeholders have different frames, the potential for moral disagreement is great.

Perception of Causes

The final aspect of executives’ theories of the world, perhaps the most important, is the beliefs that executives and other people cherish about the causal texture of the world, about why things happen or don’t happen. Everyone holds beliefs about business successes and failures. As we mentioned earlier, every morning we’re given a reason for why the stock market rose, fell, or stayed the same, thus reinforcing the theory that the world is deterministic. Moreover, judging causal responsibility is often a precursor to judging moral accountability and to blaming or praising a person, organization, or policy for an outcome. However, even under the best of circumstances, causation is usually complex, and ambiguity about causation is often at the heart of disputes about responsibility, blame, and punishment.

Consider, for example, the Herald of Free Enterprise, a ferry that carried automobiles from the Belgian port of Zeebrugge to Dover, England. Several years ago, it capsized in a placid sea a few minutes after leaving Zeebrugge; 180 persons drowned. An investigation determined that the boat capsized because the bow doors, through which the cars enter, had been left open, allowing water to pour into the vessel. The assistant bosun, who was responsible for closing the bow doors, had, tragically, taken a nap.

There were no alarm lights to warn the captain that the doors were open. The captain had requested such lights, but the company had denied his request; it felt warning lights were unnecessary because the first mate monitored the closing. On this occasion, the first mate failed to monitor the bow-door closing because he was needed elsewhere on board due to a chronic manpower shortage. Furthermore, the monitoring system was a “negative” check system, which means that signals were sent only if problems were detected. The lack of a signal was construed as an indication that all was well; the captain did not have to wait for a “positive” signal from the boat deck. Finally, there was the question of why water entered the ship since the bow doors are normally several meters above sea level. The answer was that the ship had taken on ballast to enable it to take cars onto the upper car deck. The captain had not pumped out the ballast before departing because he needed to make up twenty minutes to get back on schedule. Thus the ship left harbor at full throttle, creating a bow wave, with the ship’s bow unusually low in the water.

What caused the Herald of Free Enterprise to capsize? Who is to blame? We have many candidates for blame: the assistant bosun, the first mate, the captain, the person who refused to provide warning lights, the person who instituted the negative check system, and the owners of the line for failing to provide adequate crew for the boat.

· Focus on People.

A central issue in this case is the tendency of most people to blame a person. This principle is at the heart of the slogan of the National Rifle Association, a U.S. lobbying organization for gun manufacturers and users: “Guns don’t kill people, people do.” “Human error” becomes the cause assigned to many accidents involving complex technologies (such as ferries). We tend to blame people because it is easy to imagine them having done something to “undo” or prevent the accident. If the assistant bosun had not fallen asleep, if the first mate had stayed on the car deck to supervise the bow door closing, if the captain had not left the harbor at full speed before pumping the ballast, and so on.

It is less easy to imagine changing the ship’s equipment and procedures, and these appear less salient as a cause of the disaster. The absence of warning lights allowed the ship to depart with the bow doors open. The negative check system invited a nonmessage to be misconstrued as an “all clear” signal. The point is that human “errors” occur within systems that may vary widely in the degree to which they are “error proof.” Our theories about the world usually involve people as the causal agents, rather than environments either that influence people for good or bad or that can compensate for human weaknesses such as drowsiness. From an engineering viewpoint, what is easier to change — warning lights or periodic drowsiness?

· Different Events.

Theories about causes often lead people to disagree, because, as McGill has pointed out, they are explaining different events.10 When Sears introduced a commission-based sales system at its automotive repair shops, there was an increase in consumer complaints, usually accusing the shop of performing unnecessary, expensive work. Sears acknowledged that there had been some “isolated abuses” but denied that the problem was widespread. In subsequent public discussions, some of the controversy confused two phenomena. The first is why a particular employee would recommend and perform unnecessary work. The question, “Why did Jack do this?” may lead to determining how Jack is different from Bill and other employees who did not recommend unnecessary work. These causes answer the question, “Why did Jack do this, while others did not?” Are there changes in Jack’s situation that can explain his misconduct? “Why did Jack do this now, when he did not do it earlier?” is another way to construe this question.

The second question is why Sears had more complaints in the new system. The fact that there was a change raises an important issue: different systems may produce different levels of unethical conduct. If we focus only on Jack, or if we never change the system, we fail to see that the system itself can be a cause of problems. In many cases, a systemic factor such as the method of compensation recedes into the background. If an employee behaves dishonestly, we tend to contrast him or her with honest workers rather than ask whether something in the system is encouraging dishonesty. When we change situations, we can sometimes see that an organization’s features can have a causal impact on human actions, analogous to what happens when a community is exposed to a carcinogenic agent. The overall cancer rate in the community will increase, but it may be difficult ever to determine whether any specific individual’s cancer was caused by the toxin. There may be convincing proof that the agent is a cause of cancer in the community generally, but not of any particular cancer.

· Sins of Omission.

We have no problem judging that the assistant bosun bears some responsibility for the passenger deaths on the Herald of Free Enterprise, even though his contribution to the disaster was a failure to act. In many other situations, in which expectations and duties are not as well defined as they were with the Herald, a failure to take an action is used to shield persons from causal and, hence, moral responsibility. Is a public health official who decides not to authorize mandatory vaccinations responsible for the deaths of those who succumb to the disease?11 Is the executive who fails to disclose his knowledge of a colleague’s incompetence responsible for the harm that the colleague causes the firm? Many people would answer these questions in the negative, largely because they perceive that the immediate cause of the deaths or harm is the virus or incompetence. But since the actions of the public health official and the executive could have prevented the harm, their actions are logically in the same category as those of the assistant bosun. It is an old adage that evil prevails when good people fail to act, but we rarely hold the “good” people responsible for the evil.

Theories about Other People

An executive’s social world is changing at least as fast as his or her physical world. The internationalization of manufacturing and marketing exposes executives to very different cultures and people, and they need to be tolerant of different customs, practices, and styles. More women are entering the work force. In the United States, both the African American and Latino populations are growing faster than the Anglo population, a demographic fact reflected in labor markets. Also, the United States, like many other nations, prohibits employment discrimination on the basis of religion, race, gender, age, and other types of social or personal information. This combination of factors — the increasing social diversity of the business world and the inappropriateness of using such social information in making decisions — creates many ethical hazards that executives must avoid. Incorrect theories about social groups — about women, ethnic minorities, or other nationalities — increase executives’ danger markedly. In this section, we discuss how executives, like other people, are likely to harbor erroneous theories about other groups.12

Ethnocentrism

The characteristics of our nation, group, or culture appear to us to be normal and ordinary, while others appear foreign, strange, and curious. Implicit in this perception is the assumption that what is normal is good and what is foreign, different, and unusual is less good. This perception that “our” way is normal and preferred and that other ways are somehow inferior has been called ethnocentrism. In the ethnocentric view, the world revolves around our group, and our values and beliefs become the standard against which to judge the rest of the world.

Everyone is ethnocentric to some degree. We probably cannot escape the sense that our native tongue is “natural” while other languages are artificial, that our food is normal while others are exotic, or that our religion is the true one while others are misguided. The fact that ethnocentrism is basic and automatic also makes it dangerous. We do not have to harbor hostile views of members of other groups in order to subtly discriminate. We must merely believe that our own group is superior, a belief that is often actively and officially encouraged by the groups in question and that most of us find all too easy to maintain.

The consequences of ethnocentrism are pervasive. We may describe the same actions of “us” and “them” in words that are descriptively equivalent but evaluatively biased. We are loyal, hard-working, and proud; they are clannish, driven, and arrogant. We are fun loving; they are childish.

Furthermore, “we” tend to be like each other and quite different from “them.” “We” come in all shapes and sizes, while “they” tend to be all alike. We take pleasure in “our” successes and grieve over “our” failures, while we are relatively uncaring about “their” outcomes. We expect aid and support from others of “us” and are more willing to support “us” than “them.” We may not wish “them” harm but would not go out of our way to help “them.” What is curious about this phenomenon is that today “we” may be residents of Chicago and “they” may be rural residents of Illinois, and tomorrow “we” may be Americans and “they” may be Europeans, or “we” may be men and “they” may be women.

Ethnocentric thinking exaggerates the differences between “us” and “them” in ways that can expose leaders to the risk of making ethically unsound decisions. Intensely competitive situations, such as military contexts, illustrate this type of distortion. Military strategists have often made different assumptions about how “we” and “they” will react to intensive attack. They seem to believe that the enemy’s spirit can be broken by a prolonged artillery or bombing attack and associated deprivations. Their belief does not seem to have been weakened by the evidence of Leningrad, London, Dresden, Vietnam, or, more recently, Sarajevo. In all these cases, civilian populations were subjected to intensive, prolonged attack, the main consequence of which seems to have been to strengthen the afflicted people’s resolve to resist the aggressors. U.S. leaders did not share the Japanese belief that a swift and decisive victory over the U.S. Pacific fleet at Pearl Harbor would destroy the American will to wage a Pacific war. These instances reflect the belief that “they” will be more discouraged by extreme hardship than “we” would be. Such theories about “them” proved seriously wrong and immeasurably costly.

It is an error to think that the effects of ethnocentrism are always as momentous or conspicuous as in these examples. Consider the charge of pervasive racial discrimination in mortgage lending. There is evidence that a higher proportion of minority applicants than white applicants are rejected. This difference in rejection rates remains after accounting for the effects of differences in income, employment stability, credit history, and other indicators of credit-worthiness. Yet mortgage bankers vigorously deny that they are harder on minority applicants than on white ones.

Much research indicates that the way ethnocentrism often works is not by denigrating “them” but by rendering special aid to “us.” This has been called the “in-group favoritism” hypothesis.13 In mortgage lending, this hypothesis suggests that the difference in approval rates for whites and minorities may not reflect the fact that qualified minority applicants are denied, but that unqualified white applicants are given loans. This difference has important implications for banks that want to understand and correct the disparity. Establishing a review procedure for rejected minority loans would not be an advisable policy if the in-group favoritism hypothesis is correct, because there may be few, if any, qualified minorities who are rejected. Looking only at rejected minority loans would uncover no evidence of racial discrimination. To find where the discriminatory action lies, the bank needs to examine the marginally unqualified applicants. The in-group favoritism hypothesis predicts that, of this group, more white than minority applicants will be approved.
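
The audit implied by the in-group favoritism hypothesis can be sketched in a few lines. The applicant records below are entirely hypothetical, and the single “clearly qualified” flag stands in for the far more careful credit-worthiness adjustment a real study would require.

    # Hypothetical records: (group, clearly_qualified, approved)
    applications = [
        ("white", False, True), ("white", False, True), ("white", False, False),
        ("minority", False, False), ("minority", False, True), ("minority", False, False),
        ("white", True, True), ("minority", True, True),
    ]

    def marginal_approval_rate(group):
        """Approval rate among applicants who are not clearly qualified."""
        marginal = [approved for g, qualified, approved in applications
                    if g == group and not qualified]
        return sum(marginal) / len(marginal)

    # If favoritism toward "us" is at work, the gap appears here, not among rejected files.
    print(round(marginal_approval_rate("white"), 2),
          round(marginal_approval_rate("minority"), 2))  # 0.67 0.33 in this invented data

A review limited to rejected minority applications would find nothing here, because the advantage is conferred on marginal “in-group” applicants rather than withheld from qualified “out-group” ones.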

Stereotypes

In addition to the “theory” that “our” group is better than others, we often have specific beliefs about particular groups, which constitute implicit theories about people in these groups. We have stereotypes about different nationalities, sexes, racial groups, and occupations. To the extent that we rely on stereotypes rather than information about individuals, we risk making unfair, incorrect, and possibly illegal judgments. The issue here is not the extent to which stereotypes are accurate; the issue is whether people will be judged and evaluated on the basis of their individual qualities or on the basis of their group membership. The fact that women are generally smaller and weaker than men is irrelevant to the question of whether a particular woman is strong enough to perform a physically demanding job.

Like ethnocentrism, stereotypes are dangerous because we are often unaware of their influence. We tend to think that our beliefs about groups are accurate, and we can often draw on experience to support these beliefs. Experience, however, can be a misleading guide. Think about the people whom you consider to be the most effective leaders in your company. What qualities do they have that make them effective? For a purely historical reason, there is a good chance that the people who come to mind as effective leaders are men. For that reason, many of the qualities you associate with effective leadership may be masculine. Consequently, you may find it difficult to imagine a woman who could be an effective leader.

It is instructive to review the origins of the common belief that business leaders are masculine. First, there is the fact that twenty to thirty years ago, almost all businesspeople were men. Thus successful businesspeople today — those who have been in business twenty or thirty years — are also men. If we form our impressions of what it takes to succeed by abstracting the qualities of the successful people we know, a perfectly reasonable process, our impressions will have a distinctly masculine aura. It is not that we have evidence that women do not succeed; rather, we have little evidence about women at all. If you are asked to imagine people in your company who are notorious failures, the people you conjure up would probably also be men. The stereotypical failure is probably also a man.

How can we guard against the dangers of ethnocentric and stereotypical theories? Starting with ethnocentrism, we should question arguments based on the belief that “they” are different from “us.” The safest assumption to make, in the absence of contrary evidence, is that “they” are essentially the same as “us” and that if we want to know how “they” will react to a situation, a wise first step is to ask how “we” would react. Historically, far more harm has been incurred by believing that different groups are basically different than by assuming that all people are essentially the same.

Many decisions that executives make involve promotion, hiring, firing, or other types of personnel allocations. These decisions are stereotypical when they use considerations about the group rather than information about the person. “Women can’t handle this kind of stress” is a stereotypical statement about women, not an assessment of a particular individual. Executives should be especially alert for inappropriate theories about others when the criteria for evaluation and the qualifications under discussion are vague. Ethnocentric or stereotypical theories are unlikely to have a large impact if rules state that the person with the best sales record will be promoted. The criteria and qualifications are clear and quantified. However, vague criteria such as sociability, leadership skill, or insight make evaluation susceptible to stereotyping.

One of the most effective strategies for combating ethnocentrism and stereotypes is to have explicit corporate policies that discourage them, such as adopting and publishing equal opportunity principles and constantly reminding employees that group-based judgments and comments are unacceptable. Executives must be the ethical leaders of their organizations.

Theories about Ourselves

Low self-esteem is not generally associated with successful executives. Executives need confidence, intelligence, and moral strength to make difficult, possibly unpopular decisions. However, when these traits are not tempered with modesty, openness, and an accurate appraisal of talents, ethical problems can arise. In other words, if executives’ theories about themselves are seriously flawed, they are courting disaster. Research has identified several ways in which people’s theories of themselves tend to be flawed.14 We discuss three: the illusion of superiority, self-serving fairness biases, and overconfidence.

Illusion of Superiority

People tend to view themselves positively. When this tendency becomes extreme, it can lead to illusions that, while gratifying, distort reality and bias decision making. Scholars have identified three such illusions: favorability, optimism, and control.15

  • Illusion of Favorability. This illusion is based on an unrealistically positive view of the self, in both absolute and relative terms. For instance, people highlight their positive characteristics and discount their negatives. In relative terms, they believe that they are more honest, ethical, capable, intelligent, courteous, insightful, and fair than others. People give themselves more responsibility for their successes and take less responsibility for their failures than they extend to others. People edit and filter information about themselves to maintain a positive image, just as totalitarian governments control information about themselves.
  • Illusion of Optimism. This illusion suggests that people are unrealistically optimistic about their future relative to others. People overestimate the likelihood that they will experience “good” future events and underestimate the likelihood of “bad” future events. In particular, people believe that they are less susceptible than others to risks ranging from the possibility of divorce or alcoholism to injury in traffic accidents. To the extent that executives believe themselves relatively immune from such risks, they may be willing to expose themselves and their organizations to hazards.
  • Illusion of Control. The illusion of optimism is supported by the illusion of control that we referred to earlier. One reason we think we are relatively immune to common risks is that we exaggerate the extent to which we can control random events. Experiments have demonstrated the illusion of control with MBA students from some top U.S. business schools, so there is no reason to think that executives who have attended these schools will be immune to it.16 (Indeed, the belief that one is exempt from these illusions, while others are not, is an excellent illustration of the illusion of optimism.)

These illusions may also characterize people’s attitudes about the organizations to which they belong. The result is a kind of organizational ethnocentrism, as we discussed earlier. Managers may feel that their company’s contributions to society are more important than those of other companies, even when a neutral observer sees comparability. Similarly, executives may feel that the damage their firms cause society is not as harmful as that created by other organizations. Such a pattern of beliefs can create a barrier to societal improvement when each organization underestimates the damages that it causes. Often, however, firms and their executives genuinely believe that they are being fair and just in their positions (and that others are biased, an illustration of the illusion of favorability).

Self-Serving Fairness Biases

Most executives want to act in a just manner and believe they are fair people. Since they are also interested in performance and success, they often face a conflict between fairness and the desired outcome. They may want a spacious office, a large share of a bonus pool, or the lion’s share of the market. Furthermore, they may believe that achieving these outcomes is fair because they deserve them. Different parties, when judging a fair allocation among themselves, will often make different judgments about what is fair, and each party’s judgment will usually serve its own interest. These judgments often reflect disagreements about deservedness based on contributions to the collective effort. It is likely that if you asked each division in your organization to estimate the percentage of the company’s worth that is created by the division, the sum of the estimates would greatly exceed 100 percent. (Research has shown this to be true with married couples. The researchers who did the study reported that they had to ask the questions carefully because spouses would often be amazed, and then angry, about the estimates that their mates gave to questions like, “What percentage of the time do you clean up the kitchen?”17)

One important reason for these self-serving views about fairness is that people are more aware of their contributions to collective activities than others are likely to be; they have more information about their own efforts than others have or than they have about others. Executives may recall disproportionately more instances of their division helping the corporation, of their corporation helping the community, and of their industry helping society.

Furthermore, executives, like other people, credit themselves for their efforts, whereas they are more likely to credit others only for their achievements. They also credit themselves for the temptations that they resisted but judge others strictly by their actions, not by their lack of action. An executive who is offered a substantial bonus to misrepresent the financial well-being of her firm may feel proud of her honesty when she declines, but others may either not know of the temptation or, if they do, believe that she merely followed the rules. While she may feel that the firm owes her gratitude, the firm may not share that feeling.

These fairness biases are particularly problematic in negotiations, where they produce costly delays and impasses. Egocentric interpretations of fairness hinder conflict resolution because each party believes that its own demands are fair and thus is unwilling to agree to what it perceives as inequitable settlements. It is not just a matter of different interests; it is a matter of what is fair and proper. The difference in perspectives can lead parties to question each other’s ethics and morality. The temptation to view the other side as immoral when they fail to agree with us is especially pronounced in situations in which ethnocentric impulses may be aroused — for instance, international negotiations, labor-management negotiations, or negotiations that involve issues of race or gender. For example, Price Waterhouse, a major accounting firm, was surprised when it lost a sex discrimination suit. The firm’s view of its procedures’ fairness was at odds with the plaintiff’s and judge’s views.

Overconfidence

Most people are erroneously confident in their knowledge. In situations in which people are asked factual questions and then asked to judge the probability that their answers are true, the probability judgments far exceed the actual proportion of correct answers.18 For instance, when asked, “Which city is farther north, Rome or New York?,” most respondents choose New York and indicate a probability of about 90 percent that their answer is correct. In fact, it is not; Rome is slightly north of New York. Research has indicated that when people (including executives) respond to a large group of two-option questions for which they claim to be 75 percent certain, their answers tend to be correct only 60 percent of the time.19 For confidence judgments of 100 percent, it is not uncommon for subjects to be correct only 85 percent of the time. Other research found that subjects who assign odds of 1,000:1 to their answers are correct only 90 to 96 percent of the time.20 Overconfidence has been identified among members of the armed forces, executives, business students, and C.I.A. agents.21
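
A calibration check of the kind used in these studies is easy to run on any record of answered questions. The answer log below is invented; the only point is the comparison of stated confidence with actual accuracy.

    # Invented log of (stated confidence, whether the answer was actually correct)
    answer_log = [
        (0.75, True), (0.75, False), (0.75, True), (0.75, False), (0.75, True),
        (1.00, True), (1.00, True), (1.00, False), (1.00, True), (1.00, True),
    ]

    def calibration(log):
        """Return (average stated confidence, actual proportion correct)."""
        avg_confidence = sum(conf for conf, _ in log) / len(log)
        accuracy = sum(correct for _, correct in log) / len(log)
        return avg_confidence, accuracy

    print(calibration(answer_log))  # (0.875, 0.7): confidence outruns accuracy

Keeping and scoring such a log, rather than trusting memory, is one of the few reliable ways to discover how large one's own gap between confidence and accuracy is.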

The danger of overconfidence is, of course, that policies based on erroneous information may fail and harm others as well as the executive who established the policy. Overconfidence, as part of our theories about ourselves, coupled with flawed theories about the world or about other people, poses serious threats to rational and ethical decision making.

To the extent that people are overconfident in their (conservative) risk assessments — in their beliefs about the availability of scarce resources or the character of people unlike themselves — they will fail to seek additional information to update their knowledge. One cost of overconfidence is a reluctance to learn more about a situation or problem before acting.

Even if people acknowledge the need for additional information, research has shown that their process for gaining that information may be biased to confirm prior beliefs and hypotheses.22 This tendency was initially demonstrated in a series of studies in which the subjects were given a three-number sequence, 2-4-6. Their task was to discover the numeric rule to which the three numbers conformed. To determine the rule, they were allowed to generate other sets of three numbers that the experimenter would classify as either conforming or not conforming to the rule. At any point, subjects could stop when they thought that they had discovered the rule.

The rule is “any three ascending numbers.” Suppose you thought the rule was “the difference between the first two numbers equals the difference between the last two numbers” (a common expectation). Testing confirming sequences, like 1-2-3, 10-15-20, or 122-126-130, will provide positive feedback and increase confidence in the original, but incorrect, hypothesis. To discover how the true rule differs from this rule, you must try sequences that do not conform to the hypothesized rule. You need to ask questions that, if answered positively, would disconfirm your rule. This is a less comfortable mode of acquiring information, partly because it may appear that you are not confident in your belief.
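
The logic of the 2-4-6 task can be made concrete in a few lines of code. The “true” rule and the hypothesized rule below follow the description in the text, and the test sequences are the ones mentioned above plus one deliberately disconfirming probe.

    def true_rule(a, b, c):
        """The experimenter's rule: any three ascending numbers."""
        return a < b < c

    def hypothesis(a, b, c):
        """A common guess: equal differences between neighboring numbers."""
        return (b - a) == (c - b)

    confirming_tests = [(1, 2, 3), (10, 15, 20), (122, 126, 130)]
    disconfirming_probe = (1, 2, 4)  # chosen because it violates the hypothesis

    # The confirming tests satisfy both rules, so "yes" answers teach nothing new.
    print([true_rule(*t) and hypothesis(*t) for t in confirming_tests])  # [True, True, True]

    # The probe separates the rules: the experimenter answers "yes" (it is ascending),
    # and that answer refutes the equal-differences guess.
    print(true_rule(*disconfirming_probe), hypothesis(*disconfirming_probe))  # True False

Only the probe designed to fail under the favored hypothesis advances the search for the real rule, which is exactly the less comfortable mode of questioning described above.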

Transpose this idea to an executive questioning an engineer about the safety of a tool grip. The executive wants to and does believe that the grip is safe. If the executive asks questions like, “This really is a safe grip, isn’t it?” or “Does this grip meet all the standards that have been set for this type of tool?,” he is doing two things that may distort the information that he will receive. First, he is displaying the confirmation bias by asking questions he expects to be answered “yes.” Second, he is unconsciously exploiting social politeness, because people are more likely to agree than disagree. So by asking these types of questions, the executive is less likely to learn if the engineer has misgivings about any design features than if he asked questions such as, “What are the advantages and disadvantages of the grip?” or “What are the things we have most to worry about with this design?”

These processes suggest that executives may be favorably biased toward themselves and their firms. Will feedback help to eliminate or reduce these biases? We believe that feedback may provide only limited help because of the tendency to seek and notice confirming information, which forms an additional barrier to learning through experience.

When we consider the combined impact of the three processes described in this section — the illusion of superiority, self-centered perceptions of fairness, and overconfidence — we can see the peril associated with erroneous theories of the self. The major peril is that we will come to see ourselves as people for whom the normal rules, norms, and obligations do not apply. The danger is that an executive, especially a successful executive, will hold himself above conventional ethical principles and subject himself only to self-imposed rules that others might judge to be self-serving. He might justify telling a lie on the ground that it permits him to achieve important benefits for others (such as shareholders or employees) even though the shareholders or employees are being duped. He might feel that inflating an expense account or using company property for personal ends is not “really” wrong because of the value that he contributes to the company. Finally, he may undertake an immoral or illegal act, convinced that he will never be caught. The tendencies to feel superior, to generate self-serving, on-the-spot moral rules, and to be overconfident about beliefs create the potential for moral shallowness and callowness.

Improving Ethical Decision Making

Our position is that the causes of poor ethical decisions are often the same as the causes of poor decisions generally; decisions may be based on inaccurate theories about the world, about other people, or about ourselves. We suggest that ethical decision making may be improved in the same way that general decision making is improved. In this final section, we outline three broad criteria that executives can focus on: quality, breadth, and honesty.

Quality

Executives who make higher-quality decisions will tend to avoid ethical mistakes. Improving the quality of decision making means ensuring that all the consequences of actions are considered. It implies having accurate assessments of the risks associated with possible strategies and being attuned to the pitfalls of egocentric biases.

A general principle is that the types of flaws and biases we have discussed are likely to influence decision making more when decisions are intuitive, impulsive, or subjective rather than concrete, systematic, and objective. Stereotypes, for instance, have less influence on personnel decisions or performance appraisals if the evaluation criteria are quantitative rather than subjective and vague. Managers often resist this suggestion because they feel that using quantitative procedures makes their judgment “mechanical” or superfluous. The argument in favor of such procedures is that they reduce, or at least identify, opportunities for inappropriate information to influence decisions. Using a quantitative process allows a manager to identify precisely the source of such inappropriate information. Often, systematic procedures result in the same decision as more subjective ones, but the results are more acceptable because the process is viewed as objective, fair, and less subject to bias.

Whenever possible, executives should base decisions on data rather than hunches. In uncertain situations, the best guide comes from close attention to the real world (e.g., data), not from memory and intuition. People worry more about death by murder than death by automobile accident, even though the latter is, statistically, a much greater threat than the former. Reasoning by anecdote — for example, “My engineering chief says he is convinced the product is safe, regardless of what the test results say” — not only wastes resources expended to gather the data but also irresponsibly exposes others to avoidable risks. A corollary is that getting high-quality data is obligatory. In business, as in science, passing off poor, unreliable data as good is fraudulent and inexcusable.

Sometimes executives cannot escape making decisions and judgments on subjective, intuitive bases. But they can take steps to prevent some of the biases from distorting judgment. To combat overconfidence, for instance, it is effective to say to yourself, “Stop and think of the ways in which you could be wrong.” Similarly, to avoid minimizing risk, you can ask, “What are the relevant things that I don’t know?” A devil’s advocate, explicitly assigned to scrutinize a decision for false assumptions and optimistic projections, can often play this role. A major difference between President Kennedy’s Bay of Pigs fiasco and his skillful handling of the Cuban missile crisis was his encouragement of dissenting opinions and his inclusion of people whose political orientations differed from his own.23

One threat to rational and ethical decision making that we noted earlier stems from the untrustworthiness of human memory. The first step in managing this threat is to acknowledge it. The second is to compensate for it with improved, detailed record keeping. This recommendation corresponds to a tenet of the total quality management movement — record keeping and benchmarking are central to measuring objectively how well a process is performing. Quality management and ethical management are close companions; what promotes one generally promotes the other. Erroneous theories threaten both.
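As one illustration of the record-keeping recommendation, the sketch below logs, at decision time, what was known, what was assumed, and what was predicted, so that later reviews rely on the record rather than on memory. The field names and example entries are hypothetical, not taken from the article.

```python
# A minimal sketch of an append-only decision log, assuming hypothetical fields.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    decision: str
    known_facts: list[str]        # what the data actually showed at the time
    assumptions: list[str]        # what was taken on faith
    predicted_outcomes: list[str] # what success was expected to look like
    decided_by: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append one decision record as a JSON line for later audit or benchmarking."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical example entry.
append_record(DecisionRecord(
    decision="Launch product X in Q3",
    known_facts=["Field test failure rate of 0.8% over 10,000 units"],
    assumptions=["Supplier capacity holds through Q4"],
    predicted_outcomes=["Break even within 18 months"],
    decided_by="VP Operations",
))
```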

Breadth

By breadth, we mean assessment of the full range of consequences that policies may entail. An ethical audit of a decision must take into account the outcomes for all stakeholders. The first task is to compile a list of the stakeholders. The second is to evaluate the decision’s likely outcomes from each stakeholder’s perspective.
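One simple way to picture this two-step audit is a stakeholder-by-decision impact table. The sketch below uses hypothetical stakeholders, a hypothetical decision, and an illustrative -2 to +2 rating scale; its only purpose is to make sure no affected group is silently dropped from the assessment.

```python
# A minimal sketch of a stakeholder impact audit.
# Step 1: enumerate stakeholders. Step 2: rate the decision's likely impact
# from each stakeholder's perspective. All names and ratings are hypothetical.

# Ratings: -2 (strongly harmed) .. +2 (strongly helped).
impact = {
    "employees":        {"plant relocation": -1},
    "shareholders":     {"plant relocation": +2},
    "local community":  {"plant relocation": -2},
    "customers":        {"plant relocation": +1},
    "future residents": {"plant relocation": 0},
}

def audit(decision: str) -> None:
    """Print each stakeholder's rating and flag any group rated as harmed."""
    for stakeholder, ratings in impact.items():
        score = ratings[decision]
        flag = "  <-- harmed; address before proceeding" if score < 0 else ""
        print(f"{stakeholder:16} {score:+d}{flag}")

audit("plant relocation")
```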

One approach to identifying stakeholders is to make the decision process as open as possible and invite input from interested parties. However, different groups may have different access to public information, so this technique risks overlooking important constituencies. A potential solution is to include representatives of the important groups on the decision-making team. Broad consultation, which requires an active search to enlist all affected parties into the decision-making process, is important. Openness itself is often a signal to potential opponents that nothing is being hidden and there is nothing to fear. For example, a few years ago, two relatively similar construction projects in Arizona differed greatly in the care they took to involve the active environmental groups in their communities. The project that worked continually with citizens gained their trust and support, while the one that ignored environmentalists faced expensive legal challenges in court.

Socially responsible executive decision making recognizes that a company is part of a broader community that has an interest in its actions. A full accounting for decisions must include a community-impact assessment. If there is community opposition to a policy, it is far better to address it early on rather than risk being ambushed by it later.

Finally, executives’ decisions affect not only people in the present but also people in the future. Executives’ responsibility is to manage so that the world’s social and physical environments are not spoiled for future generations. Continually squandering nonrenewable resources or overusing renewable ones privileges the current generation at the expense of later ones. Likewise, postponing payment for what we enjoy today saddles future generations with the bill for current consumption. None of us would intentionally make our own children worse off than we are, and we would not want others to do so either.

Breadth is an important quality of ethical decision making because it is both ethically proper and strategically sound. It means doing the right thing and doing the smart thing. Intentional decisions to exclude stakeholders’ interests or input may not only violate their rights, which is an ethical infraction, but also invite opposition, resentment, and hostility, which is stupid.

Honesty

In discussing breadth, we urged openness. But executives can rarely divulge all the information involved in a decision. Much information is proprietary, would give competitors an advantage, or is legally confidential. A policy of openness does not require executives to tell all; it is perfectly ethical and appropriate to withhold some types of information. It is inappropriate, however, to withhold information about a project or policy merely because an executive is ashamed to make it public. We propose that, if an executive feels so embarrassed about some aspect of a project that she wants to hide the information, she probably should not undertake the project. Conscience, in short, is a good litmus test for a decision’s ethicality. If an idea cannot stand the light of day or the scrutiny of public opinion, then it is probably a bad idea. A variant of this “sunshine test” is to imagine how you would feel if you saw the idea or decision on the front page of the New York Times.

As we pointed out earlier, you cannot always trust your reaction to a hypothetical test. It’s easy to say, “I wouldn’t mind it if my family knew that I misstated the firm’s income by $20 million,” when this is, in fact, completely untrue. As one scholar points out, we ourselves are the easiest audience that we have to play to and the easiest to fool.24 Consequently, we should imagine whether our audience would accept the idea or decision. In particular, we should ask whether the people with the most to lose would accept the reasons for our actions. If not, we are probably on moral thin ice.

One risk often overlooked when practicing deceit is the continual need to maintain the deception. Not only must the original facts be hidden; the fact of hiding them must also be hidden. In the notorious Watergate scandal, President Nixon was forced from office not for what occurred in the Watergate complex, but for the efforts the White House made to hide the offense.

While it is important to be honest with others, it is just as important to be honest with yourself. Self-deception — being unaware of the processes that lead us to form our opinions and judgments — is unavoidable. We think we remember things accurately, but careful studies show that we do not. We think we know why we make judgments about other people, but research shows us other reasons.

If we can accept the fact that the human mind has an infinite, creative capacity to trick itself, we can guard against irrational, unethical decisions. To deny this reality is to practice self-deception. We can learn to suspect our naive judgments. We can learn to calibrate our judgments of risk. We can examine our motives in judging others: are we using hard, reliable information to evaluate subordinates, or are we relying on stereotypes?

The topic of executive ethics has been dominated by the assumption that executives constantly face an explicit trade-off between ethics and profits. We argue, in contrast, that unethical behavior in organizations is more commonly driven by psychological tendencies that produce behavior that is undesirable from both ethical and rational perspectives. Identifying and confronting these psychological tendencies will increase the success of executives and organizations.

References

1. For the research on which we based this article, see:

M.H. Bazerman, Judgment in Managerial Decision Making (New York: John Wiley, 1994);

R.M. Dawes, Rational Choice in an Uncertain World (San Diego, California: Harcourt Brace Jovanovich, 1988);

T. Gilovich, How We Know What Isn’t So (New York: Free Press, 1991); and

S. Plous, The Psychology of Judgment and Decision Making (New York: McGraw Hill, 1993).

A forthcoming book will explore these and other topics in greater detail. See:

D.M. Messick and A. Tenbrunsel, Behavioral Research and Business Ethics (New York: Russell Sage Foundation, forthcoming).

2. G. Hardin, Filters Against Folly (New York: Penguin, 1985).

3. R. Nader, Unsafe at Any Speed (New York: Grossman Publishers, 1965).

4. M. Rothbart and M. Snyder, “Confidence in the Prediction and Postdiction of an Uncertain Outcome,” Canadian Journal of Behavioral Science 2 (1970): 38–43.

5. I.L. Janis, Groupthink: Psychological Studies of Policy Decisions and Fiascoes (Boston: Houghton Mifflin, 1982).

6. B. Fischhoff, “Hindsight: Thinking Backward,” Psychology Today 8 (1975): 71–76.

7. D. Kahneman and A. Tversky, “Prospect Theory: An Analysis of Decision under Risk,” Econometrica 47 (1979): 263–291.

8. Bazerman (1994); and

Kahneman and Tversky (1979).

9. Kahneman and Tversky (1979).

10. A.L. McGill, “Context Effects in the Judgment of Causation,” Journal of Personality and Social Psychology 57 (1989): 189–200.

11. I. Ritov and J. Baron, “Reluctance to Vaccinate: Omission Bias and Ambiguity,” Journal of Behavioral Decision Making 3 (1990): 263–277.

12. For further details on many of these issues, interested readers may consult:

S. Worcheland and W.G. Austin, Psychology of Intergroup Relations (Chicago: Nelson-Hill, 1986).

13. M.B. Brewer, “In-Group Bias in the Minimal Intergroup Situation: A Cognitive-Motivational Analysis,” Psychological Bulletin 86 (1979): 307–324.

14. For example, see:

S.E. Taylor, Positive Illusions (New York: Basic Books, 1989).

15. S.E. Taylor and J.D. Brown, “Illusion and Well-Being: A Social Psychological Perspective,” Psychological Bulletin 103 (1988): 193–210.

16. R.M. Kramer, E. Newton, and P.L. Pommerenke, “Self-Enhancement Biases and Negotiator Judgment: Effects of Self-Esteem and Mood,” Organizational Behavior and Human Decision Processes 56 (1993): 110–133.

17. M. Ross and F. Sicoly, “Egocentric Biases in Availability and Attribution,” Journal of Personality and Social Psychology 37 (1979): 322–336.

18. S. Lichtenstein, B. Fischhoff, and L.D. Phillips, “Calibration of Probabilities,” in D. Kahneman, P. Slovic, and A. Tversky, eds., Judgment under Uncertainty: Heuristics and Biases (Cambridge: Cambridge University Press, 1982), pp. 306–334.

19. B. Fischhoff, P. Slovic, and S. Lichtenstein, “Knowing with Certainty: The Appropriateness of Extreme Confidence,” Journal of Experimental Psychology: Human Perception and Performance 3 (1977): 552–564.

20. Ibid.

21. R.M. Cambridge and R.C. Shreckengost, “Are You Sure? The Subjective Probability Assessment Test” (Langley, Virginia: Office of Training, Central Intelligence Agency, unpublished manuscript, 1980).

22. P.C. Wason, “On the Failure to Eliminate Hypotheses in a Conceptual Task,” Quarterly Journal of Experimental Psychology 12 (1960): 129–140.

23. Janis (1982).

24. S. Bok, Lying: Moral Choice in Public and Private Life (New York: Vintage Books, 1989).

Reprint #: 3721
