How to Think Critically VII: Risk Assessment

Given that this is an atheist site, I feel compelled to start this post with a snappy anti-religion quip, so here it is: Children and teenagers are more likely to be molested or assaulted in church than they are on social networking sites like MySpace. Parents, do you want to protect your kids? Keep them home on Sundays and send them to the computer instead!

But it wouldn't be fair to leave it at that. This statistic doesn't prove the inherent riskiness of going to church. What it proves is that most crimes - against both children and adults - are committed by someone the victim knows personally, not by a random stranger. The idea of pedophiles and kidnappers trolling the Internet and snatching up unsuspecting children is lurid, shocking, sensational, which is why it captures the imagination. But the reality is that such things are so extremely rare as to be essentially not worth worrying about.

The truth is, as many reporters have documented, that human beings are not good at rationally assessing risk. This post will list some of the more common ways in which our risk judgments go awry.

People overestimate the odds of spectacular, attention-grabbing catastrophes, while underestimating the danger posed by common, everyday risks. The paradigm example is the widespread phobia of flying - stemming, no doubt, from news reports of spectacular plane crashes - while far fewer people have a comparable fear of driving, which is by almost any measure the more dangerous activity. Another good example is the widespread fear of terrorist attack, even though the total number of people ever wounded or killed by terrorism is far smaller than the number of victims of "ordinary" dangers such as domestic violence.
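To make the driving-versus-flying comparison concrete, here is a minimal back-of-envelope sketch in Python. The fatality rates in it are rough, order-of-magnitude figures I've assumed for illustration, not statistics cited in this post; what matters is the gap of one to two orders of magnitude in per-mile risk, not the precise values.

# Back-of-envelope comparison of per-mile fatality risk: driving vs. commercial flying.
# The rates below are illustrative, order-of-magnitude assumptions, not cited statistics.
DRIVING_DEATHS_PER_MILE = 1.3 / 100_000_000    # ~1.3 deaths per 100 million vehicle-miles
FLYING_DEATHS_PER_MILE = 0.2 / 1_000_000_000   # ~0.2 deaths per billion passenger-miles

trip_miles = 500  # a hypothetical medium-length trip

drive_risk = DRIVING_DEATHS_PER_MILE * trip_miles
fly_risk = FLYING_DEATHS_PER_MILE * trip_miles

print(f"Risk of dying on a {trip_miles}-mile drive:  {drive_risk:.1e}")
print(f"Risk of dying on a {trip_miles}-mile flight: {fly_risk:.1e}")
print(f"Driving is roughly {drive_risk / fly_risk:.0f} times riskier per mile, under these assumptions.")

Under these assumed rates, the drive is about 65 times more dangerous than a flight of the same length; plug in whatever official figures you prefer and the ordering does not change.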

Our brains rapidly habituate to familiar situations, and risks that we encounter daily soon become part of the background patterns we're accustomed to. But shocking, unlikely events disrupt that expectation and leave a vivid, emotional stamp on our memories. As a result, these risks are more salient and are often judged to be more likely, even when nothing could be farther from the truth.

People overestimate risks they cannot control and underestimate risks they can. Again, driving vs. flying is a common example. Even sitting in the passenger seat of someone else's car, as opposed to driving yourself, may make the oncoming traffic appear much faster. When we feel we are not in control of the situation, the danger seems greater than when we believe we are in control.

People underestimate risks that creep up over time. As in the first point, the brain habituates to risks that are encountered often, until they scarcely seem dangerous at all. But this tendency can turn fatal when the slow, gradual accumulation of risk ultimately results in a deadly situation - like the metaphorical frog that will jump out of a pot of scalding water but can be boiled to death by turning the heat up slowly. Both on the personal level, with gradual harms such as smoking and atherosclerosis, and on the societal level, with dangers such as global warming, humans are often reluctant to confront problems that haven't seemed to do any harm so far.

People underestimate risks for which there is a perceived benefit. Risk assessments are almost impossible to divorce from perceived benefits and values, and when a person sees "something in it for them", the accompanying risks will seem less serious. Conversely, the risk seems greater for activities that have no perceived upside. One of the earlier linked articles has an example: while dozens of teenagers are killed each year by sports-related injuries, essentially no one is killed by marijuana use. Yet sports are thought of as less dangerous, because society perceives that they instill positive character traits, while no comparable benefit is envisioned for recreational drug use.

People overestimate "artificial" risks and underestimate "natural" risks. Although "natural" substances can be just as poisonous as "artificial" chemicals, or even more so (think of deadly nightshade or hemlock), people tend to prefer the former to the latter. Psychology Today adds:

Our built-in bias for the natural led a California town to choose a toxic poison made from chrysanthemums over a milder artificial chemical to fight mosquitoes: People felt more comfortable with a plant-based product.

...When a case report suggested that lavender and tea-tree oil products caused abnormal breast development in boys, the media shrugged and activists were silent. If these had been artificial chemicals, there likely would have been calls for a ban, but because they are natural plant products, no outrage resulted.

A more telling example is the pseudoscientific hysteria over electromagnetic emissions from cell phone towers or power lines allegedly causing cancer, when every day we are bathed in far more energetic and genuinely carcinogenic radiation - the ultraviolet light of the Sun.

May 30, 2008, 7:50 am • Posted in: The Observatory

The Gospel of Elvis

In the book God?: A Debate Between a Christian and an Atheist, William Lane Craig (debating Walter Sinnott-Armstrong) makes the following argument for why God chooses to remain hidden:

"Could God reveal himself more clearly?" Of course, He could: He could have inscribed the label "Made by God" on every atom or planted a neon cross in the heavens with the message "Jesus Saves." But why would He want to do such a thing?

...[T]here is no reason at all to think that if God were to make His existence more manifest, more people would come into a saving relationship with Him.

...In the Old Testament God is described as revealing Himself to His people in manifest wonders: the plagues upon Egypt, the pillar of fire and smoke, the parting of the Red Sea. But did such wonders produce lasting heart-change in the people? No, Israel fell into apostasy with tiresome repetitiveness. If God were to inscribe His name on every atom or place a neon cross in the sky, people might believe that He exists, all right, but what confidence could we have that after time they would not begin to chafe under the brazen advertisements of their Creator and even come to resent such effrontery? (p.109)

This argument, like many made by Christian apologists, displays a bizarre ignorance of human psychology. If God were to clearly show his existence, it would not cause more people to worship him? Really?

If anything, human beings are too willing to worship and to follow. The great number of cults and sects that have sprung up in every era testifies to this; most of them have followed leaders who made only the flimsiest, most easily debunked pretense of having supernatural powers. (Sathya Sai Baba and Uri Geller, for example, have attracted significant followings despite performing only "miracles" that could easily be duplicated by sleight of hand.) To claim that an actual god that manifested itself and displayed real supernatural powers would not attract a vast following is to speak in total contradiction to everything that history and psychology teach about humans' gullibility and eagerness to be led.

Fanatically devoted followings sometimes spring up even around figures who make no explicit effort to attract them. I can give no better example than the cult of Elvis Presley, which among his most devoted fans has taken on many of the trappings of a latter-day religion. His Graceland estate is a major pilgrimage destination to this day. Every year, his fans still hold a candlelight vigil on the anniversary of his death. The most hardcore fans, the ones who knew Elvis while he was alive, were called the "gate people" for their habit of sitting outside the gates of his mansion, every day, simply waiting for a chance to see him. Those who met him, who saw him in person or received gifts or letters from him, treasure those mementos to this day as if they were holy relics. (A lock of Elvis' hair once sold at auction for over $100,000.) And, to this day, there are people who pattern their entire lives around imitating him.

In fact, during his lifetime Elvis claimed to have paranormal - even miraculous - powers:

His stepbrother and bodyguard, David Stanley, wrote a chapter 'My Brother the Mystic' in his book Life with Elvis, in which he alleges that Elvis could heal by touch and move clouds in the sky. When threatened with a violent thunderstorm during a car journey 'Elvis stuck his right hand out of the sunroof and started talking to the clouds. "I order you to let us pass through"... and the amazing thing was that the clouds did exactly as he asked them to. They split right down the middle.'

And, of course, to this day there's widespread speculation that he didn't really die. I can readily imagine that if Elvis during his lifetime had ever said, "I am the Son of God," by now he'd have a following that would easily equal some of the established churches, and people would be busily inventing posthumous miracles to attribute to him. (Similar stories have already begun to pop up around the late Pope John Paul II.) In time, as these stories became diffused and exaggerated, Elvis worship could well blossom into a bona fide religion.

If a mere singer could attract this kind of devotion - and still does, decades after his death - then it surpasses belief to claim, as Craig does, that an actual appearance of God in the flesh would not attract far greater worship and a far larger following. People do not become jaded and disenchanted by being able to see and touch their idols; it only inspires them to greater heights of devotion. Craig's assertions to the contrary are in total conflict with reality.

Of course, the real reason he must maintain such risible assertions is that no such manifestations occur. Thus, Craig must find a post-hoc way of rationalizing their absence that is consistent with his preexisting belief in God. Given those constraints, the solution he comes up with seems like the only feasible one. But it still fails to accord with well-known facts about reality and human nature.

April 16, 2008, 7:12 am • Posted in: The Loft

Light and Dark

Greta Christina recently wrote a wonderful review of the book Mistakes Were Made (But Not By Me), an analysis of the unconscious defense mechanisms people use to rationalize their bad decisions. She's absolutely right that this is a book everyone ought to read (I need to find a copy myself), and her review makes some points that I think are important enough to justify shining a spotlight on.

I'm no anthropologist or psychologist, but I like to think of myself as at least an amateur observer of human nature. And one of the facts of human nature that looms largest is our incredible moral duality. As a species, we present an astonishing paradox. On the one hand, human beings are capable of tremendous compassion, altruism and generosity. There are countless people who selflessly give their effort, their resources, even their lives to bring about the good of others, asking no repayment except the knowledge that they've worked for a worthy cause. There's no need for me to cite examples; we all know people who are like this.

On the other hand, human beings are also capable of incredible cruelty, depravity and viciousness. We wage wars, inquisitions, pogroms, witch hunts. We are all too easily led by malignant demagogues, all too easily whipped up into frenzies of savagery and hate, and all too easily persuaded to treat strangers and outsiders as subhuman and to visit the most horrific atrocities on them. Again, I trust there's no need to cite examples; anyone versed in history can come up with far too many.

It seems unbelievable that two such contradictory impulses could exist within the same human nature, but this is undeniably the case. Our selflessness, our lovingkindness, our sense of justice are deeply rooted in mind and instinct. So are our hatred, our chaos and our evil.

Both these impulses, no doubt, come from the evolutionary process that created us. Throughout human prehistory, our ability to be kind and giving was a necessary part of living in groups. Human beings are ill-equipped to survive alone, and the better aspects of our nature are what made it possible for tribes and societies to hold together. But it is just as true that there were those who were our enemies and would have destroyed us. Our impulses toward violence, aggression and tribalism protected us in those between-group conflicts, even as they perpetuated them.

Long before the true story of our origins was known, religions throughout history noticed this strange amalgam in human nature and sought to explain it. Christianity's explanation, in particular, was a peculiar stroke of theological genius: by postulating an originally good human nature tainted by sin, it invented a structure that lets it claim credit for the good acts of its followers while disavowing the bad ones. When Christians perform generous and selfless deeds, as many of them do, the apologists claim that their saving belief in Jesus was what made that goodness possible. When Christians do evil, again as many of them do, those same apologists rationalize it away as the result of sin. In reality, people of all belief systems perform acts of tremendous good, as well as acts of terrible evil. Both arise from our nature; both are part of our heritage. No special theological explanation is needed for either one.

In the past, both these impulses were necessary for survival; either one, if untempered by the other, would have led to humanity's downfall. But in the present day, as societies have run together and merged into a global community, as our technology has magnified our impulses both for good and for ill, we can no longer afford for our loyalties to be divided between light and dark. The consequences of unchecked aggression, of letting the worse side of our nature get the upper hand, are too serious.

And just as bad is the misguided attempt to deny that this problem even exists - something we all have an unfortunate tendency to do. Greta Christina's post describes this common rationalization:

We have a tendency to think that bad people know they're bad. Our popular culture is full of villains cackling over their beautiful wickedness, or trying to lure their children to The Dark Side. It's a very convenient way of positioning evil outside ourselves, as something we could never do ourselves. Evil is Out There, something done by The Other.

It's fully understandable why we have this defense mechanism. Who wants to think of themselves as capable of evil? But at the same time, this tendency is extremely dangerous - because it leads us to believe that we aren't the kind of people who could do such things. And the people who really and truly believe that are the ones who are most likely to end up committing the blackest evils - because they never consider the possibility that they've gone astray. Since they're the good ones, whatever they do must be in the service of Good and Right. (This dynamic is all too visible in the presidency of George W. Bush, which is thankfully drawing to a close...)

The writer Aleksandr Solzhenitsyn, who experienced firsthand the terrors of the Soviet Union, was well acquainted with the evil that can be done by people who are unshakably convinced that they're laboring in the service of good. In his work The Gulag Archipelago, he put his finger on the central problem:

If only there were evil people somewhere insidiously committing evil deeds and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of their own heart?

And yet, as impossible as this seems, it is what we must do if humanity is to survive in the long term. How can we excise part of our own nature? I don't claim to have the answer, but there's one thing I can suggest.

As I wrote all the way back near the beginning of this blog, we can't fight influences on our behavior that we're not aware of. This applies with added force when it comes to the dark side of our nature: people who deny that they possess such a capability often turn out to be the ones in whom it does the most damage. Acknowledging that we all have this capability, that the potential for evil is not an aberration but a universal human trait, might make people better at recognizing the warning signs when it threatens to emerge in themselves and others, and at using that awareness to keep the worst-case scenario from coming to pass.

February 6, 2008, 8:39 am • Posted in: The Loft

On the Possibility of Perfect Humanity

Last month, in "An Impoverished Infinity", I wrote about the strange limitations that many Christian believers impose on God. These theists believe that God was not wise or powerful enough to create a world with intelligent beings that did not also include earthquakes, diseases and other disasters - as if the infinite space of possible worlds was somehow foreclosed.

The discussion in the comments thread centered largely on the issue of free will, which is the most common example of these theological limitations. Several theists showed up to concede that God could have created human beings who never chose to sin, but to argue that he could only have done so by making us into automata who lack meaningful freedom.

I believe this argument is wrong, and I'll explain why. As I wrote some time ago, what it means to have free will is that you can choose from the options available to you in accordance with your desires. The "automata" claim overlooks the fact that there are three things which free will does not require.

First, free will does not require infinite choice, where every imaginable course of action is a realistic possibility. Even if the laws of nature and logic restrict our options to a limited set, we can still choose freely from among the members of that set. Free will is not a total absence of constraint, but rather the ability to select among the options that are available.

Free will also does not require a mental blank slate, where every possible course of action seems equally attractive and compelling. On the contrary, a free person can have dispositions, desires and character traits that incline them to choose a certain way in a given situation. This must be so, for a person who had no desires or inclinations would never act at all. Having a certain set of unchosen desires is a precondition for having a will in the first place. Just as with the previous point, we are still free because we can still choose among the options open to us. What makes a person unfree is not acting in accordance with their desires, but being compelled to act against their desires.

Finally, free will does not require randomness. Granted, a free person can choose to inject a kind of "radical choice" into their decision-making, permitting their decisions to be controlled by some external source of random input - whether it be a coin-flip or quantum noise in the synapses of the brain. But a random component is not required for an act to be free. Even a decision that involves no quality of randomness, one that is entirely determined by the facts and reasons available to the decision-maker, can be a free choice.

After all, wouldn't the freest possible agent be one who is perfectly responsive to reason, who is perfectly aware of all the facts relevant to any decision, and who decides on that basis? Such a person would always make the decisions that were best for them without ever needing to choose randomly, and surely that is the purest and most desirable form of free will. Anything less would be inferior, because being unaware of facts relevant to our choices diminishes, not increases, our freedom; it causes us to overlook possibilities we would otherwise have considered.

All three of these points should be uncontroversial, even among theists. To deny either of the first two is to deny that humans have free will, because obviously we do have built-in inclinations and do not have infinite choice. To deny the third, meanwhile, is to deny that God has free will; or at the very least, it is to suggest that our free will is more perfect than his, because we are blessed with ignorance and he, presumably, is not. Since I doubt that most theists would want to make either of those claims, I figure they would agree with me.

Now see where these conclusions lead. Free will does not require unlimited choice, absence of desire, or randomness. A person whose choices are constrained by physical law and their own desires, and who chooses in accordance with those desires and with the relevant facts, still can be and is free in a way that is genuine, significant, and worth wanting. (In fact, each of us is such a person.)

Given all this, why couldn't an omnipotent deity have done things differently? Such a being could have created a world where evil was a literal impossibility - where physical law was constituted by God's will and it was simply not possible to act in contravention of that will. Or God could have created a world in which evil acts were physically possible, but in which human psychology was different from what it actually is, such that we desire only to choose the good. To truly rule out evil in such a world, our decisions would also have to be non-random, so that chance would not occasionally intervene and cause us to do evil despite our desires. In either of these worlds, human beings would truly be morally perfect.

None of these options, as we've seen, would turn humans into puppets or automata. We would still be truly and legitimately free. But in these worlds, there would be no sin or wrongdoing at all, and thus no evil, no suffering, no need to create an afterlife of torture or send earthly catastrophes as punishment. Why wouldn't God, if he exists, have created a world like this? It would have been superior to our own in every way.

The force of this argument should be undeniable. In fact, in worldviews like the Christian one, God conferred on human beings a positive attraction to sin - a set of psychological inclinations that frequently bias our decisions toward disobedience. If that isn't seen as taking away our free will, why couldn't he have done the opposite and instead given human beings an equally strong set of inclinations toward obedience? In short, instead of original sin, why not original virtue? If God hates sin so much, why would he create a world that would all but ensure the maximum amount of it?

A rational deity would not demand moral perfection unless he created beings capable of supplying it. To say otherwise contradicts a basic point of morality: that you cannot blame someone for not doing what they are not capable of doing. This is why, for example, we don't hold mentally ill people criminally responsible. We understand that their capacity to tell right from wrong is impaired, and that it wouldn't be just to treat them as we treat people who possess that capacity. But God, if we believe the Christian logic, rejects this reasoning - he created human beings imperfect and then punishes them harshly for their imperfection. If, as the Bible says, God is "not willing that any should perish", then I am unable to see why he would not have created a world where that will could be realized.

February 4, 2008, 8:28 am • Posted in: The Library

An Impoverished Infinity

In Christian theology, God is presented as the omnipotent creator, able to bring about literally any world it is possible to imagine. His power has no limits, he never suffers from weakness or fatigue, and he possesses the omniscient knowledge necessary to shape the world according to his overarching plan.

Or so Christian apologists say, anyway. Yet when we atheists challenge them with the problem of evil, asking why a benevolent creator would bring about a world where disease and disaster wreak havoc on the innocent, these same apologists often fall back on a very strange defense. They insist that this is the best world God could possibly have created, that natural evil is a regrettable necessity, and that not even infinite power could have made a world where conscious beings like us could exist without also including these undesirable elements.

In the past few weeks, I've had two Christian correspondents make the same argument to me in e-mail. First, one visitor said this:

Take earthquakes, for example. Earthquakes are almost exclusively caused as a result of plate tectonics. Plates move, grind, slip - and the earth shakes as a result. The only alternative is to have a fixed, unmoving crust - plates that cannot move. But scientists have proven that plate tectonics are, in essence, a "necessary evil." Without the movement of the plates, life on earth as we know it could not exist. Therefore, in order to have life, one must accept plate tectonics - and the earthquakes that come with it.

In another example, I asked a Christian correspondent if he believes God could have avoided the need to create Hell by creating human beings who desired above all else to worship God as he requires. My correspondent's response: "There are 5 billion or so examples on this planet that show that what you propose is not possible."

Though neither of my correspondents seemed to notice, their arguments effectively demote God from omnipotence. What they're saying is that God is not powerful enough, or wise enough, to have created the world other than as it is. Not even an infinitely powerful, infinitely intelligent deity could have engineered a universe with different natural laws or conditions than ours, so as to permit self-aware living beings but exclude the earthquakes caused by plate tectonics. This amounts to a claim that it is logically necessary for earthquakes to accompany life, in the same way it is logically necessary that a triangle's angles sum to 180 degrees.

Similarly, the second argument amounts to a claim that it is logically impossible for human beings to be any different from what we are. Not even God could have created us with different dispositions, different characters, different natures. Human beings as we are, with all our faults and contingent peculiarities - our xenophobia, our emotional turmoil, our impulses to lust and violence, our often faulty grasp of cause and effect - are the only sentient creatures possible anywhere in all the limitless space of possibility. Truly, the infinity of possible worlds must be an impoverished infinity indeed in the theist's mind.

Even famous Christian apologists are willing to put sweeping limitations on God's power when theologically convenient. C.S. Lewis did the same thing in The Problem of Pain, claiming that this world is the only one God had the power to create, that he could not have made it any different, and that even God could not think of a way to allow life and free will without also allowing random disaster and catastrophe:

Try to exclude the possibility of suffering which the order of nature and the existence of free wills involve, and you find that you have excluded life itself...

...With every advance in our thought the unity of the creative act, and the impossibility of tinkering with the creation as though this or that element of it could have been removed, will become more apparent.

For people who believe in God, these theists don't give him much credit. They presume that God has no more imagination or knowledge than they do, and that since they can't think of any world better than our own, he couldn't either. Like Dr. Pangloss in Voltaire's famous satire Candide, they blithely assume that this must be the best of all possible worlds, not subject to improvement in any way.

Admittedly, absurd though it is, this conclusion does follow rationally from their own strained premises. Since Christians start with the assumption that God is all-powerful and good, they logically infer that he would not have created anything less than the best possible world. But that conclusion runs smack into the manifest imperfection of the actual world.

By contrast, atheists who are not bound by theological preconceptions can readily imagine ways in which an omnipotent being could have crafted better worlds than our own. (I listed just a few possibilities last March in "Improving on God's Handiwork"). This may relate to the common theme of fundamentalists fearing sci-fi and fantasy writing - it may well be that the exercise of imagining worlds different from ours is a dangerous path for these believers' tightly circumscribed imaginations to start down.

January 17, 2008, 10:31 pm • Posted in: The Library

Opting Out

Humans are communal creatures, and we have been ever since we roamed the African savannas. Our greatest evolutionary advantage is our intelligence, but even the world's greatest genius would probably find that to be little help if forced to survive in total isolation. Intelligence is inherently a social adaptation; it works best among groups that can share ideas, pass down knowledge, and brainstorm solutions to problems.

Since we have always lived in clans and tribes, it's not surprising that the decisions of those around us exert such a potent pull on our own process of reasoning. As the Asch conformity experiment showed, humans are in a sense designed to go along with the group. When every member of your tribe is going one way, your brain is set up to believe they all probably know something you don't. Conversely, the rare individuals who resist peer pressure and defy the group consensus are likely to attract angry reactions.

Nowhere is this peer pressure more evident than in religion. People often seem incredulous, even angry, when an atheist answers the question "What religion are you?" with a cheery "None of the above, thanks" - as if the mere existence of atheism were a personal affront to believers.

I don't find it at all surprising that a declaration of nonbelief is often met with such hostility. After all, religion is based on faith, not facts, and nothing is more threatening to a faith-based consensus than reasonable dissent. Even though evidence is lacking, believers can persuade themselves that they are in the right simply by banding together to reassure one another, listening only to each other's supportive voices. But the mere existence of dissent threatens to undermine this herd mentality and bring unwelcome rays of reason spearing into the darkness of conformity. It forces them to think about the possibility of error, which they would otherwise not have had to confront, and to realize just how fragile their groupthink consensus may be. It's no surprise, then, that the less persuasive the evidence is, the shriller and more insistent the voices demanding conformity will always be.

But the angry demands to conform exert their pull in the opposite direction as well. People naturally seek to be part of an in-group, a community of like-minded individuals where they can fit in and be welcomed. When the overwhelming majority of society is religious, that can be very difficult, and the fatigue of constantly resisting can begin to take its toll. That's why, if we're not to abandon our principles, we need to hold firm in the face of peer pressure - and even practice doing so. Like many other things, this is a skill that can be learned and reinforced through practice.

This is why I believe atheists should speak their minds whenever it is practical to do so. By learning to express our views, even on issues that aren't especially important, we break down the harmful habit of self-censorship and make it more likely that we'll speak up on the issues that truly matter.

Experiencing peer pressure, however, is by no means a problem limited to atheists. It's only because we're currently in the minority that we bear the brunt of it, but there's no reason why we cannot form a community of freethinkers that gives us greater license to speak our minds freely and choose as we wish. And peer pressure works both ways. Consider the experience of a Christian who went to see a showing of Brian Flemming's documentary The God Who Wasn't There:

after the movie, there was a short Q&A with the director. (the first showing was followed by a panel of theologians & scholars discussing and taking questions. but we just got the director). i was expecting a range of emotions from the people who stood up to ask questions, but was surprised when the overwhelming majority said: thank you for making this film.

one man was especially memorable. he was old, gray hair and shabby pants. he was sitting right in front of me. he stood up and said, "I want to thank you for making this film. I am a son of a missionary, was raised baptist, and it took me 60 years to break free." More gratitude, more gushing.

another man made a very similar comment: "It's taken me 70 years, but I can finally say: I do not believe in God." This greeted with applause that pierced my heart and shook my insides.

i felt alone in that auditorium...close to tears and aching. ryan made the point that it was like being on the outside of an inside joke...which I imagine is how many non-christians have felt their whole lives- in a circle of believers using "churchy" terminology or when they visit a church, feeling like they don't belong, that they don't get the joke, or worse: that they are the joke.

we were the joke tonight.

The author of this piece perceptively notes that the feeling of exclusion and isolation she experienced at this gathering is very much like what many atheists feel on a daily basis. This doesn't mean that we should seek to become the majority so we can turn the tables and subject all theists to similar discrimination. But such social force, if it were at our disposal, could prove very useful in pressuring the truly intolerant and dangerous believers whose immoral views have persisted into modern times.

September 20, 2007, 7:27 am • Posted in: The Loft

The Asch Conformity Experiment

Solomon Asch. "Opinions and social pressure." Scientific American, vol. 193, no. 5 (1955), pp. 31-35.

Back in April, I wrote about the classic Milgram experiment and what it shows about how disturbingly willing people are to submit to authority, even in the presence of strong countervailing reasons. What about when the pressure to obey comes not from an authority figure above us, but from our peers? How will people fare then?

A classic study of this question was published in 1955 by Solomon Asch. In Asch's experiment, eight participants were shown into a room and seated in a row, in full view of one another. In actuality, there was only one real subject; unbeknownst to that person, the other seven people were confederates of the experimenter.

The experimenter, standing at the front of the room, put up a card with a single reference line on it, and then another card with several lines of varying lengths. He then asked the participants to state out loud which of the lines on the second card was the same length as the reference line. The lines differed dramatically in length, so it was obvious at a glance which two matched. But on certain prearranged trials, the seven confederates, all of whom answered before the subject, unanimously gave the same wrong answer.

What would you do in a circumstance like this? It seems obvious what the correct answer is. Yet everyone around you disagrees, which raises the unsettling possibility that the fault is yours. Would you stick to your guns, insisting on following the evidence despite strong social pressure to conform? Or would you give in and assume that maybe everyone else knows something you don't?

In control experiments where the element of social pressure was not present, participants selected the wrong line less than 1% of the time. But when group pressure was applied, the results changed dramatically. For all trials combined, participants gave the wrong answer when pressured to do so approximately 37% of the time, and as many as 75% of people went along with the majority at least once.

These are disturbingly high numbers. But Asch's experiment, if anything, underestimates the pressure to conform in the real world. It was carried out in the highly artificial environment of the laboratory, where the right answer was unambiguous and extremely obvious - potentially giving independent thinkers a stronger motivation to resist peer pressure. In real-life situations where the answers are not so simple, it may be much harder not to go along with the crowd. Perhaps even more important, the confederates in the laboratory experiment were strangers, bearing no special relation to the participant. What if, instead, they were members of the participant's in-group - relatives, friends, citizens of the same country, members of the same race or the same religion? Under such circumstances, it may be far more difficult to withhold one's allegiance from the group, and it's not hard at all to understand how many people could be coerced into stifling their doubts and joining the majority for fear of being the odd one out.

The Asch experiment has been repeated in modern times as well (source), this time with brain-imaging technology that allows researchers to watch which parts of the brain become active in these situations. The results: when participants faced a conflict with the group, the most active areas of the brain were those having to do with spatial perception, not conscious judgment. The researchers suggested that the pressure of group conformity may literally change what a person sees - although, to be fair, an equally plausible explanation is that participants mentally rechecking their answers to see if they had made a mistake would also cause increased activity in the spatial regions of the brain.

However, these findings were not all bad news. In Asch's original experiment, about one-quarter of participants refused to be swayed, always rejecting the wrong answer no matter how many confederates endorsed it. These nonconformists may well be the ones who seed movements of social change in the real world, rejecting widely held prejudices and inspiring something new and better. (It would be interesting to redo this study and find out whether membership in this group correlates with atheism and freethought.)

But the Asch conformity experiment gives another, more substantial reason for hope. In a slight variation of the experiment, where just one confederate gave the right answer, the rate of compliance with the majority plunged dramatically. Evidently, seeing even one other person stand by you is enough to give most people the courage to resist social pressure.

This is why it is so important for atheists to speak out. We may be accused of "preaching to the choir", but in our case, we have a choir that needs to be preached to. The societal prejudices against atheism still run so strong that many nonbelievers are coerced into remaining anonymous, staying silent about their views or even pretending to be theists so as not to be discriminated against. In an atmosphere of intense hostility and bias, this is not hard to understand at all. But by making ourselves visible, by speaking out for atheism in a strong voice, we may well give many of these closeted nonbelievers the motivation they need to speak out and make their own voices heard in turn.

August 23, 2007, 7:37 am • Posted in: The Observatory

The Milgram Obedience Experiment

Stanley Milgram. "Behavioral study of obedience." Journal of Abnormal and Social Psychology, vol. 67, no. 4 (1963), pp. 371-378.

In the 1960s, Stanley Milgram conducted one of the most important experiments ever done on human psychology and social conformity. For ethical reasons, this study probably could not be repeated today, but that only makes it more important to raise awareness of its findings.

Milgram, a psychology professor at Yale University, recruited 40 male subjects of diverse occupations and educational levels from the surrounding area. When they arrived at the laboratory, he told them that they would be participating in a study on the role of punishment in learning, and that they might be either the "teacher" or the "learner", based on a random draw. In fact, the draw was rigged. The participant was always the "teacher", while the "learner", who was introduced as a fellow participant, was secretly a confederate of Milgram's.

The two participants were introduced. Then, in view of the teacher, the learner was strapped to a chair and had an electrode placed on his wrist. The teacher was then ushered into an adjacent room and shown what he was told was an electric shock generator. It had 30 clearly marked levels, beginning at 15 volts and proceeding by 15-volt increments to a maximum of 450 volts. Each group of four switches was also given a verbal label, ranging from "Slight Shock" at the low end to "Danger: Severe Shock" near the top; the two highest levels were simply labeled "XXX". The subject was told that the generator was connected to the electrode on the learner's wrist. In reality, the generator was a dummy, wired to flash a light and move a voltmeter needle when a switch was pressed, but otherwise to do nothing. The subject was also told that the shocks could be extremely painful, but could not cause any permanent tissue damage or injury.
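To make the layout of the apparatus easier to picture, here is a minimal sketch in Python that generates the panel as described: 30 switches in 15-volt steps, verbal labels assigned to groups of four switches, and the two highest switches marked "XXX". Only "Slight Shock", "Danger: Severe Shock" and "XXX" are named in the description above; the intermediate group labels in this sketch are placeholders, not Milgram's actual wording.

# Sketch of the shock generator's panel as described in the text:
# 30 switches, 15-volt increments, labels applied to groups of four switches,
# with the two highest switches marked "XXX". Only "Slight Shock",
# "Danger: Severe Shock" and "XXX" come from the text; the rest are placeholders.
group_labels = [
    "Slight Shock",           # group 1 (named in the text)
    "(label not specified)",  # group 2
    "(label not specified)",  # group 3
    "(label not specified)",  # group 4
    "(label not specified)",  # group 5
    "(label not specified)",  # group 6
    "Danger: Severe Shock",   # group 7 (named in the text)
]

panel = []
for i in range(30):                                      # switch index 0..29
    volts = 15 * (i + 1)                                 # 15, 30, ..., 450
    label = "XXX" if i >= 28 else group_labels[i // 4]   # top two switches get "XXX"
    panel.append((volts, label))

for volts, label in panel:
    print(f"{volts:3d} V  {label}")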

The experiment was a simple exercise in matching words on a list, with the learner signaling an answer via switches that lit up one of four lights on a display in the teacher's room. Every time a wrong answer was given, the teacher was instructed to give the learner a shock and move the machine's intensity setting up by one level. The experiment was designed so that the teacher would have the opportunity to proceed through the full range of shocks. As prearranged by Milgram, upon reaching the 300-volt level, the learner would pound on the wall separating the two participants, and from that point on would no longer answer the questions. The subject was instructed by the experimenter, who remained in the room with him, to treat the absence of a response as a wrong answer and to continue with the experiment. If the teacher objected, the experimenter answered from a fixed list of replies, such as, "Please continue," "The experiment requires that you continue," or, "You have no choice, you must continue." Only if these prods could not persuade the subject to obey was the experiment terminated before reaching the highest shock setting.

Before the study was run, 14 Yale psychology majors to whom the experiment was described in advance predicted that only between 0 and 3% of subjects would obey the experimenter all the way through to the end and administer the most potent shock. In fact, of the forty subjects, twenty-six - an astonishing 65% - went all the way through to the end, administering what they had every reason to believe were dangerous shocks to a participant who, by that point, had clearly expressed a wish to stop and had subsequently become unresponsive.

As the experiment proceeded, many of the subjects exhibited signs of extreme stress: sweating, trembling, biting their lips and digging their fingernails into their palms. Some of them expressed concern about the learner or denounced the experiment as stupid, senseless or crazy. Yet they still continued to obey the experimenter and administer the shocks.

From the subject's perspective, after the 300-volt level it was a reasonable inference that the learner had stopped answering because he had become incapacitated, and that he was at serious risk of injury or death if the experiment were to continue. Yet 26 out of 40 subjects continued on regardless, pressing the switches at the experimenter's command to deliver shocks of up to 450 volts (the "XXX" level) to a person who had by then been unresponsive for as many as ten questions in a row. Only 14 of the 40 defied the experimenter's commands and terminated the experiment at some point before reaching the end.

The Milgram study teaches us much about the dark side of human psychology: the ease with which we come under the sway of authority, and the willingness of many people, when ordered to do so, to suspend ordinary standards of morality and conscience and hand responsibility for their actions over to someone else. Even ordinary, ethical people, who in most imaginable circumstances would never dream of harming an innocent stranger, seem disturbingly susceptible to this flaw.

It is not hard to see in this experiment echoes of the Nazi foot soldiers who committed unspeakable crimes and then claimed, when brought to justice, that they were "just following orders" - the so-called Nuremberg Defense. And ironically, this study shows that those claims may well be true. As much as we might like to believe that those who commit such evil deeds have some intrinsic character flaw, some fundamental defect that makes them not like us, the truth is that acts of great evil can be committed by seemingly normal, ordinary people. Of course, we naturally want to deny this lesson because it means that we, ourselves, might also be capable of such acts under the right circumstances, and this is a disquieting conclusion.

However, even if the Nazis' claim that they were just following their leaders' orders were true, it would not excuse what they and others like them did. Morality would be thoroughly worthless if we exempted people from its dictates whenever they fell short due to human fallibility. Instead, its purpose is to serve as a standard for us to live up to, a counterweight to blind obedience to authority. And, do not forget, some people did refuse to obey and terminated the experiment early. Although authority can be a powerful influence on us, we are not helpless against it.

More importantly, by simply being aware of the Milgram study and its implications for our behavior, we can change that behavior and more effectively resist the undesirable tendencies bred into us by evolution. Knowledge of experiments like Milgram's is itself a causal factor that can influence people's actions and cause them to choose differently than they otherwise would have. Who, knowing about this study, would not think twice if they ever found themselves in a similar situation? This fits in with what I have said previously: merely by studying and learning about our limitations, we gain the tools to overcome those limitations and become more rational and more responsible human beings.

April 25, 2007, 7:37 pm • Posted in: The Observatory

Book Review: The God Part of the Brain

(Author's Note: The following review was solicited and is written in accordance with this site's policy for such reviews.)

Summary: Contains many interesting ideas, but the informed reader will find much to take issue with.

Atheist Matthew Alper's The God Part of the Brain seeks to explain the religiosity of humankind in terms of human evolution and the biology of conscious experience. Alper's hypothesis is that the increased intelligence that gave human beings an evolutionary advantage also gave us the ability to foresee our own inevitable deaths. To prevent people from becoming debilitated by this knowledge, evolution counteracted death anxiety by instilling in us a biological predisposition to believe in gods, a soul, and an afterlife. Now that we understand why we believe in these things, he argues, there is sufficient evidence to conclude that they are all just cognitive illusions and none of them are real.

Although this book contains many attention-getting ideas, I believe the skeptical, knowledgeable reader will find many good reasons to doubt its thesis. Alper has no formal scientific training that I know of, and is a layman when it comes to biology; and it shows. His conclusion that religious belief is genetically hardwired into the entire human species, so that belief in God is a human trait as natural and universal as language or walking upright, is far too sweeping. Not nearly enough in the way of evidence is presented to support it. Other than a brief, footnoted reference to a single twin study, his entire line of argument rests on the assertion that belief in gods, an afterlife, and a spiritual realm is found in every human culture, even if the specifics of that belief differ, and that the only explanation for this is that such belief is a genetically programmed instinct.

Since this is where Alper begins, this is where I will begin as well. It is not the case that every human culture since the dawn of time has believed in a dualistic, Platonic conception of reality. Here is how he puts it:

...every human culture has perceived reality as consisting of two distinct substances or realms: the physical and the spiritual.

...every culture has maintained a belief in some form of a spiritual reality. As this realm transcends the physical, things comprised of spirit are immune to the laws of physical nature, to the forces of change, death, and decay. Things therefore which exist as a part of the spiritual realm are subsequently perceived as being indestructible, eternal, and everlasting. (p.3)

While reading this passage, I immediately thought of a counterexample: Buddhism. Contrary to Alper's claims, Buddhism generally does not teach that there is a distinct substance called "spirit" that is immune to the laws of physical decay. On the contrary, the core Buddhist tenet of anatman (literally "no-soul") teaches that human minds, far from being the imperishable ghost in the machine that Western religions envision, are made up only of mutable aggregates called skandhas that are mistakenly identified as a permanent self. The belief that the self is immutable and permanent is one of the fundamental ideas Buddhism teaches against, regarding it as a delusion that lies at the root of the suffering people experience. Buddhism generalizes this principle into the belief that all things are transient and impermanent. As explained on this site:

The one great law of the universe, then, is change. Phenomena come into being, mature and disappear. They are the result of conditions; when the conditions change, they also change or disappear. Even those things which appear as permanent are impermanent. Entire universes come into being, mature and disintegrate. Buddhism does not recognize a primal cause, nor does it recognize the existence of a permanent, unchangeable substance in anything. Rather, it sees all things as constantly changing, as conditionally created.

Alper's understanding of Buddhism is seriously lacking. Several times, he mentions the Buddhist concept of nirvana, but speaks of it as if it were equivalent to the afterlife in the Western religions, a place where the immortal souls of the deceased go to dwell. Again, this is a gross mischaracterization of Buddhist teaching, which regards nirvana as a state of non-existence, insofar as it can be described in words at all. In fact, the word literally means "extinction".

Other examples could be adduced - the ancient Greek Atomists, some forms of Judaism - to show that, contrary to Alper's claims, not all cultures or religions have believed in an immortal soul and a spiritual afterlife. The basic point is that the fundamental claim underlying all his assertions - the supposed universality of human belief in the spiritual - simply is not true.

I also find fault with Alper's scientific claims, specifically his claim that the only way to explain a universal or near-universal human belief is as a hardwired adaptation. Granted, in non-intelligent, non-sentient species that live their lives propelled entirely by instinct, it is sound to infer that any universally observed behavior is dictated by genes. But human beings are obviously not such a species, and it is here that Alper's analogies between human religion and planarians turning toward light fall short. In addition to instinct, we have a wholly new level of mental and cultural complexity not shared by other species, and this undermines any simplistic claim that all our behaviors must be programmed into our genes.

Consider a parallel case. All human cultures have also worn clothes, in some form or another. Does this mean that clothes-wearing is also hardwired into us, programmed in our genes? Do the As, Ts, Cs and Gs of our DNA spell out instructions on how to cut and stitch a pair of pants, somewhere on our chromosomes? Are cultures that prefer robes, kilts or togas made up of mutants carrying an alternative allele of the clothes-wearing gene?

As any reputable biologist would agree, this is plainly absurd. There are very good cultural reasons why people wear clothes, including protection from the elements, societal notions of modesty, the desire to attract the opposite sex, and displays of social status. This commonality can be accounted for by basic, general similarities in the architecture of the human mind, and does not require elaborate scenarios postulating a specific selective advantage for early clothes-wearers. As compared to the null hypothesis, the claim that there exists a specific "clothes-wearing" gene is a positive assertion and as such takes on the burden of proof. Without empirical evidence to support such an idea, it becomes nothing more than a speculative "just-so" story, an example of armchair theorizing unsupported by the facts.

If religion is not a hardwired instinct, how else can its prevalence be explained? There are three alternatives:

1. The memetic explanation (adaptive): Religion is not hardwired in our genes, but has spread and become universal because it offered an advantage to human cultural groups that practiced it - societal cohesion and cooperation, willingness to sacrifice oneself in war, the establishment of law and order through divine-command morality, or whatever else - and groups that did not have this advantage were unable to compete with those that did, and eventually died out.

2. The spandrel explanation: Religion per se is not hardwired in our genes, but is an accidental byproduct of some other beneficial adaptation that evolution selected for in our species' past, such as the propensity to participate in dominance hierarchies, the desire to seek cause-and-effect relationships in the world, or the urge to anthropomorphize natural phenomena we do not understand. (See Daniel Dennett's Breaking the Spell for a run-down of these possibilities.)

3. The memetic explanation (parasitic): Religion is not hardwired in our genes, but has spread and become universal because doing so is advantageous to the religious memes themselves. In this explanation, religion is like a common cold virus, evolving in ways that improve its own propagation, even if this has deleterious effects on the human beings who act as its hosts.

Note, also, that these explanations are not mutually exclusive. Like most complex natural phenomena, religion probably has multiple underlying causes, and the true explanation will almost certainly involve all of them to some degree. Personally, I lean towards a combination of 2 and 3, with a dash of 1. I do not, however, believe that religion is genetically hardwired into us, or that it would have conferred any adaptive advantage on humanity if it were.

Alper's hypothesis is an extreme version of genetic determinism: any cultural behavior that is widely or universally practiced must be dictated by genes that force us to instinctually behave in that way. No mainstream biologist or evolutionary psychologist that I know of holds to such a strong version of this idea, not even Richard Dawkins, who has been derided as an "ultra-Darwinian" by his critics.

There is another obvious counterexample to this claim: if Alper is correct, how could there be such people as atheists? He offers two possibilities to explain this. The first is that, like most genetic traits, religiosity exhibits a range of variation, and some people will be born with more or less capacity for it than others:

...there are those we might call spiritually/religiously deficient, those born with an unusually underdeveloped spiritual/religious function.... These are society's spiritually retarded, if you will, or, in keeping with the musical metaphor, those we might call spiritually tone deaf. (p.183)

This hypothesis does have one highly testable implication: there should be a genetic difference between theists and atheists. If it is true that religious belief is a preprogrammed genetic instinct that counteracts the otherwise unbearable knowledge of mortality, it should follow that people who lack religious belief, yet are not crippled by dread, must have a different gene that enables them to cope in some other way. I strongly doubt any study will ever find such a thing, but if one did, it would be compelling evidence in support of Alper's thesis.

But then there is another problem: what about people who start out as theists - often adherents of very intense, fundamentalist forms of theism, which Alper says lie at the opposite end of the bell curve from atheism - and later deconvert and become atheists? There is no shortage of such stories. Are we to believe that these people's genes changed during their lifetimes? Obviously not.

Alper's suggestion is that these people's innate proclivities toward religion may have "atrophied", or that they have "chosen to suppress" them (p.183) - but if this is possible, it undercuts his entire hypothesis and throws its falsifiability into serious question. Alper's entire point is that the knowledge of one's future death is such a horrifying and debilitating awareness that people lacking a spiritual part of the brain literally could not survive and were driven to extinction (he says the knowledge was "jeopardizing our very existence" (p.183)). But now he implies that people can suppress this tendency without serious repercussions?

In a later chapter, the book puts forth another inventive hypothesis, albeit one that strikes me as highly unlikely to be true. It suggests that America's high degree of religiosity, compared to most other First World nations, is due to a founder effect: most of the early immigrants were religious devotees fleeing persecution, who brought their genetic tendency toward dedicated religious practice to their new nation. If this were the case, how would we account for the fact that New England - settled by the Puritans, one of the most fanatically religious of all America's immigrant groups, as Alper himself documents - is today relatively secular, while the Bible Belt South, originally founded for economic profit, is not? I suspect, again, that the reason for the United States' religiosity is cultural rather than genetic: the Constitution's guarantee of a secular government has created a spirit of free-market competition among faiths, in contrast to the established European churches that grew complacent and apathetic from a lack of competition.

There is one more point I have to comment on. Despite being an atheist, and despite proclaiming his confidence in science as the only truly effective method of understanding the world, Alper at one point makes a truly bizarre philosophical claim that contradicts much of what he himself has said:

As all of our perspectives are relative, no species, nor any individual within a species, can ever claim that its interpretation of reality constitutes any absolute truth... just as flies possess fly "truths," humans possess human "truths," neither being any more genuine or "real," just different. (p.226)

How can this not be read as a repudiation of everything he has spent the previous two hundred pages arguing? If different claims to truth are merely a matter of opinion and there is no way to determine which is more accurate, then his claims that evolution has given us a propensity to believe in God should also be viewed as mere opinion, no more valid than any alternative possibility.

This sloppy thinking is, unfortunately, all too characteristic of the book. There are some interesting nuggets of information to be had, such as its citation of a deliciously ironic study showing that religious fundamentalists, not atheists, are the ones who have often had stressed and difficult relationships with their fathers. But its major argument is little more than armchair philosophizing, lacking substantial evidentiary confirmation and contradicted in important ways by much of the evidence we do have.

April 1, 2007, 10:30 am • Posted in: The Library

Priming the Mind

In a previous post on Daylight Atheism, "On Presuppositions" (all the way back in February 2006!), I wrote about how subconscious biases and prejudices, instilled in us by our culture and surroundings, can exert a disturbingly measurable effect on our behavior. But there is more to this story that deserves to be told. That post dealt with persistent biases - the kind that are reinforced frequently enough, over a long enough period of time, to become lasting aspects of our mental state and behavior. The evidence shows, however, that even brief exposure to snippets of information can measurably affect the way we act, at least for a short time.

Turning again to Malcolm Gladwell's book, Blink:

In front of you is a sheet of paper with a list of five-word sets. I want you to make a grammatical four-word sentence as quickly as possible out of each set. It's called a scrambled-sentence test. Ready?

  1. him was worried she always
  2. are from Florida oranges temperature
  3. ball the throw toss silently
  4. shoes give replace old the
  5. he observes occasionally people watches
  6. be will sweat lonely they
  7. sky the seamless gray is
  8. should not withdraw forgetful we
  9. us bingo sing play let
  10. sunlight makes temperature wrinkle raisins

That seemed straightforward, right? Actually it wasn't. After you finished that test - believe it or not - you would have walked out of my office and back down the hall more slowly than you walked in. With that test, I affected the way you behaved. How? Well, look back at the list. Scattered throughout it are certain words, such as "worried," "Florida," "old," "lonely," "gray," "bingo," and "wrinkle." You thought that I was just making you take a language test. But, in fact, what I was also doing was making the big computer in your brain - your adaptive unconscious - think about the state of being old. It didn't inform the rest of your brain about its sudden obsession. But it took all this talk of old age so seriously that by the time you finished and walked down the corridor, you acted old. You walked slowly. (p.53)

Studies like this are the work of a New York University psychologist named John Bargh, whose research Gladwell describes at length. In another variation of this test, the "priming" words came from one of two sets: either words like "aggressive", "rude", "disturb" and "infringe", or words like "respect", "considerate", "patiently" and "courteous". After the sentence-completion exercise, participants were asked to walk down the hall to another office to get their next assignment. However, a confederate in the experiment was blocking the doorway, posing as a confused student talking to the teacher. The question was whether the people primed with "rude" words would interrupt more quickly than the people primed with "polite" words, and they did - by a huge margin. The "rude" group interrupted after five minutes, on average. The overwhelming majority - 82% - of the "polite" group never interrupted at all, waiting out the full ten minutes Bargh had arranged in advance for the phony conversation to last.

The priming phenomenon has been studied and rediscovered numerous times, in a variety of different contexts. A New York Times article from last November, "Just Thinking About Money Can Turn the Mind Stingy", described a study in which another scrambled-sentence test, this one containing words such as "money" and "salary", temporarily made participants less willing to ask others for help and stingier about giving it. (The article errs, however, in claiming that there is no precedent for this work.) The priming even caused participants to unconsciously place more physical distance between themselves and others - a small-scale demonstration of what I have previously said about the isolating effect of wealth.

Another manifestation of priming has been dubbed the "Lady Macbeth effect": asking subjects to recall an unethical act makes them more likely to complete word-fragment tests with words that suggest cleanliness:

In one set of tests, the researchers asked participants to recall an ethical or unethical act, and then asked them to fill in the missing letters in a series of incomplete words, like W_ _H and SH_ _ER. Those subjects who had recalled unethical acts mostly returned WASH and SHOWER, while the others returned a variety of words, like WISH and SHAKER.

And priming subjects with the feeling of being watched, even by eyes that they obviously know are not real, causes them to behave more honestly:

During the weeks when the eyes poster stared down at the coffee station, coffee and tea drinkers contributed 2.76 times as much money as in the weeks when flowers graced the wall.

...Roberts says he was stunned: "We kind of thought there might be a subtle effect. We weren't expecting such a large impact."

Indeed, this is a recurring theme in priming experiments. While scientists frequently expect the effect to be small and barely measurable, they are routinely shocked by how large and obvious it is. It should be stressed, however, that in all these studies, none of the subjects could offer a conscious rationale for their altered behavior when asked to do so. In most cases, they were not even aware that their behavior had been altered.

But priming does not just affect our behavior. It can also, incredibly, have significant effects on our actual performance. In another study cited by Gladwell, two groups of students were asked to answer forty-two questions from the game Trivial Pursuit. Before the quiz, one group was asked to sit and think about professors, the other to think about soccer hooligans. The difference was dramatic: the first group answered an average of 55.6% of the questions correctly, while the second answered only 42.6% correctly.
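
To put those percentages in concrete terms, here is a quick back-of-the-envelope check (a minimal Python sketch; the only inputs are the figures Gladwell reports, and the rounding is my own):

```python
# Back-of-the-envelope conversion of the reported percentages
# into approximate numbers of correct answers out of 42 questions.
TOTAL_QUESTIONS = 42
PROFESSOR_RATE = 0.556   # group primed to think about professors
HOOLIGAN_RATE = 0.426    # group primed to think about soccer hooligans

professor_correct = PROFESSOR_RATE * TOTAL_QUESTIONS  # about 23.4 questions
hooligan_correct = HOOLIGAN_RATE * TOTAL_QUESTIONS    # about 17.9 questions

print(f"Professor-primed group: ~{professor_correct:.1f} of {TOTAL_QUESTIONS} correct")
print(f"Hooligan-primed group:  ~{hooligan_correct:.1f} of {TOTAL_QUESTIONS} correct")
print(f"Gap: ~{professor_correct - hooligan_correct:.1f} questions")
```

In other words, the priming manipulation was worth roughly five to six additional correct answers on a forty-two-question quiz.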

How can these results be explained? Can priming somehow temporarily infuse the brain with an expanded store of facts? Obviously not. Instead, what seems to be happening is that encouraging people to "think smart" briefly increases mental qualities like their ability to focus, their sense of recall, and their ability to quickly and correctly integrate diverse pieces of information. In other words, priming does not make us more intelligent, but it does briefly make us better at using the intelligence we already have.

But disturbingly, this effect can work in the opposite direction as well. Most uncomfortably of all, it seems that the mere mention of race can evoke some of history's ugliest and most pernicious stereotypes. Gladwell explains:

The psychologists Claude Steele and Joshua Aronson created an even more extreme version of this test, using black college students and twenty questions taken from the Graduate Record Examination, the standardized test used for entry into graduate school. When the students were asked to identify their race on a pretest questionnaire, that simple act was sufficient to prime them with all the negative stereotypes associated with African Americans and academic achievement - and the number of items they got right was cut in half (p.56).

Another example, as reported in this article:

Margaret Shih, a Taiwanese-American who is a psychology professor at the University of Michigan, wholeheartedly agrees that positive stereotypes often have a darker flip side. As a Harvard University graduate student, she helped administer a mathematics test to Asian-American women. During the preparations, some were subtly reminded that they were Asian, others that they were women. Nothing was said about race or gender to a third, control group. Those branded as Asians fulfilled the positive stereotype that Asians are whizzes at math. They did much better on the test than the control group. Those whose gender was emphasized met a negative stereotype - that women do poorly at math.

In spite of all that our society has done to eradicate these shameful stereotypes, it is very disturbing to see how much power they apparently retain. Even though women and minorities are no less intelligent, the internalized presence of stereotypes at even a subconscious level can become a self-fulfilling prophecy. What can a person concerned with equality and tolerance do to defeat this harmful effect?

Happily, there is an answer to this question. We are not helpless in the face of external influences on our behavior; although we are not perfectly rational agents, we are rational enough to rein in the irrational excesses of our actions. Just as I wrote in the previous post on presuppositions, the way we overcome these influences is to recognize them and consciously resist or compensate for them. The only kind of influence that truly makes us less free is the one we are not aware of.

The priming effect is a perfect example. Like many illusions from fantasy and mythology, once it is perceived and recognized for what it is, it ceases to exist. Specifically, if we are aware of the phenomenon of mental priming and know that it is being used on us, the effect disappears. As Gladwell writes, "Once you become conscious of being primed... the priming doesn't work" (p.54).

This, then, is how we battle negative stereotypes and other undesirable manifestations of the priming effect: by consciously recognizing them and rejecting them, or better yet, counteracting them with positive evidence to the contrary. Unfortunately, due to the nature of the effect, we cannot consciously prime ourselves to do better at our tasks. But we can end the effect entirely and succeed based on our true skills and ability, and surely this is at least a second-best outcome.

The priming effect undoubtedly explains a great deal of the persuasive power of advertising. Though it would seem, rationally, that a fact-free thirty-second snippet offering no real reasons to prefer a product to its competitors could not possibly affect one's behavior - and at a rational level, it very probably does not - it may well be that exposure to advertising can make the viewer more likely to purchase that product if they should come across an opportunity to do so soon afterward. It is ironic that people fear subliminal advertising (the effectiveness of which has never been reliably demonstrated), when the evidence suggests that ordinary, consciously perceived advertising is more than enough to subconsciously affect behavior in the way that subliminal ads have long been feared to do. But, again, this fearsome manipulative power can only work on people who are passive and unaware.

January 11, 2007, 9:36 pm • Posted in: The Observatory
