Blog Explanation

This blog brings together content that is notable, important or otherwise interesting from a human givens point of view.

Sunday 22 January 2012

The power of metaphor From All In The Mind (BBC Radio 4, 5.07.11)



Lera Boroditsky talks to Claudia Hammond about the power of metaphor to change what we think.

Claudia Hammond (CH): In 1990 a schoolgirl was attacked in Buffalo, New York. Before the attacker was captured 15 months later, another 10 girls had been assaulted by the same man. And some scholars believe that these girls were victims not only of crime but also of the way we use language to frame a problem – that the police saw their role as hunting the man down in secrecy, rather than taking steps to protect the community from him while he was still at large. Now new research from Stanford University has found that something as simple as describing crime as a ‘beast’ or as a ‘virus’ can change the way we think about crime and the solutions we suggest to tackle it. But if simple words can make such a difference, what implications does this have for the social policy decisions that affect us all? The author of the research is Assistant Professor of Psychology, Lera Boroditsky. In a minute we’ll hear more about her specific work on metaphor and crime. But before that I asked her how, more broadly, language can affect both what we think and what we know in everyday situations.

Lera Boroditsky (LB): Research from my lab and from many others has shown that language isn’t just a way of expressing your thoughts – it shapes the very thoughts you wish to express. In some cases, languages even give people skills or cognitive abilities that other people don’t have. One of my favourite examples comes from work on language and spatial abilities, navigational abilities. So there are folks around the world who, instead of using words like ‘left’ and ‘right’, use words like ‘north, east, south and west’. One consequence of speaking like this – always using ‘north, east, south and west’ – is you have to say things like “There’s an ant on your south-west leg” or “Pass the cup to the north-north-east a little bit”. In Kuuk Thaayorre [an Australian Aboriginal language], for instance, to say “Hello”, you say “Where are you going?” and the answer should be something like, “North-north-east in the far distance. How about you?” As a consequence, folks who speak languages like this are incredibly good at staying oriented. And even a five-year-old can point correctly south-east when asked, without hesitation. That’s something that most American college students or most American professors can’t do.

CH: And your recent work has been looking at metaphor and how people think about crime. Why did you pick crime?

LB: The answer is that I wanted to work on something that is a real-world problem. It started with the observation that political speeches are just suffused with metaphors, and sometimes with metaphors that really strongly reorganise one’s ideas.

CH: So in the lab you’ve been looking at the different ways crime might be described, comparing what happens if you talk about it as a virus or as a beast. What did you do there?

LB: In our studies we told people about an increasing crime situation in a fictional city and we gave them lots of statistics about crime. But half of the people were told that crime was a beast ravaging the city and the other half were told that crime was a virus ravaging the city. Both groups got the same factual information, but in each case they got this innocent little one-word metaphor: “It’s a virus” or “It’s a beast”. And what we predicted was that if people were thinking about crime as a virus, then their solutions to treating the crime problem would be similar to the way you would treat a virus: they might want to diagnose the problem, they might want to institute preventative measures – kind of inoculate the community, institute social reforms, maybe improve the educational system, things like that. Whereas if they thought that crime was a beast, they would try to treat it as if it were a real beast attack. So you would try to hunt it down, you would throw people in jail, you would enforce harsher sentences, things like this.

And this is exactly what we found. People who were told crime was a beast, given the ‘beast’ metaphor, were much more likely to come up with enforcement and punishment solutions to the problem. Whereas people who were given the ‘virus’ metaphor were much more likely to say “Let’s try to improve the economic situation in the community, let’s try to improve the educational situation” – kind of restore the health of the community so that the problem doesn’t continue.

CH: It seems extraordinary that even when you mention this word ‘beast’ or ‘virus’ just once, it had this big impact on what people gave as their solutions.

LB: What we found is that people assimilate information into their metaphorical frames, so even if it is just one word, it’s a very powerful word that activates a whole knowledge network, a whole framework of thinking; and all the further information people learn about the city, all the crime statistics, is kind of assimilated and shaped into the knowledge framework that’s activated by that word.

CH: And you found it only worked if the word was near the beginning, not at the end of the description they had about the crime problem. Were people aware that this was happening, that this was influencing them?  

LB: That was one of the most interesting parts of what we found. We asked people after they gave us their solution: “What influenced you in your solution, what made you give the answer that you gave?” And nearly no-one chose the metaphor as the thing that influenced them. People thought they were influenced by the statistics, by the objective facts, whereas they were giving very different solutions depending on the metaphor that they got. So the metaphor was invisible to them, they didn’t think it was important, and yet it had formed their whole impression of the situation.

CH: And this must have effects way beyond crime because metaphors are used so often. I mean you think about people talking about floods of migrants coming in, or influxes or invasions – all those sorts of words are going to bring up certain sorts of things for people, aren’t they?

LB: Absolutely. In fact it’s impossible to talk about anything that’s abstract or complex without using metaphor. Partially this is just because we have a very finite set of words in our language and we have an infinite set of things we want to talk about, so we constantly have to re-use words and expressions from old knowledge to talk about new stuff. So we’re inevitably using metaphor. But certainly, whenever we’re trying to conceptualise something complex and abstract, and any societal problem fits into this category, we’re using huge numbers of metaphors. And those metaphors really frame how we shape the issues and then how we try to solve them, how we act on them in the end.

CH: Lera Boroditsky from Stanford University

Friday 13 January 2012

The Optimism Bias: Why we're wired to look on the bright side by Tali Sharot


We like to think of ourselves as rational creatures. We watch our backs, weigh the odds, pack an umbrella. But both neuroscience and social science suggest that we are more optimistic than realistic. On average, we expect things to turn out better than they wind up being. People hugely underestimate their chances of getting divorced, losing their job or being diagnosed with cancer; expect their children to be extraordinarily gifted; envision themselves achieving more than their peers; and overestimate their likely life span (sometimes by 20 years or more).

The belief that the future will be much better than the past and present is known as the optimism bias. It abides in every race, region and socioeconomic bracket. Schoolchildren playing when-I-grow-up are rampant optimists, but so are grown-ups: a 2005 study found that adults over 60 are just as likely to see the glass half full as young adults.

You might expect optimism to erode under the tide of news about violent conflicts, high unemployment, tornadoes and floods and all the threats and failures that shape human life. Collectively we can grow pessimistic – about the direction of our country or the ability of our leaders to improve education and reduce crime. But private optimism, about our personal future, remains incredibly resilient. A survey conducted in 2007 found that while 70% thought families in general were less successful than in their parents' day, 76% of respondents were optimistic about the future of their own family.

Overly positive assumptions can lead to disastrous miscalculations – make us less likely to get health checkups, apply sunscreen or open a savings account, and more likely to bet the farm on a bad investment. But the bias also protects and inspires us: it keeps us moving forward rather than to the nearest high-rise ledge. Without optimism, our ancestors might never have ventured far from their tribes and we might all be cave dwellers, still huddled together and dreaming of light and heat.

To make progress, we need to be able to imagine alternative realities – better ones – and we need to believe that we can achieve them. Such faith helps motivate us to pursue our goals. Optimists in general work longer hours and tend to earn more. Economists at Duke University found that optimists even save more. And although they are not less likely to divorce, they are more likely to remarry – an act that is, as Samuel Johnson wrote, the triumph of hope over experience.

Even if that better future is often an illusion, optimism has clear benefits in the present. Hope keeps our minds at ease, lowers stress and improves physical health. Researchers studying heart-disease patients found that optimists were more likely than non-optimistic patients to take vitamins, eat low-fat diets and exercise, thereby reducing their overall coronary risk. A study of cancer patients revealed that pessimistic patients under 60 were more likely to die within eight months than non-pessimistic patients of the same initial health status and age.

In fact, a growing body of scientific evidence points to the conclusion that optimism may be hardwired by evolution into the human brain. The science of optimism, once scorned as an intellectually suspect province of pep rallies and smiley faces, is opening a new window on the workings of human consciousness. What it shows could fuel a revolution in psychology, as the field comes to grips with accumulating evidence that our brains aren't just stamped by the past. They are constantly being shaped by the future.

Hardwired for hope?

I would have liked to tell you that my work on optimism grew out of a keen interest in the positive side of human nature. The reality is that I stumbled onto the brain's innate optimism by accident. After living through 9/11 in New York City, I had set out to investigate people's memories of the terrorist attacks. I was intrigued by the fact that people felt their memories were as accurate as a videotape, while often they were filled with errors. A survey conducted around the country showed that 11 months after the attacks, individuals' recollections of their experience that day were consistent with their initial accounts (given in September 2001) only 63% of the time. They were also poor at remembering details of the event, such as the names of the airline carriers. Where did these mistakes in memory come from?

Scientists who study memory proposed an intriguing answer: memories are susceptible to inaccuracies partly because the neural system responsible for remembering episodes from our past might not have evolved for memory alone. Rather, the core function of the memory system could in fact be to imagine the future – to enable us to prepare for what has yet to come. The system is not designed to perfectly replay past events, the researchers claimed. It is designed to flexibly construct future scenarios in our minds. As a result, memory also ends up being a reconstructive process, and occasionally, details are deleted and others inserted.

To test this, I decided to record the brain activity of volunteers while they imagined future events – not events on the scale of 9/11, but events in their everyday lives – and compare those results with the pattern I observed when the same individuals recalled past events. But something unexpected occurred. Once people started imagining the future, even the most banal life events seemed to take a dramatic turn for the better. Mundane scenes brightened with upbeat details as if polished by a Hollywood script doctor. You might think that imagining a future haircut would be pretty dull. Not at all. Here is what one of my participants pictured: "I was getting my hair cut to donate to Locks of Love [a charity that fashions wigs for young cancer patients]. It had taken me years to grow it out, and my friends were all there to help celebrate. We went to my favourite hair place in Brooklyn and then went to lunch at our favourite restaurant."

I asked another participant to imagine a plane ride. "I imagined the takeoff – my favourite! – and then the eight-hour-long nap in between and then finally landing in Krakow and clapping the pilot for providing the safe voyage," she responded. No tarmac delays, no screaming babies. The world, only a year or two into the future, was a wonderful place to live in.

If all our participants insisted on thinking positively when it came to what lay in store for them personally, what does that tell us about how our brains are wired? Is the human tendency for optimism a consequence of the architecture of our brains?

The human time machine

To think positively about our prospects, we must first be able to imagine ourselves in the future. Optimism starts with what may be the most extraordinary of human talents: mental time travel, the ability to move back and forth through time and space in one's mind. Although most of us take this ability for granted, our capacity to envision a different time and place is in fact critical to our survival.

It is easy to see why cognitive time travel was naturally selected for over the course of evolution. It allows us to plan ahead, to save food and resources for times of scarcity and to endure hard work in anticipation of a future reward. It also lets us forecast how our current behaviour may influence future generations. If we were not able to picture the world in a hundred years or more, would we be concerned with global warming? Would we attempt to live healthily? Would we have children?

While mental time travel has clear survival advantages, conscious foresight came to humans at an enormous price – the understanding that somewhere in the future, death awaits. Ajit Varki, a biologist at the University of California, San Diego, argues that the awareness of mortality on its own would have led evolution to a dead end. The despair would have interfered with our daily function, bringing the activities needed for survival to a stop. The only way conscious mental time travel could have arisen over the course of evolution is if it emerged together with irrational optimism. Knowledge of death had to emerge side by side with the persistent ability to picture a bright future.

The capacity to envision the future relies partly on the hippocampus, a brain structure that is crucial to memory. Patients with damage to their hippocampus are unable to recollect the past, but they are also unable to construct detailed images of future scenarios. They appear to be stuck in time. The rest of us constantly move back and forth in time; we might think of a conversation we had with our spouse yesterday and then immediately of our dinner plans for later tonight.

But the brain doesn't travel in time in a random fashion. It tends to engage in specific types of thoughts. We consider how well our kids will do in life, how we will obtain that sought-after job, afford that house on the hill and find perfect love. We imagine our team winning the crucial game, look forward to an enjoyable night on the town or picture a winning streak at the blackjack table. We also worry about losing loved ones, failing at our job or dying in a terrible plane crash – but research shows that most of us spend less time mulling over negative outcomes than we do over positive ones. When we do contemplate defeat and heartache, we tend to focus on how these can be avoided.

Findings from a study I conducted a few years ago with prominent neuroscientist Elizabeth Phelps suggest that directing our thoughts of the future toward the positive is a result of our frontal cortex's communicating with subcortical regions deep in our brain. The frontal cortex, a large area behind the forehead, is the most recently evolved part of the brain. It is larger in humans than in other primates and is critical for many complex human functions such as language and goal setting. Using a functional magnetic resonance imaging (fMRI) scanner, we recorded brain activity in volunteers as they imagined specific events that might occur to them in the future. Some of the events that I asked them to imagine were desirable (a great date or winning a large sum of money), and some were undesirable (losing a wallet, ending a romantic relationship). The volunteers reported that their images of sought-after events were richer and more vivid than those of unwanted events.

This matched the enhanced activity we observed in two critical regions of the brain: the amygdala, a small structure deep in the brain that is central to the processing of emotion, and the rostral anterior cingulate cortex (rACC), an area of the frontal cortex that modulates emotion and motivation. The rACC acts like a traffic conductor, enhancing the flow of positive emotions and associations. The more optimistic a person was, the higher the activity in these regions was while imagining positive future events (relative to negative ones) and the stronger the connectivity between the two structures.

The findings were particularly fascinating because these precise regions – the amygdala and the rACC – show abnormal activity in depressed individuals. While healthy people expect the future to be slightly better than it ends up being, people with severe depression tend to be pessimistically biased: they expect things to be worse than they end up being. People with mild depression are relatively accurate when predicting future events. They see the world as it is. In other words, in the absence of a neural mechanism that generates unrealistic optimism, it is possible all humans would be mildly depressed.

Can optimism change reality?

The problem with pessimistic expectations, such as those of the clinically depressed, is that they have the power to alter the future; negative expectations shape outcomes in a negative way. How do expectations change reality?

To answer this question my colleague, cognitive neuroscientist Sara Bengtsson, devised an experiment in which she manipulated positive and negative expectations of students while their brains were scanned and tested their performance on cognitive tasks. To induce expectations of success, she primed college students with words such as smart, intelligent and clever just before asking them to perform a test. To induce expectations of failure, she primed them with words like stupid and ignorant. The students performed better after being primed with an affirmative message.

Examining the brain-imaging data, Bengtsson found that the students' brains responded differently to the mistakes they made depending on whether they were primed with the word clever or the word stupid. When the mistake followed positive words, she observed enhanced activity in the anterior medial part of the prefrontal cortex (a region that is involved in self-reflection and recollection). However, when the participants were primed with the word stupid, there was no heightened activity after a wrong answer. It appears that after being primed with the word stupid, the brain expected to do poorly and did not show signs of surprise or conflict when it made an error.

A brain that doesn't expect good results lacks a signal telling it, "Take notice – wrong answer!" These brains will fail to learn from their mistakes and are less likely to improve over time. Expectations become self-fulfilling by altering our performance and actions, which ultimately affects what happens in the future. Often, however, expectations simply transform the way we perceive the world without altering reality itself. Let me give you an example. As I write these lines, my friend calls. He is at Heathrow waiting to get on a plane to Austria for a skiing holiday. His plane has been delayed for three hours already, because of snowstorms at his destination. "I guess this is both a good and bad thing," he says.

Waiting at the airport is not pleasant, but he quickly concludes that snow today means better skiing conditions tomorrow. His brain works to match the unexpected misfortune of being stuck at the airport to its eager anticipation of a fun getaway.

A cancelled flight is hardly tragic, but even when the incidents that befall us are the type of horrific events we never expected to encounter, we automatically seek evidence confirming that our misfortune is a blessing in disguise. No, we did not anticipate losing our job, being ill or getting a divorce, but when these incidents occur, we search for the upside. These experiences mature us, we think. They may lead to more fulfilling jobs and stable relationships in the future. Interpreting a misfortune in this way allows us to conclude that our sunny expectations were correct after all – things did work out for the best.

The role of the caudate nucleus

How do we find the silver lining in storm clouds? To answer that, my colleagues – renowned neuroscientist Ray Dolan and neurologist Tamara Shiner – and I instructed volunteers in the fMRI scanner to visualise a range of medical conditions, from broken bones to Alzheimer's, and rate how bad they imagined these conditions to be. Then we asked them: If you had to endure one of the following, which would you rather have – a broken leg or a broken arm? Heartburn or asthma? Finally, they rated all the conditions again. Minutes after choosing one particular illness out of many, the volunteers suddenly found that the chosen illness was less intimidating. A broken leg, for example, may have been thought of as "terrible" before choosing it over some other malady. However, after choosing it, the subject would find a silver lining: "With a broken leg, I will be able to lie in bed watching TV, guilt-free."

In our study, we also found that people perceived adverse events more positively if they had experienced them in the past. Recording brain activity while these reappraisals took place revealed that highlighting the positive within the negative involves, once again, a tête-à-tête between the frontal cortex and subcortical regions processing emotional value. While contemplating a mishap, like a broken leg, activity in the rACC modulated signals in a region called the striatum that conveyed the good and bad of the event in question – biasing activity in a positive direction.

It seems that our brain possesses the philosopher's stone that enables us to turn lead into gold and helps us bounce back to normal levels of wellbeing. It is wired to place high value on the events we encounter and put faith in its own decisions. This is true not only when forced to choose between two adverse options (such as selecting between two courses of medical treatment) but also when we are selecting between desirable alternatives. Imagine you need to pick between two equally attractive job offers. Making a decision may be a tiring, difficult ordeal, but once you make up your mind, something miraculous happens. Suddenly – if you are like most people – you view the chosen offer as better than you did before and conclude that the other option was not that great after all. According to social psychologist Leon Festinger, we re-evaluate the options post-choice to reduce the tension that arises from making a difficult decision between equally desirable options.

In a brain-imaging study I conducted with Ray Dolan and Benedetto De Martino in 2009, we asked subjects to imagine going on vacation to 80 different destinations and rate how happy they thought they would be in each place. We then asked them to select one destination from two choices that they had rated exactly the same. Would you choose Paris over Brazil? Finally, we asked them to imagine and rate all the destinations again. Seconds after picking between two destinations, people rated their selected destination higher than before and rated the discarded choice lower than before.

The brain-imaging data revealed that these changes were happening in the caudate nucleus, a cluster of nerve cells that is part of the striatum. The caudate has been shown to process rewards and signal their expectation. If we believe we are about to be given a paycheck or eat a scrumptious chocolate cake, the caudate acts as an announcer broadcasting to other parts of the brain, "Be ready for something good." After we receive the reward, the value is quickly updated. If there is a bonus in the paycheck, this higher value will be reflected in striatal activity. If the cake is disappointing, the decreased value will be tracked so that next time our expectations will be lower.

In our experiment, after a decision was made between two destinations, the caudate nucleus rapidly updated its signal. Before choosing, it might signal "thinking of something great" while imagining both Greece and Thailand. But after choosing Greece, it now broadcast "thinking of something remarkable!" for Greece and merely "thinking of something good" for Thailand.
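The updating Sharot describes here can be captured by a simple prediction-error rule. The following is a minimal illustrative sketch in Python – the rule, the ratings and the learning rate are assumptions for illustration, not figures from the study:

```python
# Illustrative sketch only: a simple prediction-error (Rescorla-Wagner style)
# update of the kind the caudate signal is thought to perform. The ratings,
# outcomes and learning rate are made up for illustration.

def update_value(value, outcome, learning_rate=0.5):
    """Move the stored value toward the outcome by a fraction of the error."""
    return value + learning_rate * (outcome - value)

# Two destinations rated exactly the same before the choice.
values = {"Greece": 0.70, "Thailand": 0.70}

# Choosing Greece acts like a good outcome for Greece and a mildly
# disappointing one for Thailand, so the two values diverge after the choice.
values["Greece"] = update_value(values["Greece"], outcome=1.0)
values["Thailand"] = update_value(values["Thailand"], outcome=0.4)

print(values)  # {'Greece': 0.85, 'Thailand': 0.55}
```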

True, sometimes we regret our decisions; our choices can turn out to be disappointing. But on balance, when you make a decision – even if it is a hypothetical choice – you will value it more and expect it to bring you pleasure.

This affirmation of our decisions helps us derive heightened pleasure from choices that might actually be neutral. Without this, our lives might well be filled with second-guessing. Have we done the right thing? Should we change our mind? We would find ourselves stuck, overcome by indecision and unable to move forward.

The puzzle of optimism

While the past few years have seen important advances in the neuroscience of optimism, one enduring puzzle remained. How is it that people maintain this rosy bias even when information challenging our upbeat forecasts is so readily available? Only recently have we been able to decipher this mystery, by scanning the brains of people as they process both positive and negative information about the future. The findings are striking: when people learn, their neurons faithfully encode desirable information that can enhance optimism but fail at incorporating unexpectedly undesirable information. When we hear a success story like Mark Zuckerberg's, our brains take note of the possibility that we too may become immensely rich one day. But hearing that the odds of divorce are almost one in two tends not to make us think that our own marriages may be destined to fail.

Why would our brains be wired in this way? It is tempting to speculate that optimism was selected by evolution precisely because, on balance, positive expectations enhance the odds of survival. Research findings that optimists live longer and are healthier, plus the fact that most humans display optimistic biases – and emerging data that optimism is linked to specific genes – all strongly support this hypothesis. Yet optimism is also irrational and can lead to unwanted outcomes. The question then is, How can we remain hopeful – benefiting from the fruits of optimism – while at the same time guarding ourselves from its pitfalls?

I believe knowledge is key. We are not born with an innate understanding of our biases. The brain's illusions have to be identified by careful scientific observation and controlled experiments and then communicated to the rest of us. Once we are made aware of our optimistic illusions, we can act to protect ourselves. The good news is that awareness rarely shatters the illusion. The glass remains half full. It is possible, then, to strike a balance, to believe we will stay healthy, but get medical insurance anyway; to be certain the sun will shine, but grab an umbrella on our way out — just in case.

Tali Sharot is a research fellow at University College London's Wellcome Trust Centre for Neuroimaging

© 2011 Tali Sharot

Tuesday 10 January 2012

The Human Givens

A link to information about the Human Givens and their importance for wellbeing:

http://www.hgi.org.uk/archive/In_our_own_words.htm

Monday 9 January 2012

Placebo – The Healing Power of Nothing Reader’s Digest, October 2011 (p 89 - )



What is the Placebo Effect?

It’s a phenomenon whereby an inert substance believed by a patient to be a drug has effects similar to the actual drug, resulting in the patient’s medical improvement. It’s been called the “ghost in the house of biological medicine” – and, unsurprisingly, pharmaceutical companies go to enormous lengths to exorcise it.

All drugs must show that they are better than a placebo before they can get a licence. This also means there is a ready-made way to discredit all forms of non-drug medicine: dismiss any reported benefits as simply due to the placebo effect.

A Placebo in Action

In one study, subjects were either told they were getting a painkiller or untruthfully that they weren’t, before having something hot pressed against their leg. This prior information had a big effect.

Those in the group who thought they weren’t getting any pain relief reported as much pain as if they really hadn’t had any painkiller – a simple instruction could wipe out the benefits of a strong drug. The other group reported much more benefit than was normal in subjects who didn’t know if they were getting a drug or a placebo, as happens in regular trials.

How should Placebos be used?

The big objection to using placebos has always been that it involves lying to patients. No one’s going to respond if they know it’s fake, right?

But Ted Kaptchuk of Harvard Medical School discovered that even when he told IBS patients that their pills were placebos – in a bottle marked “Placebo” – they still reported nearly twice as much benefit as those who got nothing.

This shows that to pretend humans are just biochemical machines responding to chemical inputs is to miss a huge part of what goes on, not just in healing but also in our lives.

Extracts:

·         The Lancet recently devoted a major article to the placebo effect…because scientists have been making some remarkable discoveries. In fact the line between a drug and a placebo is looking increasingly blurred.

·         Professor Fabrizio Benedetti of Turin University is one of the pioneers of this new view of placebo. He’s been shaking up conventional thinking by showing that, without the placebo effect, some widely used drugs don’t work at all. He discovered this by doing something quite simple: he didn’t tell patients they were getting the drug.

·         Summary of Benedetti’s study: two groups of pre-operative patients – anxious – catheter for anti-anxiety drug – only one group visited by attentive doctor – only those who saw the doctor got any benefit from the drug. When the experiment was done with a powerful painkiller such as morphine, those not visited by the doctor needed almost double the dose to get the usual effect. “The conventional idea is that a real drug is better than a placebo, but here the drug didn’t work without the help of a placebo.”

·         Irving Kirsch’s work also suggests there’s a serious flaw in the way drugs are normally tested. The aim of clinical trials is to prevent people from knowing whether they’re getting the drug, so that any benefit has to be due to its chemical effect. But 70% of patients in trials of SSRIs were able to work out from the side effects whether they were getting the drug or not. One worrying implication is that drugs with stronger side effects are likely to show up as more effective in trials.

·         What this shows is that the notion of any clear division between ‘proper’ medicine, which has no truck with the placebo effect, and complementary-type medicine, which depends on it, is nonsense. Brain-scan studies have shown that when placebo effects are at work, they have just as clear an effect on the brain as a drug does. So if the placebo effect is a vital part of all forms of treatment, why not find out more about it and make use of its power?

·         Prof. Ted Kaptchuk of Harvard Medical School has been investigating. He’s already found that there isn’t just one placebo effect – there are many, depending on the situation. For instance, there’s your belief as a patient (“I’m about to get a drug that will help”), and there are your thoughts about your doctor (“He/she seems really nice”). A recent study found that diabetes patients treated by a doctor rated high on empathy had better blood-sugar control and lower cholesterol than patients on the same drugs treated by doctors who were seen as distant. How much the doctor believes in the treatment – a phenomenon called placebo-by-proxy – can have a big effect too.

·         Kaptchuk found that you can boost the placebo element of a treatment by combining these different sorts of placebo response. “We treated irritable bowel syndrome (IBS) with either just sham acupuncture (a toothpick) or sham plus lots of emotional warmth and care from the doctor,” he says. A third group was left on a waiting list as a control. “We found that those getting just the sham treatment reported a 40 per cent reduction in symptoms compared with those on the waiting list, but those with the boosted placebo-plus had a 60 per cent improvement. That’s the level of benefit you get with the best drug.”

·         The truth is that we’re social creatures who respond emotionally to the world around us all the time. Rejecting placebo means ignoring that side of ourselves that we value most. The same emotional responses that power the placebo come into play when we care for our children, when we make friends, or when we decide to trust someone.

·         Clinics based on the idea of getting the most out of our natural placebo response would be a path to a more generous, patient-centred sort of medicine. Could this be one of the ways out of Drugged-Up Britain?

Why Placebos Work Wonders


From Weight Loss to Fertility, New Legitimacy For 'Fake' Treatments

Wall Street Journal, In the Lab, January 3, 2012
Say "placebo effect" and most people think of the boost they may get from a sugar pill simply because they believe it will work. But more and more research suggests there is more than a fleeting boost to be gained from placebos.
A particular mind-set or belief about one's body or health may lead to improvements in disease symptoms as well as changes in appetite, brain chemicals and even vision, several recent studies have found, highlighting how fundamentally the mind and body are connected.
It doesn't seem to matter whether people know they are getting a placebo and not a "real" treatment. One study demonstrated a strong placebo effect in subjects who were told they were getting a sugar pill with no active ingredient.
Placebo treatments are sometimes used in clinical practice. In a 2008 survey of nearly 700 internists and rheumatologists published in the British Medical Journal, about half said they prescribe placebos on a regular basis. The most popular were over-the-counter painkillers and vitamins. Very few physicians said they relied on sugar pills or saline injections. The American Medical Association says a placebo can't be given simply to soothe a difficult patient, and it can be used only if the patient is informed of and agrees to its use.
Researchers want to know more about how the placebo effect works, and how to increase and decrease it. A more powerful, longer-lasting placebo effect might be helpful in treating health conditions related to weight and metabolism.
Hotel-room attendants who were told they were getting a good workout at their jobs showed a significant decrease in weight, blood pressure and body fat after four weeks, in a study published in Psychological Science in 2007 and conducted by Alia Crum, a Yale graduate student, and Ellen Langer, a professor in the psychology department at Harvard. Employees who did the same work but weren't told about exercise showed no change in weight. Neither group reported changes in physical activity or diet.
Another study, published last year in the journal Health Psychology, shows how mind-set can affect an individual's appetite and production of a gut peptide called ghrelin (GREL-in), which is involved in the feeling of satisfaction after eating. Ghrelin levels are supposed to rise when the body needs food and fall proportionally as calories are consumed, telling the brain the body is no longer hungry and doesn't need to search out more food.
Yet the data show ghrelin levels depended on how many calories participants were told they were consuming, not how many they actually consumed. When told a milkshake they were about to drink had 620 calories and was "indulgent," the participants' ghrelin levels fell more—the brain perceived it was satisfied more quickly—than when they were told the shake had 120 calories and was "sensible."
The results may offer a physiological explanation of why eating diet foods can feel so unsatisfying, says Ms. Crum, first author on the study. "That mind-set of dieting is telling the body you're not getting enough."
Studies across medical conditions including depression, migraines and Parkinson's disease have found that supposedly inert treatments, like sugar pills, sham surgery and sham acupuncture, can yield striking effects. A 2001 study published in Science found that placebo was effective at improving Parkinson's disease symptoms at a magnitude similar to real medication. The placebo actually induced the brain to produce greater amounts of dopamine, the neurotransmitter known to be useful in treating the disease.
At times, a weaker placebo effect might be desired. In trials of experimental drug treatments for dementia, depression and other cognitive or psychiatric conditions, where one patient group takes medication and the other takes a sugar pill, it can be difficult to demonstrate that the medicine works because the placebo effect is so strong.
With depression, an estimated 30% to 45% of patients—or even more, in some studies—will respond to a placebo, according to a review published in December in Clinical Therapeutics. An additional 5% of patients were helped by an antidepressant in cases of mild depression, and an additional 16% in cases of severe depression. (The clinically meaningful cutoff for additional benefit was 11%.)
Fertility rates have been found to improve in women getting a placebo, perhaps because they experience a decrease in stress. A recent randomized trial of women with polycystic ovarian syndrome found that 15%, or 5 of 33, got pregnant while taking placebo over a six-month period, compared with 22%, or 7 of 32, who got the drug—a statistically insignificant difference. Other studies have demonstrated pregnancy rates as high as 40% in placebo groups.
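The "statistically insignificant" verdict can be checked directly from the counts reported above. A quick sketch, assuming Fisher's exact test as the check (a standard choice for small 2x2 tables, not necessarily the trial's own analysis):

```python
# Re-checking the reported difference (5/33 pregnancies on placebo vs 7/32 on
# the drug) with Fisher's exact test. The counts come from the article.
from scipy.stats import fisher_exact

placebo = [5, 33 - 5]   # pregnant, not pregnant
drug = [7, 32 - 7]

odds_ratio, p_value = fisher_exact([placebo, drug])
print(f"p = {p_value:.2f}")  # well above 0.05: no significant difference
```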
Ted Kaptchuk, director of Harvard's Program in Placebo Studies and the Therapeutic Encounter, and colleagues demonstrated that deception isn't necessary for the placebo effect to work. Eighty patients with irritable bowel syndrome, a chronic gastrointestinal disorder, were assigned either a placebo or no treatment. Patients in the placebo group got pills described to them as being made with an inert substance and shown in studies to improve symptoms via "mind-body self-healing processes." Participants were told they didn't have to believe in the placebo effect but should take the pills anyway, Dr. Kaptchuk says. After three weeks, placebo-group patients reported feelings of relief, significant reduction in some symptoms and some improvement in quality of life.
Why did the placebo work—even after patients were told they weren't getting real medicine? Expectations play a role, Dr. Kaptchuk says. Even more likely is that patients were conditioned to a positive environment, and the innovative approach and daily ritual of taking the pill created an openness to change, he says.
Do placebos work on the actual condition, or on patients' perception of their symptoms? In a study published last year in the New England Journal of Medicine, Dr. Kaptchuk's team rotated 46 asthma patients through each of four types of treatment: no treatment at all, an albuterol inhaler, a placebo inhaler and sham acupuncture. As each participant got each treatment, researchers induced an asthma attack and measured the participant's lung function and perception of symptoms. The albuterol improved measured lung function compared with placebo. But the patients reported feeling just as good whether getting placebo or the active treatment.
"Right now, I think evidence is that placebo changes not the underlying biology of an illness, but the way a person experiences or reacts to an illness," Dr. Kaptchuk says.
Placebo can be more effective than the intended treatment. In a trial published in the journal Menopause in 2007, 103 women who had menopausal hot flashes got either five weeks of real acupuncture, or five weeks of sham acupuncture, where needles weren't placed in accepted therapeutic positions. A week after treatments ended, only some 60% of participants in both groups reported hot flashes—a robust immediate placebo effect. Seven weeks post-treatment, though, 55% of patients in the sham acupuncture group reported hot flashes, compared with 73% in the real acupuncture group.

The nocebo effect


by Wellcome Trust
Can anxious thoughts harm you? Penny Sarchet, winner of the professional scientists category in the 2011 Wellcome Trust Science Writing Prize, discusses the nocebo effect in her prize-winning essay. 
Can just telling a man he has cancer kill him? In 1992 the Southern Medical Journal reported the case of a man who in 1973 had been diagnosed with cancer and given just months to live. After his death, however, his autopsy showed that the tumour in his liver had not grown. His doctor, Clifton Meador, didn’t believe he’d died of cancer: “I do not know the pathologic cause of his death,” he wrote. Could it be that, instead of the cancer, it was his expectation of death that killed him?
This death could be an extreme example of the “nocebo effect” – the flip-side to the better-known placebo effect. While an inert sugar pill (placebo) can make you feel better, warnings of fictional side-effects (nocebo) can make you feel those too. This is a common problem in pharmaceutical trials and a 1980s study found that heart patients were far more likely to suffer side-effects from their blood-thinning medication if they had first been warned of the medication’s side-effects. This poses an ethical quandary: should doctors warn patients about side-effects if doing so makes them more likely to arise?
The nocebo effect can also be highly infectious. In 1962, 62 workers at a US dressmaking factory were suddenly stricken with headaches, nausea and rashes, and the outbreak was blamed upon an insect arriving from England in a delivery of cloth. No insect was ever found, and “mass psychogenic illnesses” like these occur worldwide, usually affecting close communities and spreading most rapidly to female individuals who have seen someone else suffering from the condition.
Until recently, we knew very little about how the nocebo effect works. Now, however, a number of scientists are beginning to make headway. A study in February led by Oxford’s Professor Irene Tracey showed that when volunteers feel nocebo pain, corresponding brain activity is detectable in an MRI scanner. This shows that, at the neurological level at least, these volunteers really are responding to actual, non-imaginary, pain. Fabrizio Benedetti, of the University of Turin, and his colleagues have managed to determine one of the neurochemicals responsible for converting the expectation of pain into this genuine pain perception. The chemical is called cholecystokinin and carries messages between nerve cells. When drugs are used to block cholecystokinin from functioning, patients feel no nocebo pain, despite being just as anxious.
The findings of Benedetti and Tracey not only offer the first glimpses into the neurology underlying the nocebo effect, but also have very real medical implications. Benedetti’s work on blocking cholecystokinin could pave the way for techniques that remove nocebo outcomes from medical procedures, as well as hinting at more general treatments for both pain and anxiety. The findings of Tracey’s team carry startling implications for the way we practise modern medicine. By monitoring pain levels in volunteers who had been given a strong opioid painkiller, they found that telling a volunteer the drug had now worn off was enough for a person’s pain to return to the levels it was at before they were given the drug. This indicates that a patient’s negative expectations have the power to undermine the effectiveness of a treatment, and suggests that doctors would do well to treat the beliefs of their patients, not just their physical symptoms.
This places a spotlight on doctor-patient relationships. Today’s society is litigious and sceptical, and if doctors overemphasise side-effects to their patients to avoid being sued, or patients mistrust their doctor’s chosen course of action, the nocebo effect can cause a treatment to fail before it has begun. It also introduces a paradox – we must believe in our doctors if we are to gain the full benefits of their prescribed treatments, but if we trust in them too strongly, we can die from their pronouncements.
Today, many of the fastest-growing illnesses are relatively new and characterised solely by a collection of complaints. Allergies, food intolerances and back pain could easily be real physiological illnesses in some people and nocebo-induced conditions in others. More than a century ago, doctors found they could induce a hay fever sufferer’s wheezing by exposure to an artificial rose. Observations like these suggest we should think twice before overmedicalising the human experience. Our day-to-day worrying should be regarded as such, not built up into psychological syndromes with suites of symptoms, and the health warnings that accompany new products should be narrow and accurate, not vague and general in order to limit the manufacturer’s liability.
As scientists begin to determine how the nocebo effect works, we would do well to use their findings to manage that most 21st-century of all diseases – anxiety.
Penny Sarchet
This is an edited version of Penny’s original essay. 



Exercise for Life: Easier Than You Think Sandra A. Fryhofer, MD From Medscape Internal Medicine: Medicine Matters Posted: 12/23/2011


The topic: You do have time to exercise, according to a new study in Lancet.[1] Here's why it matters.
Everybody knows exercise is, or would be, good for them. It helps your heart. It maintains your mind. It relieves stress. But how much is enough? The general recommendation for adults is at least 150 minutes total each week.[2] That's slightly more than 20 minutes a day, which is 20 minutes more than many people claim they have.
A new study in Lancet obliterates that excuse.[1] Conducted in Taiwan, the study followed more than 400,000 people for more than 8 years. Participants kept exercise diaries and self-reported weekly exercise as inactive, low, medium, high, or very high. It turns out that even the low average, which was 15 minutes of exercise a day, reduced mortality, with a 10% decrease in cancer death, 14% decrease in death overall, and an average increase of 3 years of life. Whereas even a low amount of exercise is good, more is better. Each additional 15 minutes of daily exercise -- half an hour total -- produced an additional 1% decrease in cancer death and an additional 4% decreased risk for death overall. Participants who could not find 15 minutes to spare and did not exercise at all had a 17% higher death risk compared with even the low exercise group.
This is an observational study, but its bottom line is that just 15 minutes a day -- 105 minutes a week -- of moderate-intensity physical activity is all it takes to reap major benefits. A little bit of exercise can do a lot of good, and some is always better than none.
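Taken at face value, the quoted figures give a simple rule of thumb for the dose-response. A rough sketch, assuming a naive linear reading of the summary numbers rather than the study's actual fitted model:

```python
# Naive calculator built only from the figures quoted above: 15 min/day gives
# roughly a 14% lower overall risk of death, and each further 15 min adds
# about 4 percentage points. A linear reading of the summary, not the model.

def mortality_risk_reduction(minutes_per_day):
    if minutes_per_day < 15:
        return 0  # the inactive group showed no benefit (17% higher risk)
    extra_blocks = (minutes_per_day - 15) // 15
    return min(14 + 4 * extra_blocks, 100)

for minutes in (0, 15, 30, 45, 60):
    print(f"{minutes:>2} min/day -> ~{mortality_risk_reduction(minutes)}% lower risk of death")
```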

References

  1. Wen CP, Wai JP, Tsai MK, et al. Minimum amount of physical activity for reduced mortality and extended life expectancy: a prospective cohort study. Lancet. 2011;378:1244-1253. Epub 2011 Aug 16.
  2. Office of Disease Prevention & Health Promotion, U.S. Department of Health and Human Services. 2008 Physical Activity Guidelines for Americans: Summary. Accessed December 13, 2011.

Friday 6 January 2012

The BBC Stress Test - Results All in the Mind, BBC Radio 4, 20.12.11


The BBC Stress test was launched in June with BBC Lab UK, with the aim of answering one of the big questions in mental health - what is the cause of mental illness? More than 32,000 Radio 4 listeners took part, making this one of the largest studies of its kind in the world. The early results are in and Peter Kinderman, professor of clinical psychology at the University of Liverpool, tells Claudia Hammond what the findings reveal about the origins of mental health problems and the most effective coping strategies.
Extract from Claudia Hammond’s interview with Professor Peter Kinderman:

Peter Kinderman (PK): …we set out to find people who may or may not be stressed, and then to look at what the causes of their mental health difficulties and their wellbeing were.

Claudia Hammond (CH): So what conclusions have you drawn about these bigger questions, about what causes mental health problems?

PK: The first thing to say is that we were testing out a theory that we had first expressed back in 2005 and we were looking at whether psychological factors were the consequence of high levels of stress, or whether they tended to cause high levels of stress. We’re still doing the analysis, so we’ve got a little bit of work still left to do, but it looks very much to us as if a family history of mental health problems, stressful life events, negative life events that you have experienced, and deprived social circumstances tend to make people ruminate and also blame themselves more for the negative events in their lives. And it’s that combination of self-blame and rumination that seems to be related to high levels of stress, and not as it might have been, the other way around.

CH: Ceri [one of the listeners who completed the ‘stress’ survey] mentioned that she often had a tendency to blame herself and that she would ruminate to an extent. How were you able to unpick what causes what in this, and how much it’s the events that have happened, how much it’s whether people blame themselves later, how much it’s whether they ruminate?

PK: Statistically, what we were looking at is how much of somebody’s stress levels were explained by different combinations of the different variables. So we were particularly interested in whether self-blame was more of a predictor than rumination. In fact, rumination seemed to be slightly more important than self-blame. But both self-blame and rumination were much more important than any of the other variables to be honest. So life events themselves were related to levels of stress, but they seemed to be related to stress only when people tended to ruminate or blame themselves. If you didn’t ruminate and didn’t blame yourself, then your levels of stress were much lower, even if you’d experienced many negative events in your life.

CH: So did you find that if people had many negative life events in the past, for example, difficult times growing up, if they then don’t ruminate a lot, are they then OK?

PK: Yes, basically what happened was that negative life events and very negative childhood events involving abuse were both related to mental health problems. But it seemed to be self-blame and rumination that were the pathway to those mental health problems. In scientific terms, very little of the variance was explained by the negative life events outside of the pathway through self-blame and rumination.

CH: I see what you mean – so they can have those events, but if they don’t ruminate and self-blame, they might be OK later on?

PK: Yes. And that’s important for another reason: it also suggests that if people are able to get a handle on rumination and self-blame – and, to be honest, on those other psychological processes too, such as where you place your attention, how your memory works, what you think about yourself, your self-concept – then they might be able to improve their levels of stress and wellbeing.

CH: Jan and Ceri were talking about rumination and their levels of stress there. What do you really mean by rumination? Does this mean worrying about those life events that happened in the past, those bad things that happen to people, or worrying more generally about everyday things?

PK: Well it can be both. It can be people having things going round and round in their heads about things that are coming up in the future. A typical example is someone who’s not sleeping because they’ve got a job interview the next day, and then thinking about what’s going to happen. It’s also people going over and over things that have happened in the past, ruminating about things that have happened, and they can’t shake these things out of their head. Interestingly, Jan made a distinction between productive and unproductive rumination, and I think that’s very important. We should plan for the future and we should reflect on the past: the question is to judge when it’s becoming unproductive and whether it’s just repetitive thoughts going round and round in our head.

CH: So you’re not saying all introspection is bad?

PK: I think introspection is good, but I think you need to be in control of it to the extent that it’s still useful to you, so you’re thinking about the future and preparing for it, rather than having thoughts about the ‘dreadful’ thing that’s coming up unproductively buzzing around in your head.

CH: So does this have implications for how psychological therapies should be shaped? I mean should they change to focus more on rumination and self-blame? Could you just look at those two things and make a difference?

PK: What I was doing was quite specifically testing out the question of whether psychological factors were causal of mental health problems, or whether they were the consequence of mental health problems. And although it’s only one study and other people will have other interpretations of it, I think it demonstrates that, for this sample, rumination was a factor which caused mental health problems.

CH: So it’s a bigger factor than say biological factors that people might think about?

PK: It was a much bigger factor than biological factors directly, although biological factors were related to your tendency to self-blame. Everything was related to everything else, and it’s still quite difficult to tease that out. The important thing, I think, is that it gives an opportunity for people who are experiencing stress, who do feel as if their wellbeing is less than it should be: there are things that can be done about that which don’t involve going back to the past and reversing the negative things that have happened, but dealing with the consequences now – dealing with people’s tendency to ruminate, dealing with their tendency to self-blame. And like I say, there will be other things – a tendency to jump to conclusions, how you evaluate your performance, what your self-concept is like, and a range of other psychological processes – that this experiment, amongst others, seems to demonstrate are important in determining how stressed you are.

CH: So these are your initial results. What are you going to be doing next?

PK: Well, one of the important things is to tease out the relationships between these variables in much more detail. So, for instance, one thing we have not yet analysed, but I think might be important, is the relationship between the two big explanatory factors: self-blame and rumination. It might be the case that rumination, if you don’t blame yourself for the events that have happened in your life, might be quite benign. It might be that it’s a particular combination of a tendency to ruminate and a tendency to blame yourself that’s particularly pernicious. Doing that analysis will take some time, it’s quite complicated. And so far we have tried it on three different computers and none of them have a big enough memory, so we need to increase the computing power.

CH: Because your sample is just too big – too many listeners you see.

PK: Yes, our sample is fantastically big.

CH: So if you found out that it was the self-blame that really mattered more, then in therapy you could just ignore rumination and teach people somehow not to blame themselves so much?

PK: Yes, I mean I think that Jan illustrated that a little bit when she said she tends to ruminate but she puts on the radio, and that’s OK. If you’re ruminating about toast, if you’re ruminating about what you might have for breakfast tomorrow morning, that might be easy to live with; if you’re ruminating about all of the mistakes you’ve made in your life and why you’re a bad person, that might cause you a great deal of stress.

CH: Listening to the radio as therapy – that’s what we like to hear. Professor Peter Kinderman. And the stress test is still there if you want to find out how you deal with stress. It takes about 20 minutes, it’s completely confidential, and you’ll find a link to it on the All in the Mind page of the Radio 4 website.   

(The programme can be downloaded from the Radio 4 website.)
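A note on the statistics: the variance-partitioning Kinderman describes above – life events raising stress mainly via rumination and self-blame – is essentially a mediation analysis. A toy sketch on simulated data (nothing below comes from the BBC dataset or the team's actual models):

```python
# Toy mediation analysis: the "direct" effect of life events on stress should
# shrink once the mediator (rumination/self-blame) enters the model.
# All data are simulated; the coefficients are arbitrary illustrative choices.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
life_events = rng.normal(size=n)
rumination = 0.8 * life_events + rng.normal(size=n)    # events drive rumination
stress = 0.9 * rumination + 0.05 * life_events + rng.normal(size=n)

# Total effect of life events on stress (ignoring the mediator).
total = sm.OLS(stress, sm.add_constant(life_events)).fit()

# Direct effect once rumination is controlled for.
X = sm.add_constant(np.column_stack([life_events, rumination]))
direct = sm.OLS(stress, X).fit()

print("total effect:", round(total.params[1], 2))    # ~0.77 = 0.05 + 0.9 * 0.8
print("direct effect:", round(direct.params[1], 2))  # ~0.05, close to zero
```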

Sacred Salubriousness: Why Religious Belief Is Not the Only Path to a Healthier Life New research on self-control explains the link between religion and health December 19, 2011, Scientific American Mind and Brain newsletter


Ever since 2000, when psychologist Michael E. McCullough, now at the University of Miami, and his colleagues published a meta-analysis of more than three dozen studies showing a strong correlation between religiosity and lower mortality, skeptics have been challenged by believers to explain why—as if to say, “See, there is a God, and this is the payoff for believing.”
In science, however, “God did it” is not a testable hypothesis. Inquiring minds would want to know how God did it and what forces or mechanisms were employed (and “God works in mysterious ways” will not pass peer review). Even such explanations as “belief in God” or “religiosity” must be broken down into their component parts to find possible causal mechanisms for the links between belief and behavior that lead to health, well-being and longevity. This McCullough and his then Miami colleague Brian Willoughby did in a 2009 paper that reported the results of a meta-analysis of hundreds of studies revealing that religious people are more likely to engage in healthy behaviors, such as visiting dentists and wearing seat belts, and are less likely to smoke, drink, take recreational drugs and engage in risky sex. Why? Religion provides a tight social network that reinforces positive behaviors and punishes negative habits and leads to greater self-regulation for goal achievement and self-control over negative temptations.
Self-control is the subject of Florida State University psychologist Roy Baumeister’s new book, Willpower, co-authored with science writer John Tierney. Self-control is the employment of one’s power to will a behavioral outcome, and research shows that young children who delay gratification (for example, forgoing one marshmallow now for two later) score higher on measures of academic achievement and social adjustment later. Religions offer the ultimate delay of gratification strategy (eternal life), and the authors cite research showing that “religiously devout children were rated relatively low in impulsiveness by both parents and teachers.”
The underlying mechanisms of setting goals and monitoring one’s progress, however, can be tapped by anyone, religious or not. Alcoholics Anonymous urges members to surrender to a “higher power,” but that need not even be a deity—it can be anything that helps you stay focused on the greater goal of sobriety. Zen meditation, in which you count your breaths up to 10 and then do it over and over, the authors note, “builds mental discipline. So does saying the rosary, chanting Hebrew psalms, repeating Hindu mantras.” Brain scans of people conducting such rituals show strong activity in areas associated with self-regulation and attention. McCullough, in fact, describes prayers and meditation rituals as “a kind of anaerobic workout for self-control.” In his lab Baumeister has demonstrated that self-control can be increased with practice of resisting temptation, but you have to pace yourself because, like a muscle, self-control can become depleted after excessive effort. Finally, the authors note, “Religion also improves the monitoring of behavior, another of the central steps of self-control. Religious people tend to feel that someone important is watching them.” For believers, that monitor may be God or other members of their religion; for nonbelievers, it can be family, friends and colleagues.
The world is full of temptations, and as Oscar Wilde boasted, “I can resist everything except temptation.” We may take the religious path of Augustine in his pre-saintly days when he prayed to God to “give me chastity and continence, but not yet.” Or we can choose the secular path of 19th-century explorer Henry Morton Stanley, who proclaimed that “self-control is more indispensable than gunpowder,” especially if we have a “sacred task,” as Stanley called it (his was the abolition of slavery). I would say you should select your sacred task, monitor and pace your progress toward that goal, eat and sleep regularly (lack of both diminishes willpower), sit and stand up straight, be organized and well groomed (Stanley shaved every day in the jungle), and surround yourself with a supportive social network that reinforces your efforts. Such sacred salubriousness is the province of everyone—believers and nonbelievers—who will themselves to loftier purposes.

ABOUT THE AUTHOR(S)

Michael Shermer is publisher of Skeptic magazine (www.skeptic.com). His new book is The Believing Brain.