Monday, 28 July 2014

The mistakes that lead therapists to infer psychotherapy was effective, when it wasn't

How well can psychotherapists and their clients judge from personal experience whether therapy has been effective? Not well at all, according to a paper by Scott Lilienfeld and his colleagues. The fear is that this can lead to the continued practice of ineffective, or even harmful, treatments.

The authors point out that, like the rest of us, clinicians are subject to four main biases that skew their ability to infer the effectiveness of their psychotherapeutic treatments. These include the mistaken belief that we see the world precisely as it is (naive realism) and our tendency to pursue evidence that backs our initial beliefs (the confirmation bias). The other two are illusory control and illusory correlation - thinking we have more control over events than we do, and assuming the factors we're focused on are causally responsible for observed changes.

These features of human thought lead to several specific mistakes that psychotherapists and others commit when they make claims about the effectiveness of psychological therapies. Lilienfeld's team call these mistakes "causes of spurious therapeutic effectiveness" or CSTEs for short. The authors have created a taxonomy of 26 CSTEs arranged into three categories.

The first category includes 15 mistakes that lead to the perception that a client has improved, when in fact he or she has not. These include palliative benefits (when the client feels better about their symptoms without actually showing any tangible improvement); confusing insight with improvement (when the client better understands their problems, but does not actually show recovery); and the therapist's office error (confusing a client's presentation in-session with their behaviour in everyday life).

The second category consists of errors that lead therapists and their clients to infer that symptom improvements were due to the therapy, and not some other factor, such as natural recovery that would have occurred anyway. Among these eight mistakes are a failure to recognise that many disorders are cyclical (periods of recovery interspersed with phases of more intense symptoms); ignoring the influence of events occurring outside of therapy, such as an improved relationship or job situation; and the influence of maturation (disorders seen in children and teens can fade as they develop).

The third and final category comprises errors that lead to the assumption that improvements are caused by unique features of a therapy, rather than by factors common to all therapies. Examples here include not recognising placebo effects (improvements stemming from expectations) and novelty effects (improvements due to initial enthusiasm).

To counter the many CSTEs, Lilienfeld's group argue we need to deploy rigorous research methods: using well-validated outcome measures, taking pre-treatment measures, blinding observers to treatment condition, conducting repeated measurements (thus reducing the biasing impact of irregular everyday life events), and using control groups that receive the therapeutic factors common to all therapies, but not those unique to the treatment approach under scrutiny.

"CSTEs underscore the pressing need to inculcate humility in clinicians, researchers, and students," conclude Lilienfeld and his colleagues. "We are all prone to neglecting CSTEs, not because of a lack of intelligence but because of inherent limitations in human information processing. As a consequence, all mental health professionals and consumers should be sceptical of confident proclamations of treatment breakthroughs in the absence of rigorous outcome data."


Lilienfeld, S., Ritschel, L., Lynn, S., Cautin, R., & Latzman, R. (2014). Why Ineffective Psychotherapies Appear to Work: A Taxonomy of Causes of Spurious Therapeutic Effectiveness. Perspectives on Psychological Science, 9(4), 355-387. DOI: 10.1177/1745691614535216

--further reading--
When therapy causes harm

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Saturday, 26 July 2014

Link feast

Our pick of the best psychology and neuroscience links from the past week:

Getting Over Procrastination
Maria Konnikova with an overview of some fascinating genetic research.

The End of ‘Genius’
"[T]he lone genius is a myth that has outlived its usefulness" writes Joshua Shenk.

Do You Need a Mental Health First Aider in The Office?
Mental health "first aider" Charlotte Walker explains her role.

Won’t They Help?
Dwyer Gunn for Aeon magazine looks at new programmes that are using psychological insights to combat the Bystander Phenomenon.

Dude, Where’s My Frontal Cortex?
Robert Sapolsky describes the advantages and disadvantages of the "unique" teenage brain.

Hundreds of Genes and Link to Immune System Found in Largest Genetic Schizophrenia Study
Michael O'Donovan explains the implications of the findings from the recent study he co-authored.

What’s Up With That: Why Does Sleeping In Just Make Me More Tired?
Nick Stockton for WIRED on the perils of too much sleep.

How Tests Make Us Smarter
Psychologist Henry L. Roediger III on the implications of his findings for educational policy.

Detecting Dementia: The First Steps Towards Dignity
Tania Browne explains why in future opticians may have an important role to play in detecting dementia.

Is One of the Most Popular Psychology Experiments Worthless?
Olga Khazan at The Atlantic asks whether it's time to retire the "trolley problem" used in so many moral psychology experiments.


Post compiled by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Friday, 25 July 2014

How our judgments about criminals are swayed by disgust, biological explanations and animalistic descriptions

We expect calm, reasoned evaluation of the evidence from our jurors and judges. Of course, we know the reality is rather different - prejudice and emotional reactions will always play their part. Now two new studies add insight into the ways people's legal judgements depart from cool objectivity.

Beatrice Capestany and Lasana Harris focused on two main factors - the disgust level of a crime, and whether or not the perpetrator's personality was described in biological terms. Seventeen participants were presented with pairs of crime vignettes, with each crime in a pair matched for severity in terms of US Federal sentencing guidelines, but one crime high in disgust value, the other low. For example, one vignette described a man pulling a gun on a love rival, taking aim and missing. The matching vignette described a man who stabbed his boss with scissors, once in the neck and once in the back, causing serious blood loss.

Each vignette concluded with a personality description that was either trait-based (e.g. Gerald has an impulsive personality) or biological (e.g. Terry has a gene mutation that makes him impulsive). These contrasting personality descriptions were always irrelevant to the crime - so, in the aforementioned impulsivity examples, the crime in question was pre-meditated.

Capestany and Harris found that participants recommended more serious punishments for crimes that were more disgusting. This sounds like emotion clouding judgment. But in a sense, greater disgust made participants more reliable decision makers because when disgust levels were high, the participants' recommendations more closely matched Federal sentencing guidelines. Perhaps, the researchers surmised, this is because the US legal system is rooted in historical moral judgments that were guided by disgust reactions.

Capestany and Harris also scanned the brains of their participants. This revealed greater engagement of brain regions involved in logical reasoning when participants were presented with crimes higher in disgust. In other words, a stronger emotional reaction to the crime actually led to greater activation of neural areas involved in logic.

When it came to the influence of the personality descriptions, participants judged criminals to be less culpable when they'd been described in biological terms, presumably because biological factors are perceived as deterministic and reduce the sense that the criminal has control over their behaviour. The brain scans showed greater recruitment of logical reasoning centres when vignettes included trait (non-biological) descriptions of the criminal's personality, so perhaps participants jumped to conclusions when given biological information.

"Biological personality descriptions dehumanise the person, reducing them to a mechanistic, biological organism and not a human being whose mental states are highly unique and salient during responsibility judgments," the researchers said.

Another way that a suspect can be dehumanised is by describing their actions in animalistic terms. This is what happened in the UK with the real-life case of Raoul Moat, who shot three people in England in 2010. While he was on the run, the media described him as a "brute" and like "an animal in the wild".

A team led by Eduardo Vasquez has investigated people's sentencing decisions when criminal acts are described in animalistic terms (e.g. "... the perpetrator slunk onto the victim's premises ... He roared at the victim before pounding him with his fists") versus in non-animalistic terms, but with wording matched for seriousness (e.g. "the perpetrator stole onto the victim's premises ... He shouted at the victim before punching him with his fists").

Seventy-six participants recommended more serious sentences (one to two years extra duration) for criminals whose behaviour was described in animalistic terms. A follow-up study suggested this was because criminals described in animalistic terms were predicted to be more likely to re-offend.

Vasquez and his colleagues said their results "add to a growing body of literature examining the consequences of dehumanisation". They admitted that the implications for actual trials are unclear - after all, the descriptions they used are rarely heard in court. Nonetheless, they said there could be real-life relevance: "Media reports influence legal proceedings and most people rely on the media for information about criminal justice... People exposed to these [animalistic] descriptions may vote for harsher policies to address crime."


Capestany, B., & Harris, L. (2014). Disgust and biological descriptions bias logical reasoning during legal decision-making. Social Neuroscience, 9(3), 265-277. DOI: 10.1080/17470919.2014.892531

Vasquez, E., Loughnan, S., Gootjes-Dreesbach, E., & Weger, U. (2014). The animal in you: Animalistic descriptions of a violent crime increase punishment of perpetrator. Aggressive Behavior, 40(4), 337-344. DOI: 10.1002/ab.21525

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Thursday, 24 July 2014

Why job interviewers should focus on the candidates, not selling their organisation

It’s hard to find the best person for the job through an interview. New research uncovers part of the problem: judging a candidate’s calibre becomes trickier when we’re also trying to sell them the benefits of joining the organisation.

In an initial study, participants were asked to interview a person (another participant) who was acting as an applicant for a fictional position. Half the interviewers were told their priority was to get a good sense of the applicant, while the rest had to prioritise attracting the candidate to the vacant position. Following the interview, the interviewer participants had to judge the applicant's character by rating their Core Self Evaluation (CSE) - a measure of self-esteem and belief in one's own competence that reliably predicts job performance. Which set of interviewers ought to do the better job?

Researchers Jennifer Marr and Dan Cable tackled this topic because two fields of psychology make competing claims. Research on automatic processing suggests that when we apply explicit, rational processes to judgments that rely on quick intuition, we only muddy the water, or worse, become so self-conscious that we choke under pressure. We already know that some elements of applicant evaluation happen fast (see this piece), so maybe we make our best judgments when we're less concerned about making them? On the other hand, the theory of motivated cognition argues that when we are insufficiently focused we become vulnerable to biases or even blind to the obvious, as shown in the now-classic inattentional blindness experiments, where focus on one task (counting basketball passes) makes it hard to spot salient events like the appearance of someone in a gorilla suit.

The new findings back the motivated cognition account - participants asked to entice the applicant were poorer judges of character than those explicitly asked to evaluate them. A follow-up field study found similar effects in genuine interviews within two samples: applicants to an MBA programme and teachers applying for school assignments. In both samples, interviewees rated as having high CSE were more likely to go on to success - job offers for the MBAs, or "above and beyond" citizenship behaviours by the teachers - but only when the ratings came from interviewers who reported having a strong focus on evaluation. Those who reported giving more attention to selling the role produced CSE estimates that didn't predict future success.

The authors note in their conclusion that “interviewers who focused only on evaluating applicants actually believed they were less able to select the best applicants than those who adopted a selling focus.” In fact the reverse was true, and the risk goes the other way: when we focus too much on soliciting applicants, we can miss the gorilla in the room: that they simply aren’t up to snuff.

Marr, J., & Cable, D. (2013). Do Interviewers Sell Themselves Short? The Effects of Selling Orientation on Interviewers' Judgments. Academy of Management Journal, 57(3), 624-651. DOI: 10.5465/amj.2011.0504

--further reading--
Experienced job interviewers are no better than novices at spotting lying candidates
Mind where you sit - how being in the middle is associated with superior performance

Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Wednesday, 23 July 2014

What the textbooks don't tell you - one of psychology's most famous experiments was seriously flawed

Zimbardo speaking in '09
Conducted in 1971, the Stanford Prison Experiment (SPE) has acquired a mythical status and provided the inspiration for at least two feature-length films. You'll recall that several university students allocated to the role of jailor turned brutal and the study had to be aborted prematurely. Philip Zimbardo, the experiment's lead investigator, says the lesson from the research is that in certain situations, good people readily turn bad. "If you put good apples into a bad situation, you’ll get bad apples," he has written.

The SPE was criticised back in the 70s, but that criticism has noticeably escalated and widened in recent years. New details to emerge show that Zimbardo played a key role in encouraging his "guards" to behave in tyrannical fashion. Critics have pointed out that only one third of guards behaved sadistically (this argues against the overwhelming power of the situation). Question marks have also been raised about the self-selection of particular personality types into the study. Moreover, in 2002, the social psychologists Steve Reicher and Alex Haslam conducted the BBC Prison Study to test the conventional interpretation of the SPE. The researchers deliberately avoided directing their participants as Zimbardo had his, and this time it was the prisoners who initially formed a strong group identity and overthrew the guards.

Given that the SPE has been used to explain modern-day atrocities, such as at Abu Ghraib, and given that nearly two million students are enrolled in introductory psychology courses in the US, Richard Griggs, professor emeritus at the University of Florida, says "it is especially important that coverage of it in our texts be accurate."

So, have the important criticisms and reinterpretations of the SPE been documented by key introductory psychology textbooks? Griggs analysed the content of 13 leading US introductory psychology textbooks, all of which have been revised in recent years, including: Discovering Psychology (Cacioppo and Freberg, 2012); Psychological Science (Gazzaniga et al, 2012); and Psychology (Schacter et al, 2011).

Of the 13 analysed texts, 11 dealt with the Stanford Prison Experiment, providing between one and seven paragraphs of coverage. Nine included photographic support for the coverage. Five provided no criticism of the SPE at all. The other six provided only cursory criticism, mostly focused on the questionable ethics of the study. Only two texts mentioned the BBC Prison Study. Only one text provided a formal scholarly reference to a critique of the SPE.

Why do the principal psychology introductory textbooks, at least in the US, largely ignore the wide range of important criticisms of the SPE? Griggs didn't approach the authors of the texts so he can't know for sure. He thinks it unlikely that ignorance is the answer. Perhaps the authors are persuaded by Zimbardo's answers to his critics, says Griggs, but even so, surely the criticisms should be mentioned and referenced. Another possibility is that textbook authors are under pressure to shorten their texts, but surely they are also under pressure to keep them up-to-date.

It would be interesting to compare coverage of the SPE in European introductory texts. Certainly there are contemporary books by British psychologists that do provide more in-depth critical coverage of the SPE.

Griggs' advice for textbook authors is to position coverage of the SPE in the research methods chapter (instead of under social psychology), and to use the experiment's flaws as a way to introduce students to key issues such as ecological validity, ethics, demand characteristics and subsequent conflicting results. "In sum," he writes, "the SPE and its criticisms comprise a solid thread to weave numerous research concepts together into a good 'story' that would not only enhance student learning but also lead students to engage in critical thinking about the research process and all of the possible pitfalls along the way."


Griggs, R. (2014). Coverage of the Stanford Prison Experiment in Introductory Psychology Textbooks. Teaching of Psychology, 41(3), 195-203. DOI: 10.1177/0098628314537968

--further reading--
Foundations of sand? The lure of academic myths and their place in classic psychology
Tyranny and The Tyrant, From Stanford to Abu Ghraib (pdf; Phil Banyard reviews Zimbardo's book The Lucifer Effect).

Image credit: Jdec/Wikipedia
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Tuesday, 22 July 2014

The psychology of first impressions - digested

Piercings convey low intelligence and greater creativity, according to research
You’ll have had this experience - you meet a new person and within moments you feel good or bad vibes about them. This is you performing “thin slicing” - deducing information about a person based on “tells”, some more obvious than others.

Psychologists have studied this process in detail. For example, they've shown that we form a sense of whether a stranger is trustworthy in less than one tenth of a second. We can also rapidly deduce, with some accuracy, more specific information such as a person's intelligence and sexual orientation.

This post delves into our archive and beyond to digest the science of first impressions:

People who make more eye contact are perceived as more intelligent
Psychologists at Northeastern University asked participants to watch five-minute videos of strangers chatting to each other in pairs, and then to rate the strangers' intelligence. People in the videos who made more eye contact with their conversational partner, especially while talking, and to a lesser extent while listening, tended to be perceived as more intelligent. Other research has found that people who avoid eye contact are judged to be insincere and lacking in conscientiousness (this last result was found for women, but not men). Don't go too far with the eye contact though - if you lock on and don't let go, people will likely assume you're psychopathic.

White men with brown eyes are perceived to be more dominant than their blue-eyed counterparts, according to a 2010 study. However, a blue-eyed man looking to make himself appear more dominant would be wasting his time investing in brown-coloured contact lenses. The study by Karel Kleisner and colleagues at Charles University in the Czech Republic found that brown iris colour seems to co-occur with some other aspect of facial appearance that triggers in others the perception of dominance.

Back in the 70s, researchers created over fifty synthetic voices and played them to participants at various speeds. Increasing speech rate led participants to assume the owner of the voice was more competent. Similarly, in another study conducted during the same decade, researchers played their participants recordings of male interviewees, either slowed down by 30 per cent or at the normal rate. The participants who were played the slowed-down tapes rated the interviewees as less truthful, less fluent, and less persuasive. Other research has shown that people who “um” and “ah” a lot are assumed to not know what they're talking about.

Last year researchers asked participants to rate the same man who was shown either wearing an off-the-peg suit or a bespoke suit. When seen wearing the bespoke suit, the man was rated as more confident and successful. Other research has shown that people assume that the same job candidate in formal wear will be more likely to earn a higher salary and win promotion, as compared to when he looks more scruffy.

A study at Tilburg University showed that people wearing a luxury branded shirt (Tommy Hilfiger or Lacoste) were perceived as wealthier and higher status (than people wearing a non-branded or non-luxury shirt); more successful at getting passers-by to complete a questionnaire; more likely to be given a job; and more successful at soliciting money for a charity. But crucially, all these effects depended on the assumption that the shirt wearer owned the clothing.

In research on shoes, observers correctly discerned that more agreeable people tended to wear practical, affordable shoes (pointy toes, price and brand visibility were negatively correlated with agreeableness); that anxiously attached people tended to wear shoes that looked brand new and in good repair (perhaps in an attempt to make a good impression and avoid rejection); that wealthier people wore more stylish shoes; and that women wore more expensive-looking, branded shoes.

Research in 2012 involved observers rating pictures of men and women who were depicted with various numbers of facial piercings. As the number of piercings went up, the ratings of intelligence went down. On the other hand, a 2008 study found that a woman was judged to be more artistic and creative when she was shown with more piercings. 

Researchers at the University of Liverpool presented undergrads with line drawings of women that varied in the number of visible tattoos. "Results showed that tattooed women were rated as less physically attractive, more sexually promiscuous and heavier drinkers than untattooed women, with more negative ratings with increasing number of tattoos." A more recent study found that men were more likely to approach a woman lying on a beach when she bore a tattoo on her back, and to do so more quickly. Men also estimated they would have more chance of dating or having sex with a woman when she had a tattoo on her back. 

When researchers at the Wharton School, University of Pennsylvania, photoshopped pictures of men, so that they appeared to have shaven heads, the men were judged to be "more dominant, taller, and stronger than their authentic selves."

In 2012, researchers analysed point-light videos to identify what cues participants used to make judgments about a walker's personality. This led to the identification of two main factors - one was related to an expansive, loose walking style, which participants tended to interpret as a sign of adventurousness, extraversion, trustworthiness and warmth; the other was a slow, relaxed style, which the participants interpreted as a sign of low neuroticism. Although linked with these observer perceptions, the two walking styles were not in fact associated with walkers' actual personalities.

A 2011 study found that participants made many assumptions about people based on their style of handshake, but that the only accurate judgments concerned conscientiousness. The researchers' explanation was that conscientiousness is a trait that reflects how successfully a person can learn any complex behaviour, be that a musical instrument or a handshake. "The ubiquitous handshake may not be as ritualized or as precise as the Japanese tea ceremony," they said, "but it certainly requires some knowledge of the prevailing social norms and some interpersonal coordination."

This post is the first in a new series in which we attempt to digest the research on a given topic, or pertaining to a particular question. If there are any topics or questions you'd like us to digest, please let us know by commenting on this post or contacting the Digest editor.   

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.

Study of dynamic facial expressions suggests there are four basic emotions, not six

New research suggests that humans recognise facial emotional expressions in a dynamic way. We search for urgent signals first, before seeking out more nuanced information. The University of Glasgow researchers also argue their data show there are four basic facial expressions of emotion rather than the widely accepted six.

Rachael Jack and her colleagues developed computerised 3-D faces that began neutral and relaxed before transforming over one second into a random expression, created through a combination of different facial muscle movements. These standard facial actions were digitised from recordings of real people, then tweaked to create variants with different speeds of transformation (see video, above).

Sixty Western Caucasian participants (31 women) then categorised each random expression as happiness, surprise, fear, disgust, anger or sadness. When observers agreed in their categorisations, it meant something was signalling emotional information, and a technique called reverse correlation was then applied across all the expressions to establish which muscle movements were associated with which emotions, and when. Many of the findings were expected, such as perceiving happiness in a raised upper lip and a wide mouth, but others were more surprising.

Whereas happiness and sadness were identifiable early on, other emotions took longer to be distinguished. For example, surprise and fear involved the same two facial signals early on - a dropping of the jaw and raised eyelids. It was only with the later appearance of raised eyebrows that surprise was distinguished from fear. A similar pattern was found with anger and disgust: facial actions common to both (wrinkled nose, funnelled lips) appeared early, and the differentiator (a sneered upper lip for disgust) appeared later. It is thanks to the researchers' unique dynamic stimuli that these unfolding processes have been uncovered for the first time.

Jack and her team argue that the four emotions that take time to be distinguished (fear, surprise, anger and disgust) fall into two categories: a "what the heck" avoidance response to a sudden fast-approaching threat, and a "something needs dealing with" approach response to an interest or problem in our midst. They see each of these categories as a single basic emotion, with the later distinction (fear or surprise; anger or disgust) adding social information to the more fundamental biological signal. Their argument would suggest there are four basic human emotions (approach, avoidance, happiness, sadness), contradicting the existing emotion framework, which states there are six basic emotions.

The debate over the precise number of basic human emotions is likely to run for a while yet. For example, another taxonomy, based on analysis of voice, touch, and posture, claims that there are several basic forms of happiness, even before counting varieties of the other emotions.

Jack, R., Garrod, O., & Schyns, P. (2014). Dynamic Facial Expressions of Emotion Transmit an Evolving Hierarchy of Signals over Time. Current Biology, 24(2), 187-192. DOI: 10.1016/j.cub.2013.11.064

Thanks to Rachael Jack for permission to use the video and image from the study.
Post written by Alex Fradera (@alexfradera) for the BPS Research Digest.

Monday, 21 July 2014

It's time for Western psychology to recognise that many individuals, and even entire cultures, fear happiness

It's become a mantra of the modern Western world that the ultimate aim of life is to achieve happiness. Self-help blog posts on how to be happy are almost guaranteed popularity (the Digest has its own!). Pro-happiness organisations have appeared, such as Action for Happiness, which aims to "create a happier society for everyone." Topping it all, an increasing number of governments, including in the UK, have started measuring national well-being (seen as a proxy for "happiness") - the argument being that this is a potentially more important policy outcome than economic prosperity.

But hang on a minute, say Mohsen Joshanloo and Dan Weijers, writing in the Journal of Happiness Studies - not everyone wants to be happy. In fact, they point out that many people, including in Western cultures, deliberately dampen their positive moods. Moreover, in many nations, including Iran and New Zealand, many people are actually fearful of happiness, tending to agree with questionnaire items like "I prefer not to be too joyful, because usually joy is followed by sadness".

Looking into the reasons for happiness aversion, Joshanloo and Weijers identify four: believing that being happy will provoke bad things to happen; that happiness will make you a worse person; that expressing happiness is bad for you and others; and that pursuing happiness is bad for you and others. Let's touch on each of these.

Fear that happiness leads to bad outcomes is perhaps strongest in East Asian cultures influenced by Taoism, which posits that "things tend to revert to their opposite". A 2001 study asked participants to choose from a range of life-course graphs and found that Chinese people were more likely than Americans to choose graphs that showed periods of sadness following periods of joy. Other cultures, such as Japan and Iran, believe that happiness can bring misfortune because it causes inattentiveness. Similar fears are sometimes found in the West, as reflected in adages such as "what goes up must come down."

Belief that being happy makes you a worse person is rooted in some interpretations of Islam, the reasoning being that it distracts you from God. Joshanloo and Weijers quote the Prophet Muhammad: "were you to know what I know, you would laugh little and weep much" and "avoid much laughter, for much laughter deadens the heart." Another relevant belief here is the idea that being unhappy makes people more creative. Consider this quote from Edvard Munch: "They [emotional sufferings] are part of me and my art. They are indistinguishable from me ... I want to keep those sufferings."

In relation to the overt expression of happiness, a 2009 study found that Japanese participants frequently mentioned that doing so can harm others, for example by making them envious; Americans rarely held such concerns. In Ifaluk culture in Micronesia, meanwhile, Joshanloo and Weijers note that expressing happiness is "associated with showing off, overexcitement, and failure at doing one's duties."

Finally, the pursuit of happiness is believed by many cultures and philosophies to be harmful to the self and others. Take as an example this passage of Buddhist text: "And with every desire for happiness, out of delusion they destroy their own well-being as if it were their enemy." In Western thought, as far back as Epicurus, warnings are given that the direct pursuit of happiness can backfire on the self, and harm others through excessive self-interest. Also, it's been argued that joy can make the oppressed weak and less likely to fight injustice.

There's a contemporary fixation with happiness in much of the Western world. Joshanloo and Weijers' counterpoint is that, for various reasons, not everyone wants to be happy. From a practical perspective, they say this could seriously skew cross-cultural comparisons of subjective well-being. "It stands to reason," they write, "that a person with an aversion to expressing happiness ... may report lower subjective wellbeing than they would do otherwise." But their concerns go deeper: "There are risks for happiness studies in exporting Western psychology to non-Western cultures without undertaking indigenous analyses, including making invalid cross-cultural comparisons and imposing Western cultural assumptions on other cultures."

Joshanloo, M., & Weijers, D. (2013). Aversion to Happiness Across Cultures: A Review of Where and Why People are Averse to Happiness. Journal of Happiness Studies, 15(3), 717-735. DOI: 10.1007/s10902-013-9489-9

--further reading--
What's the difference between a happy life and a meaningful one?
Other people may experience more misery than you realise

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.