Why anti-corruption strategies may backfire

One of the defining attributes of humans is that we are champion cooperators, cooperating at levels far beyond those observed anywhere else in the animal kingdom. Understanding how cooperation is sustained, particularly in large-scale anonymous societies, remains a central question for both evolutionary scientists and policy makers.

Social scientists frequently use behavioural game theory to model cooperation in laboratory settings. These experiments suggest that ‘institutional punishment’ can sustain cooperation in large groups, a set-up analogous to the role governments play in wider society. In the real world, however, corruption can undermine the effectiveness of such institutions.

In July’s edition of the journal Nature Human Behaviour, Michael Muthukrishna and his colleagues Patrick Francois, Shayan Pourahmadi and Joe Henrich published an experimental study which rather cleverly incorporated corruption into a classic behavioural economic game.

Corruption worldwide remains widespread, unevenly distributed and costly. The authors cite World Bank estimates that US$1 trillion is paid in bribes alone each year. However, levels of corruption vary considerably across geographies. For example, estimates suggest that in Kenya 8 out of 10 interactions with public officials require a bribe. Conversely, indices suggest Denmark has the lowest level of corruption, and the average Dane may never pay a bribe in their lifetime.

Transparency International state that more than 6 billion people live in countries with a serious corruption problem. The costs of corruption range from reduced welfare programmes to deaths from collapsed buildings. In other words, corruption can kill.

Michael Muthukrishna’s work suggests that corruption is largely inevitable due to our evolved psychological dispositions; the challenge is apparently to find the conditions where corruption and its detrimental impacts can be minimised. As Muthukrishna is quoted saying in an LSE press release for the paper:

Corruption is actually a form of cooperation rooted in our history, and easier to explain than a functioning, modern state. Modern states represent an unprecedented scale of cooperation that is always under threat by smaller scales of cooperation. What we call ‘corruption’ is a smaller scale of cooperation undermining a larger-scale.

Playing Bribes

What follows is an overview of the study’s experimental design and results. If this is of little interest, I suggest skipping to the section titled ‘Backfire effect’.

To model corruption, the authors modified a behavioural economic game called the ‘institutional punishment game’. The participants were anonymous, and came from countries with varying levels of corruption. Overall, 274 participants took part in the study. The participants were provided with an endowment, which they could divide between themselves and a public pool. The public pool was then multiplied by some amount and divided equally among the players, regardless of their contributions.

The institutional punishment game is designed so that it is in every player’s self-interest to let others contribute to the public goods pool, whilst contributing nothing oneself. However, the gain for the group overall is highest if everybody contributes the maximum possible. Each round, one group member is randomly assigned the role of leader, and can allocate punishments using taxes extracted from the other players.
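Setting the leader’s punishments aside for simplicity, the tension described above can be sketched in a few lines of code. The endowment of 20, multiplier of 1.5 and group size of four are illustrative values, not the study’s actual parameters:

```python
# Sketch of one public goods game round: each player keeps what they
# don't contribute, and the shared pool is multiplied and split evenly.
def public_goods_payoffs(contributions, endowment=20, multiplier=1.5):
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# A lone free-rider out-earns the cooperators...
payoffs = public_goods_payoffs([20, 20, 20, 0])   # [22.5, 22.5, 22.5, 42.5]

# ...yet everyone earns more if all contribute than if nobody does.
all_in = public_goods_payoffs([20, 20, 20, 20])   # 30 points each
none_in = public_goods_payoffs([0, 0, 0, 0])      # 20 points each
```

This is exactly the conflict the leader’s punishment power is meant to resolve: free-riding pays individually, but universal contribution pays collectively.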

The ‘bribery game’ that Muthukrishna and his colleagues developed is the same as the basic game, except that each player had the ability to bribe the leader. Therefore, the leader could see both each player’s contributions to the public pool, and also the amount each player offered to them personally. The experimenters manipulated the ‘pool multiplier’ (a proxy for economic potential) and the ‘punishment multiplier’ (the power of the leader to punish).

For each player’s move, the leader could decide to do nothing, accept the bribe offered, or punish the player by taking away their points. Any points offered to the leader that he or she rejected were returned to the group member who made the offer. Group members could see only the leader’s actions towards them and their payoff, but not the leader’s actions towards other group members.
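The leader’s options towards a single group member can be sketched as follows. The tax of 2 points and punishment multiplier of 3 are illustrative values only, not the paper’s parameters:

```python
# Sketch of the leader's choice towards one group member in the bribery game.
# The leader sees the member's public contribution and private bribe offer,
# then does nothing, accepts the bribe, or punishes.
def resolve_leader_action(action, bribe_offered, member_points,
                          punishment_multiplier=3, tax=2):
    leader_gain = 0
    if action == "accept":
        leader_gain = bribe_offered                    # leader keeps the bribe
    elif action == "punish":
        member_points -= tax * punishment_multiplier   # stronger leaders hit harder
        member_points += bribe_offered                 # rejected bribes are returned
    else:                                              # do nothing
        member_points += bribe_offered                 # rejected bribes are returned
    return leader_gain, member_points
```

In this sketch, a ‘strong leader’ corresponds to a larger punishment multiplier, making each punishment decision more consequential for the punished member.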

Compared with the basic public goods game, the addition of bribes caused a large decrease in public good provisioning (a decline of 25%).

Leaders with a stronger punishment multiplier at their disposal (referred to as ‘strong leaders’) were approximately twice as likely to accept bribes and considerably less likely to punish free-riders. As the authors expected, more power led to more corrupt behaviour.

Having generated corruption, the authors introduced transparency to the bribery game. In the ‘partial transparency’ condition, group members could see not only the leader’s actions towards them, but also the leader’s own contributions to the public pool. However, they did not see the leader’s actions towards other group members. In the ‘full transparency’ condition, information on each member and the leader’s subsequent actions was made fully available (that is, individual group members’ contributions to the pool, bribes offered to the leader, and the leader’s subsequent actions in each case).

Although the costs of bribery were seen in all contexts, the detrimental effects were most pronounced under poor economic conditions (a low pool multiplier).

The experiments demonstrated that corruption mitigation effectively increased contributions when leaders were strong or economic potential was high. When leaders were weak and economic potential was poor, the apparent corruption mitigation strategy of full transparency had no effect, and partial transparency actually decreased contributions further, to levels below those of the standard bribery game.

Backfire effect

The study indicates that corruption mitigation strategies help in some contexts, but elsewhere may cause the situation to deteriorate and can therefore backfire. As the authors state: “[…] proposed panaceas, such as transparency, may actually be harmful in some contexts.”

The findings are not surprising from a social psychological perspective, and support a vast literature on the impact of social norms on behaviour. Transparency and exposure to institutional corruption may reinforce the perception that most people engage in corrupt behaviour, and that such behaviour is permissible (or that one must also engage in such dealings to succeed). Why partial transparency had a more detrimental impact than full transparency when leaders were weak, however, is not made clear.

Remarkably, the authors found that participants who had grown up in more corrupt countries were more willing to accept bribes. The most plausible explanation presented is that exposure to corruption whilst growing up led to these social norms being internalized, which manifested in these individuals’ behaviour during the experiments.

It’s important to note that this is only one experimental study looking into anti-corruption strategies, and that caution is required when extending these research findings to practice. As the authors state: “Laboratory work on the causes and cures of corruption must inform and be informed by real-world investigations of corruption from around the globe.”

This aside, the authors’ research challenges widely held assumptions about how best to reduce corruption, and may help explain why ‘cures for corruption’ that prove successful in rich nations may not work elsewhere. To paraphrase the late Louis Brandeis: sunlight is said to be the best of disinfectants, yet this may depend on climatic conditions and the prevalence of pathogens.

Written by Max Beilby for Darwinian Business

Click here to read the full paper.



Muthukrishna, M., Francois, P., Pourahmadi, S., & Henrich, J. (2017). Corrupting cooperation and how anti-corruption strategies may backfire. Nature Human Behaviour.

Milinski, M. (2017). Economics: Corruption made visible. Nature Human Behaviour.

When Less is Best (LSE, 2017); Available here

Corruption Perceptions Index 2015 (Transparency International, 2015); Available here 


Image credit: George Marks/Getty Images.


Charismatic Leadership Through the Lens of Evolution

One of the defining features of human psychology is our extraordinary prosociality. How can cooperation and prosocial behaviour be maintained, despite the immediate temptations to free-ride and defect?

In a paper published in the September edition of the journal Evolution & Human Behavior, organisational psychologists Allen Grabo and Mark van Vugt explore the origins and functions of charismatic leadership.

Charismatic leaders have played a prominent role throughout history, and yet a definition of what charismatic leadership actually is remains elusive.

The authors argue that the ultimate function of charismatic leadership is to effectively promote and sustain prosocial behaviour within groups. Using the terminology of evolutionary psychology, the authors contend charismatic leadership is “[…] a signalling process in which a leader conveys their ability to solve urgent coordination and cooperation challenges in groups”.

They continue:

This process is context-dependent, but fundamentally consists of (1) attracting attention to recruit followers, (2) making use of extraordinary rhetorical abilities and knowledge of cultural symbols and rituals to inspire and offer a vision, (3) minimizing the perceived risks of cooperation, and (4) aligning these followers toward shared goals.

Grabo and van Vugt suggest charismatic leadership helps foster group cohesion, even as populations grow larger and less kin-based than those of our hunter-gatherer ancestors.

The Charismatic Prosociality Hypothesis

Three studies were conducted to test the ‘charismatic prosociality hypothesis’. The authors recruited participants online, and used charismatic stimuli and experimental economic games to test it.

For the first two studies, the researchers capitalised on the wealth of TED talks available, and identified videos which viewers found similarly interesting but which were presented by speakers scoring either high or low in charisma. Participants watched one of these TED talks before participating in experimental economic games: the ‘Dictator‘ and ‘Trust‘ Games.
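For readers unfamiliar with these games, their payoff logic can be sketched briefly. This is a minimal sketch: the 3x multiplier on the amount sent in the Trust Game is the conventional choice, assumed here rather than taken from the paper:

```python
# Dictator Game: one player unilaterally splits an endowment; the amount
# given away is a direct measure of generosity.
def dictator_payoffs(endowment, given):
    return endowment - given, given

# Trust Game: whatever the sender transfers is multiplied (3x assumed),
# then the trustee decides how much to return. The amount returned
# measures the trustee's prosociality.
def trust_payoffs(endowment, sent, returned, multiplier=3):
    sender = endowment - sent + returned
    trustee = sent * multiplier - returned
    return sender, trustee

dictator_payoffs(10, 4)     # dictator keeps 6, recipient gets 4
trust_payoffs(10, 5, 8)     # sender ends with 13, trustee with 7
```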

Participants who had watched the more charismatic TED talk gave more in the Dictator Game than the participants in the non-charismatic condition. For those playing the Trust Game, the Trustees behaved more pro-socially (returned more of an initial amount sent by the first player) in the charismatic condition, versus the non-charismatic condition.

To test the generalizability of the effects observed in the initial studies, the authors made use of an entirely different ‘charismatic manipulation’. The authors instead primed participants by asking them to imagine a charismatic (or non-charismatic) individual, and to write a short description about this person. Afterwards, the primed respondents participated in the experimental economic games. The authors added ‘The Stag Hunt‘ Game, which measures cooperation in a more abstract way than the strict allocation of money.

The increased prosocial behaviour observed in the high charisma condition within the Dictator and Trust Games was replicated with the prime. In the Stag Hunt Game, participants in the charismatic condition were more likely to cooperate than those in the non-charismatic condition.

Overall, the findings provide initial evidence for the theory of charismatic leadership being an instrument to galvanise cooperation and prosociality among strangers.

Arguably, one limitation of the methodology further supports the hypothesis: the studies were confined to online experiments. One would expect significantly stronger prosocial effects when people are exposed to charismatic leaders in naturalistic settings.

The Dark Side of Charismatic Leadership

Of course, the authors focused on the positive aspects of charismatic leadership. Charisma has a dark side, which Grabo and van Vugt acknowledge:

The present article focuses exclusively on the positive effects of charismatic leadership, but this is by no means the entire story. In fact, there is much more to be said about the “dark side” of charismatic leadership, the dangers which can result when a leader takes advantage of the extreme devotion and commitment of followers for selfish or immoral reasons by signaling dishonestly their intentions to benefit the group. History is full of examples of individuals, such as cult members or suicide bombers, who were unable to abandon their commitment to a charismatic leader even in the face of conflicting information, with disastrous outcomes. One way of understanding such actions is to view them as the results of an evolved “psychological immune system” which functions to defend firmly held convictions against change by novel information. While such a system might have been beneficial for group cohesion in the past – when contact with outgroup members was rare and perhaps more dangerous – it is perhaps best considered an evolutionary mismatch in the modern world.

Click here to read the full paper

Post written by Max Beilby for Darwinian Business

You can read Max’s review of Mark van Vugt and Anjana Ahuja’s book Selected: Why some people lead, why others follow, and why it matters here


ATCG: Evolutionary Predictions for Organizational Cooperation

What follows is an overview of Michael Price (Brunel University, London) and Dominic Johnson’s (Edinburgh University) ‘Adaptationist Theory of Cooperation in Groups’, as outlined in Gad Saad’s (2011) Evolutionary Psychology in the Business Sciences.

To help explain organizational cooperation from an evolutionary perspective, Price and Johnson developed the ‘Adaptationist Theory of Cooperation in Groups’, abbreviated to ATCG.

The acronym has a double meaning: hardcore science nerds will note that A, T, C and G are also the letters of the four bases of DNA (adenine, thymine, cytosine, and guanine). The authors note this conveniently highlights the theory’s biological foundations.

The reasoning behind an evolutionary perspective on group cooperation is this: “Managers could more efficiently promote cooperation within their organizations if they had greater understanding of how evolution designed people to cooperate.” (p. 95)

The authors synthesised evolutionary research from an individual-level adaptationist perspective into a coherent theory of group cooperation. The basic premise is that people cooperate in groups to maximize their individual fitness (their ability to survive and reproduce).

ATCG takes into account ethnographic and archaeological evidence suggesting that in the environments where humans evolved, cooperating in groups (whether for hunting, warfare, shelter construction, or predator defence) provided individuals with benefits they could not have obtained by themselves. For example, cooperative hunting not only yielded more meat for less effort than hunting alone, but also reduced the risk of starvation, as catches were pooled and distributed evenly among the hunters.

The benefits of group cooperation go beyond reciprocation from fellow cooperators (‘reciprocal altruism’), and can involve much more than just a share of the first-order benefits, such as meat. Price and Johnson note that cooperation also enhances an individual’s social status (‘competitive altruism’). For example, a skilled hunter would be highly valued by the group, attracting resources of many kinds and thus making the hunter more attractive as a mate.

This is not just theoretical: field studies demonstrate that hunting skill is associated with social status and reproductive success in hunter-gatherer societies (see Smith, 2004).

Group Cooperation in Organizations

Price and Johnson argue that modern organizations use these benefits to increase group cooperation:

“The method of motivating employees that is used in most organizations is to offer them social status in exchange for their help in producing the first-order resource. And just as in the ancestral past, higher status contributors – those on whom production most depends – attract greater economic compensation, in order to convince them to remain in the organization and to continue to contribute.” (p. 100)

Interestingly, the authors question whether cooperation is always a good thing. For example, they cite classic group decision-making research demonstrating that ‘nominal’ groups (the aggregated ideas of individuals working alone) generate better ideas than groups of interacting individuals.

A key issue for group cooperation is an ancient human dilemma: the free-rider problem. Especially in larger groups, there is always the temptation to minimize one’s effort whilst letting other group members do the hard work (also known as ‘slacking’).

In order to motivate employees to behave in group-beneficial ways, Price and Johnson suggest allocating rewards fairly, and allowing employees to compete for these rewards by contributing in ways that most benefit the organization:

“If an employee makes a contribution that benefits the organization, for example by introducing a product improvement or new marketing strategy, a manager should never assume that the employee was selflessly motivated or is indifferent about being recognized and rewarded for this contribution, even if that employee modestly plays down the extent of his or her own contribution. If an employee does not receive some individual-level benefit that is commensurate with the value of his or her contribution, the employee will probably feel angry and exploited and lose motivation to cooperate…” (p. 105).

A key assumption of ATCG is that in order to cooperate adaptively, group members must ensure that their ‘benefit-to-contribution ratios’ are no smaller than those of co-members. In other words, members monitor that their efforts do not exceed those of fellow group members; where they do, the extra contribution needs to be compensated accordingly.

Frequency Dependence

And here we get to the heart of the issue: the ‘frequency dependence’ of cooperation.

What is the best strategy for an individual depends on what other group members are doing: whether they are free-riding, reciprocating, or unconditionally cooperating.

In a population made up predominantly of free-riders, the superior individual strategy is to avoid free-riders and to identify and collaborate with fellow cooperators. If the population is dominated by reciprocators, it makes most sense to cooperate unconditionally, as you get all the benefits of cooperation while avoiding the costs of verification and checking. However, if the population is dominated by unconditional cooperators, it is inevitably invaded by free-riders, because cheaters can exploit their over-trusting cooperativeness.

According to ATCG, there is such a thing as being too trusting.

You can also think of frequency dependence as a game of rock, paper, scissors: the successful strategy depends on the strategies pursued by others.
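A minimal sketch conveys this frequency dependence. The payoff numbers below are invented to match the ordering described above, not taken from ATCG:

```python
# Sketch: expected payoff of each strategy given the population mix.
# Invented pairwise payoffs: free-riders exploit unconditional cooperators,
# reciprocators resist free-riders but pay a small verification cost, and
# cooperators do best when that cost can be safely skipped.
PAYOFF = {  # PAYOFF[me][other] = my payoff when I meet `other`
    "free_rider":   {"free_rider": 0, "reciprocator": 0, "cooperator": 5},
    "reciprocator": {"free_rider": 0, "reciprocator": 3, "cooperator": 3},
    "cooperator":   {"free_rider": -2, "reciprocator": 4, "cooperator": 4},
}

def expected_payoffs(freqs):
    """Average payoff of each strategy in a population with the given frequencies."""
    return {me: sum(freqs[other] * row[other] for other in row)
            for me, row in PAYOFF.items()}

# Among unconditional cooperators, free-riding pays best (invasion)...
among_cooperators = expected_payoffs(
    {"free_rider": 0.0, "reciprocator": 0.0, "cooperator": 1.0})

# ...but among reciprocators, unconditional cooperation beats free-riding.
among_reciprocators = expected_payoffs(
    {"free_rider": 0.0, "reciprocator": 1.0, "cooperator": 0.0})
```

As in rock, paper, scissors, no single strategy wins in every population: which one pays depends entirely on the mix of strategies around it.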

ATCG proposes many novel predictions, over and above traditional organizational psychology theories such as equity theory. For example, ATCG predicts that individuals who have more to gain from engaging in competition will be relatively pro-equity, rather than pro-equality. Similarly, sex differences in cooperation are highlighted: as men usually gain more reproductive benefits from social status than women do, ATCG predicts that males tend to have a greater desire to compete.

Male competitiveness is also a key driver of group cooperation. Mark Van Vugt and his colleagues’ experimental research demonstrated that males increased their in-group cooperation significantly in response to competition from rival groups, whereas females were relatively unaffected by this competition.

Group Cooperation ≠ Group Selection

What is noticeable is how dismissive the authors are of group selection. Price and Johnson argue that group selection theory adds no predictive power beyond individual-level selection. But is this true?

Biologist David Sloan Wilson and his colleagues challenged several decades’ worth of research on group decision making by applying a group selection perspective to the subject. Contrary to conventional research suggesting that groups reach sub-optimal decisions compared with individuals, Wilson’s research illustrated that groups out-compete individuals as the complexity of the task increases, as would be expected from group selection (and common sense).

Noticeably, Price and Johnson didn’t cite this research.

More fundamentally, can the rise of empires, nation states and multinational corporations over the last 10,000 years be explained as by-products of reciprocal altruism? Probably not.

Rather, ‘Cultural Multilevel Selection’ provides greater explanatory power regarding the rise of human civilisations.

Think of the many millions of servicemen who have died defending their countries throughout history. Staring death in the face, did these soldiers really calculate the ‘benefit-to-contribution ratio’ of engaging in lethal combat?

Arguably ‘pure altruism’ does exist, although it is probably a sliver of humanity. As Jonathan Haidt states in The Righteous Mind, we humans are ‘90% chimp and 10% bee’.

There are infrequent but highly impactful situations where individuals will sacrifice their welfare for the benefit of the group, and new research suggests when and why this happens: when a society faces an existential threat from a rival group.

Of course, this is in the context of group survival and military combat. Organizations such as corporations are unlikely to elicit such altruistic behaviour.

One can envision a ‘Multilevel Adaptationist Theory of Cooperation in Groups’, which incorporates these diverse findings into a coherent theory.

Written by Max Beilby

Click here to buy a copy of Evolutionary Psychology in the Business Sciences.



Haidt, J. (2012). The Righteous Mind: Why good people are divided by politics and religion. Vintage.

Smith, E. A. (2004). Why do good hunters have higher reproductive success?. Human Nature, 15(4), 343-364.

Turchin, P. (2015). Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth. Beresta Books.

Van Vugt, M., & Ahuja, A. (2011). Naturally selected: The evolutionary science of leadership. HarperBusiness.

Van Vugt, M., De Cremer, D., & Janssen, D. P. (2007). Gender differences in cooperation and competition: the Male-Warrior hypothesis. Psychological Science, 18(1), 19-23.

Wilson, D. S., Timmel, J. J., & Miller, R. R. (2004). Cognitive cooperation. Human Nature, 15(3), 225-250.

Wilson, D. S., Van Vugt, M., & O’Gorman, R. (2008). Multilevel selection theory and major evolutionary transitions: implications for psychological science. Current Directions in Psychological Science, 17(1), 6-9.