Behavioural Economics

Why anti-corruption strategies may backfire

One of the defining attributes of humans is that we are champion cooperators, cooperating at levels far beyond those observed elsewhere in the animal kingdom. How cooperation is sustained, particularly in anonymous large-scale societies, remains a central question for both evolutionary scientists and policy makers.

Social scientists frequently use behavioural game theory to model cooperation in laboratory settings. These experiments suggest that ‘institutional punishment’ can sustain cooperation in large groups, a set-up analogous to the role governments play in wider society. In the real world, however, corruption can undermine the effectiveness of such institutions.

In July’s edition of the journal Nature Human Behaviour, Michael Muthukrishna and his colleagues Patrick Francois, Shayan Pourahmadi and Joe Henrich published an experimental study which rather cleverly incorporated corruption into a classic behavioural economic game.

Corruption worldwide remains widespread, unevenly distributed and costly. The authors cite World Bank estimates that US$1 trillion is paid in bribes alone each year. However, levels of corruption vary considerably across countries. For example, estimates suggest that in Kenya 8 out of 10 interactions with public officials require a bribe. Conversely, indices suggest Denmark has the lowest level of corruption, and the average Dane may never pay a bribe in their lifetime.

Transparency International state that more than 6 billion people live in countries with a serious corruption problem. The costs of corruption range from reduced welfare programmes to deaths from collapsed buildings. In other words, corruption can kill.

Michael Muthukrishna’s work suggests that corruption is largely inevitable given our evolved psychological dispositions; the challenge is instead to find the conditions under which corruption and its detrimental impacts can be minimised. As Muthukrishna is quoted as saying in an LSE press release for the paper:

Corruption is actually a form of cooperation rooted in our history, and easier to explain than a functioning, modern state. Modern states represent an unprecedented scale of cooperation that is always under threat by smaller scales of cooperation. What we call ‘corruption’ is a smaller scale of cooperation undermining a larger-scale.

Playing Bribes

What follows is an overview of the study’s experimental design and results. If this is of little interest, I suggest skipping to the section titled ‘Backfire effect’.

To model corruption, the authors modified a behavioural economic game called the ‘institutional punishment game’. The participants were anonymous and came from countries with varying levels of corruption; overall, 274 people took part in the study. Each participant was provided with an endowment, which they could divide between themselves and a public pool. The public pool was then multiplied by some amount and divided equally among the players, regardless of their contributions.

The institutional punishment game is designed so that it is in every player’s self-interest to let others contribute to the public pool whilst contributing nothing oneself. However, the gain for the group overall is highest if everybody contributes the maximum possible. Each round, one group member was randomly assigned to be the leader, who could allocate punishments using taxes extracted from the other players.
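To make this incentive structure concrete, here is a minimal sketch of the payoff logic of a standard public goods game. The endowment, group size and multiplier are illustrative values of my own, not the parameters used in the study.

```python
def payoff(own_contribution, others_contributions, endowment=20, multiplier=1.5):
    """One player's earnings for a single round of a public goods game.

    Players keep whatever they do not contribute; the public pool is
    multiplied and then shared equally among all players.
    """
    group_size = 1 + len(others_contributions)
    pool = own_contribution + sum(others_contributions)
    return (endowment - own_contribution) + pool * multiplier / group_size

# With these illustrative values, free-riding pays more individually...
print(payoff(0, [20, 20, 20]))   # 42.5
print(payoff(20, [20, 20, 20]))  # 30.0
# ...even though the group as a whole earns most when everyone contributes.
```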

The ‘bribery game’ that Muthukrishna and his colleagues developed is the same as the basic game, except that each player had the ability to bribe the leader. The leader could therefore see both each player’s contribution to the public pool and the amount each player offered to them personally. The experimenters manipulated the ‘pool multiplier’ (a proxy for economic potential) and the ‘punishment multiplier’ (the power of the leader to punish).

For each player’s move, the leader could decide to do nothing, accept the bribe offered, or punish the player by taking away their points. Any points offered to the leader that he or she rejected were returned to the group member who made the offer. Group members could see only the leader’s actions towards them and their payoff, but not the leader’s actions towards other group members.
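Purely for illustration, the sketch below encodes the leader’s three options in a single round of the bribery variant. The point values, and the way the punishment multiplier scales the penalty, are my own assumptions for the example rather than the study’s actual payoffs.

```python
from dataclasses import dataclass

@dataclass
class Move:
    contribution: int   # points the member put into the public pool
    bribe_offered: int  # points offered privately to the leader

def leader_response(move: Move, action: str, punishment_multiplier: int = 3):
    """Return (change to the member's points, change to the leader's points).

    action is one of 'nothing', 'accept' or 'punish'. Rejected bribes are
    returned to the member who offered them, as in the study's design.
    """
    if action == "accept":
        # The leader pockets the bribe; the member loses what they offered.
        return (-move.bribe_offered, move.bribe_offered)
    if action == "punish":
        # Assumed scaling: the member loses points in proportion to the
        # leader's punitive power (the 'punishment multiplier').
        return (-punishment_multiplier, 0)
    # 'nothing': the offered bribe simply goes back to the member.
    return (0, 0)

# Example: a member contributes little but offers a bribe of 5 points.
print(leader_response(Move(contribution=2, bribe_offered=5), "accept"))  # (-5, 5)
print(leader_response(Move(contribution=2, bribe_offered=5), "punish"))  # (-3, 0)
```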

Compared with the basic public goods game, the addition of bribes caused a large decrease in public good provisioning (a decline of 25%).

Leaders with a stronger punishment multiplier at their disposal (referred to as ‘strong leaders’) were approximately twice as likely to accept bribes and around three times less likely to act in the group’s interest, for example by punishing free-riders. As the authors expected, more power led to more corrupt behaviour.

Having generated corruption, the authors introduced transparency to the bribery game. In the ‘partial transparency’ condition, group members could see not only the leader’s actions towards them, but also the leader’s own contributions to the public pool. However, they still could not see the leader’s actions towards other group members. In the ‘full transparency’ condition, information on each member and the leader’s subsequent actions was made fully available (that is, individual group members’ contributions to the pool, bribes offered to the leader, and the leader’s subsequent actions in each case).
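As a quick reference, the summary below lists what an ordinary group member could observe under each condition, as I read the design; the wording is mine, not the study’s materials.

```python
# What an ordinary group member can see in each condition (plain summary only).
VISIBILITY = {
    "bribery (baseline)": [
        "the leader's actions towards me",
        "my own payoff",
    ],
    "partial transparency": [
        "the leader's actions towards me",
        "my own payoff",
        "the leader's own contribution to the public pool",
    ],
    "full transparency": [
        "every member's contribution to the public pool",
        "every bribe offered to the leader",
        "the leader's response in each case",
    ],
}
```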

Although the costs of bribery were seen in all contexts, the detrimental effects were most pronounced in poor economic conditions.

The experiments demonstrated that the corruption mitigation strategies effectively increased contributions when leaders were strong or economic potential was high. When leaders were weak (that is, their punitive powers were low) and economic potential was poor, the apparent corruption mitigation strategy of full transparency had no effect, and partial transparency actually decreased contributions further, to levels lower than in the standard bribery game.

Backfire effect

The study indicates that corruption mitigation strategies help in some contexts, but elsewhere may cause the situation to deteriorate and can therefore backfire. As the authors state: “[…] proposed panaceas, such as transparency, may actually be harmful in some contexts.”

The findings are not surprising from a social psychological perspective, and support a vast literature on the impact of social norms on behaviour. Transparency and exposure to institutional corruption may reinforce the perception that most people engage in corrupt behaviour, and that such behaviour is permissible (or that one needs to engage in such dealings to succeed). Why partial transparency had a more detrimental impact than full transparency when leaders were weak is not made clear, however.

Remarkably, the authors found that participants who had grown up in more corrupt countries were more willing to accept bribes. The most plausible explanation presented is that exposure to corruption whilst growing up led to these social norms being internalized, which manifested in these individuals’ behaviour during the experiments.

It’s important to note that this is only one experimental study looking into anti-corruption strategies, and that caution is required when extending these research findings to practice. As the authors state: “Laboratory work on the causes and cures of corruption must inform and be informed by real-world investigations of corruption from around the globe.”

That aside, the authors’ research challenges widely held assumptions about how best to reduce corruption, and may help explain why ‘cures for corruption’ that prove successful in rich nations may not work elsewhere. To paraphrase the late Louis Brandeis, ‘sunlight is said to be the best of disinfectants, yet this may depend on climatic conditions and the prevalence of pathogens’.

Written by Max Beilby for Darwinian Business

Click here to read the full paper.

 

References

Muthukrishna, M., Francois, P., Pourahmadi, S., & Henrich, J. (2017). Corrupting cooperation and how anti-corruption strategies may backfire. Nature Human Behaviour.

Milinski, M. (2017). Economics: Corruption made visible. Nature Human Behaviour.

When Less is Best (LSE, 2017); Available here

Corruption Perceptions Index 2015 (Transparency International, 2015); Available here 

 

Image credit: George Marks/Getty Images.

 

The Knowledge Illusion, by Steven Sloman & Philip Fernbach

Most things are complicated, even things that appear rather simple.

Take the toilet as an example. As a thought experiment, would you be able to explain to someone else how a toilet works?

If you’re fumbling for an answer, you’re not alone. Most people cannot explain it either.

This is not just a party trick. Psychologists have used several means to discover the extent of our ignorance. For example, Rebecca Lawson at the University of Liverpool presented people with a drawing of a bicycle which had several components missing. They were asked to fill in the drawing with the missing parts.

Sounds easy, right? Apparently not.

Nearly half of the participants were unable to complete the drawings correctly. Also, people didn’t do much better when they were presented with completed drawings and asked to identify the correct one.

Four badly drawn bikes (Lawson, 2006)

To a greater or lesser extent, we all suffer from an illusion of understanding. That is, we think we understand how the world works when our understanding is rudimentary.

In their new book The Knowledge Illusion, cognitive scientists Steven Sloman and Philip Fernbach explore how we humans know so much, despite our individual ignorance.

Thinking is for action

To appreciate our mental limitations, we first need to ask ourselves: what is the purpose of the human brain? Answering this question ultimately leads to evolution, as the human brain has been honed by the forces of natural selection.

The authors note there is no shortage of explanations of what the human mind evolved for. For example, there are those who argue the mind evolved to support language, or that it is adapted for social interactions, hunting, or acclimatising to changing climates. “[…] [T]hey are all probably right because the mind actually evolved to do something more general than any of them… Namely, the mind evolved to support our ability to act effectively.”

This more general explanation is important, as it helps establish why we don’t retain all the information we receive.

The reason we’re not all hyperthymesics is that it would make us less successful at what we’ve evolved to do. The mind is busy trying to choose actions by picking out the most useful stuff and leaving the rest behind. Remembering everything gets in the way of focusing on the deeper principles that allow us to recognize how a new situation resembles past situations and what kind of actions will be effective.

The authors argue the mind is not like a computer. Instead, the mind is a flexible problem solver that stores the most useful information to aid survival and reproduction. Storing superficial details is often unnecessary, and at times counterproductive.

Community of knowledge

Evidently, we would not do very well if we relied solely on our individual knowledge. We may consider ourselves highly intelligent, yet we wouldn’t survive very long if we found ourselves alone in the wilderness. So how do we survive and thrive, despite our mental limitations?

The authors argue the secret of our success is our ability to collaborate and share knowledge.

[…][W]e collaborate. That’s the major benefit of living in social groups, to make it easy to share our skills and knowledge. It’s not surprising that we fail to identify what’s in our heads versus what’s in others’, because we’re generally- perhaps always- doing things that involve both. Whenever either of us washes dishes, we thank heaven that someone knows how to make dish soap and someone else knows how to provide warm water from a faucet. We wouldn’t have a clue.

One of the most important ingredients of humanity’s success is cumulative culture— our ability to store and transmit knowledge, enabled by our hyper-sociality and cooperative skills. This fundamental process is known as cultural evolution, and is outlined eloquently in Joe Henrich’s book The Secret of Our Success.

Throughout The Knowledge Illusion, the metaphor of a beehive is used to describe our collective intelligence. “[…][P]eople are like bees and society a beehive: Our intelligence resides not in individual brains but in the collective mind.” However, the authors highlight that unlike beehives which have remained largely the same for millions of years, our shared intelligence is becoming more powerful and our collective pursuits are growing in complexity.

Collective intelligence

In psychology, the study of intelligence has largely been confined to ranking individuals according to cognitive ability. The authors argue psychologists like general intelligence because it is readily quantifiable and has some power to predict important life outcomes. For example, people with higher IQ scores do better academically and perform better at their jobs.

Whilst there’s a wealth of evidence in favour of general intelligence, Sloman and Fernbach argue that we may be thinking about intelligence in the wrong way. “Awareness that knowledge lives in a community gives us a different way to conceive of intelligence. Instead of regarding intelligence as a personal attribute, it can be understood as how much an individual contributes to the community.”

A key argument is that groups don’t need a lot of intelligent people to succeed, but rather a balance of complementary attributes and skill-sets. For example, to run a company you need some people who are cautious and others who are risk takers; some who are good with numbers and others who are good with people.

For this reason, Sloman and Fernbach stress the need to measure group performance, rather than individual intelligence. “Whether we’re talking about a team of doctors, mechanics, researchers, or designers, it is the group that makes the final product, not any one individual.”

A team led by Anita Woolley at the Tepper School of Business has begun devising ways of measuring collective intelligence, with some progress made. The idea of measuring collective intelligence is new, and many questions remain. However, the authors contend that the success of a group is not predominantly a function of the intelligence of its individual members, but rather of how well they work together.

Committing to the community

For all its benefits, our communal knowledge also has dangerous consequences. The authors argue that believing we understand more than we do is the source of many of society’s most pressing problems.

Decades’ worth of research shows a significant gap between what science knows and what the public believes. Many scientists have tried addressing this deficit by providing people with more factual information. However, this approach has been less than successful.

For example, Brendan Nyhan’s experiments on vaccine opposition showed that factual information did not make people more likely to vaccinate their children. Some of the information even backfired: parents who were given stories of children who had contracted measles became more likely to believe that vaccines have serious side effects.

Similarly, the illusion of understanding helps explain the political polarisation we’ve witnessed in recent times.

In the hope of reducing political polarisation, Sloman and Fernbach conducted experiments to see whether asking people to explain their causal understanding of a given topic would make them less extreme. Although they found doing so for non-controversial matters did increase openness and intellectual humility, the technique did not work on highly charged political issues, such as abortion or assisted suicide.

Viewing knowledge as embedded in communities helps explain why these approaches don’t work. People tend to have a limited understanding of complex issues, and have trouble absorbing details. This means that people do not have a good understanding of what they know, and they rely heavily on their community for the basis of their beliefs. This produces passionate, polarised attitudes that are hard to change.

Despite having little to no understanding of complicated policy matters such as U.K. membership of the European Union or the American healthcare system, we feel sufficiently informed about such topics. More than this, we even feel righteous indignation when people disagree with us. Such issues become moralised, where we defend the position of our in-groups.

As stated by Sloman and Fernbach (emphasis added):

[O]ur beliefs are not isolated pieces of data that we can take and discard at will. Instead, beliefs are deeply intertwined with other beliefs, shared cultural values, and our identities. To discard a belief means discarding a whole host of other beliefs, forsaking our communities, going against those we trust and love, and in short, challenging our identities. According to this view, is it any wonder that providing people with a little information about GMOs, vaccines, or global warming have little impact on their beliefs and attitudes? The power that culture has over cognition just swamps these attempts at education.

This effect is compounded by the Dunning-Kruger effect: the unskilled just don’t know what they don’t know. This matters, because all of us are unskilled in most domains of our lives.

According to the authors, the knowledge illusion underscores the important role experts play in society. Similarly, Sloman and Fernbach emphasise the limitations of direct democracy– outsourcing decision making on complicated policy matters to the general public. “Individual citizens rarely know enough to make an informed decision about complex social policy even if they think they do. Giving a vote to every citizen can swamp the contribution of expertise to good judgement that the wisdom of crowds relies on.”

They defend themselves against charges that their stance is elitist or anti-democratic. “We too believe in democracy. But we think that the facts about human ignorance provide an argument for representative democracy, not direct democracy. We elect representatives. Those representatives should have the time and skill to find the expertise to make good decisions. Often they don’t have the time because they’re too busy raising money, but that’s a different issue.”

Nudging for better decisions

By understanding the quirks of human cognition, we can design environments so that these psychological quirks help us rather than hurt us. In a nod to Richard Thaler and Cass Sunstein’s philosophy of libertarian paternalism, the authors provide some nudges to help people make better decisions:

1. Reduce complexity

Because much of our knowledge is possessed by the community and not by us individually, we need to radically scale back our expectations of how much complexity people can tolerate. This seems pertinent for what consumers are presented with during high-stakes financial decisions.

2. Simple decision rules

Provide people rules or shortcuts that perform well and simplify the decision making process.

For example, the financial world is just too complicated and people’s abilities too limited to fully understand it.

Rather than try to educate people, we should give them simple rules that can be applied with little knowledge or effort– such as ‘save 15% of your income’, or ‘get a fifteen-year mortgage if you’re over fifty’.

3. Just-in-time education

The idea is to give people information just before they need to use it. A class in secondary school that teaches the basics of managing debt and savings, for example, is not that helpful.

Giving people information just before they use it means they have the opportunity to practise what they have just learnt, increasing the chance that it is retained.

4. Check your understanding 

What can individuals do to help themselves? A starting point is to be aware of our tendency to be explanation foes.

It’s not practical to master all details of every decision, but it can be helpful to appreciate the gaps in our understanding.

If the decision is important enough, we may want to gather more information before making a decision we may later regret.


Written by Max Beilby for Darwinian Business

Click here to buy a copy of The Knowledge Illusion

References

Fernbach, P. M., Rogers, T., Fox, C. R., & Sloman, S. A. (2013). Political extremism is supported by an illusion of understanding. Psychological Science, 24(6), 939-946.

Haidt, J. (2012) The Righteous Mind: Why good people are divided by politics and religion. Pantheon.

Henrich, J. (2016). The Secret of Our Success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton University Press.

Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2004). Academic performance, career potential, creativity, and job performance: Can one construct predict them all? Journal of Personality and Social Psychology, 86(1), 148-161.

Lawson, R. (2006). The science of cycology: Failures to understand how everyday objects work. Memory & Cognition, 34(8), 1667-1675.

Nyhan, B., Reifler, J., Richey, S., & Freed, G. L. (2014). Effective messages in vaccine promotion: a randomized trial. Pediatrics, 133(4), e835-e842.

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth and happiness. New Haven, CT: Yale University Press.

Thaler, R. H. (2013). Financial literacy, beyond the classroom. The New York Times.

Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686-688.