Human Nature at Work, with Andrew O’Keeffe

Here’s a podcast I recorded with Andrew O’Keeffe, for the Evolution Institute.

We talk about Andrew’s work helping leaders make better sense of the human dimension of their role, so that they can work with (rather than against) human nature. We also discuss the coronavirus pandemic, and its potential long-term impacts on working practices.

We recorded this episode on the 7th October, 2020.

About Andrew

Andrew O’Keeffe is director of Hardwired Humans, a consulting firm that helps organisations design their people strategies to fit human instincts. Andrew is also author of the books Hardwired Humans and The Boss.

Andrew has a close connection with Dr Jane Goodall and the Jane Goodall Institute. Over the last decade when Jane Goodall has visited Australia, Jane and Andrew have joined forces to speak to business leaders about the importance of our social instincts (Dr Goodall talking about chimps, and Andrew talking about humans).  

Managing the Human Animal, with Nigel Nicholson

Here’s a podcast episode I recorded with Nigel Nicholson, for the Evolution Institute.

Nigel Nicholson and I discuss the application of evolutionary psychology to business and management. We cover Nigel Nicholson’s academic career, and his books Managing the Human Animal, Family Wars, and The “I” of Leadership. We also explore the impacts of the pandemic on the world of work.

We recorded this episode on the 1st September, 2020.

How culture explains our weak response to the coronavirus

The sneakiness of the novel coronavirus has wreaked havoc worldwide.

Although the coronavirus is a global pandemic, what’s striking is how the pathogen’s destruction has varied across regions.

Whilst East Asia has largely got a grip on the virus, Europe is still reeling. The United Kingdom recently pipped Italy to claim Europe’s highest death toll, with a tally that dwarfs all but a handful of nations. The United States has established itself as the world’s coronavirus leader— although not in the way President Trump would want us to believe. And Brazil appears to be the new epicentre of the pandemic, with growing fears that its healthcare system will not survive the oncoming onslaught.

This all raises the question: why have Europe and the Americas been hit so much harder by the pandemic?

If your eyes are glued to the news, you’ll be able to point your finger at the culprits. For example, we can blame our politicians— who were quick to dismiss scientists’ warnings and too slow to act.

Whilst there’s truth to this claim, it isn’t a sufficient explanation. After all, it doesn’t explain why our politicians didn’t take the threat seriously in the first place, nor why whole continents struggled to contain the coronavirus.

To help make sense of this, Michele Gelfand and her colleagues have recently released a preprint which explores the role of culture in our response to the outbreak.

Rule makers, rule breakers

Michele Gelfand is an American cultural psychologist, and author of Rule Makers, Rule Breakers. Michele has dedicated her life’s work to solving what has long been considered an enigma: why do cultures differ?

Having conducted painstaking research across the world’s diverse societies, Michele discovered that cultural differences essentially boil down to a single dimension: how ‘tight’ or ‘loose’ cultures are. That is, whether groups prioritise order and strictly abide by rules, or whether they are more permissive and disorganised.

Tight countries have many rules in place, and punishments are strictly enforced (think of Singapore, where the sale of chewing gum is banned). Citizens in tight countries are used to a high degree of monitoring aimed at curtailing bad behaviour. In contrast, loose societies have laxer rules— and are more tolerant and accepting of transgressions (think of Italy and Spain).

Crucially, Michele found that these cultural differences are not random. Rather, countries with the most draconian laws and harshest punishments are those that have historically faced a barrage of existential threats.

Throughout our evolutionary history, we humans have faced hostile forces of nature. These persistent foes include famine, natural disasters, invasions from rival tribes— and you guessed it— outbreaks of infectious disease.

Because these threats are present to varying degrees, our cultural practices and social norms have evolved accordingly— tightening up in the presence of existential threats, which provides protection against danger. In contrast, societies that have faced fewer threats have experienced the luxury of loosening— cultivating social norms that favour freedom and self-expression.

As with all things in life, there’s a clear trade-off. Tight cultures instil order and stability, at the cost of being less tolerant and creative. On the other hand, loose cultures are open and dynamic— with the drawback of being more chaotic and disorderly.

Despite some overlap, Michele makes clear that tight and loose transcend political ideology, and do not correspond with the ‘left-right’ political spectrum.

A failed response

This trade-off between tightness and looseness was clear for all to see during the coronavirus’ initial exponential explosion. Famously tight countries such as Singapore mobilised an effective response early on. Meanwhile, looser countries like Italy did not initially take the threat as seriously— and as a consequence are still suffering.

Armed with their knowledge of cultural evolution, Michele and her colleagues wondered how much tightness and looseness explained countries’ initial responses to the outbreak.

Specifically, the team predicted that countries which are culturally tight and have highly efficient governments would respond most effectively to the pandemic. That is, they’d have fewer people infected and, in turn, fewer deaths.

Why would the efficiency of governments matter? They suspected tightness may only provide protection when governments also have the expertise and resources necessary to respond in a timely manner.

Michele’s team used a couple of tools to test this.

First, they crunched government statistics on the coronavirus worldwide, and cross-referenced these with their data on cultural differences. They also fed in key economic and demographic information, allowing them to predict both the number of infections and deaths from the coronavirus disease.

Like forensic accountants, they also unearthed countries underreporting coronavirus cases— and corrected for this in their analysis.

To complement their slicing and dicing, they also created a computer simulation to model how people respond to infectious outbreaks (think of The Sims computer game. But instead of Sims spreading ‘poopy pants’, they’re catching coronavirus).

Tightness saves lives

So, what did Michele and her team find?

The team found that tightness and government efficiency interacted to predict infection rates— and that this relationship strengthened with more information fed into their equations.

For countries with inefficient governments, tightness was actually associated with slightly higher infection rates. However, countries with tight cultures and highly efficient governments had significantly fewer infections and overall deaths.

Their algorithms revealed several other important factors that predict infections. Specifically, they discovered that developed countries with high levels of wealth inequality and older populations had the highest number of infections and subsequent deaths (which is not surprising, as we know COVID-19 is a disease that mainly kills the elderly).

To model an infectious outbreak, the team tailored the Prisoner’s Dilemma (no, this isn’t the dilemma governments faced when releasing prisoners early to prevent the pathogen’s spread. Rather, Prisoner’s Dilemma is one of game theory’s iconic strategic games).

During the early stages of the simulation, tight and loose cultures exhibited similar levels of cooperation. However, as time passed and The Sims zombie apocalypse was in full swing, big differences emerged. Automatons in tight cultures found it easier to copy each other’s cooperative behaviour— and therefore had higher rates of survival. In contrast, those in loose cultures didn’t fare so well.

Their simulation suggests that tight cultures may mount a more effective response to epidemics because people in tight cultures are more likely to conform and copy people’s survival strategies. If this is correct, tightness may only be effective when social norms championing cooperation are established early on in a pandemic. If they aren’t, tightness may not provide any additional protection.
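The mechanism the team describes can be sketched in a few lines of Python. To be clear, this is my own minimal illustration of norm-copying, not the authors’ evolutionary game-theoretic model; the function name and all the numbers below (sample size, starting cooperation rate, number of sweeps) are assumptions for the sketch. Agents either cooperate with protective norms or defect, and with probability `conformity` an agent copies the majority behaviour of a few sampled peers (a ‘tight’ culture corresponds to high conformity). Assuming cooperation is seeded early, as the paper suggests is necessary, the tight population locks the norm in while the loose one drifts back towards chance.

```python
import random

def simulate(conformity, n_agents=100, sweeps=50, seed=1):
    """Toy norm-copying model (illustrative parameters only).

    Each agent either cooperates (True) or defects (False). On each
    update, with probability `conformity` the agent adopts the majority
    behaviour of 5 randomly sampled peers; otherwise it experiments at
    random ('loose' behaviour). Cooperation is seeded early, at ~70%.
    Returns the final fraction of cooperators."""
    rng = random.Random(seed)
    agents = [rng.random() < 0.7 for _ in range(n_agents)]
    for _ in range(sweeps * n_agents):
        i = rng.randrange(n_agents)
        if rng.random() < conformity:
            sample = [agents[rng.randrange(n_agents)] for _ in range(5)]
            agents[i] = sum(sample) >= 3   # copy the majority
        else:
            agents[i] = rng.random() < 0.5  # experiment at random
    return sum(agents) / n_agents
```

With these assumed settings, a tight population (`simulate(0.95)`) ends up almost entirely cooperative, while a loose one (`simulate(0.2)`) hovers near 50%— mirroring the higher survival rates the tight automatons enjoyed.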

Surviving the pandemic

As this paper is yet to be published, one needs to be careful when commenting on it. However, given both the rigour of the research and the extraordinary circumstances we now face, drawing practical implications from the paper seems justified.

Reflecting on Michele’s grand theory, what screams out is the need for Western democracies to tighten up.

Several European countries have experienced intolerable suffering from the avalanche of coronavirus cases, and had no choice but to impose draconian measures. Conversely, countries such as the United Kingdom have adopted a more hands-off approach— where the rules that have been put in place are laxer and less strictly enforced. Not coincidentally, the United Kingdom is now one of the world’s worst affected countries.

Bar a miracle, we’ll be living with the coronavirus for some time to come. For nations such as the UK to overcome the pandemic, we’ll need to tighten up our cultural practices to minimize disruption and protect vulnerable people from future outbreaks.

To dispel any misconceptions, I am not advocating for our governments to become more autocratic— far from it. The team controlled for authoritarianism in their study, and found that it didn’t actually slow the rate of infections. While it’s important for governments to promote practices that stop the virus spreading, Michele’s team argue that heavy-handed responses to the pandemic may cause irreparable harm. Also, the excessive use of force can hamper innovation— which becomes increasingly important when devising long-term solutions.

Rather, we should aspire to what Michele has coined ‘cultural ambidexterity’. That is, we should retain the positive aspects of our loose cultures— such as tolerance for diversity and greater creativity— whilst also having the flexibility to tighten up when necessary.

Think this can’t be done? Look south to Australasia.

New Zealand is one of the loosest countries in the world. Yet under Jacinda Ardern’s leadership, Middle Earth mobilised an effective response to the coronavirus early on. New Zealand now has one of the lowest death rates among Western nations, and Kiwis are even bracing themselves for coronavirus ‘elimination day’.

The tight-loose seesaw

Whether it’s business partners or family members squabbling, Michele has found that clashes between people leaning tight or loose are a major source of conflict. Noticeably, ‘tight-loose’ clashes have become defining stories of the coronavirus in the UK.

Days after Boris Johnson ordered Britain to “stay at home, protect the NHS, save lives”, Derbyshire Police received a stiff telling off for using drones to shame people for visiting the Peak District (you could call this ‘meta-shaming’). On the other hand, a steady stream of social media posts complained about people flouting the rules— and the reticence of the police to enforce them.

More recently, the justifications provided for Dominic Cummings’ coronavirus road trips were frankly absurd— and have scorched political capital and damaged the public’s trust in the UK Government. Although Boris Johnson is betting this saga will blow over, this breach may undermine the restrictions in place and the next phase of the government’s strategy.

To successfully navigate the pandemic, we must ensure that the rules in place are properly and consistently enforced. However, we also need to calibrate our tightness to reflect the actual level of risk— tightening the rules when cases flare up, and relaxing them once the threat from the virus wanes.

Reopening for business

As Europe and the United States begin easing restrictions and reopening for business, risks abound.

Management gurus are proclaiming that the office ‘is now dead’. Although there’ll certainly be long-lasting changes to the way we work, declaring the end of the office is premature. The pandemic has demonstrated that whole companies can successfully work from home, but there are several reasons why people will still want to meet their colleagues and clients in person (at the end of the day, we are social primates).

By understanding the hidden forces of social norms, business leaders can tilt their companies towards the ideal tight-loose balance in the age of the coronavirus.

I’ll provide a couple of examples.

Before the pandemic, people who came into work sick were frequently deemed more loyal and dedicated employees (particularly in tight corporate cultures, where taking time off was seen as slacking). However, this is nonsensical. Not only does coming into work sick jeopardise your recovery and therefore productivity, it also risks spreading the illness to other employees. In the wake of the coronavirus, this social norm needs to be flipped: no more brownie points for coming into work sick, but rather ostracism for putting other people’s lives at risk.

Whilst we need to tighten up our hygiene standards, we also need looseness to foster innovative working practices. If we cannot resume business without causing a resurgence of infections, we face a bleak future of continuously stalling and restarting our economy.

A team of Israeli scientists have proposed a rather ingenious solution to this dilemma, by exploiting a key property of the coronavirus: its ‘latent period’. On average, there is a three-day window between someone being infected with the virus and actually being able to spread it to others.

The scientists’ solution is to work in two-week cycles, in a system dubbed ‘10:4’. In this arrangement, people work as normal for four days straight. Having then passed the latent period, and therefore being possibly infectious, they work from home in isolation for ten days. The scientists’ models suggest that this two-week working cycle can drastically reduce infection rates, causing cases to drop off a cliff.
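The arithmetic behind the scheme is easy to check. The sketch below is my own illustration rather than the Israeli team’s model, and the five-day infectious window is an assumed figure; only the three-day latent period comes from the description above. It counts the days on which an infected person is both infectious and at the workplace under a repeating schedule.

```python
def infectious_workdays(infection_day, latent=3, infectious_len=5,
                        work_days=4, cycle=14):
    """Days on which a person is both infectious and at work.

    Day 0 is the first day of a cycle's work block; day d is a work day
    when d % cycle < work_days. The person becomes infectious `latent`
    days after infection and stays infectious for `infectious_len` days
    (an assumed window, for illustration only)."""
    infectious = range(infection_day + latent,
                       infection_day + latent + infectious_len)
    return sum(1 for day in infectious if day % cycle < work_days)
```

Under these assumptions, someone infected on the first day of the work block overlaps with colleagues on just one infectious day under the 10:4 cycle, versus three under a standard five-day week (`work_days=5, cycle=7`); someone infected on the last work day overlaps on none. That is the sense in which the schedule exploits the latent period.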

Time will tell whether this working arrangement is actually effective. But it is precisely this kind of innovative thinking that’ll help us overcome the coronavirus.

So far, the coronavirus’ sneaky strategy has paid off handsomely. However, if we can adapt our social norms and become culturally ambidextrous— tightening up our hygiene standards whilst retaining our creativity and innovativeness— we can play the virus against itself and resume some normality.

Written by Max Beilby for Darwinian Business

Article updated on the 4th June 2020.

Evolutionary Organisational Psychology, with The Dissenter

Here’s a podcast episode I recorded with Ricardo Lopes, for The Dissenter.

Ricardo and I explore the application of evolutionary psychology to the business world. We start by tackling the concept of evolutionary mismatch, and then go through some examples of how it applies to the modern workplace— such as Dunbar’s number, hierarchy and leadership, and work stress.

We recorded this episode on January 29th, 2020.

‘Big Men’ in the office

It’s a well-established fact that size influences our choice of leaders.

For example, CEOs of Fortune 500 companies are disproportionately over 6 feet tall. Similarly, the height of presidential candidates can partially predict the outcome of an election. Since the start of the 20th century, over 70% of US elections between the two parties have been won by the taller candidate.

Despite robust empirical evidence demonstrating our fondness for tall leaders, much less is known about the impact of leaders’ weight on our leadership preferences.

To broaden this body of knowledge, Kevin Kniffin and his colleagues Vicki Bogan and David Just have recently published a paper in the journal PLOS ONE that explores the impact of leaders’ weight on their persuasiveness.

Big Man Ting

Anthropologists who have lived with hunter-gatherer tribes are very familiar with leaders being referred to as ‘Big Men’.

In addition to being influential and persuasive, leaders of pre-industrial societies also have to be physically formidable. Indeed, recent expeditions with hunter-gatherer tribes have found that physical size is associated with leadership— but primarily among men.

There are several reasons why this is the case, including the importance placed on fighting ability. Across the natural world, animals that are the most powerful and menacing fighters are generally granted high status (if you’re sceptical, watch one of David Attenborough’s latest documentaries). The same is partially true for humans too.

In addition, Kevin and his colleagues argue that overindulgence is a marker of social status and wealth within traditional communities (if people around you are struggling to eat and you’re actually fat, you’ve got to be doing something right).

Melanesian Big Men
Melanesian men in traditional dress (image credit: ABC News)

The anthropological literature is replete with references to high status ‘Big Men’. However, you don’t need to unearth yellowing anthropology books to discover this association between size and status. A quick search of popular rap songs on YouTube provides you with lyrics like:

Yo, BMT, I’m on a big man ting
I ain’t going into clubs without my big hand ting
Got eleven on my finger, that’s a big man ring
Yeah I got a big bag, no bin man ting

In our post-industrial age of abundance, being overweight is increasingly linked with negative life outcomes— including diabetes and cancer. Indeed, obesity is now a bigger killer than hunger worldwide, and people who carry extra pounds find themselves subject to stigma.

Despite our newfound infatuation with thinness, Kevin and his colleagues wondered if this association between weight and social status still lingers in the modern world (beyond the UK rap community of course).

To explore this relationship, Kevin’s research team conducted a series of studies. Specifically, they wanted to see if overweight men are perceived as more persuasive than their scrawny counterparts.

As stated by the authors:

We focus specifically on persuasiveness since “big men” in pre-industrial societies did not have the power of an institution to enforce their standing in the group’s hierarchy; instead, “big men” needed to rely upon influence and persuasiveness to gain and maintain their status.

Heavyweight titans

As a preliminary step, Kevin’s team simply asked a small group of American university students to associate words with the phrases ‘heavyweight’ or ‘lightweight’.

Lo and behold, the students associated ‘heavyweight’ with strength, whereas ‘lightweight’ conjured negative connotations— such as being ‘weak’ or ‘unqualified’ (can you recall lightweight ever being intended as a compliment?).

The researchers commented that ‘heavyweight’ and ‘lightweight’ are terms commonly associated with boxing, and that heavyweight champions (the likes of Muhammad Ali and Mike Tyson) have garnered greater interest than lighter fighters (that is, boxers who are on their Big Man Ting).

‘Iron’ Mike Tyson becoming the youngest heavyweight champion in history (image credit: Sky Sports)

When officially starting their research, Kevin’s team returned to their respective universities and asked students to evaluate their own leadership capabilities—including their ability to persuade others—and to disclose their height and weight (I know, how rude).

The researchers found that the students who rated themselves as more persuasive actually weighed more. When they dug into the data, however, this relationship was largely explained by the students’ height— rather than their weight per se. Therefore, these results weren’t entirely in line with their predictions.

Next, Kevin’s team sent out a survey to American undergraduate students. Specifically, the students were asked how persuasive they find people of different weights generally.

The most favoured answer was that heavy people are more likely to be perceived as persuasive by others. In other words, the students stated they didn’t find overweight people more persuasive themselves— but believed that other people do.

In the later rounds of the research, Kevin’s team used drawings of men and women of different physiques. They asked American undergraduates to rate each of these figures on how persuasive, extraverted, funny and attractive they thought each person would be if they actually existed in real life.

The researchers found that the overweight male figures were rated as more persuasive—whilst the same wasn’t true for the female figures. In other words, there was no linear relationship between female figures’ weight and how persuasive students thought they would be.

They then repeated this drawing exercise with a broader American sample and people from Kenya. This was to make sure that these findings actually replicate, and are not just quirks of WEIRD people (that is, people who are Western, Educated, Industrialised, Rich and Democratic).

In both replications, they found that the overweight male figures were rated as more persuasive than their normal-weight or underweight counterparts.

Fish out of water

Before delving any further, we first need to acknowledge the limitations of this study.

You don’t need a degree in psychology to realise that the researchers didn’t directly measure how persuasive individuals are among groups, but rather inferred influence through people’s perceptions. To be sure that this effect is real though, experiments would need to be conducted that directly measure the impact of people’s height and weight on their ability to influence (a tall order, I know).

Other scientific papers have recently been published that point in the same direction however, which lend further weight to these findings (no pun intended).

Future research should explore whether these associations are universal across cultures or dependent on context, as there are likely to be several exceptions.

As stated by the authors:

With further research, we may find out that the influence of weight on status and persuasiveness is context specific and culturally defined. Indeed, criticisms that a President of the United States would substantially under-report his own weight while perhaps over-estimating his own height are interesting in light of the studies we present.

Likewise, I suspect differences in body composition dramatically change how leaders are perceived (as the Body Mass Index doesn’t distinguish fat from muscle).

These points aside, how can we ultimately make sense of their studies’ results? Kevin and his colleagues interpret this phenomenon as an evolutionary mismatch.

To elaborate, evolutionary mismatches are physical and psychological traits that were selected for in our ancestral past, that are now frequently misaligned with the demands of the modern world (think of newly hatched sea turtles who are perilously drawn to streetlights).

Kevin and his colleagues argue that a proclivity for ‘Big Men’ constitutes an evolutionary mismatch, as there is no longer good reason to associate size with status. As we don’t have to hunt or worry about being raided by rival tribes anymore, the size of a leader is essentially irrelevant in the modern world of work.

The authors write:

As with studies of height in relation to non-manual work environments (e.g., being President of the United States), there is no essential relationship that should exist for weight. With the exception of certain athletic occupations such as sumo wrestlers or linemen on professional football teams, there is no clear rationale or consensus for why weight should be relevant for many contemporary occupations.


Clearly, this paper has several practical implications for the business world. As Kevin and his colleagues suggest, employers should be particularly sensitive to the various ways in which weight can cloud our judgement when it comes to hiring and promotion decisions. Businesses need to make a concerted effort to eliminate such biases during the selection process, and to focus on core competencies that underpin the role in question.

Perhaps most importantly, these findings have significant implications for democracies.

Against the backdrop of increasing political polarisation, prominent politicians are now placing greater emphasis on their size and fighting prowess (or rather, telling tales of their fighting history). Indeed, combative behaviour and threat displays have become unusually coarse among Western politicians— particularly in the United States.

Whether it’s ‘little Marco’ or ‘mini Mike’, President Trump continues to mock his political rivals based on their stature— which so far has proven to be a winning strategy. Beyond basic health considerations however, candidates’ size should be irrelevant when it comes to electing the President of the United States (note the irony of Michael Bloomberg’s latest campaign ads appearing to mock Trump based on his weight).

Across several spheres of society, being aware of these remnants of our primate lineage would likely benefit us. Armed with this knowledge, we can ultimately discard candidates’ size when selecting our leaders.

Written by Max Beilby for Darwinian Business

Evolutionary Mismatch in the Workplace, with Mark van Vugt

Here’s a podcast episode I recorded with Mark van Vugt, for the Evolution Institute.

Mark van Vugt and I discuss his book Mismatch, co-authored with Ronald Giphart. We then delve into the science of evolutionary mismatches, and how this knowledge can help us understand human behaviour in modern settings, such as the workplace.

We recorded this episode on December 23rd, 2019.

The evolutionary logic of overconfidence

People in general are overconfident, excessively optimistic and think of themselves as superior to others.

How can I say such a thing?

Ask anyone with a licence to rate their driving abilities, and most people will tell you that they are above average. However, this is not just an isolated case of cocky drivers. The same effect is found when asking people to assess their own intelligence, evaluate their attractiveness, or reflect on how kind they are.

A survey of American school students, which received over 1 million responses, found that no less than 70% rated themselves as above-average leaders. Conversely, only 2% of these students humbly stated that they were actually below average.

This all raises the question: why do we think more highly of ourselves than we really are?

To explain the prevalence of overconfidence, psychologists have historically pointed to the mental health benefits of positive illusions— such as maintaining self-esteem, or helping reduce anxiety about an uncertain future. In other words, that these biases protect us from threatening information.

However, a more intriguing explanation has been proposed by the evolutionary biologist Robert Trivers.

Robert Trivers is widely known as the ‘bad ass’ of evolutionary biology, for his antics both in and outside of academia. However, this reputation has not diminished his scientific contributions. Steven Pinker has described Trivers as “one of the great thinkers in the history of Western thought”, and Time magazine has named him one of the greatest scientists of the 20th century.

In his book The Folly of Fools, Trivers argues that a glowing view of the self makes others see us in the same light— leading to more cooperative and romantic opportunities in life. This is because self-deception requires no conscious input, which eliminates any tell-tale signs of lying— ultimately making one more convincing when trying to persuade others. In other words, these positive illusions evolved not to protect us from threatening information per se, but rather to help us persuade others of our superiority.

The logic of the theory is as follows: deception is a fundamental aspect of communication. Those of our ancestors who were better at detecting deception were at an advantage, causing an evolutionary arms race in deception and detection. As a result, self-deception evolved to better mask deception, “hiding the truth from yourself to hide it more deeply from others”.

Trivers argues that we fool ourselves in all realms of life— when overestimating our looks or abilities, when justifying our beliefs, or when convincing ourselves that a lie we’ve told is actually true. Apparently, it’s all part of advancing our own agendas.

Although Trivers formulated his theory back in the 1970s, only recently has it begun receiving empirical support. Fundamentally, as we don’t have access to people’s private beliefs, we can’t say with certainty whether someone is actually lying or deceiving themselves (to the best of my knowledge, we’re yet to develop a mind-reading machine). This makes testing such a hypothesis particularly tricky.

But it isn’t impossible. Answering Trivers’ call to arms, the economists Peter Schwardmann and Joël van der Weele have recently published a paper in the journal Nature Human Behaviour that successfully tackles this challenge.

Put your money where your mouth is

So, how do you measure people’s beliefs about their own abilities if you’re unable to read their minds? Peter and Joël’s simple yet ingenious idea was this: get people to bet on their own performance.

To elaborate, Peter and Joël conducted experiments where contestants essentially bet on their performance on an intelligence test. This meant that people had money at stake for accurate assessments of their intellectual performance.
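How can a bet make honest self-assessment the best strategy? The paper’s exact payment scheme isn’t detailed here, so the sketch below uses a generic proper scoring rule, the quadratic (Brier) rule, purely to illustrate the logic: whatever probability you privately assign to being a top performer, reporting that probability truthfully maximises your expected payout.

```python
def expected_payoff(report, belief):
    """Expected payout of reporting probability `report` of being a top
    performer, when your true subjective probability is `belief`.

    The quadratic (Brier) rule pays 1 - (outcome - report)**2, where
    outcome is 1 if you really are a top performer and 0 otherwise."""
    win = belief * (1 - (1 - report) ** 2)   # case: top performer
    lose = (1 - belief) * (1 - report ** 2)  # case: not a top performer
    return win + lose
```

Because the expected payoff peaks exactly at `report == belief` (it is strictly concave in the report), such bets reveal what contestants genuinely believe about themselves; any gap between those bets and actual test scores is then a measurable dose of overconfidence.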

However, there was a twist.

To see if introducing opportunities for social gain induced self-deception, some of the participants were invited to a speed-dating style interview with a panel of ’employers’. Here, the contestants were awarded more money if they could convince the employers that they were actually a top performer (whilst still not knowing their results on the intelligence test, which were provided to them at the end of the experiment).

Of course, there are two sides to the coin— are people actually fooled by overconfidence?

As with the contestants completing their initial assessments, the employers were asked to essentially bet on who the top performers on the intelligence test were. This meant that the employers also had money at stake when deciding who actually performed well (in other words, their bullshit detectors were well calibrated).

The research team had another trick up their sleeves.

Before the interviews, each contestant received cryptic feedback on their performance on the intelligence test, and was then asked to reevaluate their performance in light of this feedback. This allowed the researchers to see if contestants’ overconfidence influenced the employers’ judgements of them— as the feedback had a direct and measurable impact on the contestants’ confidence levels.

Once the interviews came to a close, employers were given a few seconds in which they could scrutinise contestants’ body language and facial expressions. Employers were asked to write down what each contestant said, and how well they thought each performed on the intelligence test. The employers were also asked to evaluate how honest, likeable, attractive and confident they thought each contestant was.

In total, 688 German students participated in these series of experiments (410 women and 278 men).

Fooling yourself the better to fool others 

So, what did they find?

Peter and Joël found that overconfident contestants received higher evaluations from the employers. In other words, those who privately bet that their performance was relatively high were seen as higher performers on the intelligence test.

The researchers were able to isolate the impact of contestants receiving a noisy yet positive signal before their interviews (that is, cryptic feedback suggesting superior performance). Contestants who received a positive signal subsequently received higher evaluations from employers, even when controlling for their actual test scores. What this means is that overconfidence caused contestants to receive higher ratings from the employers, independent of their actual performance on the intelligence test.

Finally, the contestants invited to the speed-dating style interviews were noticeably more confident than those who didn’t participate. That is, people became overconfident when they were given the opportunity to profit from persuading others.

In summary, Peter and Joël found that overconfidence pays. The contestants who were more confident received higher ratings from the employers, and therefore won more money. Likewise, they showed that opportunities to profit from persuasion induced overconfidence, and that it was overconfidence which led contestants to receive higher ratings— above and beyond their actual performance.
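The causal logic here can be sketched as a toy simulation (all parameters below are invented for illustration, not taken from the paper): employers can observe only confidence, and a feedback signal that carries no information about skill still shifts confidence, and hence ratings.

```python
import random

random.seed(1)

# Toy simulation of the study's logic (illustrative only; not the
# authors' actual model or data). Each contestant has a true skill and
# a confidence level. A noisy feedback signal shifts confidence but is
# statistically independent of skill.
def run_contestant():
    skill = random.gauss(0, 1)             # true test performance
    signal = random.choice([+1, -1])       # cryptic feedback, unrelated to skill
    confidence = 0.3 * skill + 0.5 * signal + random.gauss(0, 0.2)
    # Employers can only observe confidence, not skill directly
    rating = confidence + random.gauss(0, 0.2)
    return skill, signal, rating

results = [run_contestant() for _ in range(10_000)]
pos = [r for s, sig, r in results if sig > 0]
neg = [r for s, sig, r in results if sig < 0]
avg_pos = sum(pos) / len(pos)
avg_neg = sum(neg) / len(neg)

# Positive signals raise ratings even though they carry no skill information
print(f"avg rating after positive signal: {avg_pos:.2f}")
print(f"avg rating after negative signal: {avg_neg:.2f}")
```

Because the signal is independent of skill by construction, any rating gap between the two groups must flow through confidence alone — the same identification trick the researchers used.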

Overall, these results provide compelling evidence in favour of Robert Trivers’ theory of self-deception.


As is always the case, there are aspects of this paper that can be critiqued.

Although overconfidence is a well established finding from the field of psychology, there is evidence suggesting some cultural differences in how overconfidence manifests.

These experiments were conducted with German university students. However, we know that these people are really WEIRD. That is, they are Western, educated, industrialised, rich and democratic, and therefore may not paint a complete picture of humanity overall… Although we can have confidence in these research findings, we don’t want to become overconfident. 

To add more weight to the study, other recently published papers and preprints point in the same direction. However, it remains the case that these experiments would benefit from more diverse samples.

These points aside, this paper arguably has far reaching implications for business.

What this strand of research suggests is that positive illusions are far more pervasive than is usually portrayed by psychologists.

Evidently, positive illusions offer individuals a wealth of benefits. Those of us who are the most optimistic and overconfident reap many benefits in life, including more career success. As noted by Daniel Kahneman in Thinking, Fast and Slow, entrepreneurs and business leaders are the most optimistic and overconfident among us.

Positive illusions encourage us to take gambles— gambles which can come with big payoffs. But of course, risky bets can also end in bankruptcy. Overconfidence and excessive optimism can lead financial traders to lose money, CEOs to initiate value-destroying mergers, and lenders to invest in businesses that are built on sand.

The key takeaway for me is this: if positive illusions are not so much a means of protecting us from threatening information, but rather a form of self-deception to help us persuade others, then the problem has been misdiagnosed by psychologists and practitioners alike. Initiatives such as teaching people debiasing techniques as a means of overcoming overconfidence are simply not going to work.

These findings also have obvious implications for recruitment and selection. Indeed, the experiment is inadvertently a parody of the interview process itself.

In light of these findings, it’s not too surprising that interviews are a weak predictor of job performance. This series of experiments marks yet another red cross against job interviews as a recruitment method, and lends credence to the practice of blind assessments.

By coincidence, Peter and Joël concluded their paper by discussing the business implications of their research (emphasis added):

One implication of our findings is that overconfidence is likely to be more prevalent in settings in which its strategic value is highest, that is, in cases in which measures of true ability are noisy, competition is fierce and persuasion is an important part of success. It may arise in employer–employee relationships because of its strategic benefits in job interviews and wage negotiations. Arguably, confidence may be even more valuable among the self-employed, whose economic survival often depends more immediately on persuading investors and customers. We would also expect overconfidence to be rife amongst high-level professionals in finance, law and politics.

Peter and Joël’s reflections are likely to resonate with people working in politics or finance. For those of us working in such fields, vigilance is required. For example, when evaluating candidates for leadership positions, it’s important to gauge whether candidates are overconfident. If they appear to be raising the stakes by promising the world, you have to ask yourself: are you really going to win this bet?

Written by Max Beilby for Darwinian Business

The rise and fall of the dominant leader

Ranking people by their social status seems to come naturally to us humans. Indeed, social hierarchies are ubiquitous across cultures and throughout human history.

Social hierarchies have allowed humans to coordinate effectively, and enabled large groups to make decisions and address collective action problems.

Whether small-scale societies or industrialised nations, one can think of various hierarchical structures that have been the result of conflict and brute force. However, many forms of hierarchy are also the product of leaders being freely chosen. What isn’t well understood by social scientists is how people climb these more productive forms of hierarchy.

To put it another way, what strategies actually make a leader successful in modern organisations, and which of these is more successful over time?

Keeping in sync with the latest research, Daniel Redhead and his colleagues Joey Cheng, Charles Driver, Tom Foulsham and Rick O’Gorman have just published a study in the journal Evolution & Human Behavior that helps answer this question.

Two ways to the top

Before delving into the particulars of the study, we need to establish what scientists already know. What strategies are known by evolutionary psychologists to increase one’s rank in the social pecking order?

Firstly, there’s dominance: increasing one’s social status through intimidation, manipulation, and coercion. This type of leadership is ancient, and traces back millions of years to our primate heritage.

Throughout the natural world, animals which are the most powerful and menacing fighters are generally granted high status (if you’re not convinced, watch one of David Attenborough’s latest documentaries).

In the tree of life, human and chimpanzee lineages split off from their common ancestor approximately 5 to 7 million years ago. With this, both primate species took with them a proclivity for dominance hierarchies, and a psychology sensitive to dominance.

However, the story of leadership gets a bit more complicated when we home in on Homo sapiens. Unlike other animals, we are a cultural species. We need to be socialised, and depend on collective wisdom for our survival (how long would you be able to live on your own in the wilderness?). As a result, we seek leaders with the knowledge and skills that our group needs to succeed.

This path to leadership is very different from what you usually see in a wildlife documentary, and is aptly called prestige.

Intriguingly, research shows that both paths are equally effective ways of gaining status. That is, one can get to the top either through dominance, or by leading through prestige. What wasn’t known by social scientists is how these different strategies play out over time. In other words, which leadership style is more effective in newly formed groups, and which is more successful in the long-run.

Cue Daniel and his research team.

Brains over brawn

For a couple of reasons, Daniel and his colleagues suspected dominance wouldn’t be an effective leadership strategy over time.

They state:

We proposed that the context of time and place is fundamental to the nature of human dominance… Unlike non-human primates, physical strength and size are not necessarily the most essential determinants of victory during antagonistic contests between humans. The presence of allies and coalitions shrinks the perceived size and muscularity of a foe and the widespread development of lethal weaponry potentially neutralizes human physiological dominance.

Translation: we humans take down overbearing arseholes, tarnishing their reputation through gossip and ostracism (and if stigma doesn’t do the trick, pelting rocks at them will).

The authors stress that there needs to be certain social and environmental conditions for dominance to be a viable way of gaining status. For example, if bullying and violence are prevalent in the social context one faces, then dominance may prove to be an effective strategy (indeed, it may also be an essential survival strategy).

Conversely, Daniel and his colleagues argue that prestige should be a universally effective way of gaining social status over time. Why? Because prestige is marked by the respect earned by others, which requires a leader to build and maintain a good reputation.

So how did they go about testing their hypotheses?

The researchers used newly formed groups of American students to see how effective dominance and prestige were over time. Specifically, these student groups were formed for an assignment, which counted towards their end of year grades.

In total, 263 students were randomly assigned to a mixed-sex group, and were followed over 16 weeks.

The researchers got these students to rate each other on their leadership styles, and what they thought their peers’ positions were in the social pecking order. They also completed surveys about themselves throughout the semester.

Nice guys finish first

So what did they find?

Replicating previous studies, the researchers found that both dominance and prestige were successful ways for students to acquire status in these newly formed groups.

The authors write:

These results align with previous work that suggests that humans have a disposition to defer to those that they perceive as able and willing to confer benefits or harm, even among groups of undergraduate students, whereby fear and threat may not be particularly potent. 

Critically however, dominance lost its sticking power in the weeks after the groups were formed. Conversely, prestige strongly increased students’ social status over the period of the semester.

With this experiment, the researchers were able to rule out an alternative explanation for dominance’s effectiveness: that dominant individuals are simply mistaken for being prestigious. Rather, the experiment clearly showed that individuals in unacquainted groups can gain status either through aggression and coercion, or by building respect through their skill and competence. These are distinct leadership strategies, which also had different trajectories.

Another insight gleaned from the study is that prestige and social status are a two-way street. That is, being a student high in prestige increased one’s rank in the social pecking order. However, promotions in social rank also bumped up one’s prestige a couple of notches. This was not the case for dominance, where ratings of dominance remained largely unchanged for those who gained higher social status.
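These temporal patterns can be sketched as toy dynamics (the parameters below are invented for illustration, not estimated from the study's data): dominance's status payoff decays after group formation, while prestige and status feed each other over the semester.

```python
# Toy model of the study's temporal pattern (illustrative only; the
# decay and feedback rates are made-up parameters, not estimates).
def simulate(weeks=16):
    dom_status = 1.0       # status gained via dominance
    pres_status = 1.0      # status gained via prestige
    prestige = 1.0         # peer-rated prestige
    history = []
    for _ in range(weeks):
        dom_status *= 0.85              # dominance loses its sticking power
        pres_status += 0.10 * prestige  # respect keeps paying off
        prestige += 0.05 * pres_status  # higher rank also boosts prestige
        history.append((dom_status, pres_status))
    return history

history = simulate()
print(f"week 1:  dominance={history[0][0]:.2f}, prestige={history[0][1]:.2f}")
print(f"week 16: dominance={history[-1][0]:.2f}, prestige={history[-1][1]:.2f}")
```

Even this crude sketch reproduces the qualitative result: both strategies confer status early on, but only prestige compounds.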

Finally, the researchers found that although prestige and dominance have a negative relationship with each other, they are not entirely separate either. In other words, a leader can be both dominant and prestigious at the same time.

The prestige premium

As is always the case, there are limitations to this study.

Like the majority of psychological studies, these experiments were conducted with American university students. However, we know that these people are really WEIRD. That is, they represent a slice of humanity who are Western, educated, industrialised, rich and democratic, who do not necessarily reflect humanity overall. Further experiments would need to be conducted cross-culturally to confirm whether or not these findings are universal.

The authors argue that as the students had a vested interest in making sure their groups performed well, these project teams paralleled work in the outside world of business and government. However I’m not so sure. For various reasons, I suspect students generally are not that invested in the outcomes of group assignments.

As Daniel and his colleagues note themselves, there may be contexts where dominant leaders are able to sustain their advantage over an extended period of time (for example, when working in large and fragmented organisations). Likewise, dominant leaders may deploy tactics to maintain their social rank, such as ostracising their competitors or modifying group structures, to prevent challenges to their power base.

These points aside, this study sheds light on aspects of leadership which had previously been left in the dark. What the study answered is not whether dominance is a successful leadership strategy, but when it is. 

Contrary to what is taught in many business schools and psychology departments, dominance is an effective way of gaining status. Indeed, it is likely those who rise to the top of corporate and political hierarchies have a combination of dominance and prestige in their repertoire, and deploy both strategies when needed (think of Jeff Bezos for example, and imagine what it must be like working in his executive team… Did you experience a pang of fear?).

However, a domineering leadership style also comes with a hefty price tag: less satisfied employees, reduced creativity, and people rushing for the nearest exit. On top of this, we now have evidence suggesting that leaders high in dominance are less successful in the long run. To put it bluntly, being a leader who’s an arsehole is unsustainable.

Or to frame it in the positive, we modern humans place a premium on prestige. Whether it’s your organisation’s leadership capabilities or your own development, make sure you invest your capital wisely.

Written by Max Beilby for Darwinian Business

Competition can encourage prosocial behaviour to spread

A defining aspect of our species’ success is our unusually high levels of cooperation. In particular, our ability to cooperate with others who are not related to us.

The scale of cooperation among humans is rare in the animal kingdom, and is strongly at odds with our closest primate relatives. Presented with this puzzle, scientists are still debating the evolutionary origins of our extraordinary prosociality. 

Traditionally, evolutionary scientists have explained prosocial behaviour by modelling the evolutionary benefits to the individual (or more specifically, the individual’s genes). For example, prosociality can evolve among non-relatives based on reciprocation (‘you scratch my back, I’ll scratch yours’), or if altruists are deemed more attractive romantic partners (and therefore have more babies).

However, an emerging class of inter-disciplinary scientists views our large-scale cooperation as a product of ‘cultural group selection’. That is, traits favouring prosocial behaviour can evolve via culture, due to the competitive advantage they bestow on a group. This is a type of cultural evolution, and does not involve natural selection working on genes.

Although the theory is well developed, empirical evidence documenting cultural group selection is only just accumulating.

To shed some light on the matter, economists Patrick Francois and his colleagues Thomas Fujiwara and Tanguy van Ypersele recently published a paper in Science Advances exploring cultural group selection in the workplace and the laboratory.

Banking on trust

What is particularly interesting about this paper is that the researchers analysed industry data to test their hypotheses. As stated by the authors; “Perhaps the most ubiquitous avenue of group-level competition occurring in contemporary settings is likely to be competition across firms.”

Patrick and his colleagues hypothesised that companies subject to more intense external competition would be more likely to foster cooperation among their employees. In other words, increased external competition would encourage employees to suppress selfishness and increase cooperative behaviour, in the interest of the firm’s survival. 

The authors used ‘generalised trust’ as their measure of prosocial behaviour (that is, answers to the question: “Do you think that, on the whole, people can be trusted or that you can’t be too careful in dealing with people?”). Their reasoning was that survey-based questions of trust reflect the level of pro-social behaviour individuals perceive in others around them.

The authors used a range of data sources to test their hypothesis. 

Firstly, Patrick and his colleagues explored the relationship between the competitiveness of industries, and the level of trust employees report.

To do this, the authors used data from the United States’ General Social Survey, which includes measures of trust among employees. The competitiveness of an industry was calculated as the percentage of total sales in that industry not covered by the largest 50 firms.
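This competitiveness measure is simple to compute. Here's a minimal sketch (the function name and example sales figures are mine, purely hypothetical):

```python
def competitiveness(sales, top_n=50):
    """Share of an industry's sales NOT held by its largest `top_n` firms.

    Mirrors the measure described above: 1 minus the top-50
    concentration ratio. `sales` is a list of per-firm sales figures.
    """
    total = sum(sales)
    top = sum(sorted(sales, reverse=True)[:top_n])
    return 1 - top / total

# A fragmented industry of 200 equal-sized firms is highly competitive
print(competitiveness([1.0] * 200))  # 0.75
# A single-firm monopoly scores zero
print(competitiveness([100.0]))      # 0.0
```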

You can see the relationship below:   

Americans who work in more competitive industries are more likely to trust.

Although a strong relationship between competitiveness and trust was identified, the authors note that this is weak evidence of competition increasing trust. As this data is correlational, it cannot establish causality. Likewise, it may be other factors driving this relationship that haven’t been accounted for.

To get round this conundrum, you’d need a naturalistic experiment where competition is increased within an industry, with levels of trust measured before and after this introduction. 

It turns out such a natural experiment was provided by an episode of American banking deregulation.

Starting in the early 1980s, several US states lifted restrictions which prohibited banks from operating in other states across the country. This deregulation increased the availability of credit, which in turn facilitated the creation of new firms, and therefore raised the amount of competition within these local markets.

Of particular interest to the researchers was that different states undertook the deregulation at different times.

What they found is that in the years after the deregulation was introduced, there were significant increases in levels of reported trust. As expected by the authors, firm competition increased with the banking reforms (with more firms created and subsequent business closures).

These broader impacts apparently continued for 10 years after the deregulation was initially introduced. 

Banking deregulation in U.S. states raised firm competition and trust.

Survey data from German employees was also analysed as part of the study, as this allowed the researchers to observe how trust is impacted when workers move to more competitive industries. Similar to the data from the US, Francois and his colleagues found that German workers who moved to more competitive industries reported higher levels of trust.

Although these observational findings provide considerable insight, there are also limitations to this approach.

Fundamentally, using observational data means you can’t be sure the effect you’ve found is causal, or confidently rule out alternative explanations. To get around this, the researchers also conducted laboratory experiments.

Back to the lab

These experiments were conducted in France, and tested whether changes in levels of competition across groups would impact trust and cooperation. 

A strategic economic game called the Public Goods Game was employed for the experiments.

Participants were placed into pairs, and were allocated to one of two versions of the game. The first was a standard version of the game, with no group competition. 

For each version of the experiment, 20 people were placed into groups of 2. Each player was given €10 per round. Participants were given the choice on how much they wanted to contribute to the ‘collective pot’, which would benefit both group members equally.

The game presents a dilemma. At the end of each round, the collective pot is multiplied by 1.5 and split equally. Although good for the group overall, this means each individual’s contribution is actually a net cost to them (returning €0.75 for every €1 they contribute to the pot).

If your objective is to maximise your own earnings, then the best strategy is therefore to contribute nothing. However, this undermines the greater success your group would have if both of you cooperated and contributed more money. 

Individuals were paired anonymously, and were told the outcome at the end of each round. They were then paired with a new partner, and played a total of 19 rounds. Participants were asked some questions after the experiment, with the main one being generalised trust.

The second condition of the experiment was the same, but with a twist.

The amount they received from the collective pot depended not only on their group’s contributions, but also on the size of their collective pot relative to other groups. Only if their collective pot matched or surpassed that of another, equivalent group did the group members receive their slice of the pie.
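The payoff structure of the two conditions can be sketched as follows (a minimal reconstruction from the description above; the parameter names and the rival-pot comparison are my own simplification):

```python
def payoff(own, partner, endowment=10, multiplier=1.5,
           competitive=False, rival_pot=0.0):
    """Per-round payoff in the two-player public goods game.

    The pot is multiplied by 1.5 and split equally, so each euro
    contributed returns only 0.75 euros to the contributor. In the
    competitive condition, the pot pays out only if it matches or
    beats a rival group's pot.
    """
    pot = (own + partner) * multiplier
    share = pot / 2
    if competitive and pot < rival_pot:
        share = 0.0                      # the losing group gets nothing
    return endowment - own + share

# Free-riding beats cooperating in the standard condition...
print(payoff(0, 10))   # 17.5
print(payoff(10, 10))  # 15.0
# ...but a losing pot earns nothing under group competition
print(payoff(0, 10, competitive=True, rival_pot=30))  # 10.0
```

The sketch makes the dilemma explicit: contributing nothing maximises your own payoff in the standard game, yet group competition changes the calculus by making the whole pot contingent on beating the rival group.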

So what happened?

As almost always happens when playing the standard version of the Public Goods Game, the researchers observed declining contributions as the game progressed. Initial contributions were also low, with participants chipping in just over €2 in the first round on average.

However, there was a big difference in the second ‘competitive’ condition. As the graph below illustrates, group competition induced significantly higher contributions to the collective pot, which was sustained across all of the rounds. 

Contributions in the first round were also twice as high with group competition, and stayed higher throughout the game. 

Introducing competition in a public goods laboratory game increases contributions and the propensity to trust.

The players may have increased their contributions for various reasons, such as feeling inclined to reciprocate. However, the authors point out that players also increased their contributions when they saw their competitors performing well. The authors also don’t see this as evidence of reciprocation, as each partner was drawn afresh for each round.

Instead, Patrick and his colleagues argue these findings show cultural group selection at work; “mimicry of the actions or norms in successful groups leading to diffusion of those norms into the broader population.”

Cooperation from competition

The theory that evolution works at the level of the group, rather than the conventional level of the individual, is controversial. Likewise, there is no clear consensus among scientists regarding the importance of group selection (also known as multilevel selection) to evolution.

Although less contested than its genetic grandfather, cultural group selection also remains controversial, and not everyone is convinced. 

Exhibit A: Oliver Curry made some valid points on Twitter, outlining potential limitations of the study’s design and the inferences made by the authors.

Can the data presented be best explained by cultural group selection, over and above other well established theories of cooperation (such as mutualism)? As the connection to theory within the paper isn’t made clear, it’s difficult to answer this question.

What isn’t obvious to me is why external competition would increase trust per se, rather than cooperative behaviour by itself. Oliver Curry argues that cooperation and trust are not separate, and that trust is simply the expectation of cooperation. However, there is experimental evidence suggesting that they are indeed distinct concepts, and that it is useful to separate them.

In a subsequent Twitter exchange, Tim Waring also acknowledges the study’s limitations, but argues the study does ultimately support the authors’ conclusions.


Future research will hopefully address the points raised.

Despite the critiques of this particular study, cultural group selection arguably offers a powerful explanation for the evolution of large-scale human cooperation. Although traditional evolutionary theories explain much of human cooperation, they don’t seem able to explain how a hominid species that evolved for life in small groups came to develop chiefdoms, nation states, and the modern corporation.

For the purpose of this blog, I assume the majority of business practitioners aren’t particularly bothered about the underlying evolutionary theory. Regardless of the best scientific explanations available, it’s evident that greater external competition increases prosocial behaviour within groups.

This knowledge could be used to increase trust among employees and to make groups more productive. This may be achieved by changing group structures, and rewarding teams as opposed to individual outputs. Similarly, businesses may want to foster an organisational culture where considerable attention is focused on the threats posed by external competition.

However, it’s easy to see how such knowledge can also be abused. Many leaders seem to intuitively grasp how external threats influence behaviour, with the potential for manipulation. As an extreme example, one can be cynical and think of dark triad world leaders who may be tempted to wage war as a means of boosting their political support (no need to mention names here). 

Written by Max Beilby for Darwinian Business


We don’t need to understand how technology works for it to evolve

We modern humans live in a world surrounded by ever evolving technology. Whether it’s the combustion engine or the modern computer, these technologies are ubiquitous and have radically altered the world we live in.

What’s not so obvious is how complex the technologies of traditional societies are too. Bows and arrows and clothing are just a couple of the sophisticated technologies that pre-industrial humans created, and used to venture into new, challenging environments.

How is it that we humans have managed to produce such impressive technology, when our closest living primate relatives have produced nothing of the sort?

Many believe this comes down to our superior cognitive abilities. That is, our intelligence and our ability to reason.

However, some scientists argue that the inherent complexity of certain technologies makes them very hard to understand. Instead, they argue that complex technologies result from many small improvements made over generations, which are culturally transmitted without people understanding how these technologies actually work.

To help settle the debate, Maxime Derex and his colleagues Jean-François Bonnefon, Robert Boyd and Alex Mesoudi conducted a rather ingenious experiment, involving a technology which changed the face of our planet: the wheel.

Note that at the time of writing this post, the paper is a preprint yet to be peer-reviewed, and is therefore subject to further scrutiny. Despite the amendments that may be made to the paper, the significance of this study should become apparent.

Spinning wheels

The experiment boiled down to getting participants to increase the speed of a wheel down a meter-long inclined track. The wheel had 4 radial spokes, and a single weight could be moved along each spoke.

Participants were organised into ‘chains’ of 5 individuals. Each participant had 5 trials to minimise the time it took for the wheel to reach the end of the track. All participants were provided with the last two choices and scores of the previous participant in their chain (except those who went first). 14 chains were run, with each containing different people.
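The chain design can be sketched as a simple simulation (the performance function below is a made-up stand-in for the wheel's physics, purely to illustrate how improvement accumulates down a chain with no understanding required):

```python
import random

random.seed(0)

# Toy transmission chain (illustrative only). Each participant inherits
# the previous participant's best configuration and blindly tweaks it,
# keeping whatever scores higher.
def performance(config):
    # Hypothetical smooth landscape: best when all values are near 0.7
    return -sum((x - 0.7) ** 2 for x in config)

def run_chain(generations=5, trials=5):
    start = [random.random() for _ in range(4)]   # 4 weight positions
    best = list(start)
    for _ in range(generations):                  # one participant per generation
        for _ in range(trials):
            candidate = [min(1.0, max(0.0, x + random.gauss(0, 0.1)))
                         for x in best]
            if performance(candidate) > performance(best):
                best = candidate                  # keep improvements, blindly
    return performance(start), performance(best)

start_score, final_score = run_chain()
print(start_score, final_score)
```

Because each participant only keeps changes that score better, performance can only ratchet upwards across the chain, even though no participant ever models why a configuration works.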

In total, 140 people took part in the study (with two versions of the experiment conducted). Each person received money for participating in the experiments. The money they received ranged from €3 to €29, depending on their performance and that of their peers.

Derex and his colleagues provide sound reasons for choosing a wheel for their experiment on causal understanding. First, existing studies suggest Westerners generally have a poor understanding of how wheels work, which means most participants didn’t know what was required of them (this is not meant to be insulting). Secondly, the speed of the wheel depends solely on the laws of physics, and not on irrelevant factors which could compromise the validity of their findings. And thirdly, the wheel system doesn’t involve many dimensions, which made it well suited to hypothesis testing.

So what were the researchers actually evaluating? They were essentially testing whether wheel speeds would increase after several generations of trials, and whether people’s understanding of the underlying physics would increase too.

The wheel’s speed depends on just two variables: its moment of inertia (how mass is distributed around the axis), and its initial potential energy (the distance between the wheel’s centre of mass and the ground).

If the weights are located closer to the centre of the wheel, and if one of the weights at the top or to the right of the wheel is further away from the axis before its descent, then the wheel will cover the track faster. Note that there’s a trade-off here between the two forces, and some experimentation is required to work out the optimal configuration.
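To make the trade-off concrete, here is a toy rolling-wheel model of my own (a simplification using energy conservation, not the authors' actual apparatus; all masses and dimensions are invented): weights near the axle lower the moment of inertia, while a raised centre of mass at release adds potential energy.

```python
import math

def descent_speed(weight_radii, com_height_gain, wheel_mass=1.0,
                  weight_mass=0.25, radius=0.2, drop=0.1, g=9.81):
    """Toy model of the speed trade-off (hypothetical parameters).

    Final speed after rolling without slipping down a height `drop`,
    with four weights at distances `weight_radii` from the axle.
    `com_height_gain` is the extra height of the centre of mass at
    release.
    """
    total_mass = wheel_mass + 4 * weight_mass
    # Moment of inertia: hoop-like rim plus point weights on the spokes
    inertia = wheel_mass * radius**2 + weight_mass * sum(r**2 for r in weight_radii)
    # Energy conservation: M g Δh = ½ M v² + ½ I (v/R)²
    dh = drop + com_height_gain
    return math.sqrt(2 * total_mass * g * dh / (total_mass + inertia / radius**2))

# Weights near the axle (low inertia) make the wheel faster...
print(descent_speed([0.05] * 4, 0.0))
print(descent_speed([0.2] * 4, 0.0))
# ...but raising the centre of mass at release also helps
print(descent_speed([0.1] * 4, 0.05))
```

Even this crude model shows why some experimentation is needed: moving a weight outwards can simultaneously raise the centre of mass (good) and increase inertia (bad).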

The simplicity of the system meant the researchers could measure participants’ understanding of the wheel after they completed their trials. The research team evaluated their understanding by presenting them with a few options, and asking them to predict which wheels would cover the track faster.

Illustration of the experimental set-up (Derex et al., preprint)

So what did Derex and his team find having conducted the experiment?

After the 5 generations, the average wheel speed increased significantly. However, participants’ actual understanding of the physics did not.

The average wheel speed produced by the first participants on their last trial was 123.6 meters per hour, and their average understanding score was 4.60. After 5 generations, the average wheel speed increased to 145.7 meters per hour, while participants’ understanding didn’t significantly change.

With a maximum possible speed of 154 m/h, the team found remarkable improvements in just a few generations.

Stifling exploration

The authors were particularly interested in whether or not the sharing of lay theories with one another would increase people’s understanding.

To further explore how individuals gain their understanding, Derex and his colleagues ran another version of the experiment.

The set up was largely the same, with 5 trials per participant and 14 chains. However, the difference was that participants could now also write their own theory about the wheel, and share this with the next participant in their chain.

All participants were provided with the previous participant’s theory, except those who were starting.

What did they find? The average wheel speed increased at a similar rate to the first experiment, and the participants’ understanding also barely changed across the generations (see the graph below).

Counter-intuitively, the authors also found that the sharing of theories had a negative effect on participants’ actual understanding of the underlying physics.

Participants produced faster wheels across generations, but their understanding of the system did not (Derex et al., preprint)

Although few differences were observed between the experimental conditions overall, further digging revealed “striking” differences in participants’ exploration and independent learning.

The researchers found that if a participant received a theory about either inertia or potential energy, their configurations were constrained to the force in question. In other words, inheriting an inertia theory increased participants' understanding of that dynamic, but reduced their understanding of energy (and vice versa).

The main explanation presented is that receiving a theory mostly constrained participants’ focus, and blinded them to the dynamics beyond the theory they received.
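This constraining effect can be caricatured in a few lines of code. The sketch below is an exaggerated, hypothetical version of the finding (the function name, the two-force framing as parameters, and the all-or-nothing focus are my assumptions): a participant who inherits a theory about one force only ever explores that force, and so learns nothing about the other.

```python
import random

def simulate_participant(inherited_theory=None, trials=20, seed=1):
    """Toy model of theory-constrained exploration. 'Understanding' of a
    force grows only on trials where that force is actually explored.
    (Purely illustrative -- a deliberately extreme version of the effect.)"""
    rng = random.Random(seed)
    forces = ["inertia", "energy"]
    understanding = {force: 0 for force in forces}
    for _ in range(trials):
        if inherited_theory is not None:
            force = inherited_theory    # focus locked onto the inherited theory
        else:
            force = rng.choice(forces)  # free, undirected exploration
        understanding[force] += 1
    return understanding

# A participant inheriting an inertia theory learns nothing about energy:
print(simulate_participant("inertia"))  # {'inertia': 20, 'energy': 0}
# A theory-free participant samples both forces:
print(simulate_participant())
```

In the caricature the blinding is total; in the actual experiment it was partial, but the direction of the effect is the same.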

Derex and his colleagues argue that these results support the theory that small improvements occur over generations via cultural transmission, in the absence of people’s actual understanding of the technology.

As stated by the authors:

These results indicate that highly optimized technologies do not necessarily result from evolved reasoning abilities but instead can emerge from the blind accumulation of many small improvements made across generations linked by cultural transmission, and demand a focus on the cultural dynamics underlying technological change as well as individual cognition.


With the paper yet to be peer reviewed, it seems a little premature to draw lessons from the study at this stage. However, a wealth of research demonstrates the role of cultural evolution in driving technological advancement, which gives us some confidence in the findings.

The authors also note that these experiments were conducted on 'WEIRD' people; that is, those who are Western, educated, industrialised, rich and democratic. Further experiments would need to be conducted cross-culturally to confirm whether these findings are universal.

These points aside, one key takeaway from these experiments is the role that groups and demographics play in fostering technological advancement, rather than the contributions of lone individuals.

In business and society more broadly, a widespread belief is that the most significant innovations come from geniuses and their novel ideas. However, experimental findings like these from the field of cultural evolution reveal how simplistic this belief is: it ignores the wider environmental factors and culturally acquired knowledge that facilitate novel insights in the first place.

Another potential lesson concerns exploration and independent learning. If receiving incomplete theories can compromise people's understanding of a technology, then this has implications for research and development professionals (or anyone fostering innovation, for that matter). Working around this effect and encouraging independent learning may lead to insights that would otherwise have been missed.

Ultimately, such findings illustrate the importance of experimentation in driving technological advancement. Whether one is trying to improve a process or create new products, continuous small-scale experimentation may lead to new technologies being developed, although you may not understand how they actually work.

Written by Max Beilby for Darwinian Business

Note: Derex et al’s paper has since been published in the journal Nature Human Behaviour (1st April 2019)