In a 2017 interview, Sundar Pichai, Google’s CEO, said he did not see automated cars on the streets of India. He was in agreement with Uber’s then-CEO Travis Kalanick, who had said:

*“India would be the last place to get the company's automated cars.”*

Oops.

But there isn’t much reason to challenge their opinion. If you’ve ridden a bike during peak traffic hours in any crowded Indian city, you’ll agree too.

For simplicity, I’ll compare India with the USA on this front. The USA’s road infrastructure and the disciplined conditioning of its drivers make a valid case for why it can consider self-driven cars. In India, the case is exactly the opposite: the roads are not laid out on any block or grid system, and traffic discipline isn’t a virtue we can boast of.

But despite all the chaos on the road, there is a system we follow. The system may be hyperlocal and followed only by those who travel by road regularly, but it somehow works.

**An automated car, when put in the mix of a highly local system that works based on human understanding, is thus bound to fail.**

## To believe otherwise is to commit the *Ludic Fallacy*

The Ludic Fallacy, proposed by Nassim Nicholas Taleb in his book *The Black Swan*, is *"the misuse of games to model real-life situations."*

Taleb explains the fallacy as *"basing studies of chance on the narrow world of games and dice."*

One example given in the book is the following thought experiment. Two people are involved:

- Dr. John, who is regarded as a man of science and logical thinking
- Fat Tony, who is regarded as a man who lives by his wits

A third party asks them to *"assume that a coin is fair, i.e., has an equal probability of coming up heads or tails when flipped. I flip it ninety-nine times and get heads each time. What are the odds of my getting tails on my next throw?"*

**Dr. John says** that the odds are not affected by the previous outcomes so the odds must still be 50:50.

**Fat Tony says** that the odds of the coin coming up heads 99 times in a row are so low that the initial assumption that the coin had a 50:50 chance of coming up heads is most likely incorrect. *"The coin gotta be loaded. It can't be a fair game."*
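Fat Tony’s hunch can be made precise with a quick Bayesian sketch. The prior below (a one-in-a-million chance the coin is loaded) and the "two-headed coin" hypothesis are my own illustrative assumptions, not numbers from Taleb’s book; the point is only that 99 heads in a row overwhelms even a near-certain prior that the coin is fair.

```python
from fractions import Fraction

# Hypothetical prior: a 1-in-a-million chance the coin is loaded
# (modelled here as a two-headed coin that always lands heads).
prior_loaded = Fraction(1, 1_000_000)
prior_fair = 1 - prior_loaded

# Likelihood of observing 99 heads in a row under each hypothesis.
likelihood_fair = Fraction(1, 2) ** 99   # roughly 1.6e-30
likelihood_loaded = Fraction(1, 1)       # a two-headed coin always shows heads

# Bayes' rule: posterior probability that the coin is loaded.
posterior_loaded = (prior_loaded * likelihood_loaded) / (
    prior_loaded * likelihood_loaded + prior_fair * likelihood_fair
)

print(float(posterior_loaded))  # effectively 1.0: Fat Tony wins
```

Even starting from a prior that overwhelmingly favours a fair coin, the evidence of 99 straight heads makes "the coin gotta be loaded" the rational conclusion.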

The ludic fallacy here is to assume that the rules of the purely hypothetical model (where Dr. John is correct) apply in real life. A reasonable person, for example, would not bet on black on a roulette table that has come up red 99 times in a row, especially as the reward for a correct guess is so low compared with the probable odds that the game is fixed.

In classical terms, statistically significant (i.e., unlikely) events, like a fair coin coming up heads 99 times in a row, should make you question your model's assumptions.

The factors Taleb wants us to keep in mind are:

- Not rushing to apply naïve, simplified statistical models in complex domains, because complexity means more variables, and all of them can’t be known from the get-go
- Remembering that theories or models based on empirical data may still fail to predict events that were previously unobserved but have a tremendous impact

## Take this particular Twitter poll, for instance.

51.3% of respondents treated this as a ludic question: they chose to build a hypothetical model solely from the given data, without considering the unstated environmental variables in the mix, particularly the fact that in the real world, we have seasons.

Rainy days aren't spread evenly across the year; most of them are bunched together in one specific period. If it rained today, it is highly likely that it is the rainy season, which makes it far more likely to rain tomorrow — with more than a 1/3 chance.
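This seasonal intuition can be checked with a toy simulation. The numbers below (a 365-day year, a 100-day rainy season, the in-season and off-season rain probabilities) are my own illustrative assumptions, chosen only so that the year-round base rate lands near 1/3; they are not from the poll or the thread.

```python
import random

random.seed(42)

# Toy model (illustrative assumptions): a 365-day year with a 100-day
# rainy season. Rain is common in season and rare outside it, so the
# year-round base rate works out to roughly 1/3.
RAINY_SEASON = range(100, 200)
P_RAIN_IN_SEASON = 0.9
P_RAIN_OFF_SEASON = 0.12

def simulate_year():
    """Return a list of 365 booleans: did it rain on each day?"""
    return [
        random.random() < (P_RAIN_IN_SEASON if day in RAINY_SEASON
                           else P_RAIN_OFF_SEASON)
        for day in range(365)
    ]

rain_days = 0          # total rainy days across all simulated years
rain_today_count = 0   # days where it rained and a "tomorrow" exists
rain_after_rain = 0    # of those, days where it also rained tomorrow

for _ in range(2000):
    year = simulate_year()
    rain_days += sum(year)
    for today, tomorrow in zip(year, year[1:]):
        if today:
            rain_today_count += 1
            rain_after_rain += tomorrow

base_rate = rain_days / (2000 * 365)
conditional = rain_after_rain / rain_today_count
print(f"base rate: {base_rate:.2f}, "
      f"P(rain tomorrow | rain today): {conditional:.2f}")
```

In this toy model the conditional probability comes out well above the base rate: knowing it rained today mostly tells you that you are inside the rainy season, which is exactly the unstated variable the ludic answer ignores.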

As VGR goes on to explain further in the thread, a natural phenomenon cannot be predicted with simple statistical models because the unknown variables multiply. But in spite of the awareness that such a question cannot simply be modelled, at least half of us — as evident from the poll — think of this as a simple chance or probability question: **we treat a complex environmental system like a simple game with few variables.**

But statistical modelling isn’t the only place we fall prey to the fallacy.

## Ludic Fallacy in salary negotiations

Most salary negotiation advice is too simplistic.

*“Only negotiate once.” “Never take the first offer.” “Never tell your price upfront.” “Prove your worth in appraisal meetings.”*

But what if your manager does not like you as a person? They're human after all, with their fair share of biases. Would explaining numbers to them in a presentation demonstrating your value still work?

Or what if you desperately need the job and aren't willing to move away from the table? Is it still worth negotiating?

For all practical intents and purposes, if you’re not willing to walk away from a negotiation, you’ve already lost.

Moreover, working relationships run on trust. If you over-negotiate, you lose that trust.

Essentially, the advice ignores that what may be true of salary negotiations for a particular role, industry, or number of hiring slots may not hold elsewhere. Although well-meaning, it can be disastrous for the person who fails to account for the unknown variables and incentives and treats the entire exercise like a simple game.

In this piece, we wrote about another instance of how dealing with ambiguous situations within complex systems favors those who can think through those situations without blindly relying on a skill they learned in a simple system, say, a classroom or a race track.

## The lesson?

We invariably rush to quantify whatever feels ambiguous and fit it into formulas. But how real life unfolds is hard to predict: it is one of the most complex systems, with many unknown variables affecting it in highly meaningful ways.

Even in the world of business, they say,

*"There are no business problems, there are only people problems."*

Because the complexity of how humans function cannot be fit into neatly designed models that predict what’s next as reliably as a fair coin toss.