I used to attend a health club in the middle of the day and chat with an interesting Eastern European fellow with two Ph.D. degrees, one in physics (statistical no less), the other in finance. He worked for a trading house and was obsessed with the anecdotal aspects of the markets. He once asked me doggedly what I thought the stock market would do that day. Clearly I gave him a social answer of the kind “I don’t know, perhaps lower” – quite possibly the opposite answer to what I would have given him had he asked me an hour earlier. The next day he showed great alarm upon seeing me. He went on and on discussing my credibility and wondering how I could be so wrong in my “predictions,” since the market went up subsequently. Now, if I went to the phone and called him and disguised my voice and said, “Hello, this is Doktorr Talebski from the Academy of Lodz and I have an interrresting prrroblem,” then presented the issue as a statistical puzzle, he would laugh at me. “Doktorr Talevski, did you get your degree in a fortune cookie?” Why is it so?
Clearly there are two problems. First, the quant did not use his statistical brain when making the inference, but a different one. Second, he made the mistake of overstating the importance of small samples (in this case just one single observation, the worst possible inferential mistake a person can make). Mathematicians tend to make egregious mathematical mistakes outside of their theoretical habitat. When Tversky and Kahneman sampled mathematical psychologists, some of whom were authors of statistical textbooks, they were puzzled by their errors. “Respondents put too much confidence in the result of small samples and their statistical judgments showed little sensitivity to sample size.” The puzzling aspect is that not only should they have known better, “they did know better.” And yet…
Taleb, Nassim Nicholas. The Black Swan: The Impact of the Highly Improbable. Random House, 2007, pp. 194–195 (italics in original)
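A quick aside, not part of the quoted passage: the small-sample point can be made concrete with a simple binomial calculation. If we assume, purely for illustration, a null hypothesis in which a forecaster guesses the market's daily direction at random (probability 0.5 of being right each day), then a single wrong call is literally a coin flip and says nothing about credibility, and even a strong short streak is easy to get by luck. The sketch below is mine; the function name and the 50/50 null are assumptions for illustration, not anything in Taleb's text.

```python
# Illustrative sketch (assumption: the forecaster is guessing at random, p = 0.5).
# Shows how little information one observation, or even a short streak, carries.

from math import comb

def prob_at_least_k_hits(n_days: int, k_hits: int, p: float = 0.5) -> float:
    """P(at least k_hits correct daily calls out of n_days) under pure guessing."""
    return sum(comb(n_days, j) * p**j * (1 - p)**(n_days - j)
               for j in range(k_hits, n_days + 1))

if __name__ == "__main__":
    # One wrong (or right) call out of one observation happens half the time
    # by chance alone, so it supports no inference about skill.
    print(prob_at_least_k_hits(1, 1))    # 0.5
    # Even 8 correct calls out of 10 occurs about 5.5% of the time by luck.
    print(prob_at_least_k_hits(10, 8))   # ~0.0547
```

On this assumption, distinguishing a genuinely skilled forecaster from a lucky guesser takes far more than one observation, which is the inferential mistake the quant made.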
For centuries, apologists have attempted to justify our natural desire for play by sneakily rebranding games as a means of refuelling for more work. Puritan clergyman Thomas Wilcox claimed games could be a way “to refreshe our Spirites, dulled or overwhelmed with some labours or studies … that wee maie afterwardes … more joyfully and cheerfully give our selves over to that callyng, wherein it hath pleased God to sette us”.
Today, I frequently see articles attempting something similar by emphasising “the secret benefits of playing games” (I’ve even written them myself). Games teach children numeracy and social skills, they help busy professionals stave off burnout, they have powerful neuroprotective properties for elderly people, and so on.
The evidence for this is mixed. Take the common assertion that learning chess makes you smarter. Back in the 1920s, Russian psychologists Djakow, Petrowski and Rudik subjected the era’s top chess players to a battery of tests to see if they were smarter than their non-chess-playing peers. Sure enough, their subjects performed better – when the task was related to chess. In tests related to general intelligence, they showed no advantage.
https://www.theguardian.com/books/2024/oct/28/the-big-idea-how-games-can-change-your-life