I read an interesting article last night, detailing a public exchange between Daniel Kahneman and Nassim Taleb.
[E]ach man was asked to write a biography of seven words or less. Taleb described himself as: “Convexity. Mental probabilistic heuristics approach to uncertainty.” Kahneman apparently pleaded with the moderator to only use five words, which were: “Endlessly amused by people’s minds.” Not surprisingly these two autobiographies are descriptive of the two men’s bodies of work. Much of the discussion at this event, however, was not about making decisions under uncertainty, but a sort of tit for tat, with Kahneman asking probing questions and making pointed observations of Taleb. Little of the Nobel laureate’s [i.e. Kahneman's] work was discussed.
It would seem that Kahneman had Taleb on the back foot at various times during the exchange, pointing out (among other things) that the latter's framing of situations suffered from a clear "anchoring" bias.
The above article also reminded me of a lingering question that I have about Taleb's work — not least because it relates to the type of research that made Kahneman famous (i.e. the limits of heuristics in the face of statistical problems). Having failed to get any responses to my query on Twitter, I'd like to try and flesh it out here.
Let me state up front that I have yet to read any of Taleb's books in full. (They are patiently waiting on my Kindle.) However, I have read several chapters from them and, moreover, a number of the articles that Taleb has penned in different media outlets. For instance, this essay for Edge magazine seems to sum up his position nicely.
So, I'm reasonably confident that I know where Taleb is coming from. I should also say that I think some of his points are very well made, such as the "inverse problem of rare events" — basically, that it is incredibly difficult to gauge the impact of extremely rare events precisely because they occur so infrequently. We lack the very observations that are needed to build up a decent idea of the probability distribution of their associated impact. As Taleb explains in the Edge essay:
"If small probability events carry large impacts, and (at the same time) these small probability events are more difficult to compute from past data itself, then: our empirical knowledge about the potential contribution — or role — of rare events (probability × consequence) is inversely proportional to their impact."[*]
My reading of Taleb also leads me to think that he more or less regards everyone as blind to "black swan" (low probability, high impact) events. If that is true, however, I'm wondering how he squares that notion with the
consistent empirical finding that people tend to
overestimate the likelihood of low probability, high impact events. (And vice versa for more common, low impact events.) Consider the following chart, for example, which was originally produced in a seminal study by
Lichtenstein et al. (1978):
Relationship between judged frequency and actual number of fatalities per year for 41 causes of death.
What we see here is that people have a clear tendency to overstate — by several orders of magnitude — the relative likelihood of death arising from "unusual and sensational" causes (tornadoes, floods, etc.). The opposite is true for more mundane causes of death like degenerative diseases (diabetes, stomach cancer, etc.).
Similarly, have a look at Table 2 (p. 19) in this
follow-up study by the same authors, where various groups of people were asked to rank the relative risks of different technologies. We clearly see a discrepancy between the opinions of experts and those expressed by laymen. For example, nuclear power is perceived to be far more risky by members of the general public than by those familiar with the actual number of fatalities and diseases brought on by this technology.
Now, Taleb might respond by saying that these are exactly the type of misleading comparisons that he is talking about! He could argue that the "actual" observed fatalities are not necessarily an accurate representation of the underlying risks. After all, a single major event could significantly alter the average number of deaths from any particular cause (e.g. a nuclear meltdown)...
Well, perhaps, but I'm not totally convinced. For one thing, that says very little about the flipside of this problem, which is the degree to which "normal" causes of death are underestimated — both in absolute terms and relative to more sensational outcomes. Second, by now we have accumulated decent data on numerous low-probability events that have occurred (rare as they are), from the outbreak of plague to massive natural disasters. Third, even disregarding my previous points, it doesn't seem at all obvious to me that the public is guilty of consistently underplaying the role of black swan events. Indeed, if anything they appear to be using a heuristic which causes them to significantly overestimate the likelihood of rare events.... Perhaps as a way of adjusting for the — unquantifiable? — impact that these outcomes could have if they do occur?
To restate my question, then, for those of you who know Taleb better than I do: Does he ever integrate (or reconcile) his theory about the ignorance of black swan events with the empirical evidence that people consistently overestimate the likelihood of low probability, dramatic outcomes?
UPDATE: This post appears to have provoked Taleb's ire in somewhat amusing fashion. See follow-up
here.
UPDATE 2: Second follow-up and some big-name support for my basic point
here.
___
[*] This type of unquantifiable uncertainty happens to be a big area of research in the climate change literature, particularly the 'dismal theorem' proposed by Marty Weitzman, whom I have mentioned numerous times before on this blog. See
here for more.