Sunday, July 13, 2008

Review of "Fooled by Randomness"

Taleb's follow-up, "The Black Swan," has gotten more press and is probably the better book, but they cover the same ground, and I picked up the first book on a lark and on sale. The basic idea is that we live in a much more random universe than we imagine (lots of "black swan" events that are unpredictable and overwhelming, such as 9/11), and it's becoming more random all the time. But for all sorts of reasons, we're psychologically and culturally "fooled" by this randomness. We want to tell understandable narratives even when there isn't one to tell.

I found the basic argument compelling, though the author's snarky tone was off-putting at times. As a former historian, I always had difficulty with the 20/20-hindsight carping that makes up much of historical narrative as well as political commentary. As an avid sci-fi fan and game player, I'm drawn to the idea that a particular "big" event (such as the start of WWI) could just as easily be explained as the random coincidence of lots of bad choices in this particular timeline, one out of the infinite number of other timelines that could have occurred. In short, WWI was an "accident," not something that could have been predicted with any degree of certainty. Sure, after the fact, we can see why this or that problem might have caused the war to break out, but that overlooks all of the other possible sequences of events that could have unfolded from the same set of problems. See Niall Ferguson's review of "The Black Swan" for a similar argument (http://www.telegraph.co.uk/opinion/main.jhtml?xml=/opinion/2007/04/22/do2201.xml).

On the other hand, applying Taleb's insights to the business world I live in (higher education) is harder for me than applying them to how I understand history. Perhaps Taleb's thinking reinforces my long-standing argument that we need to be more willing to innovate in higher education and to provide resources for such projects. That's been my big push in terms of budgeting at JBU, but the concern has always been that, in practice, we find it so difficult to say "no" once we've started a project that "innovation" at an institution like ours will more likely mean "funding pet projects" that have no real long-term positive consequences. And if that's true, why shouldn't we just give out more money to "everyone" instead of giving that money to just a few to do something they have a particular passion for?

I do take the point, and one of Taleb's key arguments is that you have to have the discipline and intelligence to set "stop-losses": markers for when you will indeed pull the plug on a "position." That's really hard for us to do in higher education, especially because that "position" is usually a real live person whom you will now have to fire. If you can discipline yourself to determine where that stop-loss is, and then be willing to make the hard call when you hit it, I think Taleb's argument is absolutely correct for our organization. Now if I can just get everyone else to agree and to actually do what it takes to make this strategy work.

As an addendum, Taleb argues that people tend to agree with him in theory but find his advice very difficult to follow in practice. That's been very true in my case. Taleb says that "competition" types (very much me) are too concerned about win-loss records, and "optimizers" (again me) spend too much energy trying to find the best solution. In their pursuit of playing perfectly and winning each game, both types risk too much on the huge defeat that will wipe them out completely.

Sure enough, in my game playing, my work, my personal life, etc., I find myself making exactly this mistake over and over again. Take just yesterday's example of playing a game called St. Petersburg, which I've played hundreds of times. I knew what the "right" first-round play was for the most efficient eventual score, but I wasn't playing the computer; I was playing real people who don't always act rationally or even know what the most efficient solution might be. So on turn one I risked getting knocked out completely in order to have the best chance of winning a standard game. Sure enough, when the standard game didn't occur, I came in last, and by a wide margin behind the first-place finisher. If I had protected myself against the disastrous outcome, I might have only come in second or third in this four-player game, but I would have at least had a shot and still been in the game. In other words, I made the "black swan" error not even a week after having read a book that convinced me I should be avoiding "black swan" errors in my life. Yep, it sure is hard to internalize these things.
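
For the quantitatively inclined, here's a tiny, made-up simulation of that trade-off. It is not St. Petersburg's actual rules; the payoff of 100 points and the 15% knockout chance are invented purely for illustration. The point it sketches is the one Taleb makes: the "optimizer" play looks better on average, but the average hides how often it wipes you out.

```python
import random
import statistics

def optimizer_play():
    # Go for the most efficient score: a big payoff most of the time,
    # but a small chance of getting knocked out entirely.
    # (The 100-point payoff and 15% wipeout chance are invented numbers.)
    return 0 if random.random() < 0.15 else 100

def protected_play():
    # Give up some upside so you stay in the game no matter what.
    return 80

random.seed(1)
trials = 10_000
opt = [optimizer_play() for _ in range(trials)]
safe = [protected_play() for _ in range(trials)]

print("optimizer mean score:", statistics.mean(opt))   # higher on average
print("protected mean score:", statistics.mean(safe))
print("optimizer wipeouts:", opt.count(0))             # the games you lose badly
print("protected wipeouts:", safe.count(0))            # always zero here
```

Judged only by the mean, the risky play wins; judged by how many times you end up knocked out of the game, the protected play is the one you can live with.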