Beware of geeks bearing grifts

SBF, the corruption of noble causes, and the poverty of mathematical thinking

Like much of the world, I’ve been raptly following the trial of Sam Bankman-Fried (known acronymically as SBF), whose crypto business FTX spectacularly collapsed last year – taking much of the crypto ecosystem with it. Most recently that rapture has involved reading Michael Lewis’s book Going Infinite. (Lewis wrote Liar’s Poker, Moneyball, and perhaps most famously The Big Short; he’s been floating around finance and fraud for forty years.)

There are a million interesting things about a story in which a twenty-something briefly became one of the world’s richest people before being exposed as a fraud, losing billions of dollars of ordinary people’s money in the process. But the one that stands out is the inextricable link between what SBF did and the good he believed he was doing in the world, and between why he failed and the mindset that made him so successful in the first place.

There was unquestionably an ethical dimension to what SBF did: he was motivated by his “effective altruism”. Effective altruists (EAs) believe in doing as much good as possible in the world. If you’re donating £100 to charity, EAs reckon that you should make sure that £100 does as much good as is possible. So you should donate it to the cause that provides the most additional quality-adjusted life years for people. Curing malaria, great; donating to a donkey sanctuary, perhaps not so much.
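To make that calculus concrete, here’s a toy version of the comparison – the charities and the QALY figures are entirely made up for illustration:

```python
# A toy version of the EA cost-effectiveness comparison described above.
# Both causes and their QALY-per-pound figures are invented.

donation = 100  # £

qalys_per_pound = {
    "malaria prevention": 0.05,    # hypothetical: 0.05 QALYs gained per £1
    "donkey sanctuary": 0.0005,    # hypothetical: 0.0005 QALYs gained per £1
}

for cause, rate in qalys_per_pound.items():
    print(f"£{donation} to {cause}: ~{donation * rate:g} QALYs")

# EA logic: the whole £100 goes to whichever cause tops this ranking.
```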

The teenaged SBF found this line of reasoning compelling: he wanted to do as much good as possible with his life. But how should he do that? Should he become a doctor? Train as a drug researcher? Volunteer to go and build schools in Africa? These were all possibilities, but SBF fell under the influence of the effective altruist Will MacAskill, who believed that an EA who wanted to have the greatest impact would start by trying to make as much money as possible:

“MacAskill made a rough calculation of the number of lives saved by a doctor working in a poor country, where lives were cheapest to save. Then he posed a question: ‘What if I became an altruistic banker, pursuing a lucrative career in order to donate my earnings?’ Even a mediocre investment banker could expect sufficient lifetime earnings to pay for several doctors in Africa – and thus would save several times more lives than any one doctor.”

And so SBF’s path was set. He would become the richest person in the world, in order to have the greatest positive impact possible on it.

The first interesting thing about this is the apparent disconnect: someone who started his career wanting to do the maximum amount of good in the world seemingly ended it perpetrating a massive fraud. Those two things feel like they sit at opposite ends of an ethical spectrum.

But actually they’re closer than you might think. That’s because of noble-cause corruption: the idea that, if you’re pursuing noble ends, the nobility of those ends might justify any means by which you reach them. SBF thought that every dollar he acquired advanced him toward the goal of fixing the world’s problems, and that money in his pocket would lead to better outcomes than money in anyone else’s. Given that mindset, is it surprising that he ended up seemingly perpetrating a massive fraud? Nick Asbury has written convincingly on this point in the context of another recent Silicon Valley fraud, Elizabeth Holmes and Theranos:

“Once you’re convinced of the rightness of your cause, it’s easier – consciously or subconsciously – to justify any means towards that end.”

It’s not that the fraud was cynical and the talk of healing the world just a cover story. Genuinely believing in the nobility of your cause can itself lead to these unseemly ends.

The SBF story also reveals the limitations of a mathematical, probabilistic mindset when applied to real-world domains outside of finance. SBF was a consummate gambler. He was able to calculate the expected value (EV) of situations on the fly. He was able to put aside emotions, worries and guilt, and think only of the bet he was making – whether it was a $100 bar game or a $1bn financial trade.
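For anyone unfamiliar with the jargon: expected value is just the probability-weighted sum of a bet’s payoffs. A minimal sketch of the kind of on-the-fly calculation Lewis describes, with invented odds:

```python
# Expected value: sum of payoff * probability over all possible outcomes.
def expected_value(outcomes):
    return sum(payoff * prob for payoff, prob in outcomes)

# A hypothetical $100 bar bet: win $300 with probability 0.4,
# lose the $100 stake with probability 0.6.
bar_bet = [(300, 0.40), (-100, 0.60)]

print(expected_value(bar_bet))  # 60.0 – positive EV, so SBF-logic says take it
```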

He thought that he could take that mindset out into the real world and flourish there as he had flourished on Wall Street and in the world of crypto. The first place SBF turned his mind to was the political arena, figuring that he could influence the outcomes of elections through the smart allocation of money. His headline candidate was Carrick Flynn, a Yale- and Oxford-educated Washington, D.C. policy wonk who was, by his own admission, “very introverted” and terrified of public speaking. Flynn ran in the primaries for a competitive rural seat in Oregon. SBF spent over $10m on Flynn’s campaign, a preposterously large sum for a primary. His rivals – and the public in general – saw this flood of money, and where it was coming from, and stuck two fingers up at the “billionaire-backed” Carrick “Creepy Funds” Flynn. He, perhaps predictably, didn’t win.

Losing elections is one thing, but you can reach even more perverse outcomes by thinking in those stark, mathematical terms. The effective altruism movement started out by focusing on things like curing malaria or preventing the next pandemic. But very quickly it became warped. The number of humans alive today, it reasoned, pales in comparison to the number of humans who will eventually live. And so the effective multiplier on anything you do for far-off future lives is much greater than on anything you can do for people who are alive today. That’s how the EA movement, and SBF with it, became obsessed with bonkers ideas like an artificial superintelligence destroying humanity, to the detriment of living people. As Michael Lewis says:

“One day some historian of effective altruism will marvel at how easily it transformed itself. It turned its back on living people without bloodshed or even, really, much shouting. You might think that people who had sacrificed fame and fortune to save poor children in Africa would rebel at the idea of moving on from poor children in Africa to future children in another galaxy. They didn’t, not really – which tells you something about the role of ordinary human feeling in the movement. It didn’t matter. What mattered was the math. Effective altruism never got its emotional charge from the places that charged ordinary philanthropy. It was always fueled by a cool lust for the most logical way to lead a good life.”

That narrow, STEM-focused worldview is incredibly powerful in certain domains. But it falls down when you try to take it outside them. Probably the most-quoted bit of Lewis’s book, a passage that made me laugh out loud when I read it, sums up the emotional, cultural and, more than anything else, human poverty of this mindset. Lewis quotes SBF:

“I could go on and on about the failings of Shakespeare… but really I shouldn’t need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse than that. When Shakespeare wrote, almost all Europeans were busy farming, and very few people attended university; few people were even literate – probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren’t very favorable.”
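To spell out the base-rate arithmetic SBF is gesturing at – using his own rough figures, and his crude assumption that any literate person is equally likely a priori to be the “greatest writer”:

```python
# SBF's rough figures: ~10 million literate people in Shakespeare's era,
# "upwards of a billion" literate people today. Under a uniform prior over
# literate people, the prior odds favour a modern-day greatest writer.
literate_in_1600s = 10_000_000
literate_today = 1_000_000_000

prior_ratio = literate_today / literate_in_1600s
print(f"Prior odds of today vs. Shakespeare's era: ~{prior_ratio:.0f} to 1")
# ~100 to 1 – hence "the Bayesian priors aren't very favorable"
```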

In hindsight, someone who dismissed the rich cultural legacy of Shakespeare because it didn’t make sense in Bayesian terms was probably always going to end up in a warped place. The writing was on the wall. The irony is, of course, that while SBF doesn’t understand Shakespeare, Shakespeare would absolutely have understood SBF. But that’s precisely the point: SBF is a lesson in what happens when super-smart people believe a little too much in their own narrow abilities, believe a little too much in the nobility of their causes, and believe too little in the importance of humanity.