Volume 132: VC Flash Mob Collapses Bank.
1. VC Flash Mob Collapses Bank.
tl;dr: First bank run of the social media era notable for its speed.
I probably view the collapse of Silicon Valley Bank (SVB) with more empathy than most. In 2008, when Washington Mutual (WaMu) collapsed, it went from being our largest client on Thursday to no longer existing on Friday. It was shocking and forced us to make painful layoffs, but that’s nothing compared to the people we worked with at the bank, who not only lost their jobs but had their retirement savings eviscerated as the WaMu stock in their 401Ks went to zero. So, no matter what your thoughts may be on what went down and who might be to blame, please reserve a moment of empathy for the (non-exec) employees and business partners who, through no fault of their own, have likely been wiped out by this event.
As for banking crises in the US overall, it’s a simple tale. When regulation is lax and there’s little government oversight, banks take big risks to maximize profits, and bank collapses are the inevitable consequence. When the government regulates more heavily and oversight tightens in response, banks become less profitable and more boring…and collapses don’t happen. This has played out consistently for over 100 years, so we have plenty of evidence to support the case. However, banks will always lobby for deregulation because the pull of profit is strong. Meanwhile, the pull of political donations has rendered banking deregulation one of the few issues reliably supported by both parties, irrespective of the known and predictable consequences. (Ironically, the 2018 deregulation that contributed to the collapse of SVB was called the Crapo Bill. Aptly named.)
By now, there have been plenty of analyses of what happened and what went wrong, but to better understand the future implications, we might want to separate “what got the bank into trouble” and “what killed the bank.”
What got the bank into trouble was either underestimating or ignoring risk. Specifically, there appear to have been two major risks in play:
Customer concentration risk: SVB’s success as a brand in attracting startup and technology company deposits was risky because this is a connected, homogenous group, subject to herdlike behavior, exacerbated by the fact these deposits were particularly subject to flight, as they overwhelmingly exceeded the insurance limit provided by the FDIC. Doubling deposits between 2020 and 2021 as the everything bubble peaked amplified this risk.
Interest rate and duration risk: Think of a bank as a two-sided market for money. Perfection is a diverse array of depositors giving you cash exactly matching a diverse set of customers you can then lend it to. SVB, by contrast, had a concentrated glut of depositor cash far in excess of the loans it could write itself. So, in search of profit, it went out and used about half its depositor cash to buy long-duration government-backed securities (government debt). This paid an average of 1.6%, which is OK when interest rates are zero, but is very much not OK when rates rise to 4.5%.
As the tech economy darkened and interest rates rose, these two areas of risk combined to put the bank in trouble. Put simply, cash-burning startups needed to draw down on their deposits to pay for the cost of operations, while new deposit inflows slowed to a trickle as VC funding dried up. This meant a consistent outflow of deposits, which the bank was increasingly struggling to meet because it had locked up so much depositor cash in long-dated securities. Worse, because interest rates had gone up, the value of these securities was going down (Nobody is going to buy an old loan paying 1.6% when they can buy a new one paying 4.5% instead. As a result, if the old loan is sold, it has to be sold at a discount to make it equivalent).
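The discount mechanism in that parenthetical is easy to make concrete. Here's a minimal sketch of standard bond-pricing arithmetic using the article's rates (a 1.6% coupon repriced at 4.5%); the 10-year maturity and $100 face value are illustrative assumptions, not SVB's actual holdings.

```python
def bond_price(face: float, coupon_rate: float, market_rate: float, years: int) -> float:
    """Present value of a fixed-rate bond, discounted at the current market rate."""
    coupons = sum(face * coupon_rate / (1 + market_rate) ** t for t in range(1, years + 1))
    principal = face / (1 + market_rate) ** years
    return coupons + principal

# A bond bought at par when rates were low, repriced after rates rise:
old_bond = bond_price(face=100, coupon_rate=0.016, market_rate=0.045, years=10)
print(f"A $100 bond paying 1.6% is worth about ${old_bond:.2f} when rates are 4.5%")
```

With these assumed numbers, the bond sells at roughly a 23% haircut, which is the loss a forced seller has to eat; held to maturity instead, it still pays back its full face value.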
Realizing its situation, SVB decided to free up depositor cash by selling a portion of its long-dated securities, which meant eating a loss because it had to sell for less than it had paid. To cover the cost of this now-realized loss, the bank arranged for Goldman Sachs to help it raise capital by selling $2.25bn in equity.
While all of the above reeks of hubris and terrible risk management (SVB went eight months last year without a Chief Risk Officer, and internal recommendations in 2020 to shift from long-duration bonds to short-duration ones to mitigate interest and duration risk were refused because doing so would’ve meant reducing profits), it really shouldn’t have been an extinction-level event. In many ways, this is just so far, so normal: it’s how banks typically patch over their bad decisions.
No, what almost certainly killed this bank wasn’t that it got into trouble but its abject and utterly inept approach to communicating its situation. In the same way that it went eight months without a Chief Risk Officer, it had nobody at the executive level responsible for communications.
So, with that in mind, let’s look at what happened next.
The bank did nothing to communicate what was happening to its depositors, even though it knew it had a highly concentrated depositor base that was especially subject to both flight (90%-ish uninsured deposits) and herd behavior influenced by a very small number of VCs who’d handed out the money that was now parked at SVB.
Instead, it tried to announce quietly to the financial markets that it had taken a loss and was now conducting an equity raise. (Notably, this is only to be found on their investor relations page; there is no mention at all in their “newsroom.”) The hope, I guess, was that the news would stay limited to institutional investors and that the raise would be complete before anyone took notice.
Worse, they did this just as Silvergate, a bank focused on the startup-adjacent world of crypto, collapsed into oblivion, which left both VCs and startup founders notably jittery.
Then, people online, including an influential fintech newsletter widely read by VCs, picked up on what was happening and started publishing breathless pieces on SVB taking losses, sitting on even greater unrealized losses (not necessarily a problem if you can hold the securities for long enough for interest rates to drop and their value to rise), and raising capital to cover these losses…and, surprised by this news, VCs panicked.
While the reality is likely not as neat as I portray it here, we know that news of the loss and capital raise spread ferociously, fed by the fuel of a complete communication vacuum from SVB, which led to it wholly losing control of the narrative. As a result, by the time it issued the most damning of “everything is stable and sound” statements, the bank was already toast because…
…The VCs had been instructing portfolio companies to review their relationships with SVB, which rapidly leaked into the Twittersphere, causing panic among deposit holders because they needed to make payroll, and the FDIC only insures $250k of depositor funds.
As a result, we saw the fastest bank run in human history and the first (but likely not the last) to be algorithmically accelerated via social media. To put this in context, when it failed in 2008, WaMu faced withdrawals of $16.7bn over two weeks. When it failed this month, $43bn exited SVB in just two days.
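The speed difference is easy to quantify from the figures above. A back-of-the-envelope comparison (taking "two weeks" as 14 days for WaMu):

```python
# Daily withdrawal rates for the two bank runs, in $bn/day
wamu_rate = 16.7 / 14  # WaMu, 2008: $16.7bn over two weeks
svb_rate = 43 / 2      # SVB, 2023: $43bn over two days

print(f"WaMu: ~${wamu_rate:.1f}bn/day, SVB: ~${svb_rate:.1f}bn/day "
      f"(~{svb_rate / wamu_rate:.0f}x faster)")
```

By this rough measure, deposits left SVB around 18 times faster than they left WaMu.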
It’s truly incredible to consider that a flash mob of maybe 20 or so panicked VCs, algorithmically accelerated on Twitter, led to the largest, fastest withdrawal of depositor cash in banking history.
Had the bank realized that actions communicate at least as loudly as, and often more loudly than, words, it would likely be alive today. Had it had an even halfway competent communications leader in the C-suite, it likely would be alive today. (Of course, if it hadn’t made the dumbest of dumb bets against interest rates rising, it would definitely still be alive today, so there is that.)
Anyway, what an even halfway competent communication leader would have realized is that you can’t do this at the same time Silvergate was collapsing, that you can’t do this without proactive VC-focused outreach because of how influential they are with your depositor base, and that you probably shouldn’t announce a loss and a capital raise at the same time, but instead spread them out and give people a chance to process and figure out what’s going on first.
So, yeah. Bad decisions were made on the risk management side for sure, but guess what? Companies make bad decisions all the time. What almost certainly killed this bank - or at least made it fail so spectacularly quickly - was a complete failure to read the room and to understand and manage the realities of the modern communication environment, especially with a deeply connected, concentrated, and herdlike depositor base that is highly influenced by a very small number of VCs.
If I were a regulator, it’s not the lousy risk management decisions I’d be most concerned about right now. No, I’d be shitting a brick at the risk of algorithmically accelerated panic besetting a financial system that cannot exist without a foundation of confidence and trust. (And yes, bad actors - likely nation states - are almost certainly in the process of planting the seeds of such distrust even as we speak).
If I were a bank CEO, I’d be looking at who is responsible for communications at my bank. And if I don’t have a strong C-level communication leader who understands the nature of algorithmic acceleration and social media panic, I’d be hiring one immediately. And I’d spend almost any amount of money to get someone who knows what they’re doing. Why? Because the future viability of my business might depend on it.
2. Buh-bye, Beauty.
tl;dr: Not all easy businesses are good businesses.
One of the things tech firms have been smart about is attempting to build businesses where there’s a “moat” that prevents competition from eating their lunch. I suspect this mostly started as a survival mechanism, for without a moat, the tech world operates through a version of Moore’s law - twice as fast, at half the price, every 18 months. In other words, without something to make it difficult, competition will rapidly commodify what you’re selling, which means you need to prevent yourself from being competed out of existence.
At the opposite end of the spectrum, some businesses have such low barriers to entry that pretty much anyone can get into them. At the height of DTC hubris, the online sale of mattresses was one such business, with some 175 online mattress retailers in the US at one point (although there are probably many fewer these days).
As the everything bubble peaked, it became sexy to think there was an easy hack that would give low barrier-to-entry businesses something akin to a moat, and that was celebrity sponsorship and ownership. It wasn’t exactly a subtle or nuanced strategy but rather an attempt at financially exploiting celebrity fame in a world where celebrities increasingly capture our attention.
This famously occurred with NFTs (where a few celebs now find themselves in legal jeopardy) and booze.
However, beauty, especially cosmetics, remains the OG low-cost-of-entry, easy-to-exploit celebrity category. Rihanna explicitly using her own cosmetics brand, Fenty, during her Super Bowl performance was no fluke. After all, Fenty’s $2.8bn valuation, not music, accounts for the vast majority of her net worth. So, as Fenty and a couple of other celebrity beauty brands became successful, their success spurred over 50 more to launch in just the last three years.
I don’t know if you’ve ever noticed, but on the way from Manhattan to Newark airport in New Jersey, you’re driving through ground zero for the celebrity cosmetics industry. A not particularly well-kept secret is that most of the seemingly different and magical beauty products in your medicine cabinet come from the same few factories, sold private-label to the brand owner, with the same ingredients other than perhaps a little fragrance and color.
But just because a business is easy to start doesn’t mean it’ll be particularly cost-effective to run, as the celebrity beauty complex is now finding out: brands that popped up quickly are just as quickly dying off.
It would be easy to say this is because all the capital sloshing through the system flowed elsewhere as interest rates rose, but there’s also the inevitable issue of consumer fatigue. When things are new, they’re novel, and we pay attention to them as such. This is why we saw a few early successes in celebrity beauty and booze businesses. However, as the novelty wears off and new celebrity brands pour into these categories, we tune them out. It becomes harder to capture our attention because there are so many of them, and they, in effect, become commodified (and that moat you thought you had starts to look very different).
Put simply, if you apply star power to a category where celebrities are notable for their absence, you can cut through, be novel, and achieve success. However, try this same trick in a category already dripping in celebrity star power, and you disappear. It’s no longer novel; it’s just yet more noise for us to tune out. And when we’re in the mode of tuning out the noise, we’re far more likely as consumers to start asking questions about what we’re being asked to buy, which, in this case, led directly to social media influencers shifting from a stance of peddling celebrity products to reviewing and critiquing them for quality and value.
With this in mind, I tip my hat to Ryan Reynolds, who went from Aviation Gin to Mint Mobile. I suspect he realized celebrity booze had reached saturation point, which is why he sold Aviation, while cell phones were still lucrative open territory.
So, what’s the moral of this tale? Well, first, celebrity alone likely isn’t enough in categories anyone can enter, especially if there are already loads of celebrities there. Second, product quality still matters. And third, celebrity endorsement and ownership don’t excuse you from doing the hard yards of tapping into consumer tastes and then continuing to adapt as they change.
Because while beauty may be easy to get into, it’s damn hard to succeed in because our tastes are fickle.
3. The Invisible Hand of Algorithms.
tl;dr: Decisions made on our behalf might not be what we’d choose.
I don’t always feel good about the Off Kilter editions that go out. Sometimes I’ve been too busy to do a good job; sometimes, there haven’t been interesting enough things happening to inspire me; and sometimes, it’s just because I’m feeling down and phone it in.
But last week wasn’t one of those weeks. I felt great about it. So color me surprised when I logged in and found almost 100 people had unsubscribed (for context, I usually see 1-5 or so). My first thought was, wow, I must’ve accidentally touched a nerve and really pissed some people off. It was a sinking-pit-in-the-stomach, oh-shit-what-have-I-done moment.
But, as I looked closer, I realized these unsubscribes happened precisely one minute after the newsletter was sent. Finding this more than a little suspicious, I did a little research and found there are corporate security algorithms that auto-unsubscribe from newsletters they suspect of being junk mail or phishing.
Taking another look, the only thing I could think of is that I used the Icelandic character in Halli Thorleifsson’s name and linked to a website in Icelandic, which the machines probably couldn’t read and so deemed suspicious enough to hit the unsubscribe button.
So, to confirm my suspicions, I emailed the unsubscribers and asked them if they’d meant to unsubscribe. About two-thirds wrote back saying they had no idea they’d been unsubscribed and asked to be added back.
So I did. Hello again 👋
As I reflected, it got me thinking about the invisible hand of algorithms shaping our lives and making decisions on our behalf, especially as the rapid rise in generative AI promises an equally rapid increase in the algorithmic shaping of society.
Now, I’m no Luddite. This isn’t an anti-algorithm screed. On the contrary, they’re essential to our modern world. Without spam filtering algorithms, email would be unusable. Without algorithmic filtering, social media would be an even worse cesspit than it already is. Without ride-matching algorithms, nobody could take an Uber. Without recommendation algorithms, Netflix might improve discoverability (I kid, I kid).
But there’s also a dark side to all of this. When algorithms invisibly make decisions on our behalf, they might not be making the decisions we ourselves would choose. Worse, they might not be making the decisions their creators intended either. Because algorithms are often designed to learn and optimize for certain characteristics, they tend to drift over time. For example, the Facebook recommendation algorithm was never intended to push people toward extremism. Still, it does so because by optimizing for engagement, it learned that extremism causes certain groups to be very active on the platform, thus increasing the opportunities to serve them ads.
Equally, we often hear about the horrors of algorithmically mediated recruitment, where the algorithms filtering job applications consistently eliminate great applicants before any human gets to see their resume. Or scheduling algorithms that push part-time retail and fast-food employees into inhumane work schedules. Or sentencing algorithms in the criminal justice system that are demonstrably racist.
I once attended a conference where some people in a breakout session advocated the auditing of algorithms. Their case being that, in the same way we audit corporations to prove their accounting is accurate and true, we should require outside auditors to check algorithms to confirm they’re doing what they’re supposed to be doing and aren’t causing any harm.
After last week, it made me think there needs to be a reckoning about this stuff. It’s bigger than whether the tech monopolists are too big or too censorious; it’s fundamentally a question of human agency: how we know which decisions we’re ceding to algorithms, and how we then ensure those algorithms are working in our best interests rather than invisibly against them. So, not a very big challenge then. Haha.
Oh, and if I accidentally re-subscribed you and you wanted to stay unsubscribed, I’m sorry. Please hit the button below, and I’ll leave you be.