Hi,
So here are some things I thought about over the past few weeks.
Havens of light at the coming of shadows (or, notes from LessOnline and Manifest Conferences in Berkeley, CA)
I spent two weekends earlier this month at the LessOnline and Manifest Conferences in Berkeley, California. (About which, previously.) I came away refreshed, but far more uncertain about the future ahead. I’m not the first to write an essay about the experience; given that part of the conferences was about celebrating great writing on the Internet, let’s start by doing just that:
The secret behind Manifest’s unparalleled social atmosphere is the people it attracted. Every single person I talked to, without exception, was both smart and interesting. I could walk up to people I had never met before and instantly insert myself into a conversation on ancient Greek military history, prediction markets applied to romance, or whether AI will end the world. I could look across the room, realize “oh wow that’s so-and-so from Twitter!”, introduce myself, and immediately become friends with them…
…The venue fit the occasion perfectly. Lighthaven is a complex of buildings on the site of the now-defunct Rose Garden Inn in Berkeley. When it’s not being used as an event space, it’s the working headquarters of Lightcone Infrastructure and home to many rationalists - a mix of village, hacker house, WeWork, and resort. Lighthaven has six buildings, each with their own unique character - from the wide open, more modern Aumann Hall to the darker, Gothic Bayes Hall. It’s absolutely bursting with places to sit and gather - giant indoor and outdoor sitting areas, more intimate upstairs salons, porches, roof decks, outdoor gazebos, an amphitheater, and even a small geodesic dome. Most rooms are tastefully appointed with soft carpets, low-to-the-ground seating, incredibly well-selected books, and ample natural lighting. Everything has variety, even the green spaces. The huge astroturfed green of Rat Park, meant for large gatherings, contrasts beautifully with the more contemplative Walled Garden, with its trees, flowers, and places to read, do work, or nap.
I had a great time at LessOnline. It was both a working trip and a trip to an alternate universe, a road not taken, a vision of a different life where you get up and start the day in dialogue with Agnes Callard and Aristotle and, in a strange combination of relaxed and frantic, go from conversation to conversation on various topics, every hour passing doors of missed opportunity, gone forever…
The Manifest conference has been a successful experiment: put enough introverts with common interests into a confined space and they’ll spontaneously turn into extroverts.
During my recent trip to the Bay Area, I met lots of people who are involved in the field of AI. My general impression is that this region has more smart people than anywhere else, at least per capita. And not just fairly smart, I’m talking about extremely high IQ individuals. I don’t claim to have met a representative cross section of AI people, however, so take the following with a grain of salt.
If you spend a fair bit of time surrounded by people in this sector, you begin to think that San Francisco is the only city that matters; everywhere else is just a backwater. There’s a sense that the world we live in today will soon come to an end, replaced by either a better world or human extinction. It’s the Bay Area’s world, we just live in it.
In other words, I don’t know if the world is going to end, but it seems as though this world is coming to an end….
…All kidding aside, do you think the average person living near Oak Ridge or Alamogordo back in 1945 had any idea what the nearby eggheads were about to cook up?
So, what did I think of all this?
The conferences were celebrations of what I love: predicting the future and nerding out about ideas. I got to spend time talking about the ideas that matter to me with some of my intellectual heroes. Pretty cool.
The venue for the conferences, Lighthaven, is a place of utter happiness and safety. It has that calm, glowing, oasis-like feeling that I recall from the Jonathan Edwards Courtyard on a late night in college — a place where you feel like you have space, and time, and safety. A place where hackers, academics, polymaths, additional polymaths, further polymaths, seriously did we mention the polymaths, and the genius loci of sex, love, and relationship polling datasets (semi-NSFW) will talk, and argue, and pun, and play with ideas. A place where a man in a shiny gold hat will tell you that you're doomed. A place where there’s a puzzlemaster who begins her one-hour costume contest1 by saying, “Let’s review items that we all need in our costume party go-bags.” A place where an old friend will run up to you and ask, “Dave Kasten, do you want to play a social deception game?” with such excitement that you instinctively say “yes!”2 A place where, if you wait long enough, everyone you care for in the universe might eventually wander by. A place where someone starts a prediction market to guess whether the late-night fireside singing circle lasts past 3am, and the circle lasts almost until dawn…
In short, a good week, a week of peace.
But. Well.
There’s a feeling I couldn’t quite shake, and I think some others (like Scott Sumner, above) couldn’t either: that we’re attending the conference in a time of the Coming of Shadows, where powers beyond our ken begin to gather, and everything’s about to change. The twilight, you might suspect without certainty, of the last great age of mankind.
That same week, Leopold Aschenbrenner released his essay, “Situational Awareness,” which takes seriously the question of what happens if the current Moore’s Law-like scaling curves for AI just keep going. Aschenbrenner begins his essay like this:
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
In Aschenbrenner’s telling, we’re headed for a super-exponential world where human history basically just ends sometime between 2027 and 2030, as our already-annoyingly-smart human AI researchers are replaced by clouds of automated AI researchers that blow past all barriers of human knowledge and rapidly, recursively increase the capabilities of their own AI, while the US and the PRC frantically try to hang on to power and, just possibly, get into a shooting war. If we’re really lucky, the insights those AIs bring will unlock prosperity beyond whatever you could imagine — limitless power, scientific innovation, flying cars, immortality, you name it. If we’re really unlucky, we’ve built an alien god that doesn’t care for human values, at best takes control, and at worst casually wipes us all out.
To be clear, many who fear AI could cause human extinction doubt Aschenbrenner’s specific scenario — many were scathingly critical of its straightforward “line goes up” logic — but almost everyone took it seriously, as one carefully-lit torch casting shadows on the wall.
And so now we might be at a moment in humanity’s story where — at least in the eyes of many passing through Lighthaven these past few weeks — the plot arc’s accelerated3, the crises are underway, and no one is quite sure who their friends or enemies are any longer. A moment where old friends’ conversations are strained — some hold three-hour shouting debates outside over a picnic table, others just stare blankly across the room at people they know so well from decade-long fights on these issues. And a lot of people are scared.
As one attendee working in an AI lab told me about another attendee working on AI safety: “Well, we’re friends, but it’s hard. Because…he thinks I’m going to destroy the world.”
Let’s take a moment here and recognize the obvious, dear and gentle reader.
I’ll admit that I’m terrified that I’m writing something very silly here.
We could just be wrong, and AI could pose no risk of killing us all. Your Humble Correspondent personally could be overlearning the lessons of COVID, where reading a few smart non-consensus folks talk about exponential curves got me ready for lockdown as of early February, and failing to realize that AI isn’t that. Heck, it’s conceivable that AI could be as disruptive as the invention of the Internet and still pose literally zero risk of killing us all. Two of the smartest people I know place dueling odds of “60%” and “rounds to zero”, respectively, on AI causing human extinction; my own odds are so uncertain that I quote you “9% to 49%” (note the range wide enough to drive a carrier battle group through) on the same question.
Richard Ngo (OpenAI):
One of the rarest things in the world right now is the ability to take superintelligence seriously without your worldview ossifying. It feels like walking a tightrope where any direction you could fall is a different type of crazy.
You might, in fact, get a chance to mock me in a few years’ time for thinking this was such a big concern, and speak of AI hype in the same dismissive tones I currently use to speak of NFTs. Or even if I’m right that AI is transformative, maybe Nora Belrose and Quintin Pope are just right, and AI will be easy to keep under control. The shadows I fear gathering around me could just be a fog of bullshit.
But…I just don’t think I’m wrong; and even if I am, I think the odds are reasonable enough, and the stakes high enough, that we should act.
But set the expected-value calculation aside, and let me have the guts to talk about the main chance. I don’t know what change is coming, but it doesn’t seem to be slowing down…
In my lighter moments, I think there’s a real chance AI fundamentally transforms and improves our lives, providing technological and social transformation beyond our imaginings, and we all get to live to our 1000th birthday parties. (Mine will be on the Rings of Saturn; detailed invitation with orbital parameters to follow 960 years hence.)
But in many other moments, I think about how Aschenbrenner ends his essay (which rhymes with our previous discussions):
But the scariest realization is that there is no crack team coming to handle this. As a kid you have this glorified view of the world, that when things get real there are the heroic scientists, the uber-competent military men, the calm leaders who are on it, who will save the day. It is not so. The world is incredibly small; when the facade comes off, it’s usually just a few folks behind the scenes who are the live players, who are desperately trying to keep things from falling apart.
Right now, there are perhaps a few hundred people in the world who realize what’s about to hit us, who understand just how crazy things are about to get, who have situational awareness. I probably either personally know or am one degree of separation from everyone who could plausibly run The Project. The few folks behind the scenes who are desperately trying to keep things from falling apart are you and your buddies and their buddies. That’s it. That’s all there is.
I’d frame it somewhat differently — I’d say that the crack teams are all currently 150% utilized dealing with problems listed on the front pages of the Washington Post and the President’s Daily Brief, and anything new has to come out of hide, somehow.
But either way, there is no superhero surplus. The entire world’s fate might just rest on a few people in my hometown, and a few people in the Bay Area, and a few scattered elsewhere on the globe. A few people trying to make sure that we win the cool future with flying cars, not the disastrous futures with nuclear war or human extinction. A few people writing policy papers, and trying to get them into the right hands at the critical moments. A loose confederation of friends and allies emerging during some fireside chats at a conference in Berkeley, California, perhaps.
A few people trying to beat back the uncertain shadows, and step into the light.
Disclosures:
Views are my own and do not represent those of current or former clients, employers, friends, the Guardian (UK), or prediction market counterparties.
1. Your Humble Correspondent got 2nd place.
2. Said game was roughly Werewolf or Mafia, with additional rules. You know that trope where the hero is a genius and just Sherlocks their way through the problem with impeccable logic, living in each possible world simultaneously and collecting evidence and running gambits until they cross all but one possibility off their list, despite every distraction thrown their way, and solve the murder? Yeah, the good guys did that to my team of secret bad guys; we never even had a chance.
3. Indeed, Babylon 5’s “The Coming of Shadows” may be the original, prototypical example of a showrunner using the phrase “accelerating the arc” in serialized television.