Honey, I joined a cabal
Or: Mainstream media still isn't great at covering tech and AI ideologies
A couple of months ago, I saw a tweet about a writing conference called LessOnline. I thought it sounded interesting - I’m known to do a bit of writing - but I wasn't entirely convinced. That said, I was intrigued by their list of "writings we love":
I wasn't familiar with most of these names, but I am unabashedly a fan of Bits about Money, Dan Luu, Money Stuff, Paul Graham, and Wait But Why. I figured I could take away a thing or two from the other authors listed at the conference (and as a bonus, the venue was also in Berkeley). So I went, expecting to meet some new people and learn some new things.
What I didn't expect was that I would be reading about the event two weeks later in the Guardian:
Multiple events hosted at a historic former hotel in Berkeley, California, have brought together people from intellectual movements popular at the highest levels in Silicon Valley while platforming prominent people linked to scientific racism, the Guardian reveals.
But because of alleged financial ties between the non-profit that owns the building – Lightcone Infrastructure (Lightcone) – and jailed crypto mogul Sam Bankman-Fried, the administrators of FTX, Bankman-Fried’s failed crypto exchange, are demanding the return of almost $5m that new court filings allege were used to bankroll the purchase of the property.
During the last year, Lightcone and its director, Oliver Habryka, have made the $20m Lighthaven Campus available for conferences and workshops associated with the “longtermism”, “rationalism” and “effective altruism” (EA) communities, all of which often see empowering the tech sector, its elites and its beliefs as crucial to human survival in the far future.
At these events, movement influencers rub shoulders with startup founders and tech-funded San Francisco politicians – as well as people linked to eugenics and scientific racism.
I originally planned to write a piece solely recapping my experience at LessOnline, but in the face of the Guardian article, it feels worth noting how broad a brush the media uses on different technology and AI movements - and how it does the public a disservice in the process.
My life among the Rationalists
In hindsight, I was probably a little bit of an outsider at the conference. I don't identify as a Rationalist, and I'm only vaguely aware of the LessWrong community. As such, take my observations for what they are - observations. I can't speak to the other events held at Lighthaven, like Manifest, but I can speak to my time amongst the Rationalists, Effective Altruists, and AI Doomers.
The venue itself was gorgeous. From the sidewalk, it doesn't appear to be anything more than a rundown hotel - in fact, I've lived in the area for years and always assumed that's what it was. But behind the wood fences was a pretty incredible space with lots of nooks and crannies, snack bars, and whiteboards - just an overall inviting place.
Given that it was an unconference, the attendees proposed and scheduled workshops themselves - I even ended up running one about my journey to 10K Substack subscribers. Going into the event, I was expecting a roughly 50/50 split between writing-oriented and Rationalism-oriented workshops. In practice, it felt more like 20% writing, 20% Rationalism, 20% AI Safety, and 40% Other.
The "Other" workshops were quite diverse. Some examples of workshop titles not directly related to writing or rationalism:
Conflict improv
Find your life partner
The 2nd worst Star Wars film
Blood on the Clocktower
Venture capital crash course
Association for the advancement of fairytale creatures LARP
Superbabies: paths, dangers, strategies
Game design & interactive narratives
Nonprofit accounting & finance
Taking 30 pills a day to live forever
But far more interesting than the workshops were the conversations I had and the people I met. Some were writing-related - I briefly discussed my writing workflow with Eliezer Yudkowsky - but many of them weren't.
I struck up a conversation in a kitchenette about the information asymmetry between the clinical study data that's released publicly and what gets reported to the FDA. I talked with a public defender about how he's applying Rationalist approaches to working with clients. I chatted with others about the challenges of building an audience on the internet. I had many, many discussions with folks on AI Safety - some challenged me on my prerequisites for existential risk; others, like William Brewer, argued in favor of AI Safety investment even if you don't believe in a doomsday scenario.

And these are just the conversations where I know who the other person was! In the moment, I had no idea who most of these people were, what their background was, or whether they were considered "a big deal." We were discussing the ideas themselves, not the people behind them.
To be clear, I'm not saying there aren't bad actors or dangerous arguments - I don't know enough about the community to make that judgment call. But from what I saw, the people I met wanted to accept ideas on their merit, and that often means giving space to initially weird, repugnant, or potentially harmful ideas that might be outright rejected elsewhere1. And I was pretty struck by how white/male/nerdy2 the demographics were as a whole - while there were women and people of color, they were clearly in the minority3.
I don't know whether I inadvertently met any eugenicists or "scientific racists" at the conference - it's certainly possible. But if I did, does being there make me part of a shadowy cabal, linking tech billionaires with SF politicians and conservative reactionaries?
The media is (still) getting it wrong
Returning to the Guardian piece for a moment - it's actually about a new court filing alleging that Lightcone bought the property with FTX funds (a claim its director, Oliver Habryka, has denied, pointing to financial statements). But the facts are dressed up in a lot of insinuation about the kind of people who are wheeling and dealing with FTX's (alleged) money behind closed doors.
In doing so, the piece misses the mark with many of its representations. For example, it invokes the acronym TESCREAL: "an umbrella term for a cluster of movements including EA and rationalism that exercise broad influence in Silicon Valley, and have the ear of the likes of Sam Altman, Marc Andreessen and Elon Musk."
It's a strange framing that implies Altman, Andreessen, and Musk are on the same team as prominent members of these disparate movements. Eliezer Yudkowsky, for example, couldn't be more opposed to Andreessen on AI safety and regulation: Yudkowsky believes unchecked AI development will likely kill us all, while Andreessen wants zero regulation on commercializing AI research. They share plenty of underlying assumptions - like AI being a transformative technology - but grouping them together feels like saying Lakers players and Celtics players are on the same team because they agree on the rules of basketball.
As AI has gotten bigger, I've seen a lot of think pieces that attempt to paint a picture of the different factions. In the last year alone, Politico, Bloomberg, and The New Yorker have profiled Rationalists, Effective Altruists, and AI doomers - heck, even I’ve written about it. And though major outlets have leveled some fair critiques, they often miss some nuances and technical details.
The awkward thing is that I remember encountering the same dynamic three years ago - when the New York Times profiled Scott Alexander's blog, Slate Star Codex, as a hub for Rationalists (and occasionally neo-fascists):
Slate Star Codex was a window into the Silicon Valley psyche. There are good reasons to try and understand that psyche, because the decisions made by tech companies and the people who run them eventually affect millions.
...
Slate Star Codex carried an endorsement from Paul Graham, founder of Y Combinator. It was read by Patrick Collison, chief executive of Stripe, the billion-dollar start-up that emerged from the accelerator. Venture capitalists like Marc Andreessen and Ben Horowitz followed the blog on Twitter.
...
The voices also included white supremacists and neo-fascists. The only people who struggled to be heard, Dr. Friedman said, were “social justice warriors.” They were considered a threat to one of the core beliefs driving the discussion: free speech.
At the time, there were some good responses - one of them under the headline "Silicon Valley isn't full of fascists." From Yglesias's piece:

Social media incentivizes the wrong kind of reading. Today you read someone from a rival school of thought in order to find the paragraph or sentence that, when pulled out of context and paired with a witty Twitter quip, will garner you lots of little hearts. I'm as guilty of doing this as anyone. A lot of very smart people have poured a lot of time and energy into making you want to collect those little hearts.
That said, the way you learn things and get smarter is to read strong writers and try to understand what they’re saying — not by trying to pick it apart for clout or finding ways to caricature and snark about it. Instead, try to understand what it is the writer is saying and why people believe that.
Ultimately, the mainstream media's broad-brush portrayal of AI and tech ideologies does the public a disservice. By lumping together groups like rationalists, effective altruists, and AI safety advocates, and by insinuating shadowy connections between them and controversial figures, the media obscures the important differences and nuances between these movements.
As AI becomes an increasingly important part of our lives, it seems important for the public to be able to parse the different arguments and ideologies shaping its trajectory. By flattening the complex landscape of ideas into a simple narrative of tech bros and fascists, the media makes it harder for people to critically engage (and makes it less likely for people actually on the ground to share their stories with reporters, especially if they contain nuance).
If we want to have productive conversations about the future of AI and hold tech leaders accountable, we need journalism that faithfully represents the diversity of thought in this space - not just the most attention-grabbing storylines.
As an example, there were posters in the bathroom suggesting that regular loud humming through your nose could protect against the spread of COVID. I have no idea whether that's true - it sounds ridiculous at first blush - but people had at least gone to the trouble of digging up papers to defend the assertion.
I actually wanted to write "autism-coded" here, but I don’t want it to be taken pejoratively. The event certainly felt like the highest concentration of on-the-spectrum people I had been around in a while.
There was also a very surreal moment at the end of the event when one attendee - someone you wouldn't guess has a polarizing reputation on the internet - held a "sex and attraction" Q&A. The participants were deeply engaged and had smart questions about sex and data, but the optics of a single woman sitting elevated in front of a sea of (mostly) men left a lasting impression.