The website had a homely, almost slapdash design with a light blue banner and a strange name: Slate Star Codex.
It was nominally a blog, written by a Bay Area psychiatrist who called himself Scott Alexander (a near anagram of Slate Star Codex). It was also the epicenter of a community called the Rationalists, a group that aimed to reexamine the world through cold and careful thought.
In a style that was erudite, funny, strange and astoundingly verbose, the blog explored everything from science and medicine to philosophy and politics to the rise of artificial intelligence. It challenged popular ideas and upheld the right to discuss contentious issues. This might involve a new take on the genetics of depression or criticism of the #MeToo movement. As a result, the conversation that thrived at the end of each blog post — and spilled onto sister forums on the discussion site Reddit, spanning hundreds of thousands of people — attracted an unusually wide range of voices.
“It is the one place I know of online where you can have civil conversations among people with a wide range of views,” said David Friedman, an economist and legal scholar who was a regular part of the discussion. Commenters on the site, he noted, represented a wide cross section of views. “They range politically from communist to anarcho-capitalist, religiously from Catholic to atheist, and professionally from a literal rocket scientist to a literal plumber — both of whom are interesting people.”
The voices also included white supremacists and neo-fascists. The only people who struggled to be heard, Friedman said, were “social justice warriors.” They were considered a threat to one of the core beliefs driving the discussion: free speech.
As the national discourse melted down in 2020, as the presidential race gathered steam, the pandemic spread and protests mounted against police violence, many in the tech industry saw the attitudes fostered on Slate Star Codex as a better way forward. They deeply distrusted the mainstream media and generally preferred discussion to take place on their own terms, without scrutiny from the outside world. The ideas they floated among themselves were often controversial — connected to gender, race and inherent ability, for example — and voices who might push back were kept at bay.
Slate Star Codex was a window into the Silicon Valley psyche. And there are good reasons to try to understand that psyche, because the decisions made by tech companies and the people who run them eventually affect us all.
Silicon Valley, a community of iconoclasts, is struggling to decide what’s off limits for all of us.
At Twitter and Facebook, leaders were reluctant to remove words from their platforms — even when those words were untrue or could lead to violence. Some AI labs released products — including facial recognition systems, digital assistants and chatbots — even while knowing those products could be biased against women and people of color, and sometimes spew hateful speech.
Why hold anything back? That was often the answer a Rationalist would arrive at.
And perhaps the clearest and most influential place to watch that thinking unfold was on Alexander’s blog.
“It is no surprise that this has caught on among the tech industry. The tech industry loves disrupters and disruptive thought,” said Elizabeth Sandifer, a scholar who closely follows and documents the Rationalists. “But this can lead to real problems. The contrarian nature of these ideas makes them appealing to people who maybe don’t think enough about the consequences.”
The allure of the ideas within Silicon Valley is what made Alexander, who has also written under his given name, Scott Siskind, and his blog essential reading.
But in late June of last year, when I approached Siskind to discuss the blog, it vanished.
What the Rationalists Believe
The roots of Slate Star Codex trace back more than a decade to a polemicist and self-described AI researcher named Eliezer Yudkowsky, who believed that intelligent machines could end up destroying humankind. He was a driving force behind the rise of the Rationalists.
The Rationalists saw themselves as people who applied scientific thought to almost any topic. This often involved “Bayesian reasoning,” a way of using statistics and probability to inform beliefs.
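The article does not spell out the mechanics, but the Bayesian reasoning it refers to boils down to Bayes' theorem: update a prior belief in light of new evidence. A minimal sketch (the scenario and numbers below are illustrative, not from the article) shows a single update for a diagnostic test:

```python
# Illustrative sketch of a single Bayesian update (hypothetical numbers):
# a condition with 1% prevalence, a test with 90% sensitivity and a
# 9% false-positive rate.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of the hypothesis after
    observing the evidence, via Bayes' theorem."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

posterior = bayes_update(prior=0.01,
                         likelihood_if_true=0.90,
                         likelihood_if_false=0.09)
print(round(posterior, 3))  # a positive test raises a 1% belief to roughly 9%
```

The counterintuitive result — a positive test still leaves the condition unlikely — is the kind of base-rate reasoning the community prized.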
Because the Rationalists believed AI could end up destroying the world — a not entirely novel fear to anyone who has seen science fiction movies — they wanted to guard against it. Many worked for and donated money to the Machine Intelligence Research Institute, or MIRI, an organization created by Yudkowsky whose stated mission was “AI safety.”
But it was the other stuff that made the Rationalists feel like outliers. They were “easily persuaded by weird, contrarian things,” said Robin Hanson, a professor of economics at George Mason University who helped create the blogs that spawned the Rationalist movement. “Because they decided they were more rational than other people, they trusted their own internal judgment.”
Many Rationalists embraced “effective altruism,” an effort to remake charity by calculating how many people would benefit from a given donation. Some embraced the online writings of “neoreactionaries” like Curtis Yarvin, who held racist beliefs and denounced American democracy. They were mostly white men, but not entirely.
The community was organized and close-knit. Two Bay Area organizations ran seminars and high-school summer camps on the Rationalist way of thinking.
“The curriculum covers topics from causal modeling and probability to game theory and cognitive science,” read a website promising teenagers a summer of Rationalist learning. “How can we understand our own reasoning, behavior, and emotions? How can we think more clearly and better achieve our goals?”
The Rationalists held regular meetups around the world, from Silicon Valley to Amsterdam to Australia. Some lived in group houses. Some practiced polyamory.
“They are basically just hippies who talk a lot more about Bayes’ theorem than the original hippies,” said Scott Aaronson, a University of Texas professor who has stayed in one of the group houses.
For Kelsey Piper, who embraced these ideas in high school, around 2010, the movement was about learning “how to do good in a world that changes very rapidly.”
Yes, the community thought about AI, Piper said, but it also thought about reducing the price of health care and slowing the spread of disease.
Slate Star Codex, which sprang up in 2013, helped her develop a “calibrated trust” in the medical system. Many people she knew, she said, felt duped by psychiatrists, for example, who they felt weren’t clear about the costs and benefits of certain treatments.
That was not the Rationalist way.
“There is something really appealing about somebody explaining where a lot of those ideas are coming from and what a lot of the questions are,” she said.
‘People Inventing the Future’
Last June, as I was reporting on the Rationalists and Slate Star Codex, I called Sam Altman, chief executive of OpenAI, an artificial intelligence lab backed by a billion dollars from Microsoft. He was effusive in his praise of the blog.
It was, he said, essential reading among “the people inventing the future” in the tech industry.
Altman, who had risen to prominence as the president of the startup accelerator Y Combinator, moved on to other subjects before hanging up. But he called back. He wanted to talk about an essay that appeared on the blog in 2014.
The essay was a critique of what Siskind, writing as Scott Alexander, described as “the Blue Tribe.” In his telling, these were the people at the liberal end of the political spectrum whose characteristics included “supporting gay rights” and “getting conspicuously upset about sexists and bigots.”
But as the man behind Slate Star Codex saw it, there was one group the Blue Tribe could not tolerate: anyone who did not agree with the Blue Tribe. “Doesn’t sound quite so noble now, does it?” he wrote.
Altman thought the essay nailed a big problem: In the face of the “internet mob” that guarded against sexism and racism, entrepreneurs had less room to explore new ideas. Many of their ideas, such as intelligence augmentation and genetic engineering, ran afoul of the Blue Tribe.
Siskind was not a member of the Blue Tribe. He was not a voice from the conservative Red Tribe (“opposing gay marriage,” “getting conspicuously upset about terrorists and commies”). He identified with something called the Grey Tribe — as did many in Silicon Valley.
The Grey Tribe was characterized by libertarian beliefs, atheism, “vague annoyance that the question of gay rights even comes up” and “reading lots of blogs,” he wrote. Most significantly, it believed in absolute free speech.
The essay on these tribes, Altman told me, was an inflection point for Silicon Valley. “It was a moment that people talked about a lot, lot, lot,” he said.
He did not mention names. But Slate Star Codex carried an endorsement from Paul Graham, founder of Y Combinator. It was read by Patrick Collison, chief executive of Stripe, a startup that emerged from the accelerator. Venture capitalists like Marc Andreessen and Ben Horowitz followed the blog on Twitter.
And in some ways, two of the world’s most prominent AI labs — organizations that are tackling some of the tech industry’s most ambitious projects — grew out of the Rationalist movement.
In 2005, Peter Thiel, the co-founder of PayPal and an early investor in Facebook, befriended Yudkowsky and gave money to MIRI. In 2010, at Thiel’s San Francisco townhouse, Yudkowsky introduced him to a pair of young researchers named Shane Legg and Demis Hassabis. That fall, with an investment from Thiel’s firm, the two created an AI lab called DeepMind.
Like the Rationalists, they believed that AI could end up turning against humanity, and precisely because they held this belief, they felt they were among the only ones prepared to build it in a safe way.
In 2014, Google bought DeepMind for $650 million. The next year, Elon Musk — who also worried AI could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community.
Life in the Grey Tribe
Part of the appeal of Slate Star Codex, readers said, was Siskind’s willingness to step outside acceptable topics. But he wrote in a wordy, often roundabout way that left many wondering what he really believed.
Aaronson, the Texas professor, was turned off by the more rigid and contrarian beliefs of the Rationalists, but he remained one of the blog’s biggest champions and deeply admired its willingness to tackle live-wire topics.
“It must have taken incredible guts for Scott to express his thoughts, misgivings and questions about some major ideological pillars of the modern world so openly, even if protected by a quasi-pseudonym,” he said.
It was the protection of that “quasi-pseudonym” that rankled Siskind when I first got in touch with him. He declined to comment for this article.
As he explored science, philosophy and AI, he also argued that the media ignored that men were often harassed by women. He described some feminists as something close to Voldemort, the embodiment of evil in the Harry Potter books. He said that affirmative action was difficult to distinguish from “discriminating against white men.”
In one post, he aligned himself with Charles Murray, who proposed a link between race and IQ in “The Bell Curve.” In another, he pointed out that Murray believes Black people “are genetically less intelligent than white people.”
He denounced the neoreactionaries, the anti-democratic, often racist movement popularized by Curtis Yarvin. But he also gave them a platform. His “blog roll” — the blogs he endorsed — included the work of Nick Land, a British philosopher whose writings on race, genetics and intelligence have been embraced by white nationalists.
In 2017, Siskind published an essay titled “Gender Imbalances Are Mostly Not Due to Offensive Attitudes.” The main reason computer scientists, mathematicians and other groups were predominantly male was not that the industries were sexist, he argued, but that women were simply less interested in joining.
That week, a Google employee named James Damore wrote a memo arguing that the low number of women in technical positions at the company was a result of biological differences, not anything else — a memo he was later fired over. One Slate Star Codex reader on Reddit noted the similarities to the writing on the blog.
Siskind, posting as Scott Alexander, urged this reader to tone it down. “Huge respect for what you’re trying, but it’s pretty doomed,” he wrote. “If you actually go riding in on a white horse waving a paper marked ‘ANTI-DIVERSITY MANIFESTO,’ you’re just providing justification for the next round of purges.”
Who Needs a Safe Space?
In 2013, Thiel invested in a technology company founded by Yarvin. So did the venture capital firm Andreessen Horowitz, led in the investment by Balaji Srinivasan, who was then a general partner.
That year, when the tech news site TechCrunch published an article exploring the links between the neoreactionaries, the Rationalists and Silicon Valley, Yarvin and Srinivasan traded emails. Srinivasan said they could not let that kind of story gain traction. It was a preview of an attitude that I would see unfold when I approached Siskind in the summer of 2020. (Srinivasan could not be reached for comment.)
“If things get hot, it may be interesting to sic the Dark Enlightenment audience on a single vulnerable hostile reporter to dox them and turn them inside out with hostile reporting sent to *their* advertisers/friends/contacts,” Srinivasan said in an email viewed by The New York Times, using a term, “Dark Enlightenment,” that was synonymous with the neoreactionary movement.
But others, like Thiel, urged their colleagues to keep quiet, saying in emails that they were confident the press would stay away. They were right.
In late June of last year, not long after talking to Altman, the OpenAI chief executive, I approached the writer known as Scott Alexander, hoping to get his views on the Rationalist way and its effect on Silicon Valley. That was when the blog vanished.
The issue, it was clear to me, was that I told him I could not guarantee him the anonymity he’d been writing with. In fact, his real name, Scott Siskind, was easy to find because people had shared it online for years and he had used it on a piece he’d written for a scientific journal. I did a Google search for Scott Alexander and one of the first results I saw in the auto-complete list was Scott Alexander Siskind.
Siskind said in a late-night post on Slate Star Codex that he was going to remove his blog from the internet because The Times threatened to reveal his full name. He said this would endanger him and his patients because he had attracted many enemies online.
I woke up the next morning to a torrent of online abuse, as did my editor, who was named in the farewell note. My address and phone number were shared by the blog’s readers on Twitter. Protecting the identity of the man behind Slate Star Codex had turned into a cause among the Rationalists.
More than 7,500 people signed a petition urging The Times not to publish his name, including many prominent figures in the tech industry. Putting his full name in The Times, the petitioners said, “would meaningfully damage public discourse, by discouraging private citizens from sharing their thoughts in blog form.” On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.
Amid all this, I spoke with Manoel Horta Ribeiro, a computer science researcher who explores social networks at the Swiss Federal Institute of Technology in Lausanne. He was worried that Slate Star Codex, like other communities, was allowing extremist views to trickle into the tech world. “A community like this gives voice to fringe groups,” he said. “It gives a platform to people who hold more extreme views.”
But for Kelsey Piper and many others, the main issue came down to the name, and tying the man known professionally and legally as Scott Siskind to his influential, and controversial, writings as Scott Alexander. Piper, who is a journalist herself, said she did not agree with everything he had written, but she also felt his blog was unfairly painted as an on-ramp to radical views. She worried his views could not be reduced to a single newspaper article.
I assured her my goal was to report on the blog, and the Rationalists, with rigor and fairness. But she felt that discussing both critics and supporters could be unfair. What I needed to do, she said, was somehow prove statistically which side was right.
When I asked Altman, of OpenAI, if the conversation on sites like Slate Star Codex could push people toward toxic beliefs, he said he held “some empathy” for these concerns. But, he added, “people need a forum to debate ideas.”
In August, Siskind restored his old blog posts to the internet. And two weeks ago, he relaunched his blog on Substack, a company with ties to both Andreessen Horowitz and Y Combinator. He gave the blog a new title: Astral Codex Ten. He hinted that Substack paid him $250,000 for a year on the platform. And he indicated the company would give him all the protection he needed.
In his first post, Siskind shared his full name.