Inside the AI Party on the End of the World | EUROtoday
In a $30 million mansion perched on a cliff overlooking the Golden Gate Bridge, a group of AI researchers, philosophers, and technologists gathered to debate the end of humanity.
The Sunday afternoon symposium, called “Worthy Successor,” revolved around a provocative idea from entrepreneur Daniel Faggella: the “moral aim” of advanced AI should be to create a form of intelligence so powerful and wise that “you would gladly prefer that it (not humanity) determine the future path of life itself.”
Faggella made the theme clear in his invitation. “This event is very much focused on posthuman transition,” he wrote to me via X DMs. “Not on AGI that eternally serves as a tool for humanity.”
A party filled with futuristic fantasies, where attendees discuss the end of humanity as a logistics problem rather than a metaphorical one, could be described as niche. But if you live in San Francisco and work in AI, this is a typical Sunday.
About 100 guests nursed nonalcoholic cocktails and nibbled on cheese plates near floor-to-ceiling windows facing the Pacific Ocean before gathering to hear three talks on the future of intelligence. One attendee sported a shirt that said “Kurzweil was right,” seemingly a reference to Ray Kurzweil, the futurist who predicted machines will surpass human intelligence in the coming years. Another wore a shirt that said “does this help us get to safe AGI?” accompanied by a thinking face emoji.
Faggella told WIRED that he threw this event because “the big labs, the people that know that AGI is likely to end humanity, don’t talk about it because the incentives don’t permit it” and referenced early comments from tech leaders like Elon Musk, Sam Altman, and Demis Hassabis, who “were all pretty frank about the possibility of AGI killing us all.” Now that the incentives are to compete, he says, “they’re all racing full bore to build it.” (To be fair, Musk still talks about the risks associated with advanced AI, though this hasn’t stopped him from racing ahead.)
On LinkedIn, Faggella boasted a star-studded guest list, with AI founders, researchers from all the top Western AI labs, and “most of the important philosophical thinkers on AGI.”
The first speaker, Ginevera Davis, a writer based in New York, warned that human values might be impossible to translate to AI. Machines may never understand what it’s like to be conscious, she said, and trying to hard-code human preferences into future systems may be shortsighted. Instead, she proposed a lofty-sounding idea called “cosmic alignment”: building AI that can seek out deeper, more universal values we haven’t yet discovered. Her slides often showed a seemingly AI-generated image of a techno-utopia, with a group of humans gathered on a grassy knoll overlooking a futuristic city in the distance.
Critics of machine consciousness will say that large language models are merely stochastic parrots, a metaphor coined by a group of researchers, some of whom worked at Google, who wrote in a famous paper that LLMs don’t actually understand language and are only probabilistic machines. But that debate wasn’t part of the symposium, where speakers took as a given the idea that superintelligence is coming, and fast.
https://www.wired.com/story/ai-risk-party-san-francisco/