If you have a chance, I highly recommend reading Machines of Loving Grace by Dario Amodei, CEO of Anthropic and creator of Claude: https://darioamodei.com/machines-of-loving-grace.
It is actually very interesting… and very disturbing.
Machines of Loving Grace (click for the poem)
Amodei claims we’re very close to reaching extremely advanced AI, perhaps by 2026. This lines up with Sam Altman’s forecast of superintelligence within the next 1,000 days: https://ia.samaltman.com/ . Note that both avoid using the term AGI (Artificial General Intelligence).
Of course, OpenAI and Anthropic are actively fundraising, so it's part of their job as founders to sell their companies and tell the world how amazing their work is.
Still, Amodei has always been known for his conservative position on AI—his vision for Anthropic has focused on keeping AI manageable, understandable, and reliable. This essay, though, hints at a much faster, more powerful transformation than he has suggested in the past.
What I really liked about the essay:
The images he paints are powerful and help you grasp what it will be like: “Getting 100 years of progress compressed into 5-10 years”, “Building a country with millions of ‘people’ with Nobel Prize-level intelligence (in all disciplines)”.
His perspective really shifts the focus away from pure intelligence, which will soon be abundant, and toward the other real limits we still face: physical resources, experimental timelines (he uses aging research as an example), and the laws of physics themselves.
The sections about biology and neuroscience are very well developed. He clearly has deep expertise in these fields.
For the rest of the essay, and especially the parts about government, politics, and democracy, I am not so optimistic.
The assumption that progress is always good seems, to me, naive. Looking back at history—the Roman Empire, European colonialism, slavery, the Native American genocide, and the Third Reich—when one nation holds a technological advantage, it often starts by using it to destroy its rivals.
In more recent (tech) history, social media has already shown us how technological advancements are exploited by bad actors. Platforms like Facebook (now Meta) have been implicated in scandals such as Cambridge Analytica, which used data to manipulate electoral outcomes. Twitter has repeatedly been criticized for enabling and amplifying divisive figures (starting with Donald Trump) and for spreading misinformation, contributing to political polarization. Meanwhile, TikTok and Instagram face ongoing lawsuits concerning privacy breaches, discrimination, and harmful impacts on mental health.
Why should we expect the rise of AI to be different?
What will become of humans… and God?
As for the closing chapter: what will humans do when they are no longer needed in most of the production process, or in progress itself? I frankly don’t know.
A few AI pioneers come to mind here:
“If a machine can think, it might think more intelligently than we do, and then where should we be?” — Alan Turing
“I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines.” — Claude Shannon
So, we should start by treating our dogs kindly. Maybe the future for humans looks more like that of dogs—or worse, like cattle or horses. At best? Cats!
Last but not least, there is no mention of what would become of God if a superior AI intelligence were brought to “life”. All religions consider human life unique and sacred. How will they adapt to a world run by AIs?
Now, I do have one more question, and I frankly don’t know whether it is naive or an “elephant in the room” question.
Why do VCs invest in OpenAI and Anthropic if the end game is to make intelligence a commodity?
If powerful AI brings infinite intelligence, the value of intelligence would automatically drop to zero (supply and demand).
Companies like Anthropic and OpenAI are nothing but “intelligence”: engineers with superior abilities, patents, know-how… So why would an investor back companies whose ultimate goal is to disrupt, even destroy, the value of the very IP and intelligence they are supposed to create?
Are they driven by pure philanthropy and the belief in a better world, or do they expect some kind of massive, shared prosperity?
I’d really love to see what that VC pitch looks like.
More on the same topic/article:
https://www.lesswrong.com/posts/oJQnRDbgSS8i6DwNu/the-agi-entente-delusion
@dominiq