The earthquake of the latest version of AI alarms experts: “The world is in danger”

“We are facing something much, much bigger than covid,” Matt Shumer warned this week. This prestigious programmer who works with artificial intelligence (AI) has just published an essay titled “Something big is happening,” in which he warns of the threat that the new AI models pose to millions of white-collar jobs around the world. His essay has gone viral, with more than 80 million views since Tuesday.

“The reason why so many people in the sector are sounding the alarm right now is because this has already happened to us,” explains Shumer, who recounts how AI companies are laying off computer scientists and developers because the tools they created are already programming themselves to become smarter. “We are not making predictions. We are telling them what has already happened in our own jobs, and we are warning them that they are next,” he points out.

Shumer’s article coincides with a week of turmoil on Wall Street. Investors have punished the companies that will be most affected by the emergence of this technology. Software companies, video game makers and computer developers have suffered a sharp correction on the stock market as word has spread of the capabilities of the new AI models and the risk they pose to millions of jobs. Experts assert that a child will soon be able to give the instructions to create a custom video game, and programs created with natural language by people with limited computer skills are proliferating.

But investors also see how automation is ready to jump into other, less obvious sectors such as logistics, insurance and consulting. With a couple of commands you can create a tax planning program or a customer service bot that surpasses human interaction.

“The rapid progress of AI tools fuels widespread fear of disruption in the industries most exposed to the spread of this technology within the knowledge economy, particularly in non-capital-intensive, software-centric business models,” explains Swiss investment bank UBS. “Investors’ concerns about the disruptive impact of AI continue to weigh on US stocks, from insurance brokers and real estate services to logistics.” The bank nevertheless adopts an optimistic tone for investors: “While the overall impact on these industries and individual companies remains to be seen, we consider [this process] a validation of the monetization potential of AI. The advances underscore its transformative nature.”

“This is different from all previous waves of automation, and I need you to understand why,” Shumer continues in a disturbing account that has found an echo among several executives in the sector. “AI doesn’t replace a specific skill. It’s a general substitute for cognitive work. It improves at everything simultaneously. When factories became automated, a laid-off worker could retrain for clerical work. When the internet burst into retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to fill. Whatever the target of retraining, it’s getting better at that, too,” he adds.

The voices of alarm grow louder as the big technology firms double down on their bets on this disruptive technology. During 2026 alone, the four global technology giants, Alphabet, Amazon, Meta and Microsoft, plan to invest more than $650 billion in AI. It is the largest amount ever invested in a single year in any technological development: not even the expansion of the railroads at the end of the nineteenth century, NASA’s programs to conquer space, or the dot-com bubble of the early twenty-first century consumed so many resources in so little time.

These technological colossi, which manage budgets bigger than those of some nations, have launched into a frantic race to develop AI. They need to train their computer models with thousands of computers equipped with the latest generation of microprocessors. They bring them together in gigantic buildings, data centers, with hundreds of servers so that the system keeps learning. And they require dedicated power plants to meet their enormous consumption.

Shumer paints a striking picture. He explains how, in recent years, improvements in the cognitive models created by algorithms have progressed exponentially. But the latest versions from OpenAI, creator of the popular ChatGPT, or Anthropic, which develops the Claude model, “are not gradual improvements. It is something completely different,” he warns.

“AI is not a substitute for specific human jobs, but rather a general job substitute for humans,” says Dario Amodei, CEO of Anthropic, the company founded by former OpenAI researchers. Amodei published a disturbing essay a few weeks ago, The adolescence of technology: How to face and overcome the dangers of powerful AI, about the risks posed by a technology of this caliber, or general AI (AGI), one that will be able to think for itself. This executive estimates that half of all white-collar jobs in the world will disappear within one to five years. After analyzing the implications of this revolution in the essay, he concludes: “The short-term shock will be of unprecedented magnitude.”

This week the company reached a valuation of $380 billion, after its latest financing round, in which it raised $30 billion. Anthropic has positioned itself as one of the technology companies most concerned with safety. It asserts that its model is trained following ethical principles to avoid manipulation and deception.

This week it also announced the creation of a super PAC, a political fundraising vehicle, endowed with $20 million, to promote transparency and safety in artificial intelligence models. The committee, Public First, seeks to press legislators to establish regulation and limits on AI that prevent abuses. In reality, its strategy runs counter to that of its rival OpenAI, which uses more aggressive tactics.

Shumer’s essay also coincides with the resignation of two executives from OpenAI and Anthropic, who warn of the depth of the changes coming to the world, and not only in the workplace. “The world is in danger. And not just from AI or bioweapons, but from a whole series of interconnected crises unfolding at this very moment,” wrote Mrinank Sharma, an AI safety researcher who left Anthropic to move to the United Kingdom to write poetry and “turn invisible.”

Sharma had worked in an area dedicated to ensuring the safety of AI, combating the risks of AI-assisted bioterrorism and studying “how AI assistants could make us less human.” And he says he leaves with a certain feeling of resignation.

On Wednesday, Zoe Hitzig, a researcher at OpenAI, the creator of the popular ChatGPT, published an article in The New York Times voicing her doubts about the new practice of AI companies selling advertising. Hitzig, who holds a PhD in economics from Harvard, wrote: “I have serious reservations about OpenAI’s strategy.” The next day she submitted her resignation. The article explains how many people use AI tools as therapists, to confess their emotions or simply to talk. The system gains an advantage when it serves advertising, and Hitzig sees ethical problems in that.

There are also security risks. Amodei gives the example of a new country made up of the 50 million most brilliant minds in the world. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, conduct experiments, and operate anything with a digital interface. The expert warns that it could represent “the most serious threat to national security that we have faced in a century, possibly ever.”


https://elpais.com/economia/2026-02-14/el-terremoto-de-la-ultima-version-de-la-ia-alarma-a-los-expertos-el-mundo-esta-en-peligro.html