“It shows less intelligence than a cat,” says Turing Award winner Yann Le Cun, chief AI scientist at Meta | EUROtoday
It is a parade paying homage to a scene from the movie I, Robot, made 20 years ago by the Australian Alex Proyas. On October 11, Elon Musk took advantage of the presentation of Robovan, an autonomous shuttle prototype from Tesla, to give an overview of the latest version of Optimus, his humanoid robot. The latter, explains the jack-of-all-trades entrepreneur, will be able to carry out everyday human tasks, such as delivering a package, watering your plants or even walking your dog.
In the “very long term”, everyone will even be able to afford this helpful robot, for between $20,000 and $30,000. Humanoids, which are in permanent quasi-imbalance, move on two well-articulated legs. “It’s a fine example of balance which demonstrates beautiful mechanics, but one already achieved by Japan’s Honda with its Asimo robot 20 years ago,” says Ludovic Righetti, a specialist in robot locomotion, professor at New York University and holder of an international chair at the ANITI artificial intelligence institute in Toulouse. Indeed, the movement is fluid and regular, but it is still a bit slow compared with the humanoid robots developed by the American manufacturer Boston Dynamics or by China’s Unitree.
However, even more than its ability to move, it is this robot’s ability to converse that impresses. In a video of the event, we see Optimus explaining to a Californian that he lives in Palo Alto, the capital of Silicon Valley: “This is where we are trained, where we get our bills, and where we get to work with exceptional people.” And when his interlocutor (in the flesh) asks him what the hardest part of being a robot is, he replies: “Trying to be more human.”
“Less intelligence than a cat”
However, there is reason to doubt the spontaneous nature of this exchange. “This conversation is smoke and mirrors. For me, the robots are tele-operated remotely; the only feat is electromechanical,” Yann Le Cun, chief AI scientist at Meta, Turing Award winner (the equivalent of the Nobel Prize in computer science) and professor at New York University, explains to Le Point, warning against any anthropomorphism. “It is important not to confuse rote learning with understanding, or previously accumulated knowledge with intelligence.”
For him, the development of general artificial intelligence, that is to say AI capable of equaling humans in all of their cognitive capacities, will take several decades. “Today’s models are trained to predict the next word in a text. But that makes them so good at manipulating language that they deceive us. And because of their enormous memory capacity, they can appear to reason, when in fact they are just regurgitating information they have already been trained on.” The result: “The machine, today, shows less intelligence than a cat. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability, and planning ability,” he recently declared to the Wall Street Journal.
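To make the objective Le Cun describes concrete, here is a minimal sketch, not anything from Meta or Tesla, of next-word prediction reduced to its simplest form: a toy model that merely counts which word follows which in its training text. All names in it (corpus, successors, predict_next) are illustrative. Real language models do the same job with neural networks over trillions of words, which is why sheer statistics can pass for understanding.

```python
# Toy illustration (standard library only) of next-word prediction:
# the model memorizes which word tends to follow which in its training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran after the mouse".split()

# Count how often each word follows each other word.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the successor of `word` seen most often during training."""
    seen = successors.get(word)
    return seen.most_common(1)[0][0] if seen else None

print(predict_next("the"))  # -> "cat": regurgitated statistics, not understanding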
Ludovic Righetti also warns against overblown communication: “The only video sequence [in a single shot] that lasts more than a few seconds is slow walking on flat ground. All other sequences are 2-3 seconds long before the point of view changes. As a roboticist, if I don’t see a single shot that captures an entire task, it usually suggests that the robot is not performing the task well; in fact, we always demand uncut footage in scientific papers. Of course, what we’re talking about here is communication and not science, but this type of video suggests that all the other tasks the robot is performing aren’t working as well as the video is trying to suggest.”
Like children
How can progress be made from here? “The idea,” continues Yann Le Cun, “is to create models that learn in a way analogous to that of a baby animal, by constructing a model of the world from the visual information it absorbs.” This ability of the machine to understand its environment could come from connected glasses capable of capturing images of their surroundings. It will then remain to interpret them.
“One of the big challenges is to establish links between the images that a robot can perceive and the information that it can associate with them. This will allow it to represent part of its environment,” Joëlle Pineau, professor at McGill University and also head of artificial intelligence research at Meta, explains to Le Point. She continues: “It’s promising, but for the moment we lack data.”
The challenge is therefore to allow machines to learn to build mental models of their environment through observation, in the manner of children and animals. This is one of the many conditions to be met before Optimus and the other robots can joke aptly and in context, as naturally as a human.
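As a purely illustrative sketch, not Meta’s actual models, whose architectures the article does not describe, learning by observation can be boiled down to a self-supervised objective: watch the world, predict what it will look like next, and adjust the model to shrink the prediction error, with no human labels involved. The dynamics function and dimensions below are hypothetical stand-ins.

```python
# Hypothetical toy world model (illustration only): it learns to predict the
# next observation from the current one, supervised by nothing but its own
# prediction error, the way a child learns by watching.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))  # toy linear world model: s_t -> s_{t+1}

# Stand-in for the real world's dynamics, which the learner can only observe.
A = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.0, 0.9, 0.1, 0.0],
              [0.0, 0.0, 0.9, 0.1],
              [0.1, 0.0, 0.0, 0.9]])

def world(s):
    return np.tanh(s @ A)

lr = 0.05
for _ in range(5000):
    s = rng.normal(size=4)                 # current observation (e.g. image features)
    target = world(s)                      # what the world actually does next
    pred = s @ W                           # the model's guess of the next observation
    W -= lr * np.outer(s, pred - target)   # gradient step on 0.5*||pred - target||^2

s = rng.normal(size=4)
print("model:", np.round(s @ W, 2), "| world:", np.round(world(s), 2))
```

After training, the model’s predictions track the observed dynamics reasonably well, despite never having been given a single label: the supervision signal comes entirely from comparing its guesses with what it subsequently observes.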
https://www.lepoint.fr/science/les-performances-d-optimus-le-nouveau-robot-d-elon-musk-relativisees-par-plusieurs-scientifiques-14-10-2024-2572716_25.php