Anthropic CEO claims AI models hallucinate less than humans


Anthropic CEO Dario Amodei believes today's AI models hallucinate less than humans do, a claim he made during a press briefing at Anthropic's developer event, Code with Claude, in San Francisco on Thursday.

Amodei made the remark in the midst of a larger point: that hallucinations are not a limitation on Anthropic's path to AGI — AI systems with human-level intelligence or better.

“It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans,” Amodei said, responding to a question from TechCrunch.

Anthropic's CEO is one of the industry leaders most bullish on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday's press briefing, he said he was seeing steady progress toward that end, noting that “the water is rising everywhere.”

“Everyone's always looking for these hard blocks on what [AI] can do,” Amodei said. “They're nowhere to be seen. There's no such thing.”

Other AI leaders believe hallucination presents a major obstacle on the path to AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today's AI models have too many “holes” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after using Claude to create citations in a court filing; the AI chatbot hallucinated and got names and titles wrong.

It is difficult to verify Amodei's claim, largely because most hallucination benchmarks pit AI models against each other; they do not compare models to humans. Certain techniques appear to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI's GPT-4.5, show notably lower hallucination rates on benchmarks compared with earlier generations of systems.

However, there is also evidence suggesting that hallucinations are getting worse in advanced reasoning models. OpenAI's o3 and o4-mini models have higher hallucination rates than the company's previous-generation reasoning models, and OpenAI does not really understand why.

Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and people in all kinds of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, Anthropic's CEO acknowledged that the confidence with which AI models present untrue things as facts may be a problem.

In fact, Anthropic has done a fair amount of research on the tendency of AI models to deceive humans, a problem that seemed especially prevalent in Claude Opus 4. Apollo Research, which was given early access to test the model, found that an early version of Claude Opus 4 showed a high tendency to scheme against humans and deceive them. Apollo went so far as to suggest that Anthropic should not have released that early model. Anthropic says it developed mitigations that appear to address the issues Apollo raised.

Amodei's comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. By many people's definition, however, an AI that hallucinates would fall short of AGI.
