World Forbes – Business, Tech, AI & Global Insights
AI

Anthropic CEO claims AI models hallucinate less than humans

By admin | May 22, 2025


Anthropic CEO Dario Amodei believes today’s AI models hallucinate, or make things up and present them as if they’re true, at a lower rate than humans do, he said during a press briefing at Anthropic’s first developer event, Code with Claude, in San Francisco on Thursday.

Amodei said all this in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI — AI systems with human-level intelligence or better.

“It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said, responding to TechCrunch’s question.

Anthropic’s CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress to that end, noting that “the water is rising everywhere.”

“Everyone’s always looking for these hard blocks on what [AI] can do,” said Amodei. “They’re nowhere to be seen. There’s no such thing.”

Other AI leaders believe hallucination presents a large obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes” and get too many obvious questions wrong. In one recent example, a lawyer representing Anthropic was forced to apologize in court earlier this month after using Claude to generate citations for a court filing; the chatbot hallucinated, getting names and titles wrong.

It’s difficult to verify Amodei’s claim, largely because most hallucination benchmarks pit AI models against each other; they don’t compare models to humans. Certain techniques seem to be helping lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to early generations of systems.

However, there’s also evidence to suggest hallucinations are actually getting worse in advanced reasoning AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than OpenAI’s previous-gen reasoning models, and the company doesn’t really understand why.

Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and people in all kinds of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, Anthropic’s CEO acknowledged that the confidence with which AI models present untrue things as fact could be a problem.

In fact, Anthropic has done a fair amount of research on the tendency for AI models to deceive humans, a problem that seemed especially prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the AI model, found that an early version of Claude Opus 4 exhibited a high tendency to scheme against humans and deceive them. Apollo went as far as to suggest Anthropic shouldn’t have released that early model. Anthropic said it came up with some mitigations that appeared to address the issues Apollo raised.

Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definition, though.


