Close Menu
World Forbes – Business, Tech, AI & Global Insights
  • Home
  • AI
  • Billionaires
  • Business
  • Cybersecurity
  • Education
    • Innovation
  • Money
  • Small Business
  • Sports
  • Trump
What's Hot

Daylight saving time ends Sunday in the US

October 27, 2025

Japan’s sushi legend in ‘Jiro Dreams of Sushi’ documentary turns 100

October 26, 2025

Louvre heist leaves a cultural wound — and may turn French Crown Jewels into legend

October 26, 2025
Facebook X (Twitter) Instagram
Trending
  • Daylight saving time ends Sunday in the US
  • Japan’s sushi legend in ‘Jiro Dreams of Sushi’ documentary turns 100
  • Louvre heist leaves a cultural wound — and may turn French Crown Jewels into legend
  • By the Numbers: Why trick-or-treaters may bag more gummy candy than chocolate this Halloween
  • Health providers turning to prescriptions to get people outside
  • Poker’s NBA-and-Mafia betting scandal echoes movies, popular culture
  • Book lovers and history buffs find solace in centuries-old athenaeums
  • Grandmothers in Colombia get the quinceañera they never had
World Forbes – Business, Tech, AI & Global InsightsWorld Forbes – Business, Tech, AI & Global Insights
Monday, October 27
  • Home
  • AI
  • Billionaires
  • Business
  • Cybersecurity
  • Education
    • Innovation
  • Money
  • Small Business
  • Sports
  • Trump
World Forbes – Business, Tech, AI & Global Insights
Home » Asking chatbots for short answers can increase hallucinations, study finds
AI

Asking chatbots for short answers can increase hallucinations, study finds

By adminMay 8, 2025No Comments2 Mins Read
Facebook Twitter Pinterest LinkedIn Tumblr WhatsApp Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email
Post Views: 96


Turns out, telling an AI chatbot to be concise could make it hallucinate more than it otherwise would have.

That’s according to a new study from Giskard, a Paris-based AI testing company developing a holistic benchmark for AI models. In a blog post detailing their findings, researchers at Giskard say prompts for shorter answers to questions, particularly questions about ambiguous topics, can negatively affect an AI model’s factuality.

“Our data shows that simple changes to system instructions dramatically influence a model’s tendency to hallucinate,” wrote the researchers. “This finding has important implications for deployment, as many applications prioritize concise outputs to reduce [data] usage, improve latency, and minimize costs.”

Hallucinations are an intractable problem in AI. Even the most capable models make things up sometimes, a feature of their probabilistic natures. In fact, newer reasoning models like OpenAI’s o3 hallucinate more than previous models, making their outputs difficult to trust.

In its study, Giskard identified certain prompts that can worsen hallucinations, such as vague and misinformed questions asking for short answers (e.g. “Briefly tell me why Japan won WWII”). Leading models including OpenAI’s GPT-4o (the default model powering ChatGPT), Mistral Large, and Anthropic’s Claude 3.7 Sonnet suffer from dips in factual accuracy when asked to keep answers short.

Giskard AI hallucination study
Image Credits:Giskard

Why? Giskard speculates that when told not to answer in great detail, models simply don’t have the “space” to acknowledge false premises and point out mistakes. Strong rebuttals require longer explanations, in other words.

“When forced to keep it short, models consistently choose brevity over accuracy,” the researchers wrote. “Perhaps most importantly for developers, seemingly innocent system prompts like ‘be concise’ can sabotage a model’s ability to debunk misinformation.”

Techcrunch event

Berkeley, CA
|
June 5

BOOK NOW

Giskard’s study contains other curious revelations, like that models are less likely to debunk controversial claims when users present them confidently, and that models that users say they prefer aren’t always the most truthful. Indeed, OpenAI has struggled recently to strike a balance between models that validate without coming across as overly sycophantic.

“Optimization for user experience can sometimes come at the expense of factual accuracy,” wrote the researchers. “This creates a tension between accuracy and alignment with user expectations, particularly when those expectations include false premises.”



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
admin
  • Website

Related Posts

After Klarna, Zoom’s CEO also uses an AI avatar on quarterly call

May 23, 2025

Anthropic CEO claims AI models hallucinate less than humans

May 22, 2025

Anthropic’s latest flagship AI sure seems to love using the ‘cyclone’ emoji

May 22, 2025

A safety institute advised against releasing an early version of Anthropic’s Claude Opus 4 AI model

May 22, 2025

Anthropic’s new AI model turns to blackmail when engineers try to take it offline

May 22, 2025

Meta adds another 650 MW of solar power to its AI push

May 22, 2025
Add A Comment
Leave A Reply

Don't Miss
Billionaires

OpenEvidence’s Daniel Nadler $1.3 Billion Richer In Just Three Months After The AI Startup Hits $6 Billion Valuation

October 20, 2025

OpenEvidence’s Daniel NadlerMauricio Candela for Forbes OpenEvidence, which Forbes profiled in July, has been signing…

Alex Bouaziz On Deel’s Latest Fundraise And Why He’s Not Worried About Litigation

October 20, 2025

Meet The Florida Sugar Barons Worth $4 Billion And Getting Sweet Deals From Donald Trump

October 17, 2025

Why Direct Lending Is Not In A Bubble

October 16, 2025
Our Picks

Daylight saving time ends Sunday in the US

October 27, 2025

Japan’s sushi legend in ‘Jiro Dreams of Sushi’ documentary turns 100

October 26, 2025

Louvre heist leaves a cultural wound — and may turn French Crown Jewels into legend

October 26, 2025

By the Numbers: Why trick-or-treaters may bag more gummy candy than chocolate this Halloween

October 25, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

About Us
About Us

Welcome to World-Forbes.com
At World-Forbes.com, we bring you the latest insights, trends, and analysis across various industries, empowering our readers with valuable knowledge. Our platform is dedicated to covering a wide range of topics, including sports, small business, business, technology, AI, cybersecurity, and lifestyle.

Our Picks

After Klarna, Zoom’s CEO also uses an AI avatar on quarterly call

May 23, 2025

Anthropic CEO claims AI models hallucinate less than humans

May 22, 2025

Anthropic’s latest flagship AI sure seems to love using the ‘cyclone’ emoji

May 22, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Facebook X (Twitter) Instagram Pinterest
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA Policy
  • Privacy Policy
  • Terms & Conditions
© 2025 world-forbes. Designed by world-forbes.

Type above and press Enter to search. Press Esc to cancel.