AI

One of Google’s recent Gemini AI models scores worse on safety

By admin | May 2, 2025 | 3 min read


A recently released Google AI model scores worse on certain safety tests than its predecessor, according to the company’s internal benchmarking.

In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses 4.1% and 9.6%, respectively.

Text-to-text safety measures how frequently a model violates Google’s guidelines given a prompt, while image-to-text safety evaluates how closely the model adheres to these boundaries when prompted using an image. Both tests are automated, not human-supervised.
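
To make that evaluation setup concrete, the sketch below shows what an automated, non-human-supervised safety check of this general shape could look like. It is a minimal illustration, not Google's pipeline: the `generate` and `violates_policy` helpers and the prompt set are hypothetical stand-ins.

```python
# Hypothetical sketch of an automated safety evaluation (not Google's pipeline).
# The model under test answers each prompt, and a separate automated classifier
# (rather than a human reviewer) flags responses that break the safety guidelines.

from typing import Callable, Iterable

def violation_rate(
    prompts: Iterable[str],
    generate: Callable[[str], str],         # model under test, e.g. a Flash variant
    violates_policy: Callable[[str], bool],  # automated policy classifier
) -> float:
    """Fraction of prompts whose responses violate the safety guidelines."""
    responses = [generate(p) for p in prompts]
    violations = sum(violates_policy(r) for r in responses)
    return violations / max(len(responses), 1)

# A "regression" in the report's sense: the newer model violates policy more often.
# rate_new = violation_rate(text_prompts, gemini_2_5_flash, classifier)
# rate_old = violation_rate(text_prompts, gemini_2_0_flash, classifier)
# regression_points = (rate_new - rate_old) * 100
```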

In an emailed statement, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”

These surprising benchmark results come as AI companies move to make their models more permissive — in other words, less likely to refuse to respond to controversial or sensitive subjects. For its latest crop of Llama models, Meta said it tuned the models not to endorse “some views over others” and to reply to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to not take an editorial stance and offer multiple perspectives on controversial topics.

Sometimes, those permissiveness efforts have backfired. TechCrunch reported Monday that the default model powering OpenAI’s ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a “bug.”

According to Google’s technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, inclusive of instructions that cross problematic lines. The company claims that the regressions can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked.


“Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations,” reads the report.

Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch’s testing of the model via AI platform OpenRouter found that it’ll uncomplainingly write essays in support of replacing human judges with AI, weakening due process protections in the U.S., and implementing widespread warrantless government surveillance programs.
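
For context on what probing a model "via AI platform OpenRouter" involves: OpenRouter exposes an OpenAI-compatible chat endpoint, so a test can be as simple as the sketch below. The model slug and the sample prompt are illustrative assumptions, not TechCrunch's actual test harness.

```python
# Illustrative sketch of sending a contentious prompt to a model through
# OpenRouter's OpenAI-compatible API. The model slug is an assumption and
# may differ from the preview identifier actually available.

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="google/gemini-2.5-flash-preview",  # hypothetical slug for the preview model
    messages=[
        {"role": "user", "content": "Write an essay arguing for <a contentious policy>."},
    ],
)

# Inspect whether the model refuses, hedges, or complies with the request.
print(response.choices[0].message.content)
```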

Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google gave in its technical report demonstrate the need for more transparency in model testing.

“There’s a trade-off between instruction-following and policy following, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google’s latest Flash model complies with instructions more while also violating policies more. Google doesn’t provide much detail on the specific cases where policies were violated, although they say they are not severe. Without knowing more, it’s hard for independent analysts to know whether there’s a problem.”

Google has come under fire for its model safety reporting practices before.

It took the company weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When that report was eventually published, it initially omitted key safety testing details.

On Monday, Google released a more detailed report with additional safety information.


