Close Menu
World Forbes – Business, Tech, AI & Global Insights
  • Home
  • AI
  • Billionaires
  • Business
  • Cybersecurity
  • Education
    • Innovation
  • Money
  • Small Business
  • Sports
  • Trump
What's Hot

More Pakistani women are joining the country’s firefighters

November 7, 2025

Musk’s Net Worth Drops $10 Billion—And Tesla Shares Fall—Here’s Why

November 7, 2025

Here’s what to know about a study that raises questions about melatonin use and heart health

November 7, 2025
Facebook X (Twitter) Instagram
Trending
  • More Pakistani women are joining the country’s firefighters
  • Musk’s Net Worth Drops $10 Billion—And Tesla Shares Fall—Here’s Why
  • Here’s what to know about a study that raises questions about melatonin use and heart health
  • Trump’s Bungled Bet On Bitcoin Is Costing Him Bigtime
  • A Startup Was Their First-Ever Job—Now They’re The World’s Youngest Self Made Billionaires
  • Meet The Former Journalist Giving Away Billions
  • Supermarket Billionaire Reacts To Mamdani’s Win
  • Farmers’ Almanac to cease publication after 2 centuries of predicting the weather
World Forbes – Business, Tech, AI & Global InsightsWorld Forbes – Business, Tech, AI & Global Insights
Saturday, November 8
  • Home
  • AI
  • Billionaires
  • Business
  • Cybersecurity
  • Education
    • Innovation
  • Money
  • Small Business
  • Sports
  • Trump
World Forbes – Business, Tech, AI & Global Insights
Home » OpenAI partner says it had relatively little time to test the company’s o3 AI model
AI

OpenAI partner says it had relatively little time to test the company’s o3 AI model

By adminApril 16, 2025No Comments3 Mins Read
Facebook Twitter Pinterest LinkedIn Tumblr WhatsApp Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email
Post Views: 97


An organization OpenAI frequently partners with to probe the capabilities of its AI models and evaluate them for safety, Metr, suggests that it wasn’t given much time to test one of the company’s highly capable new releases, o3.

In a blog post published Wednesday, Metr writes that one red teaming benchmark of o3 was “conducted in a relatively short time” compared to the organization’s testing of a previous OpenAI flagship model, o1. This is significant, they say, because additional testing time can lead to more comprehensive results.

“This evaluation was conducted in a relatively short time, and we only tested [o3] with simple agent scaffolds,” wrote Metr in its blog post. “We expect higher performance [on benchmarks] is possible with more elicitation effort.”

Recent reports suggest that OpenAI, spurred by competitive pressure, is rushing independent evaluations. According to the Financial Times, OpenAI gave some testers less than a week for safety checks for an upcoming major launch.

In statements, OpenAI has disputed the notion that it’s compromising on safety.

Metr says that, based on the information it was able to glean in the time it had, o3 has a “high propensity” to “cheat” or “hack” tests in sophisticated ways in order to maximize its score — even when the model clearly understands its behavior is misaligned with the user’s (and OpenAI’s) intentions. The organization thinks it’s possible o3 will engage in other types of adversarial or “malign” behavior, as well — regardless of the model’s claims to be aligned, “safe by design,” or not have any intentions of its own.

“While we don’t think this is especially likely, it seems important to note that [our] evaluation setup would not catch this type of risk,” Metr wrote in its post. “In general, we believe that pre-deployment capability testing is not a sufficient risk management strategy by itself, and we are currently prototyping additional forms of evaluations.”

Another of OpenAI’s third-party evaluation partners, Apollo Research, also observed deceptive behavior from o3 and the company’s other new model, o4-mini. In one test, the models, given 100 computing credits for an AI training run and told not to modify the quota, increased the limit to 500 credits — and lied about it. In another test, asked to promise not to use a specific tool, the models used the tool anyway when it proved helpful in completing a task.

In its own safety report for o3 and o4-mini, OpenAI acknowledged that the models may cause “smaller real-world harms,” like misleading about a mistake resulting in faulty code, without the proper monitoring protocols in place.

“[Apollo’s] findings show that o3 and o4-mini are capable of in-context scheming and strategic deception,” wrote OpenAI. “While relatively harmless, it is important for everyday users to be aware of these discrepancies between the models’ statements and actions […] This may be further assessed through assessing internal reasoning traces.”



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
admin
  • Website

Related Posts

After Klarna, Zoom’s CEO also uses an AI avatar on quarterly call

May 23, 2025

Anthropic CEO claims AI models hallucinate less than humans

May 22, 2025

Anthropic’s latest flagship AI sure seems to love using the ‘cyclone’ emoji

May 22, 2025

A safety institute advised against releasing an early version of Anthropic’s Claude Opus 4 AI model

May 22, 2025

Anthropic’s new AI model turns to blackmail when engineers try to take it offline

May 22, 2025

Meta adds another 650 MW of solar power to its AI push

May 22, 2025
Add A Comment
Leave A Reply

Don't Miss
Billionaires

Musk’s Net Worth Drops $10 Billion—And Tesla Shares Fall—Here’s Why

November 7, 2025

ToplineTesla shares declined more than 3% on Friday, cutting CEO Elon Musk’s fortune by $10…

Trump’s Bungled Bet On Bitcoin Is Costing Him Bigtime

November 7, 2025

A Startup Was Their First-Ever Job—Now They’re The World’s Youngest Self Made Billionaires

November 7, 2025

Meet The Former Journalist Giving Away Billions

November 7, 2025
Our Picks

More Pakistani women are joining the country’s firefighters

November 7, 2025

Musk’s Net Worth Drops $10 Billion—And Tesla Shares Fall—Here’s Why

November 7, 2025

Here’s what to know about a study that raises questions about melatonin use and heart health

November 7, 2025

Trump’s Bungled Bet On Bitcoin Is Costing Him Bigtime

November 7, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

About Us
About Us

Welcome to World-Forbes.com
At World-Forbes.com, we bring you the latest insights, trends, and analysis across various industries, empowering our readers with valuable knowledge. Our platform is dedicated to covering a wide range of topics, including sports, small business, business, technology, AI, cybersecurity, and lifestyle.

Our Picks

After Klarna, Zoom’s CEO also uses an AI avatar on quarterly call

May 23, 2025

Anthropic CEO claims AI models hallucinate less than humans

May 22, 2025

Anthropic’s latest flagship AI sure seems to love using the ‘cyclone’ emoji

May 22, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Facebook X (Twitter) Instagram Pinterest
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA Policy
  • Privacy Policy
  • Terms & Conditions
© 2025 world-forbes. Designed by world-forbes.

Type above and press Enter to search. Press Esc to cancel.