AI

OpenAI’s ex-policy lead criticizes the company for ‘rewriting’ its AI safety history

By admin | March 6, 2025


A high-profile ex-OpenAI policy researcher, Miles Brundage, took to social media on Wednesday to criticize OpenAI for “rewriting the history” of its deployment approach to potentially risky AI systems.

Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said that it sees the development of AGI, broadly defined as AI systems that can perform any task a human can, as a “continuous path” that requires “iteratively deploying and learning” from AI technologies.

“In a discontinuous world […] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT‑2,” OpenAI wrote. “We now view the first AGI as just one point along a series of systems of increasing usefulness […] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system.”

But Brundage claims that GPT-2 did, in fact, warrant abundant caution at the time of its release, and that this was “100% consistent” with OpenAI’s iterative deployment strategy today.

“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage wrote in a post on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”

Brundage, who joined OpenAI as a research scientist in 2018, was the company’s head of policy research for several years. On OpenAI’s “AGI readiness” team, he had a particular focus on the responsible deployment of language generation systems such as OpenAI’s AI chatbot platform ChatGPT.

GPT-2, which OpenAI announced in 2019, was a progenitor of the AI systems powering ChatGPT. GPT-2 could answer questions about a topic, summarize articles, and generate text on a level sometimes indistinguishable from that of humans.

While GPT-2 and its outputs may look basic today, they were cutting-edge at the time. Citing the risk of malicious use, OpenAI initially declined to release the full GPT-2 model, opting instead to give select news outlets limited access to a demo.

The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 had been exaggerated, and that there wasn’t any evidence the model could be abused in the ways OpenAI described. AI-focused publication The Gradient went so far as to publish an open letter requesting that OpenAI release the model, arguing it was too technologically important to hold back.

OpenAI eventually did release a partial version of GPT-2 six months after the model’s unveiling, followed by the full system several months after that. Brundage thinks this was the right approach.
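
The model OpenAI ultimately shipped remains openly available today. As a minimal sketch, assuming the Hugging Face transformers library is installed (pip install transformers torch), the smallest checkpoint from that staged release can be loaded in a few lines; the prompt and generation settings here are illustrative assumptions, not anything from the original release:

```python
# Minimal sketch: text generation with the openly released GPT-2 weights.
# "gpt2" is the 124M-parameter checkpoint from OpenAI's staged release;
# larger checkpoints ("gpt2-medium", "gpt2-large", "gpt2-xl") followed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Illustrative prompt; GPT-2 simply continues the text it is given.
output = generator("The debate over AI safety began when", max_new_tokens=40)
print(output[0]["generated_text"])
```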

“What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous? None of it,” he said in a post on X. “What’s the evidence this caution was ‘disproportionate’ ex ante? Ex post, it prob. would have been OK, but that doesn’t mean it was responsible to YOLO it [sic] given info at the time.”

Brundage fears that OpenAI’s aim with the document is to set up a burden of proof where “concerns are alarmist” and “you need overwhelming evidence of imminent dangers to act on them.” This, he argues, is a “very dangerous” mentality for advanced AI systems.

“If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by poo-pooing caution in such a lop-sided way,” Brundage added.

OpenAI has historically been accused of prioritizing “shiny products” at the expense of safety, and of rushing product releases to beat rival companies to market. Last year, OpenAI dissolved its AGI readiness team, and a string of AI safety and policy researchers departed the company for rivals.

Competitive pressures have only ramped up. Chinese AI lab DeepSeek captured the world’s attention with its openly available R1 model, which matched OpenAI’s o1 “reasoning” model on a number of key benchmarks. OpenAI CEO Sam Altman has admitted that DeepSeek has lessened OpenAI’s technological lead, and said that OpenAI would “pull up some releases” to better compete.

There’s a lot of money on the line. OpenAI loses billions of dollars annually, and the company has reportedly projected that its annual losses could triple to $14 billion by 2026. A faster product release cycle could benefit OpenAI’s bottom line in the near term, but possibly at the expense of safety in the long term. Experts like Brundage question whether the trade-off is worth it.


