World Forbes – Business, Tech, AI & Global Insights
AI

OpenAI’s ex-policy lead criticizes the company for ‘rewriting’ its AI safety history

By admin · March 6, 2025 · 4 min read


A high-profile ex-OpenAI policy researcher, Miles Brundage, took to social media on Wednesday to criticize OpenAI for “rewriting the history” of its deployment approach to potentially risky AI systems.

Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said that it sees the development of AGI, broadly defined as AI systems that can perform any task a human can, as a “continuous path” that requires “iteratively deploying and learning” from AI technologies.

“In a discontinuous world […] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT‑2,” OpenAI wrote. “We now view the first AGI as just one point along a series of systems of increasing usefulness […] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system.”

But Brundage claims that GPT-2 did, in fact, warrant abundant caution at the time of its release, and that this was “100% consistent” with OpenAI’s iterative deployment strategy today.

“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage wrote in a post on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”

Brundage, who joined OpenAI as a research scientist in 2018, was the company’s head of policy research for several years. On OpenAI’s “AGI readiness” team, he had a particular focus on the responsible deployment of language generation systems such as OpenAI’s AI chatbot platform ChatGPT.

GPT-2, which OpenAI announced in 2019, was a progenitor of the AI systems powering ChatGPT. GPT-2 could answer questions about a topic, summarize articles, and generate text on a level sometimes indistinguishable from that of humans.

While GPT-2 and its outputs may look basic today, they were cutting-edge at the time. Citing the risk of malicious use, OpenAI initially declined to release the full model, opting instead to give selected news outlets limited access to a demo.

The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 had been exaggerated, and that there wasn’t any evidence the model could be abused in the ways OpenAI described. AI-focused publication The Gradient went so far as to publish an open letter requesting that OpenAI release the model, arguing it was too technologically important to hold back.

OpenAI eventually did release a partial version of GPT-2 six months after the model’s unveiling, followed by the full system several months after that. Brundage thinks this was the right approach.

“What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous? None of it,” he said in a post on X. “What’s the evidence this caution was ‘disproportionate’ ex ante? Ex post, it prob. would have been OK, but that doesn’t mean it was responsible to YOLO it [sic] given info at the time.”

Brundage fears that OpenAI’s aim with the document is to set up a burden of proof where “concerns are alarmist” and “you need overwhelming evidence of imminent dangers to act on them.” This, he argues, is a “very dangerous” mentality for advanced AI systems.

“If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by poo-pooing caution in such a lop-sided way,” Brundage added.

OpenAI has historically been accused of prioritizing “shiny products” at the expense of safety, and of rushing product releases to beat rival companies to market. Last year, OpenAI dissolved its AGI readiness team, and a string of AI safety and policy researchers departed the company for rivals.

Competitive pressures have only ramped up. Chinese AI lab DeepSeek captured the world’s attention with its openly available R1 model, which matched OpenAI’s o1 “reasoning” model on a number of key benchmarks. OpenAI CEO Sam Altman has admitted that DeepSeek has lessened OpenAI’s technological lead, and said that OpenAI would “pull up some releases” to better compete.

There’s a lot of money on the line. OpenAI loses billions annually, and the company has reportedly projected that its annual losses could triple to $14 billion by 2026. A faster product release cycle could benefit OpenAI’s bottom line near-term, but possibly at the expense of safety long-term. Experts like Brundage question whether the trade-off is worth it.



