AI

Crowdsourced AI benchmarks have serious flaws, some experts say

By admin · April 22, 2025 · 4 min read


AI labs are increasingly relying on crowdsourced benchmarking platforms such as Chatbot Arena to probe the strengths and weaknesses of their latest models. But some experts say the approach has serious problems, both ethical and academic.

Over the past few years, labs including OpenAI, Google, and Meta have turned to platforms that recruit users to help evaluate upcoming models’ capabilities. When a model scores favorably, the lab behind it will often tout that score as evidence of a meaningful improvement.

It’s a flawed approach, however, according to Emily Bender, a University of Washington linguistics professor and co-author of the book “The AI Con.” Bender takes particular issue with Chatbot Arena, which tasks volunteers with prompting two anonymous models and selecting the response they prefer.
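
Those head-to-head votes are aggregated into a public leaderboard using pairwise-comparison ratings in the Elo/Bradley-Terry family. As a rough illustration of how a single preference vote moves two models' scores, here is a minimal Elo-style sketch in Python; the model names, starting rating, and K-factor are illustrative assumptions, and Chatbot Arena's actual pipeline (which uses Bradley-Terry-style estimation) differs in detail.

```python
# Minimal sketch: turning pairwise "A vs. B" preference votes into a
# leaderboard via an Elo-style update. Model names, the K-factor, and the
# baseline rating below are illustrative, not Chatbot Arena's real values.
from collections import defaultdict

K = 4  # assumed update step; larger K means each vote moves ratings more
ratings = defaultdict(lambda: 1000.0)  # every model starts at a baseline

def expected_score(r_a: float, r_b: float) -> float:
    """Predicted probability that model A's response is preferred over B's."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def record_vote(winner: str, loser: str) -> None:
    """Update both models' ratings after one human preference vote."""
    e = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e)  # winner gains more for an upset
    ratings[loser] -= K * (1 - e)   # loser loses the same amount

# Hypothetical votes: (preferred model, other model) for each comparison.
votes = [("model-x", "model-y"), ("model-x", "model-z"), ("model-y", "model-z")]
for w, l in votes:
    record_vote(w, l)

for name, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {r:.1f}")
```

A consequence of this design is that a rating only reflects whatever the voting population happened to prefer, which is exactly the gap Bender's criticism targets.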

“To be valid, a benchmark needs to measure something specific, and it needs to have construct validity — that is, there has to be evidence that the construct of interest is well-defined and that the measurements actually relate to the construct,” Bender said. “Chatbot Arena hasn’t shown that voting for one output over another actually correlates with preferences, however they may be defined.”

Asmelash Teka Hadgu, the co-founder of AI firm Lesan and a fellow at the Distributed AI Research Institute, said that he thinks benchmarks like Chatbot Arena are being “co-opted” by AI labs to “promote exaggerated claims.” Hadgu pointed to a recent controversy involving Meta’s Llama 4 Maverick model. Meta fine-tuned a version of Maverick to score well on Chatbot Arena, only to withhold that model in favor of releasing a worse-performing version.

“Benchmarks should be dynamic rather than static data sets,” Hadgu said, “distributed across multiple independent entities, such as organizations or universities, and tailored specifically to distinct use cases, like education, healthcare, and other fields done by practicing professionals who use these [models] for work.”

Hadgu and Kristine Gloria, who formerly led the Aspen Institute’s Emergent and Intelligent Technologies Initiative, also made the case that model evaluators should be compensated for their work. Gloria said that AI labs should learn from the mistakes of the data labeling industry, which is notorious for its exploitative practices. (Some labs have been accused of the same.)

“In general, the crowdsourced benchmarking process is valuable and reminds me of citizen science initiatives,” Gloria said. “Ideally, it helps bring in additional perspectives to provide some depth in both the evaluation and fine-tuning of data. But benchmarks should never be the only metric for evaluation. With the industry and the innovation moving quickly, benchmarks can rapidly become unreliable.”

Matt Frederikson, the CEO of Gray Swan AI, which runs crowdsourced red-teaming campaigns for models, said that volunteers are drawn to Gray Swan’s platform for a range of reasons, including “learning and practicing new skills.” (Gray Swan also awards cash prizes for some tests.) Still, he acknowledged that public benchmarks “aren’t a substitute” for “paid private” evaluations.

“[D]evelopers also need to rely on internal benchmarks, algorithmic red teams, and contracted red teamers who can take a more open-ended approach or bring specific domain expertise,” Frederikson said. “It’s important for both model developers and benchmark creators, crowdsourced or otherwise, to communicate results clearly to those who follow, and be responsive when they are called into question.”

Alex Atallah, the CEO of model marketplace OpenRouter, which recently partnered with OpenAI to grant users early access to OpenAI’s GPT-4.1 models, said open testing and benchmarking of models alone “isn’t sufficient.” So did Wei-Lin Chiang, an AI doctoral student at UC Berkeley and one of the founders of LMArena, which maintains Chatbot Arena.

“We certainly support the use of other tests,” Chiang said. “Our goal is to create a trustworthy, open space that measures our community’s preferences about different AI models.”

Chiang said that incidents such as the Maverick benchmark discrepancy aren’t the result of a flaw in Chatbot Arena’s design, but rather of labs misinterpreting its policy. LMArena has taken steps to prevent future discrepancies, Chiang said, including updating its policies to “reinforce our commitment to fair, reproducible evaluations.”

“Our community isn’t here as volunteers or model testers,” Chiang said. “People use LMArena because we give them an open, transparent place to engage with AI and give collective feedback. As long as the leaderboard faithfully reflects the community’s voice, we welcome it being shared.”


