World Forbes – Business, Tech, AI & Global Insights
Study accuses LM Arena of helping top AI labs game its benchmark

By admin | May 1, 2025


A new paper from AI lab Cohere, Stanford, MIT, and Ai2 accuses LM Arena, the organization behind the popular crowdsourced AI benchmark Chatbot Arena, of helping a select group of AI companies achieve better leaderboard scores at the expense of rivals.

According to the authors, LM Arena allowed some industry-leading AI companies like Meta, OpenAI, Google, and Amazon to privately test several variants of AI models, then not publish the scores of the lowest performers. This made it easier for these companies to achieve a top spot on the platform’s leaderboard, though the opportunity was not afforded to every firm, the authors say.

“Only a handful of [companies] were told that this private testing was available, and the amount of private testing that some [companies] received is just so much more than others,” said Cohere’s VP of AI research and co-author of the study, Sara Hooker, in an interview with TechCrunch. “This is gamification.”

Created in 2023 as an academic research project out of UC Berkeley, Chatbot Arena has become a go-to benchmark for AI companies. It works by putting answers from two different AI models side-by-side in a “battle,” and asking users to choose the best one. It’s not uncommon to see unreleased models competing in the arena under a pseudonym.

Votes over time contribute to a model’s score — and, consequently, its placement on the Chatbot Arena leaderboard. While many commercial actors participate in Chatbot Arena, LM Arena has long maintained that its benchmark is an impartial and fair one.
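Leaderboards built on pairwise human votes typically use a rating system such as Elo or Bradley-Terry, where each "battle" nudges the winner's score up and the loser's down. A minimal Elo-style sketch of that update (an illustration of the general technique, not LM Arena's actual implementation):

```python
def elo_update(r_a, r_b, winner, k=32):
    """Apply one Elo update after a battle between models A and B.

    winner: 'a', 'b', or 'tie'. Returns the new (r_a, r_b) ratings.
    """
    # Expected win probability for A given the current rating gap.
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
    # Ratings move in opposite directions by the same amount (zero-sum).
    r_a += k * (score_a - expected_a)
    r_b += k * ((1 - score_a) - (1 - expected_a))
    return r_a, r_b

# Two models start at 1000; model A wins one battle.
ra, rb = elo_update(1000, 1000, "a")
```

Because each vote shifts ratings only slightly, a model's leaderboard position stabilizes as it accumulates battles, which is why access to more battles matters.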

However, that’s not what the paper’s authors say they uncovered.

One AI company, Meta, was able to privately test 27 model variants on Chatbot Arena between January and March leading up to the tech giant’s Llama 4 release, the authors allege. At launch, Meta only publicly revealed the score of a single model — a model that happened to rank near the top of the Chatbot Arena leaderboard.
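The statistical effect the authors describe is a selection bias: if scores contain noise, privately testing many variants and publishing only the best one inflates the reported number even when the underlying quality is identical. A toy simulation (hypothetical skill and noise values, purely illustrative):

```python
import random

def best_of_n(true_skill, n_variants, noise=50.0, seed=0):
    """Test n noisy variants privately and report only the best score.

    Each variant scores true_skill plus zero-mean Gaussian noise;
    the 'published' score is the maximum across variants.
    """
    rng = random.Random(seed)
    scores = [true_skill + rng.gauss(0, noise) for _ in range(n_variants)]
    return max(scores)

# Same underlying quality, but testing 27 variants and keeping the best
# yields a reported score at least as high as a single submission.
one_shot = best_of_n(1200, 1, seed=1)
best_of_27 = best_of_n(1200, 27, seed=1)
```

The gap between `best_of_27` and `one_shot` grows with the number of private variants and the noisiness of the benchmark.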


A chart pulled from the study. (Credit: Singh et al.)

In an email to TechCrunch, LM Arena Co-Founder and UC Berkeley Professor Ion Stoica said that the study was full of “inaccuracies” and “questionable analysis.”

“We are committed to fair, community-driven evaluations, and invite all model providers to submit more models for testing and to improve their performance on human preference,” said LM Arena in a statement provided to TechCrunch. “If a model provider chooses to submit more tests than another model provider, this does not mean the second model provider is treated unfairly.”

Supposedly favored labs

The paper’s authors started conducting their research in November 2024 after learning that some AI companies were possibly being given preferential access to Chatbot Arena. In total, they analyzed more than 2.8 million Chatbot Arena battles over a five-month stretch.

The authors say they found evidence that LM Arena allowed certain AI companies, including Meta, OpenAI, and Google, to collect more data from Chatbot Arena by having their models appear in a higher number of model “battles.” This increased sampling rate gave these companies an unfair advantage, the authors allege.
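The sampling-rate claim can be made concrete with a short simulation: if one model is drawn into battles with a higher weight, it accumulates proportionally more user-feedback data than its peers. The model names and weights below are hypothetical, chosen only to illustrate the effect:

```python
import random

def simulate_battles(weights, n_battles=10_000, seed=0):
    """Draw model pairs with non-uniform sampling weights and count
    how many battles (i.e., data points) each model accumulates."""
    rng = random.Random(seed)
    models = list(weights)
    w = [weights[m] for m in models]
    counts = {m: 0 for m in models}
    for _ in range(n_battles):
        a, b = rng.choices(models, weights=w, k=2)
        while b == a:  # battles need two distinct models
            b = rng.choices(models, weights=w, k=1)[0]
        counts[a] += 1
        counts[b] += 1
    return counts

# A model sampled with 3x the weight collects far more feedback data.
counts = simulate_battles({"favored": 3, "model_b": 1, "model_c": 1})
```

More battles means more human-preference data to learn from and a lower-variance leaderboard score, which is the advantage the authors allege.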

Using additional data from LM Arena could improve a model’s performance on Arena Hard, another benchmark LM Arena maintains, by 112%. However, LM Arena said in a post on X that Arena Hard performance does not directly correlate to Chatbot Arena performance.

Hooker said it’s unclear how certain AI companies might’ve received priority access, but that it’s incumbent on LM Arena to increase its transparency regardless.

In a post on X, LM Arena said that several of the claims in the paper don’t reflect reality. The organization pointed to a blog post it published earlier this week indicating that models from non-major labs appear in more Chatbot Arena battles than the study suggests.

One important limitation of the study is that it relied on “self-identification” to determine which AI models were in private testing on Chatbot Arena. The authors prompted AI models several times about their company of origin, and relied on the models’ answers to classify them — a method that isn’t foolproof.

However, Hooker said that when the authors reached out to LM Arena to share their preliminary findings, the organization didn’t dispute them.

TechCrunch reached out to Meta, Google, OpenAI, and Amazon — all of which were mentioned in the study — for comment. None immediately responded.

LM Arena in hot water

In the paper, the authors call on LM Arena to implement a number of changes aimed at making Chatbot Arena more “fair.” For example, the authors say, LM Arena could set a clear and transparent limit on the number of private tests AI labs can conduct, and publicly disclose scores from these tests.

In a post on X, LM Arena rejected these suggestions, claiming it has published information on pre-release testing since March 2024. The benchmarking organization also said it “makes no sense to show scores for pre-release models which are not publicly available,” because the AI community cannot test the models for themselves.

The researchers also say LM Arena could adjust Chatbot Arena’s sampling rate to ensure that all models in the arena appear in the same number of battles. LM Arena has been receptive to this recommendation publicly, and indicated that it’ll create a new sampling algorithm.

The paper comes weeks after Meta was caught gaming benchmarks in Chatbot Arena around the launch of its above-mentioned Llama 4 models. Meta optimized one of the Llama 4 models for “conversationality,” which helped it achieve an impressive score on Chatbot Arena’s leaderboard. But the company never released the optimized model — and the vanilla version ended up performing much worse on Chatbot Arena.

At the time, LM Arena said Meta should have been more transparent in its approach to benchmarking.

Earlier this month, LM Arena announced it was launching a company, with plans to raise capital from investors. The study increases scrutiny on private benchmark organizations — and whether they can be trusted to assess AI models without corporate influence clouding the process.

Update on 4/30/25 at 9:35pm PT: A previous version of this story included comment from a Google DeepMind engineer who said part of Cohere’s study was inaccurate. The researcher did not dispute that Google sent 10 models to LM Arena for pre-release testing from January to March, as Cohere alleges, but simply noted the company’s open source team, which works on Gemma, only sent one.



