Cybersecurity

AI Hallucinations Create a New Software Supply Chain Threat

By admin | April 14, 2025


Package hallucinations are a common issue in code-generating large language models (LLMs), and one that opens the door to a new type of supply chain attack, researchers from three US universities warn.

Package hallucination occurs when code generated by an LLM recommends or references a fictitious package; attacks that exploit this have been dubbed ‘slopsquatting’.
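For illustration only, here is the kind of code an assistant might produce; the dependency name below is hypothetical, invented for this sketch rather than taken from the study:

```python
# Plausible-looking LLM output: the imported package is fictitious.
# "fast_json_schema_validator" is a hypothetical hallucinated name; the
# import fails while the name is unregistered, but succeeds -- and runs
# whatever code was published under it -- once someone registers the name.
import fast_json_schema_validator

def validate_user(payload: dict) -> bool:
    return fast_json_schema_validator.check(payload)
```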

Researchers from the University of Texas at San Antonio, University of Oklahoma, and Virginia Tech warn that threat actors can exploit this by publishing malicious packages with the hallucinated names.

“As other unsuspecting and trusting LLM users are subsequently recommended the same fictitious package in their generated code, they end up downloading the adversary-created malicious package, resulting in a successful compromise,” the academics explain in a recently published research paper (PDF).

Considered a variation of the classical package confusion attack, slopsquatting could lead to the compromise of an entire codebase or software dependency chain, as any code relying on the malicious package could end up being infected.

The academics show that, out of 16 popular LLMs for code generation, none was free of package hallucination. Overall, they generated 205,474 unique fictitious package names. Most of the hallucinated packages (81%) were unique to the model that generated them.

For commercial models, hallucinated packages occurred in at least 5.2% of cases, and the percentage jumped to 21.7% for open source models. These hallucinations are often persistent within the same model, with 58% of hallucinated names recurring within 10 repeated queries.

The academics also note that, while the risk of LLMs recommending malicious or typosquatted packages has been documented before, it was deemed low, and package hallucination itself had not been examined. With the rapid adoption of generative AI, the risks linked to its use have escalated accordingly.


The researchers conducted 30 tests (16 for Python and 14 for JavaScript) to produce a total of 576,000 code samples. During evaluation, the models were prompted twice for package names, resulting in a total of 1,152,000 package prompts.

“These 30 tests generated a total of 2.23 million packages in response to our prompts, of which 440,445 (19.7%) were determined to be hallucinations, including 205,474 unique non-existent packages,” the academics note.

The academics also discovered that the models were able to detect most of their own hallucinations, which would imply that “each model’s specific error patterns are detectable by the same mechanisms that generate them, suggesting an inherent self-regulatory capability.”
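The paper reports this finding at the model level; a minimal sketch of the underlying idea, assuming an OpenAI-style chat API (the model name and prompt wording here are illustrative, not the paper's actual protocol):

```python
# Minimal sketch: ask the generating model whether a package name is real.
# Assumes the `openai` Python client (v1 API); model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def model_flags_as_hallucinated(package_name: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Is '{package_name}' a real package on PyPI? Answer 'yes' or 'no'.",
        }],
    )
    return resp.choices[0].message.content.strip().lower().startswith("no")
```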

To mitigate package hallucination, the researchers propose prompt engineering methods such as Retrieval Augmented Generation (RAG), self-refinement, and prompt tuning, along with model development techniques such as decoding strategies or supervised fine-tuning of LLMs.
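The paper proposes these mitigations at the model and prompt level; on the consumer side, a complementary safeguard (our suggestion, not the paper's) is to verify every generated dependency against the live registry before installing it. A minimal sketch using PyPI's public JSON API, with helper names of our own:

```python
# Minimal sketch: filter out generated dependencies that do not resolve on PyPI.
# Uses PyPI's public JSON API, which returns HTTP 404 for unknown packages.
import requests

def exists_on_pypi(package_name: str) -> bool:
    resp = requests.get(f"https://pypi.org/pypi/{package_name}/json", timeout=10)
    return resp.status_code == 200

def filter_hallucinated(packages: list[str]) -> list[str]:
    """Keep only names that resolve on the index; treat the rest as
    candidate hallucinations to be reviewed rather than installed."""
    return [p for p in packages if exists_on_pypi(p)]
```

Note that mere existence is not proof of safety: slopsquatting works precisely because an attacker can register the hallucinated name first, so existence checks need to be paired with provenance signals such as maintainer history, release age, and pinned lockfiles.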

Related: AI Now Outsmarts Humans in Spear Phishing, Analysis Shows

Related: Vulnerabilities Expose Jan AI Systems to Remote Manipulation

Related: Google Pushing ‘Sec-Gemini’ AI Model for Threat-Intel Workflows

Related: Microsoft Bets $10,000 on Prompt Injection Protections of LLM Email Client


