Cybersecurity

AI Hallucinations Create a New Software Supply Chain Threat

By admin, April 14, 2025


Package hallucinations are a common issue in code-generating large language models (LLMs) and open the door to a new type of supply chain attack, researchers from three US universities warn.

Package hallucination occurs when code generated by an LLM recommends or references a fictitious package; attacks that exploit this behavior have been dubbed ‘slopsquatting’.
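
To make the failure mode concrete: a hallucinated dependency shows up as an ordinary-looking import in the generated code, so a first defensive step is simply to enumerate what the code claims to need. The sketch below (Python, one of the two languages studied) uses the standard ast module to collect top-level imports from a generated snippet; the snippet and the package name "exif_metadata_toolkit" are invented here purely for illustration.

```python
import ast

def imported_packages(source: str) -> set[str]:
    """Collect the top-level package names imported by a piece of Python source."""
    tree = ast.parse(source)
    names: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    return names

# The generated snippet and the package name "exif_metadata_toolkit" are
# invented here to stand in for an LLM hallucination.
generated = "import exif_metadata_toolkit\n\nmeta = exif_metadata_toolkit.load('photo.jpg')\n"
print(imported_packages(generated))  # {'exif_metadata_toolkit'}
```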

Researchers from the University of Texas at San Antonio, University of Oklahoma, and Virginia Tech warn that threat actors can exploit this by publishing malicious packages with the hallucinated names.

“As other unsuspecting and trusting LLM users are subsequently recommended the same fictitious package in their generated code, they end up downloading the adversary-created malicious package, resulting in a successful compromise,” the academics explain in a recently published research paper (PDF).

Considered a variation of the classical package confusion attack, slopsquatting could lead to the compromise of an entire codebase or software dependency chain, as any code relying on the malicious package could end up being infected.
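
One straightforward guardrail, then, is to check every dependency an LLM suggests against the official registry before anything is installed. The sketch below queries PyPI's public JSON endpoint, which returns a 404 for names that are not registered; the checked names are illustrative. Note that existence alone is not proof of safety, since an attacker may already have claimed a hallucinated name, but an unknown package is an immediate red flag.

```python
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# "requests" is real; "exif_metadata_toolkit" is the invented example from above.
for name in ("requests", "exif_metadata_toolkit"):
    verdict = "registered" if exists_on_pypi(name) else "NOT on PyPI: possible hallucination"
    print(f"{name}: {verdict}")
```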

The academics show that, out of 16 popular LLMs for code generation, none was free of package hallucination. Overall, they generated 205,474 unique fictitious package names. Most of the hallucinated packages (81%) were unique to the model that generated them.

For commercial models, hallucinated packages occurred in at least 5.2% of cases, and the percentage jumped to 21.7% for open source models. These hallucinations are often persistent within the same model, as 58% would repeat within 10 iterations.

The academics also note that, while the risk of LLMs recommending malicious or typosquatted packages has been documented before, it was deemed low, and package hallucination itself had not been examined. With the rapid adoption of generative AI, the risks tied to its use have escalated as well.


The researchers conducted 30 tests (16 for Python and 14 for JavaScript) to produce a total of 576,000 code samples. During evaluation, the models were prompted twice for package names, resulting in a total of 1,152,000 package prompts.

“These 30 tests generated a total of 2.23 million packages in response to our prompts, of which 440,445 (19.7%) were determined to be hallucinations, including 205,474 unique non-existent packages,” the academics note.

The academics also discovered that the models were able to detect most of their own hallucinations, which would imply that “each model’s specific error patterns are detectable by the same mechanisms that generate them, suggesting an inherent self-regulatory capability.”

To mitigate package hallucination, the researchers propose prompt engineering methods such as Retrieval Augmented Generation (RAG), self-refinement, and prompt tuning, along with model development techniques such as decoding strategies or supervised fine-tuning of LLMs.
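
The paper discusses those mitigations at the prompt and model level; as a rough illustration of the self-refinement idea, the sketch below wires the two helpers from earlier into a loop that re-prompts the model whenever its code references packages the registry does not know. The generate() callable is a hypothetical wrapper around whatever code-generation model is in use, not an API from the paper or any specific vendor.

```python
# A minimal self-refinement-style loop. generate(prompt) is a hypothetical
# wrapper around whatever code-generation model is in use; imported_packages()
# and exists_on_pypi() are the helpers sketched earlier.

def generate_with_dependency_check(prompt: str, generate, max_rounds: int = 3) -> str:
    code = generate(prompt)
    for _ in range(max_rounds):
        unknown = {pkg for pkg in imported_packages(code) if not exists_on_pypi(pkg)}
        if not unknown:
            return code
        # Feed the unresolved names back to the model and ask for a revision.
        code = generate(
            prompt
            + "\n\nYour previous answer imported packages that are not on PyPI: "
            + ", ".join(sorted(unknown))
            + ". Rewrite the code using only packages that actually exist."
        )
    raise RuntimeError("model kept referencing packages that do not exist on the registry")
```

A production setup would more likely enforce this check at install time as well, through lockfiles, private mirrors, or allowlists, rather than relying on the model to correct itself.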

Related: AI Now Outsmarts Humans in Spear Phishing, Analysis Shows

Related: Vulnerabilities Expose Jan AI Systems to Remote Manipulation

Related: Google Pushing ‘Sec-Gemini’ AI Model for Threat-Intel Workflows

Related: Microsoft Bets $10,000 on Prompt Injection Protections of LLM Email Client



