World Forbes – Business, Tech, AI & Global Insights
AI

Google launches ‘implicit caching’ to make accessing its latest AI models cheaper

By admin · May 8, 2025 · 3 min read


Google is rolling out a feature in its Gemini API that the company claims will make its latest AI models cheaper for third-party developers.

Google calls the feature “implicit caching” and says it can deliver 75% savings on “repetitive context” passed to models via the Gemini API. It supports Google’s Gemini 2.5 Pro and 2.5 Flash models.

That’s likely to be welcome news to developers as the cost of using frontier models continues to grow.

We just shipped implicit caching in the Gemini API, automatically enabling a 75% cost savings with the Gemini 2.5 models when your request hits a cache 🚢

We also lowered the min token required to hit caches to 1K on 2.5 Flash and 2K on 2.5 Pro!

— Logan Kilpatrick (@OfficialLoganK) May 8, 2025

Caching, a widely adopted practice in the AI industry, reuses frequently accessed or pre-computed data from models to cut down on computing requirements and cost. For example, caches can store answers to questions users often ask of a model, eliminating the need for the model to re-create answers to the same request.
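The mechanism described above can be sketched as a simple answer cache. This is a toy illustration of the general practice, not Gemini's implementation; `run_model` is a hypothetical stand-in for an expensive model call:

```python
# Toy sketch of response caching: identical requests are served from a
# cache instead of re-running the model. "run_model" is a hypothetical
# placeholder for real (expensive) inference.
def run_model(prompt: str) -> str:
    return f"answer to: {prompt}"  # stand-in for actual model output

cache: dict[str, str] = {}

def answer(prompt: str) -> str:
    if prompt in cache:            # cache hit: skip recomputation
        return cache[prompt]
    result = run_model(prompt)     # cache miss: compute, then store
    cache[prompt] = result
    return result

answer("What is caching?")         # first call: computed and stored
answer("What is caching?")         # second call: served from cache
```

The cost savings come from the second call: the model never runs again for a request it has already seen.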

Google previously offered model prompt caching, but only explicit prompt caching, meaning devs had to define their highest-frequency prompts. While cost savings were supposed to be guaranteed, explicit prompt caching typically involved a lot of manual work.

Some developers weren’t pleased with how Google’s explicit caching implementation worked for Gemini 2.5 Pro, which they said could cause surprisingly large API bills. Complaints reached a fever pitch in the past week, prompting the Gemini team to apologize and pledge to make changes.

In contrast to explicit caching, implicit caching is automatic. Enabled by default for Gemini 2.5 models, it passes on cost savings if a Gemini API request to a model hits a cache.


“[W]hen you send a request to one of the Gemini 2.5 models, if the request shares a common prefix as one of previous requests, then it’s eligible for a cache hit,” explained Google in a blog post. “We will dynamically pass cost savings back to you.”

The minimum prompt token count for implicit caching is 1,024 for 2.5 Flash and 2,048 for 2.5 Pro, according to Google’s developer documentation. That’s not a terribly big amount, meaning it shouldn’t take much to trigger these automatic savings. Tokens are the raw bits of data models work with; a thousand tokens is equivalent to about 750 words.
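As a back-of-the-envelope illustration of those thresholds (the 750-words-per-1,000-tokens rule is only a rough approximation; real tokenizers vary by model and text):

```python
# Rough eligibility check for implicit caching, using the article's
# rule of thumb that ~1,000 tokens correspond to ~750 words.
# Thresholds are the minimums quoted from Google's documentation.
MIN_TOKENS = {"gemini-2.5-flash": 1024, "gemini-2.5-pro": 2048}

def estimate_tokens(text: str) -> int:
    # ~0.75 words per token  ->  tokens ≈ words / 0.75
    words = len(text.split())
    return round(words / 0.75)

def may_hit_cache(prompt: str, model: str) -> bool:
    return estimate_tokens(prompt) >= MIN_TOKENS[model]

prompt = "word " * 800  # ~800 words ≈ ~1,067 tokens
may_hit_cache(prompt, "gemini-2.5-flash")  # True: above Flash's 1,024 minimum
may_hit_cache(prompt, "gemini-2.5-pro")    # False: below Pro's 2,048 minimum
```

In other words, a prompt of roughly 800 words already clears the Flash threshold but needs about twice that length to qualify on Pro.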

Given that Google’s previous cost-savings claims for caching fell short, there are some buyer-beware areas in this new feature. For one, Google recommends that developers keep repetitive context at the beginning of requests to increase the chances of implicit cache hits. Context that might change from request to request should be appended at the end, the company says.
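That recommendation can be illustrated with a toy sketch. The string layout below is hypothetical, and Gemini's actual cache matching happens server-side on tokens rather than characters, but the principle is the same: a stable prefix maximizes the shared span between consecutive requests.

```python
# Sketch of the recommended prompt layout: keep stable, repetitive
# context at the start so consecutive requests share a long common
# prefix (the condition Google says makes a request cache-eligible).
# STABLE_CONTEXT is a hypothetical example of repeated context.
STABLE_CONTEXT = "You are a support agent. Product manual: ..."

def build_prompt(user_query: str) -> str:
    # stable prefix first, variable content appended at the end
    return f"{STABLE_CONTEXT}\n\nUser question: {user_query}"

def common_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

p1 = build_prompt("How do I reset my password?")
p2 = build_prompt("What is the refund policy?")
# Both prompts share the entire stable context as a prefix,
# so the second request can match the first in a prefix cache.
assert common_prefix_len(p1, p2) >= len(STABLE_CONTEXT)
```

Putting the variable query first would shrink that shared prefix to nearly nothing, forfeiting the discount.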

For another, Google didn’t offer any third-party verification that the new implicit caching system would deliver the promised automatic savings. So we’ll have to see what early adopters say.



