Google’s AI R&D lab DeepMind says it has developed a new AI system to tackle problems with “machine-gradable” solutions.
In experiments, the system, called AlphaEvolve, could help optimize some of the infrastructure Google uses to train its AI models, DeepMind said. The company says it’s building a user interface for interacting with AlphaEvolve, and plans to launch an early access program for selected academics ahead of a possible broader rollout.
Most AI models hallucinate: owing to their probabilistic architectures, they sometimes confidently make things up. In fact, newer models like OpenAI’s o3 hallucinate more than their predecessors, illustrating how challenging the problem is.
AlphaEvolve introduces a clever mechanism to cut down on hallucinations: automatic evaluation. The system uses models to generate and critique a pool of possible answers to a question, then automatically scores each answer for accuracy.
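The generate-and-score loop described above can be sketched roughly like this. This is a minimal toy illustration, not DeepMind’s actual implementation: the `propose_variants` and `score` functions are hypothetical stand-ins, and the “problem” is just maximizing a simple function.

```python
import random

def propose_variants(pool, n=4):
    # Hypothetical stand-in for an LLM proposing variations
    # of the best candidates found so far.
    return [x + random.uniform(-1, 1) for x in pool for _ in range(n)]

def score(candidate):
    # Hypothetical automatic evaluator: higher is better.
    # Here the toy "problem" is maximizing -(x - 3)^2.
    return -(candidate - 3) ** 2

def evolve(generations=50, pool_size=5):
    pool = [random.uniform(-10, 10) for _ in range(pool_size)]
    for _ in range(generations):
        candidates = pool + propose_variants(pool)
        # The evaluator's scores, not the model's own claims,
        # decide which answers survive to the next round.
        pool = sorted(candidates, key=score, reverse=True)[:pool_size]
    return pool[0]

best = evolve()
```

The key property this sketch shares with the approach the article describes is that every candidate must pass through an objective scoring function, which is what keeps hallucinated answers from surviving.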

AlphaEvolve isn’t the first system to take this tack. Researchers, including a team at DeepMind several years ago, have applied similar techniques in various math domains. But DeepMind claims AlphaEvolve’s use of “state-of-the-art” models — specifically Gemini models — makes it significantly more capable than those earlier systems.
To use AlphaEvolve, users must prompt the system with a problem, optionally including details like instructions, equations, code snippets, and relevant literature. They must also provide a mechanism for automatically assessing the system’s answers in the form of a formula.
Because AlphaEvolve can only solve problems that it can self-evaluate, the system can only work with certain types of problems — specifically those in fields like computer science and system optimization. In another major limitation, AlphaEvolve can only describe solutions as algorithms, making it a poor fit for problems that aren’t numerical.
To benchmark AlphaEvolve, DeepMind had the system attempt a curated set of around 50 math problems spanning branches from geometry to combinatorics. AlphaEvolve managed to “rediscover” the best-known answers to the problems 75% of the time and uncover improved solutions in 20% of cases, claims DeepMind.
DeepMind also evaluated AlphaEvolve on practical problems, like boosting the efficiency of Google’s data centers and speeding up model training runs. According to the lab, AlphaEvolve generated an algorithm that continuously recovers 0.7% of Google’s worldwide compute resources on average. The system also suggested an optimization that reduced the overall time it takes Google to train its Gemini models by 1%.
To be clear, AlphaEvolve isn’t making breakthrough discoveries. In one experiment, the system found an improvement to Google’s TPU AI accelerator chip design that other tools had already flagged.
DeepMind, however, is making the same case that many AI labs do for their systems: that AlphaEvolve can save time while freeing up experts to focus on other, more important work.