[DigitalToday reporter Jinju Hong (홍진주)] Google DeepMind released one-year results for its generative artificial intelligence tool AlphaEvolve.
On May 8 local time, online media outlet Gigazine reported that AlphaEvolve improved performance and cost metrics across many areas, including DNA analysis, power grids, natural-disaster forecasting, quantum computing, Google’s internal infrastructure and logistics.
AlphaEvolve is built on Gemini. It generates candidate code and computational methods, an automated evaluation system scores them for speed, accuracy and cost savings, and the higher-scoring candidates are refined further. Rather than a simple code-generation tool, it focuses on finding better algorithms for problems that can be assessed quantitatively. When it unveiled the system in May last year, Google DeepMind suggested it could discover new algorithms and new solutions to unsolved problems.
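The generate, evaluate and refine loop described above can be sketched in highly simplified form. Everything below is illustrative, not DeepMind's actual implementation: a real system would have an LLM propose code changes and an evaluator measure speed, accuracy and cost, whereas this toy version just perturbs a number.

```python
import random

def propose_variant(parent):
    """Stand-in for an LLM proposing a modified candidate.
    Here we simply perturb a numeric parameter."""
    return parent + random.uniform(-1.0, 1.0)

def evaluate(candidate):
    """Automated scorer: higher is better. A real evaluator would
    run and measure generated code; this toy score peaks at 3.0."""
    return -(candidate - 3.0) ** 2

def evolve(generations=50, population_size=8):
    """Keep the best-scoring candidates each round and mutate them
    to refill the pool (simple elitist evolutionary search)."""
    population = [random.uniform(-10, 10) for _ in range(population_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        elites = scored[: population_size // 2]
        population = elites + [
            propose_variant(random.choice(elites))
            for _ in range(population_size - len(elites))
        ]
    return max(population, key=evaluate)

best = evolve()
print(round(best, 1))  # typically lands near the optimum at 3.0
```

The design choice that matters here is the same one the article describes: candidates are only ever judged by an automated, quantitative score, which is why AlphaEvolve targets problems that can be assessed numerically.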
Among the newly disclosed cases, the improvement to the DNA analysis model DeepConsensus in life sciences stands out. AlphaEvolve reduced variant-detection errors by 30 percent, cutting cases where disease-related variants in genetic data were missed or non-existent variants were incorrectly detected. For researchers, this means genetic data can be interpreted more accurately.
In power grids, it made progress on the "AC optimal power flow" problem, which seeks to allocate electricity efficiently while meeting constraints at power plants and transmission lines. The power-grid AI model's rate of finding feasible solutions jumped from 14 percent to more than 88 percent. More feasible solutions mean less manual fixing, or correction through separate systems, during operations.
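For context, AC optimal power flow is a standard constrained optimization: choose generator set-points that minimize cost while satisfying the nonlinear AC power-balance equations and operating limits. A textbook formulation (not AlphaEvolve's specific setup) looks like:

```latex
\begin{aligned}
\min_{P^g,\,Q^g,\,V,\,\theta}\ & \sum_i c_i\!\left(P^g_i\right) \\
\text{s.t.}\quad
& P^g_i - P^d_i = V_i \sum_j V_j\left(G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij}\right) \\
& Q^g_i - Q^d_i = V_i \sum_j V_j\left(G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij}\right) \\
& P^{g,\min}_i \le P^g_i \le P^{g,\max}_i,\quad
  Q^{g,\min}_i \le Q^g_i \le Q^{g,\max}_i,\quad
  V^{\min}_i \le V_i \le V^{\max}_i
\end{aligned}
```

Here $c_i$ is the generation cost at bus $i$, $P^d, Q^d$ are demands, $G, B$ are network admittance terms, and $\theta_{ij} = \theta_i - \theta_j$. A solution is "feasible" when every one of these constraints holds, which is what the 14 percent to 88 percent figure measures.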
Improvements were also confirmed in natural-disaster forecasting models. In an AI model that predicts risks for 20 types of natural disasters, including wildfires, floods and tornadoes, it revised how Earth-observation data are processed and raised overall accuracy by 5 percent. In quantum computing, it proposed a quantum circuit for running molecular simulations on Google’s quantum processor Willow, and the circuit had 10 times fewer errors than existing optimization methods.
Its use in supporting mathematics research has also expanded. In collaboration with UCLA mathematician Terence Tao, AlphaEvolve supported research into a group of so-called Erdős problems. It was also used to improve record bounds on classic mathematical problems such as the traveling salesman problem and Ramsey numbers.
Its application to Google’s internal infrastructure has also moved into full swing. AlphaEvolve was deployed to optimize next-generation TPU designs, and in improving cache replacement policies it finished in 2 days work that can take months when led by human engineers. In internal processing for the Google Spanner database, it lowered write amplification by 20 percent, and its compiler optimization strategies reduced software storage footprint by about 9 percent.
Commercial applications are also increasing. Klarna doubled training speed for large transformer models, and semiconductor startup Substrate made computational lithography several times faster. FM Logistic cut annual travel distance by more than 15,000 km through delivery-route optimization, and WPP raised AI model accuracy for advertising campaigns by 10 percent. Schrödinger roughly quadrupled the speed of molecular-simulation models used in drug and materials development.
Google DeepMind defines AlphaEvolve as a "Gemini-based coding agent" and is expanding its applications across industries. It also presented AlphaEvolve as an important step toward an "era in which AI discovers and improves algorithms". The core of the announcement is that, just one year after release, the system has accumulated use cases extending beyond a research tool into Google infrastructure and corporate operations.