[DigitalToday reporter Chi-gyu Hwang] Chinese AI startup DeepSeek has published a paper on a new AI technique called Engram.
According to foreign media reports, including The Information, the paper lists DeepSeek founder Liang Wenfeng, DeepSeek researchers and Peking University researchers as co-authors. Engram lets an AI model retrieve simple factual information, such as national capitals or the years of historical events, through conditional lookup from a separate memory store instead of recomputing it each time.
Existing generative AI models have had to reconstruct such information from their internal parameters on every request, which is computationally expensive. Engram reduces those costs and frees up resources for higher-level reasoning. This is particularly advantageous for improving the operating efficiency of large language models (LLMs), and faster responses and better accuracy are expected across multiple chat sessions or continuous command processing.
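The lookup-versus-recompute idea described above can be pictured with a toy sketch. This is purely an illustrative analogy, not DeepSeek's actual Engram design: the fact store, keys and fallback function here are hypothetical stand-ins.

```python
# Toy analogy for conditional lookup: answer from a cheap memory store
# when the fact is present, fall back to expensive model computation
# otherwise. All names here are hypothetical, not from the Engram paper.

FACT_STORE = {
    "capital of France": "Paris",
    "capital of Japan": "Tokyo",
}

def generate_with_model(query: str) -> str:
    # Stand-in for the expensive path: the model reconstructing the
    # answer through its internal parameters (simulated placeholder).
    return f"<model-generated answer for: {query}>"

def answer(query: str) -> str:
    # Conditional lookup: O(1) retrieval if the fact is stored,
    # full generation otherwise.
    if query in FACT_STORE:
        return FACT_STORE[query]
    return generate_with_model(query)
```

In this sketch, repeated questions about stored facts never touch the expensive generation path, which is the efficiency gain the article attributes to Engram.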
DeepSeek suggested it may also apply the technique to its next-generation V4 model. V4 is the successor to V3, which was released in December last year, and is reported to offer significantly improved code generation. It is expected to be unveiled around the Lunar New Year holiday in February.
Along with the paper, DeepSeek released code implementing Engram on GitHub, reinforcing its engagement with the open-source ecosystem.