KAIST said on Jan. 27 that a research team led by Hyunwoo Kim (김현우), a professor in the School of Computing, together with researchers at Korea University, has developed a new technique that effectively transfers learned knowledge between different AI models.
In the AI field, vision-language models (VLMs), which jointly understand images and text, have been advancing rapidly. These models are pre-trained on large-scale image and language data and can adapt relatively quickly to new domains with only small amounts of data.
But having to repeat the adaptation process from scratch whenever a new AI model emerges has been seen as a major inefficiency. Existing adaptation methods also have limitations: they are hard to apply when model architectures differ even slightly, and memory and compute costs rise sharply because multiple models must be run at the same time.
To address this, the research team proposed TransMiter, a transferable adaptation method that reuses learned knowledge regardless of a model's architecture or size. Its core idea is to move the adaptation experience accumulated during training directly to another AI model: without modifying a model's complex internal structure, it passes on know-how learned solely from prediction outputs.
Even when two AI models look different, organizing their answers to the same questions as a reference lets the know-how learned by one model be used immediately by another. There is no need to repeat complex, time-consuming training, and inference speed is barely affected.
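The article does not give TransMiter's actual algorithm, but the idea of transferring adaptation knowledge through prediction outputs alone can be illustrated with a minimal sketch. Here the "knowledge" is the shift that adaptation induced in a source model's output logits on reference inputs; that shift is then applied to a structurally different target model's outputs. All function names and numbers below are hypothetical, for illustration only, and are not from the paper.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def adaptation_delta(base_logits, adapted_logits):
    # Output-level "adaptation knowledge": how adapting the source model
    # shifted its predictions on a reference input. No access to weights
    # or internal structure is needed, only the two models' outputs.
    return [a - b for a, b in zip(adapted_logits, base_logits)]

def transfer(target_logits, delta, strength=1.0):
    # Apply the learned shift to a different model's raw predictions.
    # `strength` is a hypothetical knob controlling how much of the
    # source model's adaptation is carried over.
    return [t + strength * d for t, d in zip(target_logits, delta)]

# Toy 3-class example ("cat", "dog", "bird") with made-up logits.
base    = [2.0, 1.0, 0.5]  # source model before domain adaptation
adapted = [1.0, 3.0, 0.5]  # source model after adapting to a new domain
target  = [1.8, 1.2, 0.4]  # a structurally different, never-adapted model

delta = adaptation_delta(base, adapted)
patched = transfer(target, delta)
print(softmax(patched))  # probability mass shifts toward class 1 ("dog")
```

The sketch captures only the article's high-level claim: because the transfer operates on answers to shared reference questions rather than on internals, it works even when the two models' architectures differ.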
The research team said it demonstrated for the first time that AI adaptation knowledge, long considered nearly impossible to reuse across models of different structures or sizes, can be transplanted precisely regardless of model type. The approach is also expected to enable a "knowledge patch" technology that cuts repeated training costs and updates large language models in real time for the fields where they are needed.
Kim said that extending the research could sharply reduce the post-training costs that have had to be incurred repeatedly each time a new, fast-advancing hyperscale language model emerges, and that it would make it possible to patch models to easily add domain-specific expertise.
The study listed as co-authors Taehoon Song (송태훈), a master's student in KAIST's School of Computing, Sanghyuk Lee (이상혁), a postdoctoral researcher, and Jihwan Park (박지환), a doctoral student at Korea University. The results were selected for an oral presentation at AAAI 2026, an international AI academic conference.
In addition to this paper, Kim's lab published a total of three papers, including one on TabFlash, a technology that advances the understanding of tables within documents, developed jointly with Google Cloud AI.