Twelvelabs brings natural-language search to the UNICEF Korea Committee’s video archive. [Photo: Twelvelabs]

Video-understanding AI company Twelvelabs said Tuesday that it has converted the UNICEF Korea Committee’s 8 terabytes (TB) of unstructured video and photo data into an AI-based digital archive.

The UNICEF Korea Committee had been managing a large volume of material scattered across individual staff PCs and network-attached storage (NAS). To find footage for a specific campaign, staff had to check thousands of folders by hand. Because many files were saved under bare filenames, their contents were hard to identify, and some material went underused.

Twelvelabs addressed the problem with its "video-native" technology, which analyses video along its temporal flow and situational context. Built on multimodal AI that structurally understands the relationships among people, actions, objects and backgrounds in video, the system returns the relevant segments and timestamps when staff type a natural-language query such as "a scene of children drinking water at a drinking water site in Africa." Newly accumulated data is also indexed automatically and managed as searchable assets in real time.

With the deployment, the time needed to search the UNICEF Korea Committee’s materials fell by about 95 percent, Twelvelabs said. With less time spent on repetitive searching, staff can devote more attention to core tasks such as campaign planning and content production.

Twelvelabs CEO Jaesung Lee (이재성) said making good use of video assets could help decision-making on what support is needed and how planned activities work in the field. He said the company plans to expand cooperation so institutions and companies with large-scale unstructured data can create meaningful value from video assets.

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.