Twelve Labs, a global video understanding AI company, said on March 23 that it has deployed its multimodal AI model Marengo on Gettyimagebank, an image and video platform operated by Getty Images Korea.
The integration allows natural-language searches of 270,000 pieces of content held by Gettyimagebank. It targets about 100,000 Getty Images Korea customers.
Video search on traditional stock platforms has relied on keyword matching against metadata and tags. Marengo, by contrast, jointly analyzes the visual, audio and text information within a video. The company explained that when users describe specific scenes, such as "children playing on a beach" or "an office worker walking with an umbrella on a rainy night," the AI returns results that match the context. It is designed to understand a video's overall context and meaning rather than simply recognize objects.
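The idea of context-aware search described above is commonly built on embeddings: videos and text queries are mapped into a shared vector space, and results are ranked by similarity. The sketch below illustrates that pattern only; the file names and vectors are made up, and this is not Twelve Labs' actual API.

```python
# Minimal sketch of embedding-based natural-language video search.
# All names and vectors here are illustrative assumptions, not Marengo's
# real representations, which are learned jointly from visual, audio
# and text signals.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend each clip has already been embedded into a shared text-video space.
clip_index = {
    "beach_kids.mp4":    [0.9, 0.1, 0.0],
    "rainy_commute.mp4": [0.1, 0.8, 0.3],
    "office_desk.mp4":   [0.2, 0.2, 0.9],
}

def search(query_vec, index, top_k=1):
    """Rank clips by similarity to the embedded query and return the best matches."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A query like "children playing on a beach" would be embedded into the same
# space; here we stand in a made-up vector close to beach_kids.mp4.
print(search([0.85, 0.15, 0.05], clip_index))  # → ['beach_kids.mp4']
```

In a real system the query vector would come from the model's text encoder and the index would hold hundreds of thousands of clip embeddings, but the ranking step works the same way.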
Twelve Labs claims Marengo outperforms big-tech models from Google and OpenAI by 43 percent. Marengo is also available on Amazon Bedrock, the first Korean AI model and the first video AI model to be offered there.
Lee Jae-sung (이재성), CEO of Twelve Labs, said, "We will lead the innovative changes AI is creating in the media industry." Yoon Chun-hee (윤춘희), CEO of Getty Images Korea, said she expects natural-language video search to improve customers' search efficiency and create a virtuous cycle in the creator ecosystem.