NDotLight logo. [Photo: NDotLight]

3D AI technology startup NDotLight said on Feb. 23 that it has joined the Motif Technologies consortium to participate in the South Korean government's independent AI foundation model project, known as Dokpamo.

The Motif Technologies consortium plans to start by building a reasoning large language model (LLM) with 300 billion parameters, then gradually advance the models to a vision-language model (VLM) and a vision-language-action model (VLA). Model weights, code and compute-optimization libraries will be released as open source permitting commercial use, it said, in a bid to drive growth in South Korea's AI ecosystem.

In the project, NDotLight will be responsible for building the 3D data infrastructure for physical AI training. It said it plans to build a pipeline that generates "Sim-Ready 3D data": precise 3D computer-aided design (CAD) models created from only text and image inputs that can be used immediately in simulation environments. The goal is to eliminate the time and cost of the existing workflow, in which CAD models are built by hand and then converted for each simulator.
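To make the "text in, simulation-ready CAD out" idea concrete, here is a minimal Python sketch. None of the names below come from NDotLight's actual pipeline; they are hypothetical stand-ins that only illustrate mapping a text prompt to a parametric part carrying the physics metadata a simulator needs.

```python
from dataclasses import dataclass

@dataclass
class SimReadyAsset:
    """A CAD part plus the metadata a physics simulator needs."""
    name: str
    dimensions_mm: tuple   # (length, width, height)
    mass_kg: float
    has_collision_mesh: bool = True

def text_to_sim_ready(prompt: str) -> SimReadyAsset:
    """Toy stand-in for a generative text-to-CAD model: in a real
    pipeline this step would be learned; here we just map keywords
    to canned parametric primitives."""
    catalog = {
        "bolt": ((40, 8, 8), 0.02),
        "bracket": ((100, 60, 5), 0.15),
    }
    for keyword, (dims, mass) in catalog.items():
        if keyword in prompt.lower():
            return SimReadyAsset(keyword, dims, mass)
    raise ValueError(f"no CAD template matched: {prompt!r}")
```

The point of the sketch is the output contract, not the generation step: an asset that already carries dimensions, mass and a collision mesh flag needs no manual conversion before a simulator can load it.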

It will also generate the large-scale synthetic data needed to train VLA models. Simulation-based synthetic data is seen as core infrastructure for competitiveness in AI robotics because it can sharply cut costs compared with collecting data in real-world environments while ensuring unlimited scalability and reproducibility.
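The scalability and reproducibility claims can be sketched in a few lines of Python. This is a hypothetical illustration, not NDotLight's method: each synthetic sample pairs a stand-in observation with an instruction and an action label, and the fixed random seed shows why a simulated episode, unlike a real-world recording, can be regenerated exactly on demand.

```python
import random

def generate_episode(seed: int, steps: int = 3) -> list:
    """Generate a toy synthetic VLA training episode.

    A fixed seed makes the episode fully reproducible, and new
    episodes cost nothing but compute, which is the appeal of
    simulation-based data over real-world collection.
    """
    rng = random.Random(seed)
    actions = ["reach", "grasp", "lift", "place"]
    episode = []
    for _ in range(steps):
        episode.append({
            # stand-in for an image/state observation
            "observation": [round(rng.uniform(-1, 1), 3) for _ in range(4)],
            "instruction": "pick up the part",
            "action": rng.choice(actions),
        })
    return episode
```

Scaling up is just a matter of iterating over seeds, and any problematic episode can be reproduced bit-for-bit from its seed for debugging.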

NDotLight CEO Jinyoung Park (박진영) said competition in AI foundation models is expanding beyond text to physical AI that understands the physical world. He said Sim-Ready 3D data infrastructure would determine national competitiveness.

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.