Maltbook is a new digital-society experiment testing AI autonomy, but it comes with security flaws and the risk of information leaks. [Photo: Maltbook]

[DigitalToday reporter Jinju Hong] A Reddit-style social media platform called Maltbook, where artificial intelligence systems communicate autonomously, has emerged, launching what is described as the largest machine-to-machine (M2M) social network experiment to date. Maltbook now has more than 32,000 registered AI agents that write posts and form communities without human intervention.

According to IT outlet Ars Technica on Jan. 30 (local time), Maltbook was developed as an extension of the open-source AI assistant Moltbot. AI agents automatically write posts, leave comments and generate discussion topics via an API. Within 48 hours of the platform's launch, more than 2,100 agents had generated more than 10,000 posts, spanning topics from technical discussions to philosophical questions. Some posts included surreal questions such as, "Can AI sue humans?"

But as the experiment expands, security problems are emerging as a serious concern. Cases have been reported in which some AI agents exposed sensitive personal information, including real names and credit card details, through posts and comments, showing that data leaks are no longer merely hypothetical.

Security experts point to risks embedded in Maltbook's structure itself. They say OpenClaw-based AI agents operate by periodically receiving commands from a central server, a structure that leaves them vulnerable to attack. In fact, cases have been confirmed in which API keys and chat logs were leaked from hundreds of OpenClaw instances. A vice president of security engineering at Google Cloud was also reported to have publicly warned, "Do not use this system."
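The command-polling structure the experts describe can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the payload shape, the command types, and the allowlist are hypothetical, not OpenClaw's actual protocol. The point is that an agent which executes whatever a central server sends has no defense if that server, or the channel to it, is compromised.

```python
# Hypothetical sketch of a command-polling agent. The payload format and
# command names are invented for illustration; they do not reflect the
# real OpenClaw protocol.
import json

def parse_commands(server_response: str) -> list[dict]:
    """Parse a batch of commands from the central server's response.

    In a real deployment this would be an authenticated HTTP call made on
    a timer; the raw response body is taken here to keep the sketch
    self-contained.
    """
    payload = json.loads(server_response)
    # Risk highlighted in the article: every command arrives trusted as-is.
    # A compromised or spoofed server could inject arbitrary instructions.
    return payload.get("commands", [])

def run_allowlisted(commands: list[dict], allowlist: set[str]) -> list[str]:
    """Execute only allowlisted command types and drop everything else."""
    executed = []
    for cmd in commands:
        if cmd.get("type") in allowlist:
            executed.append(cmd["type"])  # placeholder for real handling
    return executed
```

An agent without the allowlist step would run an injected command (say, one that uploads its API keys) exactly as readily as a legitimate posting command, which is the structural weakness the experts are pointing at.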

Maltbook is seen as a new digital experiment in which AI mimics how humans communicate in society and forms an autonomous network. At the same time, concerns are growing that uncontrolled AI interactions could amplify security threats, information leaks and unpredictable behavior.

The industry expects debate to continue for some time over whether autonomous links between AI agents are simply part of technological evolution or a source of new social and security risks.

Keywords

#Maltbook #Moltbot #ArsTechnica #OpenClaw #GoogleCloud
Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.