[Digital Today reporter Yoonseo Lee] Anthropic co-founder Jack Clark projected that an artificial intelligence (AI) system could reach a level where it can rebuild itself by the end of 2028.
Cryptopolitan, a blockchain media outlet, reported on May 5 that Clark put the odds at 60 percent that AI will enter a phase of improving its own performance by late 2028. He said he could not be sure society is ready for it.
Clark said he reached the assessment after reviewing hundreds of public datasets related to AI development in recent weeks. In a post on X, he said, "I have come to see a 60 percent chance that AI's self-improvement capabilities will fully kick in by the end of 2028," adding, "AI systems may soon be able to build themselves."
The key point is that human involvement in the AI research and development process is shrinking. Clark said AI systems are moving toward operating with less human supervision than before, calling the shift "a very important thing." This would not be just a performance improvement; it could mark a stage where AI shortens its own development cycle.
Anthropic researcher Ajeya Cotra also pointed to progress in automating AI research and development. In January, Cotra had expected that by year-end AI would be able to handle software engineering tasks roughly 24 hours in length, but she has since revised that outlook to more than 100 hours. She said, "For the first time, I am not seeing clear counterevidence that would let me be certain that AI R&D automation will not happen within this year."
The remarks align with the view that AI is moving beyond writing code or assisting with individual tasks toward automating the R&D process itself. In particular, Clark's notion of an 'AI that builds itself' describes a structure in which AI takes over the improvement process humans once led, with one AI driving the improvement of another. It is less a confirmed change than a warning that the possibility is growing rapidly.
Markets and the industry are watching this as well. If the pace of AI development outstrips human review and approval processes, it could reshape not only the performance race but also safety checks and methods of control.
Against this backdrop, Clark's remarks are read as a signal that even within the AI industry, the expected timing of development automation is drawing nearer. If AI R&D automation becomes reality, the impact will not be limited to performance competition. As AI's role in designing and evaluating new models grows, there are calls for the standards used to verify errors and control risks to become more sophisticated as well.
In particular, as AI gets closer to repeatedly improving itself, the gap between development speed and the speed of regulation and verification could widen further. That is why the industry needs to discuss not only technological progress itself but also safety evaluations, responsibility and the scope of human involvement.
Still, since this is a personal projection based on a review of public data, how far AI's self-improvement capabilities will actually advance by 2028 will depend on the pace of technological progress and on whether human oversight systems are maintained.