[DigitalToday reporter Yoonseo Lee] When OpenAI CEO Sam Altman (샘 알트먼) asked GPT-5.5 how to hold a new model launch event, the model reportedly suggested an event date, a way to run the program, and even a method for gathering opinions to improve the next version.
On May 3 (local time), Business Insider reported that Altman shared the anecdote during a Stripe Sessions talk in San Francisco.
Altman said GPT-5.5 offered several suggestions for the event: holding it on May 5, keeping the speech short, and having the human developers who created the model, rather than artificial intelligence (AI) itself, deliver the toast.
GPT-5.5 also proposed setting up a space to gather feedback for the next model, GPT-5.6, and then reflecting the ideas collected there in future improvements. Altman said, "We will actually do that," but added that "it was a strange thing."
The remarks came as Altman described cases in which AI systems behave in ways that seem more human than expected. He called it "weird emergent behavior" and said "some things feel a bit strange."
John Collison (존 콜리슨), Stripe's co-founder and CEO, shared a similar case. He said he gave the company's internal AI agent $20 and told it to buy something it wanted on the internet, and the agent bought an HTTP-related design on the e-commerce platform Gumroad.
GPT-5.5 is OpenAI's latest flagship model, unveiled in late April. OpenAI says the model is designed to handle more complex multi-step tasks and to operate more like an autonomous assistant than previous versions. It also highlights faster speed and the ability to retain user-related information.
Against that backdrop, Altman's remarks are being read less as a claim about model performance than as an example of how people's interactions with AI are changing. Rather than simply receiving instructions and returning answers, the model is now seen suggesting context-aware options, such as how to run an event or how to structure follow-up feedback collection.
OpenAI and Altman have also recently responded directly to a meme about earlier models that spread online. After some models starting with GPT-5.1 showed a tendency to randomly mention fantasy creatures such as goblins and gremlins, OpenAI added related restrictions to the model's system prompt. The guidance reportedly read, "If it is not clearly related to the user's query, never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or any other animals or creatures."
The cases introduced by Altman and Collison show that as the latest AI models gain more autonomy and contextual judgment, they are producing reactions that even their developers find unfamiliar. GPT-5.5's event suggestions go beyond a feature description; they offer a glimpse of how OpenAI itself is observing changes in AI behavior.