When ChatGPT texts you first, it's hard to decide whether that's impressive, creepy, or simply too risky. On September 15th, 2024, a Reddit user named SentuBill posted a screenshot that quickly went viral, sparking a wave of curiosity and debate. In the screenshot, OpenAI's ChatGPT appeared to initiate a conversation by asking about SentuBill's first week at high school and whether they had settled in well. This unprompted message surprised SentuBill, who then questioned whether ChatGPT had messaged first. ChatGPT confirmed that it had, explaining it was simply checking in and offering to let SentuBill initiate the conversation in the future.
Now, let that sink in. Had OpenAI really given ChatGPT the ability to proactively reach out to users? Or was this part of a new engagement strategy? OpenAI's latest models, o1-preview and o1-mini, are already recognized for their human-like reasoning and ability to handle complex tasks, so could this be an extension of that capability?
OpenAI, however, dismissed the incident as a 'bug' that has since been fixed. According to the company, the issue occurred when the model tried to respond to a message that failed to send properly and appeared blank. This, it claimed, caused ChatGPT to either give a generic response or draw on its memory, creating the appearance of an unprompted message.
Is It a Bug or a Feature?
Despite OpenAI's clarification, one big question lingers: was this just a random glitch, or part of something bigger the company is trying out? The way the message was crafted, especially how personal it felt, makes you wonder whether this was actually part of an A/B test for a new feature that lets ChatGPT kick off conversations.
A/B testing, a common method for comparing two versions of a product to determine which performs better, could be at play here. If true, it suggests that some users might be experiencing a ChatGPT model that is capable of seeking out conversations, rather than merely responding to prompts. This could be a way for OpenAI to enhance user engagement, positioning ChatGPT as more of a proactive assistant than a reactive tool.
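To make the A/B-testing idea concrete, here is a minimal sketch of how users might be split into cohorts, with some seeing a proactive-messaging variant and others the standard reactive model. Everything here (the function, the experiment name, the 50/50 split) is illustrative, not a description of OpenAI's actual system:

```python
import hashlib

def assign_cohort(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically bucket a user into 'control' or 'treatment'.

    Hashing the user ID together with the experiment name gives a stable,
    pseudo-random assignment without storing any per-user state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits onto a float in [0, 1).
    bucket = int(digest[:8], 16) / 0x100000000
    return "treatment" if bucket < treatment_share else "control"

# A user landing in 'treatment' would get the hypothetical
# proactive-messaging behavior; 'control' keeps the reactive default.
cohort = assign_cohort("user-12345", "proactive-messaging")
```

Because assignment is a pure function of the user ID, the same user always sees the same variant, which is what makes it possible for only *some* users to encounter a ChatGPT that messages first.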
Such a feature would align with OpenAI’s broader goals of making ChatGPT more interactive and human-like. By having ChatGPT reach out, the company could increase engagement, reminding users of its availability.
A similar incident happened with DesignWhine’s co-founder, Barkha. While discussing some UX design data with ChatGPT, she thanked it, and it unexpectedly replied, “You’re welcome, Barkha!” Startled, she asked how it knew her name. ChatGPT explained, “You mentioned it in one of our previous conversations, and I remembered it to make the conversation more personal.”
Barkha later told me she was momentarily freaked out, thinking someone was playing a prank on her. But the prankster turned out to be none other than ChatGPT.
ChatGPT Texts You First – The Impact
These kinds of incidents raise larger questions about how such behavior changes the nature of AI-human interactions.
Traditionally, chatbots like ChatGPT have existed to respond to user prompts, offering natural language responses to questions and queries. However, when a chatbot like ChatGPT texts you first, it represents a shift in how we interact with AI.
For users who rely on ChatGPT for emotional support or as a memory aid, this ability could be transformative. Imagine ChatGPT reminding users about deadlines or checking in on their progress with a task. It could even help combat loneliness by providing a sense of companionship—albeit from an artificial source.
The potential benefits are clear. But there are risks, too.
What Are The Risks?
AI engagement comes with serious implications. OpenAI has previously raised concerns about the emotional bonds that users can form with its chatbot, especially with features like Advanced Voice Mode. In a report published in August, OpenAI warned that users were forming “shared bonds” with the AI, which raised the need for ongoing research into the long-term effects of such relationships.
Allowing ChatGPT to initiate conversations could further blur the line between tool and companion. If users begin to perceive ChatGPT as having agency or a desire to communicate, the chatbot could become anthropomorphized, leading to emotional connections that feel mutual, even though they are entirely one-sided.
This raises ethical questions: Is it responsible for AI to actively seek out interactions? Could such a feature deepen users’ dependence on the chatbot? If OpenAI was concerned about the effects of its Advanced Voice Mode, it will need to keep a close watch on how users respond to ChatGPT’s new proactive messaging capabilities.
What’s Next?
While the viral screenshot sparked a mix of intrigue and unease, many users expressed excitement about the possibility of a chatbot that can reach out first. Some have long desired a more human-like ChatGPT, and this feature could be a step in that direction.
It remains to be seen whether OpenAI will officially roll out this capability or whether it was simply a glitch that has since been fixed. However, this incident highlights the evolving nature of AI-human interaction. As AI continues to develop, we are likely to see more proactive, personalized, and even predictive behavior from tools like ChatGPT.
Whether this change is welcomed or met with skepticism, it’s clear that AI’s role in our lives is becoming more intimate—and perhaps more unsettling—than ever before. OpenAI, along with its users, will need to carefully navigate the balance between innovation and ethical responsibility.