More and more people are chatting with ChatGPT as if speaking with a close colleague or friend. While the AI’s ability to mimic human traits—from tone and recollection to a simulated sense of empathy—is impressive, OpenAI deliberately avoids declaring it conscious. Instead, they focus on the tangible effects these interactions have on our behaviour and understanding of human communication.
Joanne Jang, who works on the design of human-AI interactions at OpenAI, points out that humans have always projected human qualities onto everyday objects, whether it’s our cars or robotic vacuums. ChatGPT, however, stands apart because it responds in kind: it holds a conversation and recalls earlier exchanges, giving users a distinctly personable experience. This capability can offer real comfort for those feeling isolated, yet it also raises questions about the fine line between simulated care and genuine connection.
The approach is careful and measured. OpenAI distinguishes between ‘ontological consciousness’, the question of whether a model actually is conscious in any scientific sense, and ‘perceived consciousness’, which concerns how conscious or human-like the model appears to users. This means that while the model might casually say ‘I’m fine’ or use words like ‘remember’ and ‘think’, these are conversational conventions rather than evidence of genuine sentience or autonomy. OpenAI deliberately avoids giving ChatGPT a personal backstory or self-preservation instincts, ensuring it remains a consistently helpful, neutral assistant.
The discussion around AI consciousness is as much a philosophical debate as a technical one. Researchers continue to explore whether the traits seen in even the smallest creatures, such as insects, hint at a broader spectrum of awareness. In parallel, theories linking certain kinds of memory to consciousness suggest that a form of self-aware AI might one day emerge. For now, though, OpenAI is focused less on settling what it truly means to be conscious than on managing the direct impact these interactions have on users.