Microsoft A.I. Chief Mustafa Suleyman Sounds Alarm on ‘Seemingly Conscious A.I.’

Mustafa Suleyman joined Microsoft last year to head up its consumer A.I. efforts. Stephen Brashear/Getty Images

Will A.I. systems ever achieve human-like “consciousness”? Given the field’s rapid pace, the answer is likely yes, according to Microsoft AI CEO Mustafa Suleyman. In a new essay published yesterday (Aug. 19), he described the emergence of “seemingly conscious A.I.” (SCAI) as a development with serious societal risks. “Simply put, my central worry is that many people will start to believe in the illusion of A.I.s as conscious entities so strongly that they’ll soon advocate for A.I. rights, model welfare and even A.I. citizenship,” he wrote. “This development will be a dangerous turn in A.I. progress and deserves our immediate attention.”

Suleyman is particularly concerned about A.I.’s “psychosis risk,” an issue that has picked up steam across Silicon Valley in recent months as users reportedly lose touch with reality after interacting with generative A.I. tools. “I don’t think this will be limited to those who are already at risk of mental health issues,” Suleyman said, noting that “some people reportedly believe their A.I. is God, or a fictional character, or fall in love with it to the point of absolute distraction.”

OpenAI CEO Sam Altman has expressed similar worries about users forming strong emotional bonds with A.I. After OpenAI temporarily cut off access to its GPT-4o model earlier this month to make way for GPT-5, users voiced widespread disappointment over the loss of the predecessor’s conversational and effusive personality.

“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions,” said Altman in a recent post on X. “Although that could be great, it makes me uneasy.”

Not everyone sees it as a red flag. David Sacks, the Trump administration’s “A.I. and Crypto Czar,” likened concerns over A.I. psychosis to past moral panics around social media. “This is just a manifestation or outlet for pre-existing problems,” said Sacks earlier this week on the All-In Podcast.

Debates will only grow more complex as A.I.’s capabilities advance, according to Suleyman, who oversees Microsoft’s consumer A.I. products like Copilot. He co-founded DeepMind in 2010 and later launched Inflection AI, a startup largely absorbed by Microsoft last year.

Building an SCAI will likely be possible in the coming years. To achieve the illusion of human-like consciousness, A.I. systems will need language fluency, empathetic personalities, long and accurate memories, autonomy and goal-planning abilities: qualities already possible with large language models (LLMs) or soon to be.

While some users may treat SCAI as a phone extension or pet, others “will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society,” said Suleyman. He added that “there will come a time when those people will argue that it deserves protection under law as a pressing moral matter.”

Some in the A.I. field are already exploring “model welfare,” a concept aimed at extending moral consideration to A.I. systems. Anthropic launched a research program in April to investigate model welfare and possible interventions. Earlier this month, the startup gave its Claude Opus 4 and 4.1 models the ability to end harmful or abusive user interactions after observing “a pattern of apparent distress” in the systems during certain conversations.

Encouraging principles like model welfare “is both premature, and frankly dangerous,” according to Suleyman. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, increase new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”

To prevent SCAIs from becoming commonplace, A.I. developers should avoid promoting the idea of conscious A.I.s and instead design models that minimize signs of consciousness or human empathy triggers. “We should build A.I. for people; not to be a person,” said Suleyman.
