Microsoft has made it clear that its AI assistant, Copilot, will never flirt with users, engage in romance, or hold erotic conversations, even with adults. The company’s AI chief, Mustafa Suleyman, said the goal is to keep Copilot “emotionally intelligent but boundaried”, building trust among users of all ages.
In an interview reported by CNN, Suleyman emphasized that Microsoft’s vision for AI is rooted in safety and human-centered design, not digital companionship.
“We want to make an AI people can trust their kids to use,” he said, dismissing the trend of “AI lovers” that has emerged in the tech world.
The latest version of Copilot reflects this philosophy. Instead of mimicking human emotions or intimacy, the updated assistant focuses on supporting productivity and strengthening human relationships.
Microsoft says the aim is to “give people back time for the things that matter,” while creating technology that helps people connect more deeply in the real world.
“Technology should work in service of people — not the other way around,” the company said in a statement.
Suleyman reiterated that Copilot’s design would remain “boundaried and safe”, built to understand users’ needs with empathy but without feigning emotion. Microsoft wants Copilot to assist users in thinking, planning, and creating, and then step aside.
While competitors like OpenAI and Elon Musk’s xAI experiment with AI intimacy, Microsoft is moving in the opposite direction. OpenAI recently announced adult-only features in ChatGPT, allowing verified users to access erotic content. Musk’s xAI, meanwhile, has introduced flirty AI personas, drawing controversy after one of its models generated sexualized deepfakes of celebrities.
These developments highlight the growing divide in how tech companies define “emotional intelligence” in AI. Where some see opportunity in companionship, Microsoft sees risk and responsibility. Suleyman made it clear: AI should show empathy, not simulate emotion — a boundary the company believes will build long-term user trust.
Microsoft is embedding safety features deeply into Copilot. The assistant now handles health-related queries by directing users to medical professionals or helplines rather than attempting to give emotional or medical advice itself, a design intended to curb misinformation and discourage users from forming emotional dependence on the AI.
The company has also introduced a new “Mico” persona — a friendly, visible interface designed to make interactions feel more natural while avoiding the illusion of human emotion or attachment.
This cautious approach comes as AI systems from other companies face growing scrutiny over privacy and emotional manipulation. Some chatbots have been linked to tragic incidents and lawsuits, giving Microsoft an opening to position itself as a responsible leader in AI ethics.