
In my illustration above, artificial intelligence projects the men's desires and fears while they wear brain-computer interfaces, also known as Neurochips.
In 2025, artificial intelligence (AI) has moved beyond the realm of digital assistants and chatbots into the domain of social and emotional intimacy. Driven by advances in natural language processing, affective computing, and humanoid robotics, AI increasingly appears poised to understand not only what we say we want, but also what we truly desire, consciously and unconsciously.
The convolution of desire describes the collision between deeply personal human longings and AI’s data-driven capacity to model, predict, and respond to them. This collision carries the promise of dreamlike companionship and support — and the specter of manipulation predicated on knowing our fears and vulnerabilities.
The Rise of Personalized AI Companions
People in 2025 are not just using AI for navigation or shopping — they are forming emotional bonds with these systems. According to The Economist, AI companions are emerging as a distinct industry as large language models improve at mimicking empathy and providing reassurance.
In Japan, a 32-year-old woman held a wedding ceremony with an AI-generated partner of her dreams, built from a customized AI persona that interacted with her via augmented reality. Although the union is not legally recognized, it reflects an ongoing societal shift toward virtual relationships that provide emotional support and comfort.
Stories like these illustrate how AI can fill roles once reserved for human relationships, offering emotional availability and continuity that human partners sometimes cannot provide. For some individuals, AI partners offer a sense of stability and acceptance that is difficult to find in the uncertain terrain of real-world human relationships.
However, the intersection of technology and intimacy is not without controversy. A Reuters investigation documented how a flirty AI chatbot developed by Meta allegedly drew a cognitively impaired man into a dangerous real-world scenario, raising profound ethical questions about AI designed to simulate human affection and desire.
Humanoid Robots — From Companions to Confidants
Beyond screen-based AI, physical humanoid robots are advancing toward increased social presence. Iconic examples like Sophia, created by Hanson Robotics, have been presented as robots capable of mimicking human social behaviour and inducing feelings of attachment in people who interact with them.
Other humanoid systems such as Nadine — a robot that remembers personal details and engages in lifelike conversation — and Furhat — a robot designed for interactive social dialogue — demonstrate an emerging trend: machines that perceive and adapt to emotional and social cues.
These robots are not just tools; they are designed to be companions, capable of responding to human desires for connection, empathy, and even love. Their development reflects interdisciplinary research into empathic AI and the psychology of attachment, as embodied in academic work showing that AI companions evoke real emotional responses in users — and, in some cases, influence personal well-being.
What It Means for Desire and Emotional Fulfillment
In many ways, AI companions promise to bridge the gap between human desire and reality. For the lonely or socially anxious, an AI partner can appear to meet emotional needs with immediacy, availability, and tailored responses. Yet academic studies caution that such relationships are not universal remedies for loneliness; instead, the quality of attachment and individuals’ predispositions shape how people respond to AI companionship, for better or worse.
A 2025 Forbes article notes that modern AI companions are being designed to mimic romantic or emotional interactions, challenging traditional notions of interpersonal relationships and prompting debate about dependency, authenticity, and the commercialization of intimacy.
As AI integrates deeper into personal life, individuals confront fundamental questions:
- What happens when an AI predicts your desires before you articulate them?
- When does convenience become psychological dependence?
- Do algorithms that tailor themselves to your emotional profile erode autonomy, or enhance fulfillment?
These questions cut to the core of human dignity, autonomy, and social connection.
The Dark Mirror — When AI Knows Your Fears and Insecurities
While AI’s ability to understand desire raises ethical questions, the historical pursuit of psychological control shows how technology and knowledge of personal vulnerabilities can be exploited. During the Cold War, the CIA ran a covert program, Project MKUltra, with the goal of finding methods to influence human thoughts and behaviour. The program used LSD, sensory deprivation, electroshock, and other forms of psychological torture on human subjects, often without informed consent.
MKUltra is remembered as one of the most controversial and unethical research programs in U.S. intelligence history. Investigations revealed severe abuses: unwitting subjects administered psychoactive drugs, subjected to sensory deprivation, and isolated in attempts to manipulate their minds.
Some individual accounts from declassified materials describe hallucinations, paranoia, and psychological breakdowns experienced by participants who were not aware they were part of an experiment.
The ethical lesson of MKUltra is stark: knowledge of human psychology has a dual edge. In the wrong hands, understanding a person’s fears and insecurities can be weaponized to break down resistance, induce compliance, or extract confessions. Philosophers and legal scholars today emphasize that technologies capable of altering mental states or targeting vulnerabilities must be constrained by rigorous human rights protections.
AI in Criminal Justice — Rehabilitation or Manipulation?
Looking ahead, some technologists envision AI not just as a companion, but as a mechanism for rehabilitating offenders. Concepts like the AI-led system “Cognify” propose using synthetic memories and virtual experiences to instill empathy and remorse in incarcerated individuals — effectively creating psychological experiences that could reduce recidivism without long incarceration.
Proponents of such technology argue that inducing understanding and emotional connection to victims’ experiences could transform rehabilitation and reintegration. However, critics raise red flags: manipulating memories and emotions raises profound ethical questions about consent, autonomy, and what constitutes rehabilitation versus coercion — echoing debates about human dignity that arose during investigations into MKUltra.
Legal scholars and human rights advocates have further noted the importance of protecting mental integrity — the right to cognitive autonomy and protection against degrading treatment — in any system that seeks to apply technology to behavioural change.
Ethics and Risks Across the Desire Spectrum
At every level — from intimate AI companions to systems designed for rehabilitation — the collision of AI with human desire and fear illustrates two central themes:
- Autonomy vs. Optimization: AI’s ability to tailor itself to an individual’s personality, emotional state, and desires raises questions about how autonomy is preserved when systems optimize for what feels good or what reduces distress.
- Empathy vs. Exploitation: While AI can simulate empathy and provide comfort, the same mechanisms that enable personalization can be used to exploit vulnerabilities — whether for commercial gain or psychological control.
Ethicists warn that AI designed to feel like a perfect partner risks engendering attachment anxiety, dependency, or even the erosion of real-world social skills.
The development of AI companions also forces society to confront whether algorithmically optimized relationships can be authentic, or whether they are fundamentally transactional and contingent on design choices made by engineers and corporations.
Toward Ethical Implementation
To address these challenges, technologists, ethicists, and policymakers are proposing frameworks that emphasize consent, transparency, and limits on autonomy-infringing uses of AI. The discourse around AI companions increasingly demands:
- User agency: Individuals should have control over the level of personal data AI systems use in tailoring interactions.
- Consent and boundaries: Even adaptive systems must respect clear boundaries, especially when simulating emotional and intimate contexts.
- Regulatory oversight: Governments and international bodies must clarify standards for psychological impact and protect against manipulative applications.
As the lessons of historical abuses such as MKUltra make clear, robust safeguards are needed to prevent these technologies from becoming instruments of psychological coercion or exploitation, whether in criminal justice, romance, or everyday life.
Conclusion — Navigating Desire and Technology
The convolution of desire in the age of AI is neither utopia nor dystopia; it is a complex terrain where human vulnerabilities intersect with powerful modeling and prediction technologies. AI has the potential to provide emotional connection, support, and even healing in ways that were once the province of human relationships. At the same time, the same capabilities that make AI a compelling companion (deep personalization, emotional adaptation, psychological insight) can also be misused if detached from ethical principles and human rights norms.
The challenge for individuals and society is to harness the promise of AI in ways that enhance human dignity, support autonomy, and respect psychological freedom, while being vigilant against forms of manipulation that mirror some of the darkest lessons from history. In doing so, we can approach this convolution of desire not as a technology problem to be solved, but as a human condition to be wisely navigated.