
In my illustration above, I feature a scientist (The Bitcoin Man) showing a cyborg client the latest facial expressions of robots.
Robots are learning to smile at us.
From eerily humanlike androids such as Sophia to cartoon-cute home companions like Cozmo and Kuri, designers are betting that expressive faces will make machines easier to live with, work with, and even care about. But every extra eyebrow twitch or lip curl also pulls robots deeper into the psychological minefield of how humans read faces — and into the notorious “uncanny valley.”
This article explores why roboticists are giving machines expressive faces, what we know about how people respond, and where this might be taking human–robot relationships.
Why give robots faces at all?
For decades, many industrial robots got by just fine with no faces — just arms, grippers and safety cages. As robots moved into homes, shops and hospitals, however, designers realized that purely functional machines can feel opaque and intimidating. Faces are a shortcut to social understanding.
Psychologists have long shown that facial expressions are central to how we communicate emotion and intent. Wired’s overview of the “science of smiles” notes that expressions are powerful social signals, and researchers even use robots as controlled tools to study how people interpret emotional cues. If a robot can “look” surprised after an error or “smile” when greeting you, it can instantly telegraph what it’s doing and how you should respond.
This is the logic behind SoftBank’s humanoid Pepper, marketed as an “emotional robot” that recognizes voices and reads facial expressions to adjust its responses. Pepper’s creators explicitly pitch it as a machine that understands human feeling, not just speech and commands.
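To make that perceive-and-adjust loop concrete, here is a minimal Python sketch of the general pattern. It is not SoftBank's actual SDK or Pepper's real software; `detect_emotion`, the response table, and `greet` are hypothetical stand-ins for the vision and dialogue components such a robot would need.

```python
# A minimal sketch (not SoftBank's API) of an "emotional robot" greeter:
# classify the visitor's expression, then pick a response to match.

RESPONSES = {
    "happy":   "You look cheerful! What can I help you find today?",
    "sad":     "You seem a little down. Can I point you to someone who can help?",
    "neutral": "Hello! Welcome in.",
}

def detect_emotion(camera_frame) -> str:
    # Placeholder: a real robot would run a facial-expression classifier here
    # and return a coarse label such as "happy", "sad", or "neutral".
    return "neutral"

def greet(camera_frame) -> str:
    emotion = detect_emotion(camera_frame)
    # Fall back to the neutral greeting when the label is unrecognized.
    return RESPONSES.get(emotion, RESPONSES["neutral"])

if __name__ == "__main__":
    print(greet(camera_frame=None))  # -> "Hello! Welcome in."
```

The interesting design work is not in this loop but in how gracefully the robot behaves when the classifier is wrong, which is exactly where expectations set by an expressive face can be betrayed.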
At the same time, roboticists have learned the hard way that making robots too humanlike can backfire.
The shadow of the uncanny valley
The “uncanny valley”, coined by Japanese roboticist Masahiro Mori in 1970, describes how our comfort with humanlike machines rises as they get more realistic — until a point where they become almost, but not quite, human, and suddenly feel creepy instead of cute. Lifelike faces with just-off eyes or stiff expressions can provoke alarm rather than empathy.
Media coverage has turned the uncanny valley into a cultural touchstone. The Atlantic has repeatedly used it as a lens to discuss robots and digital characters, noting that androids that imperfectly mimic human facial expressions can seem “haunting,” and that similar discomfort appears in near-photorealistic animation and even hyper-personalized digital interfaces that feel “too close and not close enough” to real people.
Journalists visiting laboratories that build ultra-realistic androids report the same tension. A Verge feature on Hiroshi Ishiguro’s Geminoid robots describes a machine that closely mirrors its creator’s expressions, capable of displaying a surprisingly complex range of emotions — but also pushing straight into uncanny territory because of how nearly human the face appears.
Designers have responded with two broad strategies:
- Leaning into realism — building machines like Sophia or Geminoid that deliberately flirt with human likeness.
- Stylizing robots — creating clearly artificial but readable faces, like cartoon characters.
Both approaches rely heavily on facial expression, but in very different ways.
Realistic faces: Sophia and the androids
Hanson Robotics’ Sophia is perhaps the most widely covered expressive android. With a humanlike face made of a soft “frubber” skin over a mechanical skull, she can produce dozens of facial expressions driven by servomotors and cables beneath the surface. A Wired photo essay describes how Sophia’s design team aimed to make her “the world’s most expressive robot,” complete with tracked eye contact, smiles, and frowns.
Forbes has chronicled Sophia’s rise from lab prototype to media celebrity and even “robot citizen,” emphasizing her ability to imitate human gestures and expressions as a core part of her appeal. Cameras in her eyes and torso track faces and visual cues, while software coordinates facial movements with speech to sustain the illusion of a conversational partner.
But as journalists have also pointed out, this kind of hyper-realistic performance raises ethical and perceptual questions. When an android’s facial expressions are convincingly human but its understanding is shallow, are viewers being misled about what the machine can actually feel or think? Coverage of Sophia often highlights the gap between the robot’s expressive face and its limited autonomy, illustrating the uncanny valley in social rather than purely visual terms.
Similar tensions show up in other research platforms. The iCub humanoid, for example, is designed to study child cognition and can pull cute, toddler-like faces — but videos of it crawling or staring can easily read as “creepy” rather than adorable, as Verge writers have noted.
Roboy, a child-sized robot built with artificial tendons for natural movement, has also gone through facial redesigns to make its expression less frightening, intentionally drifting away from full realism to a more cartoonish look.
In each case, small changes in eyebrows, eye movement, or mouth shaping can tip a robot from engaging to unsettling.
Cute and cartoony: expressive faces that don’t pretend to be human
Many consumer-focused robots take the opposite route: they aim to be obviously nonhuman, but deeply expressive.
Anki’s Cozmo is a palm-sized toy robot whose “face” is a stylized digital display. Yet its designers worked with a former Pixar animator to give it dozens of nuanced expressions and gestures, so that it feels like an animated movie character come to life.
Cozmo squints, glares, droops and perks up, and these expressions are tightly synchronized with head tilts, arm movements and electronic vocalizations. The Verge and Wired both emphasized how much these facial performances contribute to the sense that Cozmo has a personality and mood, even though nobody would mistake it for a person.
Kuri, a home robot designed as a rolling cone with big, blinking eyes, follows a similar philosophy. Wired’s account of Kuri’s development explains that its creators treated expressiveness as a primary design goal, crafting eye shapes and motion patterns that convey curiosity and attention without needing a realistic mouth or nose. Simple “looks” — a glance, a blink, a head tilt — become its facial language.
Time’s coverage of Honda’s 3E-A18 “social empathy robot,” shown at CES 2018, likewise focused on its glowing, emoji-like face that changes expression as it mingles with people. It doesn’t resemble a human head; instead, it borrows visual cues from digital icons and animated characters that people already find familiar and friendly.
This cartoonish style helps avoid the uncanny valley. The faces are clearly symbolic, not skin-and-bone replicas, but they still tap into our instinctive ability to read emotion from eyes and posture.
Faces that talk: service, workplace and caregiving robots
Facial expressions aren’t just for cute toy companions. They’re also increasingly central to robots meant for work and care.
Service and retail robots
In retail and service environments, expressive faces can make robots feel more approachable and less threatening.
At Orchard Supply Hardware, a Lowe's-owned chain, the OSHbot robot guides customers through the aisles. Reporting in The New Yorker notes that its designers deliberately kept it on the non-human side of the uncanny valley, avoiding a fully human face while still giving it enough cues for people to understand its role. Faceless or abstract interfaces help customers see the machine as a tool, not a pseudo-person, but OSHbot still needs to orient toward people, gesture and respond appropriately.
Baxter, an industrial robot from Rethink Robotics, uses a cartoonish face on a screen to signal its state: its “eyes” glance where its arm will move, and its expression changes to show when it’s busy or confused. The Verge’s coverage highlights how this simple digital face helps factory workers quickly understand Baxter’s intentions and feel comfortable working beside it, without mistaking it for a sentient colleague.
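As a rough illustration of that "face as status display" idea, here is a short Python sketch mapping a robot's internal state to an expression and a gaze direction. The state names, the `Face` structure, and `face_for_state` are hypothetical examples of the pattern, not Rethink Robotics' code.

```python
# A toy version of a Baxter-style screen face: the robot's state picks an
# expression, and the eyes glance toward where the arm will move next so
# nearby workers can anticipate the motion.

from dataclasses import dataclass

@dataclass
class Face:
    expression: str          # e.g. "focused", "idle", "confused"
    gaze: tuple              # (x, y) direction the on-screen eyes look toward

STATE_TO_EXPRESSION = {
    "moving_to_target": "focused",
    "waiting_for_part": "idle",
    "error":            "confused",
}

def face_for_state(state: str, next_target_xy=(0, 0)) -> Face:
    expression = STATE_TO_EXPRESSION.get(state, "idle")
    # Only glance toward the target when motion is imminent.
    gaze = next_target_xy if state == "moving_to_target" else (0, 0)
    return Face(expression, gaze)

if __name__ == "__main__":
    print(face_for_state("moving_to_target", next_target_xy=(0.4, -0.2)))
    # Face(expression='focused', gaze=(0.4, -0.2))
```

The point of such a mapping is legibility, not emotion: the face is a dashboard that happens to use eyes and eyebrows instead of status lights.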
Pepper, meanwhile, blends speech, gestures and facial cues to act as a greeter in shops, banks and even Pizza Hut branches in Asia. WSJ reports emphasize that Pepper reads customers’ facial expressions and responds with animated arm movements and head tilts, reinforcing its role as a conversational host rather than a mere kiosk.
Care and education
In caregiving and therapy, facial expressions become even more critical, because the robot’s main job is emotional support.
An Atlantic feature on robots in caregiving notes that some designers deliberately avoid highly realistic faces for elder-care robots, precisely to sidestep the uncanny valley and keep expectations in check. Yet these robots still need expressive motion — hand gestures, head nods, light-based “eyes” — to appear attentive and empathetic.
For children with autism, expressive social robots can be therapeutic tools. The Guardian describes robots such as Zeno and Milo, small humanoids with highly mobile faces used to help children practice recognizing emotions. Children are shown different expressions and asked to identify them, while the robot simultaneously tracks their reactions. In trials, some children who rarely spoke to adult teachers would freely interact with the robot, suggesting that a simplified, controllable face can be less overwhelming than a human’s constantly changing expressions.
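The structure of those therapy sessions is essentially a practice trial repeated many times. The following toy sketch shows the shape of one such trial; the expression list and the `show_expression`/`ask_child` hooks are hypothetical placeholders, not the software behind Zeno or Milo.

```python
# A toy emotion-recognition practice trial: the robot performs an expression,
# the child names it, and the session records whether the answer matched.

import random

EXPRESSIONS = ["happy", "sad", "angry", "surprised"]

def run_trial(show_expression, ask_child) -> dict:
    target = random.choice(EXPRESSIONS)
    show_expression(target)   # robot performs the expression
    answer = ask_child()      # therapist or tablet records the child's answer
    return {"shown": target, "answered": answer, "correct": answer == target}

if __name__ == "__main__":
    # Stand-ins so the sketch runs without a robot in the loop.
    result = run_trial(show_expression=lambda e: print(f"Robot shows: {e}"),
                       ask_child=lambda: "happy")
    print(result)
```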
Other platforms used in autism research, like Kaspar and Nao, intentionally have neutral or minimal faces to reduce ambiguity, highlighting how nuanced the design choices are: too much expression can overload some users, while too little can make interaction confusing.
Do expressive robots change how we feel?
Experiments suggest we respond to robot faces in surprisingly human ways.
A study summarized by Wired involved an omelette-cooking robot named Bert with expressive eyes, eyebrows, and mouth. When Bert looked sad and apologized after mistakes, participants not only preferred working with it over a perfectly accurate but expressionless version—they sometimes lied to it to avoid “hurting its feelings.” The expressive robot took longer to complete tasks, yet people liked it more, implying that emotional rapport can trump raw efficiency.
The Verge has covered research showing that people feel real empathy when they see robots treated kindly or cruelly, with brain scans revealing reactions similar to those triggered by human subjects. Another Verge story recounts an art installation where electrical activity from slime mold was translated into emotions and displayed on a robotic face; visitors instinctively read joy, anger and fear into the changing expressions — even though they knew the “feeling” organism had no human-like inner life.
These findings echo broader discussions in The Atlantic and elsewhere: as bots and digital assistants mimic human nuance more closely, people can form emotional attachments or feel disturbed by their near-humanness, especially when the underlying systems are still narrow and brittle.
Faces as a design language, not a lie
All of this raises a thorny question: are expressive robot faces a kind of deception? If a machine furrows its brow in “concern” or flashes a reassuring smile, is it pretending to feel something it doesn’t?
Some designers argue that expressions should be treated as a user interface, not a claim of inner emotion. The New Yorker’s discussion of OSHbot and other service robots emphasizes the value of transparency: machines should signal clearly what they can and can’t do, even if they use humanlike gestures to do it. Atlantic writers similarly urge caution about sliding from useful anthropomorphism into manipulative pseudo-personhood, particularly in caregiving and political contexts.
Others point out that humans already use plenty of socially expected expressions — polite smiles, professional enthusiasm — that don’t always reflect deep feeling. In that sense, robots’ “fake” emotions are not unlike our own surface-level performances. What matters is whether the behavior helps or harms the people interacting with them.
One promising direction is to design robots whose expressiveness is clearly stylized: more Pixar than porcelain doll. Cozmo, Kuri, Baxter and Honda’s glowing-faced 3E-A18 all fall into this category. Their faces are easy to read but impossible to mistake for human, which may be the sweet spot for many everyday roles.
Looking ahead: beyond the valley
As robots move further into homes, hospitals and public spaces, facial expressions will only become more central to their design.
Journalists visiting cutting-edge labs write about android heads that mirror human expressions with uncanny precision, raising questions about what happens when such faces are mounted on agile bodies and backed by more sophisticated AI.
At the same time, cultural critics note that our fascination with lifelike mechanical faces is centuries old, from clockwork automatons to animatronic toys like Teddy Ruxpin — and that the uncanny valley may be a fundamental part of how brains, human and even non-human, respond to almost-alive things.
Whether future robots look like friendly appliances, stylized pets, or eerily familiar strangers, their faces will act as the front line of human–machine interaction. Every raised eyebrow, blink and half-smile will be a design choice about trust, transparency and how much “personhood” we are willing to grant our creations.
The challenge for roboticists isn’t just to make robots that can pull the right expression, but to decide when and why they should — and how to keep those faces on the comforting side of the uncanny line.