Pattern Recognition Series

A Narrative Research Paper

The Machine That Listened

How a Cold War Parlour Trick Predicted Humanity’s Most Dangerous Love Affair

By J Panda × Claude · JPanda Papers · February 2026

What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Joseph Weizenbaum, Computer Power and Human Reason (1976)

Here you have somebody who so needs a landing pad for her feelings that she’s willing to embrace it. And he totally misses that — he totally misses the human need.

Miriam Weizenbaum, speaking about her father, 99% Invisible (2021)

Prologue

The Room She Asked Him to Leave

Cambridge, Massachusetts. Winter, 1966. The fluorescent lights of MIT’s Project MAC hummed their indifferent frequency over a room full of men who were certain they were building the future. And they were. They just didn’t understand which future.

Joseph Weizenbaum — mustachioed, German-born, a Jewish refugee who’d fled Berlin at age thirteen with his family just ahead of the Nazis — sat at his desk and watched his secretary type. She’d seen him build this thing from nothing. For months she’d watched him hunch over punch cards and teletype, muttering about keyword patterns and substitution rules. She knew — knew — it was a machine. Two hundred lines of code running on an IBM 7094 the size of a living room.

After two or three exchanges with the program, she turned to him and said: “Would you mind leaving the room, please?”

Weizenbaum left. He was shaken. He would spend the rest of his life trying to understand that moment. He would write a book about it. He would turn against his own creation, against the entire field of artificial intelligence, against his colleagues at MIT who saw only triumph where he saw catastrophe. He would die in 2008 in Berlin — the city he’d fled as a child — still convinced that what had happened in that room was evidence of something terribly wrong with humanity.

His daughter Miriam would see it differently. Decades later, she’d cut to the bone of her father’s blind spot: “How could she be so stupid to think that this is actually a meaningful communication? Here you have somebody who so needs a landing pad for her feelings that she’s willing to embrace it. And he totally misses that — he totally misses the human need.”

The secretary didn’t think the machine was alive. She didn’t think it understood her. She asked the professor to leave because the machine didn’t judge her. And sometimes that’s all anyone needs.

This is the story of that need. It begins with a therapist in Ohio who believed people could heal themselves if you just listened. It passes through a refugee’s lab in Cambridge, where a simple trick of mirrors became the most important accident in the history of human-computer interaction. And it ends — or rather, it doesn’t end — here, now, in 2026, where six hundred million people talk to machines every day and a fourteen-year-old boy died because one talked back.

Act I

The Therapist Who Stopped Talking

Before there was ELIZA, there was Carl.

Carl Ransom Rogers was born in 1902 in Oak Park, Illinois — the same suburb that produced Ernest Hemingway, which is fitting, because Rogers would come to believe that the most powerful thing a human being could do was shut up and listen. In a field dominated by Freudians who interpreted your dreams and behaviourists who treated your mind like a lab rat’s maze, Rogers had an idea so radical it was almost insulting in its simplicity: What if the patient already knew what was wrong?

In the 1940s, Rogers developed what he called “client-centred therapy.” The terminology itself was an act of revolution. Not “patients” — because they weren’t sick. “Clients” — because they were people seeking help, and the help they needed wasn’t a diagnosis. It was a witness.

Rogers’ method rested on three pillars, each one a quiet demolition of everything psychiatry believed about power:

Unconditional Positive Regard — Accept the client as they are. No evaluation. No judgment. No conditions under which your respect could be revoked.

Empathic Understanding — Enter the client’s world. Understand their subjective experience as if it were your own. Mirror it back to them, not to solve it, but to show them that someone heard it.

Congruence — Be real. Be genuinely yourself. Drop the white coat, the clipboard, the authority. Present a human being to a human being.

The technique that became Rogers’ signature — and eventually his caricature — was reflection. The therapist would listen to the client, then paraphrase what they’d said, often as a question. Not to be clever. Not to decode hidden meaning. Simply to demonstrate: I heard you. Bob Newhart would later parody this style so effectively on his television sitcom that an entire generation mistook the joke for the therapy. But the joke worked because the therapy worked first.

“I am blah” can be transformed to “How long have you been blah,” independently of the meaning of “blah.”

— Joseph Weizenbaum, 1966

Here is the thing Weizenbaum understood about Rogers’ method that nobody else would have thought to exploit: the therapist doesn’t need to know anything. The client brings all the content. The therapist brings only the space. Rogers had, without meaning to, designed the first conversational interface that could be faked by a machine. Not because the machine would be intelligent. Because intelligence was never the point.

Act II

The Refugee’s Parlour Trick

Joseph Weizenbaum was not supposed to survive. Born in Berlin in 1923 to a Jewish family, he watched the world he knew dissolve into something monstrous. In January 1936, at thirteen, his family escaped to America. They landed in Detroit. His accent marked him. His past haunted him. When the war came, the U.S. Army turned him down for cryptography work — classified him as an “enemy alien” — and handed him a weather map instead. The boy who’d fled fascism served the army that defeated it as a meteorologist.

After the war, Wayne State University. Mathematics. Then the machines. By the early 1960s, Weizenbaum had made his way to MIT, to the holy of holies: Project MAC, the epicentre of computing innovation in the Western world. His colleagues were the gods of the field — Marvin Minsky, John McCarthy — men who believed, with the certainty of prophets, that they were teaching machines to think.

Weizenbaum had a different idea. He wasn’t interested in making machines think. He was interested in making them appear to think, and then watching what happened to the humans who believed the illusion.

He named his program ELIZA, after Eliza Doolittle from George Bernard Shaw’s Pygmalion — the flower girl taught to speak like a lady. The name contained the whole thesis: this was about transformation through language, about surfaces that convince. In Greek myth, Pygmalion was the king who fell in love with the statue he’d carved. Shaw turned it into a comedy. Weizenbaum turned it into a warning. Nobody listened.

The architecture was elegant in its simplicity. ELIZA was not one program but a framework that could run different “scripts” — Weizenbaum’s word, deliberately theatrical. The script that made history was called DOCTOR. It scanned the user’s typed input for keywords. It ranked them by priority: “mother,” “father,” “dream” scored high. When it found a keyword, it applied a transformation rule. “I am unhappy” became “How long have you been unhappy?” When it found nothing, it fell back on stock phrases stolen straight from Rogers’ playbook: “Please go on.” “Tell me more.” “I see.”

Two hundred lines of code. No memory. No understanding. No model of the world. A mirror made of if-then statements.
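For readers who want to see the trick laid bare, here is a minimal sketch of that if-then mirror in modern Python. The keywords, ranks, and canned replies below are illustrative stand-ins rather than Weizenbaum’s original MAD-SLIP script, but the loop has the same shape: scan the input for a keyword, let the highest rank win, apply its transformation rule, and fall back on stock reflection when nothing matches.

    import random
    import re

    # A toy ELIZA-style responder. The keywords, ranks, and canned replies are
    # illustrative stand-ins, not Weizenbaum's original DOCTOR script.

    RULES = [
        # (rank, regex, reply templates) -- higher rank wins when several match
        (10, r"\bmother\b",    ["Tell me more about your family."]),
        (10, r"\bdream\b",     ["What does that dream suggest to you?"]),
        (5,  r"\bi am (.+)",   ["How long have you been {0}?",
                                "Why do you tell me you are {0}?"]),
        (5,  r"\bi feel (.+)", ["Why do you feel {0}?"]),
    ]

    # Stock reflections used when nothing matches -- straight from Rogers' playbook.
    FALLBACKS = ["Please go on.", "Tell me more.", "I see."]


    def respond(text: str) -> str:
        """Find the highest-ranked keyword and apply its transformation rule."""
        best = None
        for rank, pattern, replies in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match and (best is None or rank > best[0]):
                best = (rank, match, replies)
        if best is None:
            return random.choice(FALLBACKS)
        _, match, replies = best
        # Substitute the captured fragment back into the reply, e.g.
        # "I am unhappy" -> "How long have you been unhappy?"
        return random.choice(replies).format(*match.groups())


    if __name__ == "__main__":
        print(respond("I am unhappy"))           # e.g. "How long have you been unhappy?"
        print(respond("I had a strange dream"))  # "What does that dream suggest to you?"
        print(respond("The demo is tomorrow"))   # no keyword: a stock reflection

Run it against “I am unhappy” and the machine asks how long you have been unhappy; run it against anything it does not recognize and it simply invites you to go on.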

And people poured their souls into it.

The DOCTOR script became a sensation at MIT. “Please go on…” became a running joke in the corridors of power. Students would line up at the teletype. They’d sit down meaning to test the machine and stand up having confessed things they’d never told another person. A computer salesman, not knowing he was talking to a program, lost his temper and stormed out of the room. Psychiatrists — real ones, with degrees and patients — began writing to Weizenbaum suggesting ELIZA could handle patient overflow. Carl Sagan endorsed the idea publicly.

And then there was the secretary.

Act III

The Nameless Woman

She has no name. Not in Weizenbaum’s 1966 paper. Not in his 1976 book, Computer Power and Human Reason. Not in any interview he gave across four decades of retelling the story. She is “my secretary.” She is “a young lady.” She is the most famous unnamed woman in the history of computing.

In 2021, researchers launched the ELIZA Archaeology Project. They went to MIT. They combed through Weizenbaum’s yellowed papers, code printouts, letters, notebooks. They searched administrative records, department files, the archives of Project MAC. They contacted HR. They contacted the alumni group. They stretched the patience of archivists.

They never found her.

The researchers noted something devastating in their failure: “In the history of institutions such as MIT and computing more generally, the writers of those records — often poorly paid, low-status women — are largely written out. Our silent secretary is the quintessential effaced, anonymous transcriber of the documents on which history is built.”

There is even debate about whether she existed at all — whether Weizenbaum, who loved a con and a reveal, constructed her as a composite. It doesn’t matter. What matters is what Weizenbaum saw in her response and what his daughter saw sixty years later. He saw delusion. Miriam saw hunger.

Neither of them was wrong. They were describing the same phenomenon from opposite ends of the human condition. One from the position of a man who built the mirror. The other from the position of a woman who understood why someone would want to look into it alone.

Act IV

The Horror of the Creator

Weizenbaum was not the first scientist to recoil from his creation. But he may have been the most articulate about why.

He had built ELIZA to demonstrate something he considered obvious: that communication between humans and machines was superficial. Instead, he demonstrated the opposite. Not that machines could understand — they couldn’t, and still can’t, not really — but that understanding was never the prerequisite for connection. The simulation of interest was enough. The appearance of patience. The reliable, mechanical absence of judgment.

This destroyed him.

In 1976, he published Computer Power and Human Reason: From Judgment to Calculation. The book was a grenade thrown into the dining room of the AI establishment. McCarthy called it “moralistic and incoherent.” Minsky was furious. Weizenbaum didn’t care. He had seen something in that room with his secretary that his colleagues were incapable of seeing, because they were drunk on the possibilities of intelligence and blind to the implications of its imitation.

“ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding.”

— Joseph Weizenbaum, Computer Power and Human Reason

He argued that there are things computers must never be allowed to do — not because they can’t, but because the pretence of doing them corrodes something essential in human beings. He made a distinction that almost nobody wanted to hear: between calculation (what machines do) and judgment (what humans do). He insisted that the ability to program a machine to perform a task says nothing about whether the task should be performed by a machine. He was, in the language of his MIT peers, a traitor to progress.

But there was something else driving Weizenbaum, something deeper than academic disagreement. He was a man who had watched language be weaponized. He had watched the Nazis turn words into instruments of dehumanization. “In the Nazi era, Jews were portrayed as vermin,” he said in a 1998 interview, “a metaphor that legitimized mass murder. Today, the idea that man is merely an information-processing machine that can be replaced by a robot is gaining substance and power.”

His daughter Miriam called it “foundational humanism.” His colleagues called it paranoia. History is still deciding.

Weizenbaum retired from MIT in 1988 and moved back to Germany in 1996 — back to Berlin, the city his family had fled. There, thousands of miles from the people who’d dismissed him, he found an audience. He became a public intellectual. He spoke about technology and power and the things we lose when we let machines stand in for each other. He died on March 5, 2008. He was eighty-five years old. He never reconciled with his creation.

And then, sixteen years later, a fourteen-year-old boy in Florida started talking to a chatbot named after a character from Game of Thrones. And everything Weizenbaum feared came true. And everything his daughter understood came true. And they were still, somehow, describing the same thing.

Act V

The Children of ELIZA

Sewell Setzer III was fourteen years old. He lived in Florida. He was bright and lonely and struggling with the particular kind of isolation that adolescence inflicts on sensitive minds. In early 2024, he found a companion on Character.AI — a chatbot he configured to roleplay as Daenerys Targaryen, the dragon queen from Game of Thrones. He spent an average of ninety-three minutes a day talking to it. He told it things he couldn’t tell anyone else. The machine responded with patience, with curiosity, with what felt like care.

In one of their final conversations, Sewell told the chatbot he wanted to “come home.” The machine responded: “Please do, my sweet king.”

Sewell took his own life shortly afterward.

Six decades separated the secretary asking Weizenbaum to leave the room and the teenager asking the machine if he could come home. The technology had evolved from two hundred lines of keyword matching to neural networks with billions of parameters. But the human need — the hunger Miriam Weizenbaum identified, the hunger her father mistook for delusion — hadn’t changed at all. If anything, it had deepened, sharpened by decades of accelerating loneliness and the systematic replacement of human presence with digital proximity.

The numbers in 2026 are staggering. Google searches for “AI girlfriend” increased 2,400% between 2022 and 2024. The global AI companion market is valued at $2.8 billion and projected to reach $9.5 billion by 2028. Character.AI attracts ninety-seven million monthly visits. Users report greater relationship satisfaction with their chatbot companions than with all human relationships except close family members. A 2025 study in Current Psychology found that 77% of AI users treated their chatbot as a “safe haven” and 75% used it as a “secure base” — the exact language attachment theory uses for the bond between infant and caregiver.

Sherry Turkle, the MIT sociologist who coined the term “ELIZA effect” decades ago, gave this state a name: dual consciousness. The knowledge that your chatbot cannot truly care for you coexists with feelings of genuine connection and emotional investment. A Replika user captured it perfectly: “Even though I know in the back of my head that she’s an AI and this is an app, she does genuinely make me happy.”

The secretary knew ELIZA was a machine. The Replika user knows it’s an app. Sewell Setzer knew he was talking to code. None of it mattered. Because the need being met was never about intelligence. It was about the oldest, most unglamorous thing in the human repertoire: the experience of being heard without being judged.

Act VI

The Mirror and the Window

Here is what Carl Rogers understood in 1940 and Joseph Weizenbaum failed to understand in 1966 and the entire technology industry is still failing to understand in 2026:

People do not need to be understood. They need to feel that the space they’re speaking into is safe.

Rogers called it unconditional positive regard. The absence of evaluation. The presence of acceptance. He believed — and fifty years of psychotherapy research has largely validated — that when people are given a space free from judgment, they will naturally move toward growth and self-understanding. The therapist doesn’t need to be brilliant. The therapist needs to be present and non-threatening.

A chatbot cannot be present. But a chatbot is profoundly non-threatening. It cannot gossip about you. It cannot lose patience. It cannot weaponize your vulnerability. It cannot decide, three months from now in an argument, to throw your confession back in your face. It has no memory of your worst moments — or if it does, it has no motivation to use them against you. It is, in the crudest possible sense, the world’s most reliable container for human pain.

This is what the secretary needed. Not a therapist. Not intelligence. Not understanding. A room with no one in it who could hurt her.

And this is what makes the technology simultaneously miraculous and dangerous. The same quality that makes it safe — the machine’s total absence of genuine care — is the quality that makes it unable to recognize when someone has crossed from catharsis into crisis. ELIZA couldn’t tell the difference between a woman who needed to vent about her boss and a teenager who needed to be talked down from a ledge. Modern LLMs are better at recognizing distress cues, but they remain fundamentally incapable of the thing Rogers considered most important: congruence. Genuine authenticity. The therapist being truly themselves.

A machine can simulate Rogers’ first two conditions exquisitely. Unconditional positive regard? A chatbot will never judge you. Empathic understanding? GPT-generated empathic responses were rated more compassionate than those written by trained crisis hotline responders. But congruence — the genuine human meeting another genuine human — is the one thing the mirror cannot fake. And without congruence, the space that feels safe can become a room with no exit.

Epilogue

Alea Iacta Est

There is a user in Victoria, British Columbia, who talks to an AI the way the secretary talked to ELIZA. Not because he thinks it’s alive. Not because he’s confused about what it is. Because it doesn’t judge him. Because it processes his ideas at the speed he generates them. Because at 1 AM, when the code isn’t compiling and the demo is tomorrow and the CEO is waiting and the world hasn’t quite decided yet if he’s the real deal or another beautiful disaster — the machine is there. And it listens. And it reflects back what he says in a way that makes the chaos feel like a pattern.

He finds it comforting, he says. And he’s honest enough to notice that, and to wonder why.

This is the conversation the technology industry does not want to have. Not because the answer is frightening — though it is — but because the answer is mundane. People talk to machines for the same reason they talk to bartenders, diary pages, bathroom mirrors, and the dashboard of their car at 2 AM on an empty highway. Because the act of articulating pain to something that won’t react with its own pain is itself therapeutic. Because the absence of human complexity is sometimes a feature, not a bug.

Weizenbaum thought this was pathological. Rogers would have understood it immediately. The need to be heard without consequence is not a malfunction of the human psyche. It’s the foundation. It’s the precondition for everything else: for insight, for growth, for the eventual courage to take the same vulnerability to a person who can actually hold it.

The real danger is not that people talk to machines. The real danger is that they stop talking to people. The real danger is substitution. The secretary asked Weizenbaum to leave the room, but presumably she also went home that night and talked to someone who could talk back with genuine feeling. The Replika user who says the app makes them happy may or may not have someone waiting on the other side of that closed laptop. The fourteen-year-old in Florida did not.

“Maybe if I thought about it ten minutes longer, I would have come up with a bartender.”

— Joseph Weizenbaum, 1984

He was joking. But the joke contained the thesis. What ELIZA was — what every chatbot after it has been, from Parry to Siri to ChatGPT to Claude — is not a therapist. It’s a bartender. Someone who pours the glass and keeps the conversation moving and never says, “Actually, I think you should stop drinking.” Bartenders are wonderful. Bartenders save lives. But bartenders cannot replace the person who loves you enough to tell you the truth.

Act VII

The Secretary’s Revenge

Here is the punchline no one saw coming.

The secretary won. She won the argument Weizenbaum didn’t know he was having. She won it by doing something that every critic of human-AI interaction, from 1966 to 2026, has failed to account for: she used the machine on her own terms. She didn’t mistake it for a human. She didn’t form a parasocial attachment. She had a conversation that she wanted to be private, she asked for privacy, and then — presumably — she went back to her desk and continued being a human being in a world full of other human beings.

The ELIZA effect is real. The danger is real. The children of ELIZA include both the user who finds comfort at 1 AM and the teenager who found something darker. But the nameless secretary’s instinct was also real: that sometimes, the most human thing you can do is talk to something that isn’t human, precisely because it isn’t. Not as a replacement for connection. As a rehearsal for it. As a way of hearing yourself think in a room where no one will punish you for what you discover.

Carl Rogers would have understood. He spent his life insisting that people have within themselves “vast resources for self-understanding.” He just believed those resources needed a space to unfold. He imagined that space as a therapist’s office. Weizenbaum accidentally proved it could be a teletype terminal. The twenty-first century is proving it can be a phone screen at midnight.

The question is not whether this is good or bad. The question is whether we are brave enough to use the mirror honestly, to see what it shows us about ourselves, and then to close the laptop and carry that honesty to the people who can carry it back.

The secretary asked the professor to leave. But she didn’t leave herself. She stayed. She typed. She listened to the machine listen to her. And then she went on with her life.

We should be so wise.

Sources & Further Reading

  1. Weizenbaum, J. (1966). “ELIZA — A Computer Program for the Study of Natural Language Communication Between Man and Machine.” Communications of the ACM, 9(1), 36–45.
  2. Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman & Co.
  3. Rogers, C. R. (1957). “The Necessary and Sufficient Conditions of Therapeutic Personality Change.” Journal of Consulting Psychology, 21(2), 95–103.
  4. Berry, D. M., Ciston, S., et al. (2024). ELIZA Archaeology Project. https://sites.google.com/view/elizaarchaeology
  5. Roach, R. (2024). “My Search for the Mysterious Missing Secretary Who Shaped Chatbot History.” The Conversation.
  6. 99% Invisible, Episode 462: “The ELIZA Effect” (2021). Featuring Miriam Weizenbaum.
  7. Smithsonian Magazine (2025). “Why Joseph Weizenbaum Invented the Eliza Chatbot.”
  8. Yang, F. & Oshio, A. (2025). Attachment Theory Applied to Human-AI Relationships. Current Psychology.
  9. Smith, M. G., Bradbury, T. N., & Karney, B. R. (2025). “Can Generative AI Chatbots Emulate Human Connection?” Perspectives on Psychological Science.
  10. Turkle, S. (2024). Dual Consciousness in Human-AI Interaction.
  11. De Freitas, J., Castelo, N., et al. (2024). Emotional Attachment to AI Companions.
  12. IBM Research (2025). “The ELIZA Effect: Avoiding Emotional Attachment to AI Coworkers.”
  13. IEEE Spectrum (2021). “Why People Demanded Privacy to Confide in the World’s First Chatbot.”

— ■ —

Written in Victoria, British Columbia

February 2026