Stop anthropomorphizing the machine
I’ve seen a lot written on the unexpected dangers of letting LLMs run amok in society. Some of the consequences of opening Pandora's box include: AI-induced psychosis, the breakdown of truth, the proliferation of slop, disregard for human creativity, and the collapse of cognitive function.
But I have yet to see much written on how the way we interact with machines is impacting how we talk to other people.
For much of 2024 I worked for a small tech startup. My boss and his co-founder were the only other people at the company. As such, we were understaffed and my boss decided to try to rely on AI for many things he’d normally have an employee do.
Messages from my boss became incessant. There was an expectation of work at all hours, because a chatbot is available at all hours.
Anytime I pushed back, he would try “prompting” me again. Because he had been taught by LLMs that you can just try again, and because of their stochastic underpinnings, you might get a different answer.
His directions to me became robotic. Any trace of my personality was to be stripped out of my work, because he was expecting the output of a machine, and a machine's output has no personality.
It’s awful to be treated this way. It’s degrading to be prompted like a chatbot.
I quit that job and I never looked back. And in the year and a half since I’ve thought a lot about what that behavior means for society writ large.
The traits of machines
A machine is available 24/7. A machine can never be held accountable. It can never feel pain.1 A machine cannot consent. It cannot complain. It cannot connect with a human being.
When we anthropomorphize a machine, when we use our language to personify it, we begin to ascribe the qualities of these machines to the human beings we interact with.
We begin to expect people to be available 24/7. We skirt responsibility for our actions. We begin to not care that our words can hurt people. We disregard consent. We stop caring about connection.
Loss of humanity
Sometimes when I’m on Hinge I’ll see a prompt on profiles where people say “When I need advice, I go to ChatGPT.”2
I worry about how pervasive this may be. Have a substantial number of people begun to turn to a mimeograph of humanity instead of turning to their friends, their parents, their mentors, their coworkers, or their therapists?
I worry about how this will impact our relationships. When your friends are not sycophantic enough – not available enough, not easy enough – to talk to, do you just dump your friends?
And once this is set in stone, do people ever stop choosing to go to a chatbot instead of choosing to go to another person?
You're talking to no one
We must reframe our interactions with machines before it is too late. And we must be clear with others that when they anthropomorphize machines, they are damaging the fabric of a humane society.
You are not communicating with an entity when you "talk to" a chatbot. Instead, a computer program is generating some convincing text. And you are reading that text.
Any emotion in your exchange with a chatbot originates with you and is reciprocated by no one.
Honk if you're lonely
But wait!
What if people are lonely? Isn't a simulacrum of connection better than no connection at all?
No. The loss of humanity is something much worse than loneliness.
And I do not mean in the Paul Simon sense. A machine will never relate to Simon and Garfunkel's I Am a Rock.↩
Or Claude, or whatever.↩