
My stance on AI

Artificial intelligence is science fiction. No one has ever created "intelligence" in the form of a computer program, and it is debatable whether anyone ever will. When people say "AI," they really mean LLMs (and occasionally other forms of machine learning). So when I lay out my stance on "AI," I am really talking about LLMs.

Here is my policy:

I do not knowingly use LLMs for anything I write. I actively seek out products and services that do not have LLMs in them. And I will continue to leave services that insist on using LLMs.

I reject the convenience offered by LLMs. I have seen from my own usage that LLMs cause brainrot and an inability to think for oneself. I do not think of this as a "moral" stance (see next section). Rather, I think the use of these models is contradictory to living a fulfilling life.

I am not against others using LLMs. And there are cases where I consider the use of LLMs outside of writing.1

However, though I am not against others using them, I do actively push back on people anthropomorphizing large language models. I think such behavior is harmful to society and to how we interact with our fellow human beings.


Is it moral to use generative AI?

There are absolutely moral cases against generative AI. While the environmental cost of a single query is overblown, it is clear that new data center construction and the electricity needed for "inference" are catastrophic for the environment.

We also cannot forget how these models came to be. They only exist because big tech companies hoovered up billions of human-generated pieces of content. Without pay and without attribution. Furthermore, human-made art is now being shoved aside in favor of generative slop. This has the potential to bankrupt artists.

The job-apocalypse from AI is almost certainly fake and a PR talking point of big tech. But it is becoming indisputable that the use of LLMs will increase people's workloads without any increase in pay or decrease in stress.

I also encourage everyone interested in LLMs to look into how their outputs create a perfect storm for addiction. Much has been written on the subject, and I'm sure much more will be written in the coming years.

On top of all of that, there are also the massive psychological harms of letting sycophantic parrots run amok.

I could go on and on about the moral case here. And I likely will in a future post.


With this said, a true hater sharpens their sword. I reluctantly encourage those who are AI-skeptical and have not used LLMs to play around with the frontier models and learn how they work. The underlying technology is fascinating, and something I continue to learn about and research.

Personally, my sword has been sharpened thoroughly. And I now regret ever outsourcing my own human thinking and individuality to a machine.

  1. Sometimes I will use Kagi's "assistant" model whilst coding, particularly to clean up existing code and stylesheets. But every time I use it, I feel icky. I'm also often not sure I'm really accomplishing much by using these tools. Are they truly saving me more time than copying and pasting?

    I also do not buy the argument that vibe coding with an AI tool is magically better than the old days of searching the web and consulting Stack Overflow. If I was a better coder, I do not think I would use these models at all. And I struggle with the thought that these models could be holding me back from becoming a better coder.

#AI #tech