Life can be (a) dream

January 21, 2026 - AI, AI, AI, AI, AI!

Everyone has their own AI post. This is mine. Please treat it well.

My claim is very simple: I have nothing against the fundamental technology of LLMs, but I have everything against how they are framed and how they are typically used. I find it insulting that the way LLMs work is not being widely explained to the general population. Almost everyone understands that LLMs and other AIs train on data and produce output based on it, but their understanding seems to stop at that single sentence.

I once got roped into helping with an impromptu explanation of how AI works, and how to use it, to a group of teachers. I had a whole 15 minutes to prepare, and in that time I came to the conclusion that the most important thing I could do was explain how it works so they truly understood its weaknesses. Do you have an older, not-hip relative in your life who keeps producing and sending you absolute AI trash every single day? The one who brags to no end about how cool it is, for no reason? A few of these older people popped into my head as I saw this group. They were a bit older, and I wanted to prevent this from happening to them at all costs. I really did try to explain how LLMs work. "Think of it as glorified autocomplete, but with whole words and sentences! Given your question, it basically tries to generate the most likely continuation based on its training set! When it writes an essay, it finishes the essay by constantly asking itself 'given this essay so far, what should the next word be?' The reason people worry about accuracy, and why you should check its output, is that it's trained on some good data, yes, but also data from random social media sites, which of course isn't always accurate. And the output is inconsistent because LLMs have a property called 'temperature', which controls their randomness and makes sure that you don't always get the same answer to the same prompt--"

I got cut off by the person running it. Politely, but still! I was mad at first, but I don't really have the ability to read groups of people. He did, and he saw that they simply did not care. I went home feeling a little strange about the whole thing. Do people not care to understand something that they plan to use so extensively? Do we just prompt this thing day in and day out without understanding how it even produces the answers it gives us? And, worse, are they actively reluctant to learn how it works? Why is it like this?
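
If you're the kind of person who does care, here is that whole cut-off explanation as a toy program. This is a sketch with a made-up four-word vocabulary and invented scores, nothing resembling a real model, but the sampling step, temperature included, works on the same principle:

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Pick the next word from raw model scores ("logits")."""
        # Softmax with temperature: p_i = exp(l_i / T) / sum_j exp(l_j / T).
        # Low T concentrates probability on the top choice; high T flattens
        # the distribution so unlikely words get picked more often.
        scaled = [l / temperature for l in logits]
        peak = max(scaled)  # subtracting the max keeps exp() from overflowing
        weights = [math.exp(s - peak) for s in scaled]
        return random.choices(range(len(logits)), weights=weights, k=1)[0]

    # A made-up vocabulary and invented scores for "The cat sat on the ..."
    # A real model has tens of thousands of words and computes these scores
    # with a neural network, but the picking step looks like this.
    vocab = ["mat", "roof", "keyboard", "moon"]
    logits = [4.0, 2.5, 1.5, 0.5]

    for temp in (0.2, 1.0, 2.0):
        picks = [vocab[sample_next_token(logits, temp)] for _ in range(10)]
        print(f"T={temp}: {picks}")

Run it a few times: at T=0.2 you get "mat" almost every time, and at T=2.0 the answers scatter. That's the whole trick, repeated once per word, at an absurd scale.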

At that time, I thought that the most dangerous group of AI users was the outsourcers. This group is exactly why so many people have the absolute hatred of AI that they do. Any sort of work, any passing idea, any question, or anything to think through at all will just get outsourced to an AI by this group. Nothing is sacred to them, and they have no understanding of how it works. I'm not as scared of the outsourcers anymore. If anything, I have come to appreciate them exactly because they are so blatant. There is no question that what they produce is AI-generated. Their ability to destroy credibility makes others reluctant to outsource their own thinking to AI. Unfortunately, the person most hurt by this is the outsourcer themselves. Their skill in a topic is too low to remotely critique or otherwise improve what the AI has generated. And now, they have no incentive to learn on their own, because the AI will simply do it for them!

Even if they're not as scary as I thought, the outsourcers are still important because they teach us that LLMs work best when they seem like magic. As in, if you don't understand where the answers come from, it looks like the most incredible thing in the world. And, indeed, to these people, it really is the most incredible thing! This is what leads to blind acceptance of its answers, even outside this group, and what leads to people copy-pasting LLM output into the work group chat rather than sharing their own expertise.

I now have to return to my original question: why are people reluctant to learn how AI works? I have a few theories. The first is that it is in a person's best interest to see AI as something more than a robot. This applies especially to people who use it as a makeshift therapist or dating partner. Why would you want to imagine the thing that is helping you as a robot instead of a kind and caring intelligence? This is exactly why there was such a backlash when ChatGPT was made to sound less human. Sounding robotic and formal breaks the magic! The second is that there is very little easy or intermediate-level information to be found. If the information cannot be easily found, or requires you to look up about 30 machine learning terms, you'll of course lose interest altogether. But I think the third is the most damning of all: it is in a company's best interest to see and sell AI as more than a robot, because that means it can reduce workload far more than regular automation! Regular automation can take care of manual tasks, but AI can take care of abstract labor, such as decisions, coding, design, and more! When AI companies talk as if AI can take over the world and the workforce, how could a company not buy into it? And with how costly it is to develop and research AI, the AI companies have to convince everyone with all their strength that their work can truly take over the world. No investment in them would happen at all if it weren't for this! Whether AI can actually take over the world does not matter. What matters is that it is in their best interest to have others believe that it can.

So, the blatant misuse of AI happens because we have been encouraged to see it as something beyond what it is, something that could surpass humanity. Really, for the time being, it's just a robot. Robots are good at some tasks, and not all. This robot happens to be good at generating corporate-style prose, coding, generating images, translation, initial research, and analysis. It has a high error rate and will occasionally misrepresent its own abilities to users, but we accept it because it can do things that no other robot has been able to do before, with very little prior setup.

Because it is a robot, we need to understand that if we use it instead of producing our own work, our skills will atrophy. In some cases, this doesn't matter. If an LLM produces a script that saves someone two hours of manual typing, they don't care about their ability to write scripts; they care about saving time! But if someone offloads core parts of their work and decision making to it, they should not be surprised to find that they suddenly need to run to an LLM for every decision, because they have lost the ability to make decisions altogether. And, because it is a robot, we need to help those who have fallen into using it for companionship. A robot is no companion, but someone who makes fun of others for using it this way isn't a very good companion themselves. Someone using an AI as a partner is a symptom of a sick society, not a symptom of the evils of AI.

The only thing that still bothers me is this: I thought we wanted to automate the workforce so people would have more opportunities for engaging work and more time for personal pursuits, but it turns out that hiring people for engaging work is expensive, and AI is not. To me, this is the most insulting part of all. AIs are made for corporate benefit, not for human benefit. Any seemingly human benefit exists only to speed up corporate output. What would AI tools designed with only the individual in mind look like? What is the inherent value of image generation? What is the value of spreading a two-sentence email out into two paragraphs? What is the value of having it complete a whole code project for you? What is the value of it producing music? Can we only find value in final products?