If you’re not tired of hearing the term artificial intelligence yet, there’s serious potential behind the concept. A computer system that solves the world’s problems? Who wouldn’t want that? Well, most philosophers, for one. When humans have nothing to strive for, they create discord and dissatisfaction. These things, these frictions, are requirements for the development of subjectivity and autonomy. As Rise Against’s Tim McIlrath put it, “You don’t know your worth until you get hit.” A machine that solves all your problems will probably kill you as a person.
Fortunately, unless you’re responsible for ChatGPT, Gemini, Grok, or some other conversational AI, that’s not a problem the world faces any time soon. Misguided people chasing utopia (we can argue about it, but you’re dead wrong) will keep at it for quite a while yet. AI won’t save the world right away. It can only do homework for high school students.
Artificial intelligence evaluation
You can test this yourself if you have at least one area of expertise acquired the old-fashioned way. Pick your favorite subject, whether that’s an episode of Supernatural, the known behavior of Earth’s sun, or Japan’s optical history in the late 20th century, and start asking your chosen artificial intelligence about it. Begin with basic questions and work up to advanced, expert-level ones. You’ll see something remarkable happen.
Conversational artificial intelligence gets most things right. It should, because it is a powerful search engine. In fact, that’s the only job it’s really suited for. My personal theory is that Google burned down a fully functional search division in order to force users to adopt Gemini, at least as a first step. Oh, and for the money.
But if you’re familiar with the subject, you’ll notice the mistakes. They’re not everywhere, and not every time, but there are enough to mislead someone who doesn’t know better. If you’re using AI as a learning tool, but you a) don’t already know the subject, or b) don’t have a human teacher who does, you’re being misled. Because otherwise it just looks right, and it’s delivered with confidence. How do you spot the mistakes if you’re new to the subject?
What you think you’re getting.
What you actually get.
You’re not alone
Microsoft’s recent troubles with Windows updates, which caused problems both last December and this January, aren’t officially laid at the feet of artificial intelligence (Microsoft is pushing AI into its products and can’t afford to make the technology look bad), but there’s a reason one of the first things you do with a broken computer is check the last thing that changed. For Microsoft, that change was introducing massive amounts of AI automation into its coding pipeline.
Microsoft chief Satya Nadella said last year that up to 30% of the company’s code is written by AI. This is just speculation on my part, but code you’re confident mostly works probably isn’t tested very closely. And as it turns out, AI does perform these tasks correctly quite often. But subtle errors can creep in. There’s a good reason “vibe coding” is used as a pejorative in some circles.
While we can’t say with certainty that Microsoft’s recent Windows issues are related to AI, introducing the technology was the biggest recent change to the way the company builds its products. Unless all of Microsoft’s best software engineers have been replaced by idiots and no one bothered to mention it, it’s a very reasonable assumption that the problem is artificial intelligence giving us things that seem true but simply aren’t.
Looking for the answer
Right now, the best use of AI for the average person is to replace stupid search engines. But with that comes the realization that AI responses must be treated with the same suspicion as random search results. Ironically, that verification gets harder when everything arrives in a neat, authoritative little package. It looks like all the work is done. It’s much easier to treat it as done and move on to the next thing. Yes, even when it’s wrong. After all, you can’t really know when artificial intelligence has failed at its job. Not unless you’re already an expert.