Some thoughts about AI

By Caleb Gardner

Written on: 2025-01-01
Updated on: 2025-01-15

AI, just like Crypto & NFTs before it, is the new corporate obsession. Everywhere you look, someone is bragging about this or that having AI, or this AI being better than that AI. Unlike Crypto & NFTs, AI genuinely has already changed the world around us in a way that cannot be undone. The question: is that change for the best, or is it ultimately the beginning of the end of humankind? I'm not full doom-and-gloom like some people, but I genuinely think the world we are entering is worse than the one we are leaving. This is going to be almost entirely negative, but just know I think there is potential for a good outcome. That being said, I'm tired of the hype train and my annoyance is only growing. I need to get this off my chest. Sorry for the huge block of text.

Disclaimer: These are my personal observations and opinions. I am no expert on AI, machine learning, LLMs, or the law, but I believe I have done enough research to know more than the average person. This is meant to be fairly general, but it will have examples particular to programming, as that's where a lot of my experience with LLMs comes from.

Ethics & Legality

If I were to download a movie through questionable sources, I would be breaking the law. It's as simple as that, and yet Google, OpenAI, and every other AI company is doing this at a massive scale with (so far) zero consequences. Based on the current state of the various lawsuits, there's no clear indication that these companies will actually be punished for the mass piracy they've been committing.

Some will say that the original source material isn't retained, that it gets fully integrated into the model. By that logic, if I were to illegally watch a movie and then delete it, I would not have broken the law. Their argument is garbage and I genuinely think the behavior should be punished.

Why did they do this in the first place? Any sort of machine learning (such as LLMs) is only as good as its data, and (for the most part) the more data, the better the model gets. If it weren't for this mass piracy, the models available today would (probably) be barely functional. With that being said, the cat's out of the bag, and whether or not AI companies are punished doesn't change the fact that these models are out in the world and a metric ton of content has already been generated. The world, and especially the internet, has been forever changed, whether we like it or not.

I think AI has genuine uses, but using it will always feel bad until these issues are properly resolved (if they ever are).

Edit: Something I forgot to mention is that an interesting side effect of generative AI is that its output is not copyrighted, because the output is not made by a human. If you're interested in some details, including the legal precedent, check out this video by DougDoug.

Image/Video AI

I'm largely ignoring image and video generation AIs. Not only have I not used them, but they are even more slimy. LLMs have the potential to mislead and take jobs; misleading and taking jobs seem to be image generation's only use cases. I have not seen a single use of image/video generation that I would consider ethical, or a real alternative to actual artists. LLMs of course have similar issues, but they at least have real uses.

A General Lack of Benefit

Every product and its dog has integrated AI, and yet the vast majority of them gain zero benefit from said integration. No, your fridge does not, in fact, need AI. No, you are not going crazy: that feature you've been using for years now has AI in its title, acting like it's new. AI is more of a marketing term than anything these days, and whether it's backed by something advanced like an LLM or just a dozen if statements is largely opaque.
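To illustrate what I mean by "a dozen if statements": here's a hypothetical sketch in Go of a "smart reply" feature. Nothing about it involves machine learning, but slap "AI" on the marketing page and who would know?

    package main

    import (
        "fmt"
        "strings"
    )

    // smartReply is a hypothetical "AI-powered" reply suggester that is
    // really just a handful of if statements in a trench coat.
    func smartReply(msg string) string {
        m := strings.ToLower(msg)
        switch {
        case strings.Contains(m, "?"):
            return "Great question, let me check!"
        case strings.Contains(m, "thanks"):
            return "You're welcome!"
        default:
            return "Sounds good."
        }
    }

    func main() {
        fmt.Println(smartReply("Can you send the report?"))
    }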

The term AI has been used for a very long time to mean a large variety of things. It's not wrong to call these features AI; it's that they went years without being called AI, and are only getting the label now because of the recent hype cycle. Of course, that's not counting the features powered by LLMs, which are their own can of worms.

LLM-powered AI features are largely problematic, whether it's a chatbot that can easily give wrong information or be jailbroken, or a summary feature that strips out the nuance. Very few implementations of LLMs have actually been a good idea. You just need to look at the case of Air Canada's lying chatbot to realize why LLM chatbots are problematic.

An Alternative to Google?

Gemini and ChatGPT are both trying to position their LLMs as an alternative to a Google search. This is one use case where LLMs offer some real benefit; I've found myself more than once using an LLM instead of a Google search. But, for right now, you have to be very careful and double-check the results. Hallucinations are very real and, worryingly, LLMs are confident. They don't really say no or seem unsure, so when one gives wrong info, it's very hard to catch unless you do your own research anyway, in which case, why are you using the LLM in the first place?

This, though, is a surmountable problem, and LLMs could become the default way of searching for information. That being said, I doubt we'll see this for another 5-10 years, and by that time other problems will have fully reared their ugly heads...

The Power Problem

I'm not referring to the power imbalance between individual users and the giant tech companies (though that is a problem); I'm referring to the massive amount of energy that AIs use on a daily basis. You only need to look at the large tech companies' sudden interest in owning their own nuclear power plants to understand the scale of the problem. Crypto and NFTs got tons of crap for their energy usage, but it seems the general population either doesn't understand or doesn't care about the exact same thing when it comes to AI. And that's not the only problem.

The Money Problem

LLMs are, largely, used for free (or fairly cheap). You just need to look at nearly every VC-funded tech company (Uber, DoorDash, Netflix) to know what's going to happen next. Over the next few years the free ride is going to end; prices are going to go up, and they're going to go up hard. You can already start to see this with OpenAI's $200/month tier.

Right now, companies are trying to make AI an integral part of your life before they start making a profit, just like Uber used to be a cheap alternative to taxis. Now taxis are largely gone and Uber has steadily increased its prices. These companies operate at a loss for years to build up a user base and become necessary in people's lives, only for the backers to eventually ask for their money back, causing a sudden and steep increase in prices. This is the Silicon Valley VC-funded way, and it sucks.

Edit: Seems like I was pretty accurate: the $200 tier is losing OpenAI money.

The Mediocrity Problem

Every time OpenAI releases a new model, Twitter is flooded with people proclaiming the end of software engineers; this new model is supposedly so good at programming that they are no longer needed. Nearly all of these takes come from people who have an AI business, or from non-programmers. Simply put, they come from people who don't know how to code, or how to code well.

LLMs are trained on all the code. That means that, yes, there's a lot of good, high-quality code in there, but there's far more bad, garbage code. At best, you're getting exactly average code, and if you take a look at the number of very avoidable security breaches and massive failures (I'm looking at you, CrowdStrike), you'll know that average code isn't always good enough. And that's nothing compared to the hallucinations I've personally encountered when trying to use it for programming.

My Experiences

Back before it was public, I had access to the GitHub Copilot beta; once it released and I would have had to pay for it, I used Codeium for a while. I use no AI assistant now. 99% of the code suggestions were garbage, useless; all they did was take time away from me as I read through each suggestion to see if it would actually be useful. The couple of times I tried to use ChatGPT to help with issues I was having, it would spit out garbage code; I'd try to correct it, only for it to spit out more garbage. I ended up dropping AI from my coding entirely.

Recently I was using a part of the Go standard library I'd never used before, html/template. I was having an issue where some HTML was getting escaped when I didn't want it to be. I've been using Zed lately, which has a built-in AI chat, so I decided to give it a go instead of reading the docs (my usual go-to). Zed has a free (for now) option to use Claude 3.5 Sonnet, so I described my problem and it gave three solutions. None of the three worked. I told it that none of them worked and to try again. Again, three solutions, three failures.
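For anyone unfamiliar with html/template, here's a minimal sketch of the escaping behavior I'm talking about (not my actual code, just an illustration): the package escapes plain strings by default, and the standard library's escape hatch for trusted markup is the template.HTML type.

    package main

    import (
        "html/template"
        "os"
    )

    func main() {
        tmpl := template.Must(template.New("page").Parse("<div>{{.Content}}</div>\n"))

        // A plain string is escaped automatically:
        // prints <div>&lt;b&gt;hi&lt;/b&gt;</div>
        tmpl.Execute(os.Stdout, map[string]any{"Content": "<b>hi</b>"})

        // Wrapping trusted markup in template.HTML skips the escaping:
        // prints <div><b>hi</b></div>
        tmpl.Execute(os.Stdout, map[string]any{"Content": template.HTML("<b>hi</b>")})
    }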

This will not be everyone's experience, but I have never had AI solve a complex issue, and it often fails at simple ones. I've had success using the Zed chat for simple CSS and JS questions (though some of those still fail), but nothing that couldn't be solved only a little bit slower with a Google search. I'm not upset to have it quickly available in my code editor, but it's by no means a necessity and it definitely doesn't make me a better coder.

The Learning vs Implementation Problem

One of the things I think AI is very good at is teaching. You can quickly learn many things via LLMs and, more importantly, since it's interactive, you can ask questions when you don't fully understand something, or ask it to explain things a different way. All of this is great, but it has one fatal flaw. Most of the time when you ask a question in order to learn, it not only gives you the information, but also gives you the implementation. Suddenly, it becomes very easy to just copy the AI's implementation instead of writing your own.

As an example, if you ask it how to do a merge sort, sure, it'll probably tell you how it works, but it will almost always also give you the code. Now we have ended up with people who have learned that they don't need to learn as long as they have an LLM on call. Why write your own merge sort when ChatGPT can do it for you? In case it isn't obvious, this is a bad mentality, and it will result in a lot of people who will confidently say they know what they're doing when in reality they're just copying and pasting. Think about all the people who think they're experts after a single Google search and multiply that by ten. It's a very real issue that does not have a clean solution.
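To make the merge sort example concrete, here's roughly the kind of complete, ready-to-paste answer an LLM will hand you (a quick sketch in Go). It works as-is, and that's exactly why it's so tempting to skip learning how it works:

    package main

    import "fmt"

    // mergeSort returns a sorted version of s: split the slice in half,
    // sort each half recursively, then merge the sorted halves.
    func mergeSort(s []int) []int {
        if len(s) <= 1 {
            return s
        }
        mid := len(s) / 2
        return merge(mergeSort(s[:mid]), mergeSort(s[mid:]))
    }

    // merge combines two already-sorted slices into one sorted slice.
    func merge(a, b []int) []int {
        out := make([]int, 0, len(a)+len(b))
        for len(a) > 0 && len(b) > 0 {
            if a[0] <= b[0] {
                out = append(out, a[0])
                a = a[1:]
            } else {
                out = append(out, b[0])
                b = b[1:]
            }
        }
        return append(append(out, a...), b...)
    }

    func main() {
        fmt.Println(mergeSort([]int{5, 2, 9, 1, 5, 6})) // [1 2 5 5 6 9]
    }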

I implore anyone regularly using AI when coding to give it a break for a month. Read the docs instead of asking an AI. You may decide at the end that AI is worth it for you, but I think you'd be shocked by either how much knowledge you are lacking that is being filled in by the AI, or how little you actually need it.

A Parting Thought

Anyone who is being reasonable will say that AI is over-hyped and problematic. It has the potential to change the world, but I have not seen anything that convinces me that the world will be better. This is all done for greed, not for the betterment of mankind.

About the author:

Caleb Gardner

I love anything to do with computers, from building them to programming them; it's been a passion since I was a child. My first foray into programming was on my Casio fx-9750GII graphing calculator in 5th grade, after reading the user manual. Somehow, it would take me years to realize that I was programming.