A Terrible Controller

By Caleb Gardner

Written on: 2025-01-03
Updated on: 2025-01-07

Steam Controller Supremacy

Valve makes great controllers; between the Steam Deck and the Steam Controller, they are the only company that (I believe) truly understands what a PC controller should look like. Between their fondness for touchpads as mouse alternatives, their inclusion of gyro, and the fantastic customization afforded by Steam Input, I truly believe they make the best controllers. Of course, there are some glaring issues.

The Steam Controller is amazing... as a mouse. It's truly the only controller with which you can comfortably play games such as Civilization and The Sims. Of course, when you try to use it as a controller it falls apart a bit. The face buttons are too far away, the "d-pad" touchpad is a pretty bad d-pad, the convex top on the left stick is not ideal, and while you can get used to the right touchpad as an alternative to a stick, it's not a true replacement. These are all things you can get used to, but overall it's just a bit off as a controller.

The Steam Deck is basically a nearly perfected Steam Controller, fixing nearly all of its flaws. But they just had to stick a whole computer into it. You can kind of use it as a controller, but it's definitely not an ideal situation.

My Ideal

Besides the basics, there are some nice-to-haves that I really want in a PC controller.

  • Analog triggers: This may seem standard these days, but there are some issues that mean it has to be on this list. More on that later.
  • 2.4 GHz connectivity: Bluetooth is fine, but it's definitely more prone to issues and input lag.
  • Gyro: As someone used to keyboard & mouse, I really want the precision that combining stick + gyro gives you.
  • Capacitive thumbsticks: The best way to enable & disable gyro.
  • Programmable back buttons: Extra inputs are never unwelcome, and ones that sit right where your fingers already rest are truly ideal. 4 like the Steam Deck has is nice, but I can definitely make do with 2.
  • Programmable: With back buttons, this is necessary. The absolute ideal is Steam Input, as it gives so many options, but I can make do with any semi-decent programmability.

As long as it's comfortable to use, with all the normal buttons in good-enough spots, I'll be happy. I don't care too much whether the sticks are symmetrical or staggered, I don't care about turbo or macro functionality, and rumble is kind of nice but I don't really miss it when I don't have it.

The Competition

My favorite brand right now is 8BitDo; they make great controllers that can connect to basically anything. Their flagship, the Ultimate Controller, has been my go-to ever since it was released. Its 2.4 GHz connection mode and charging dock even make it fantastic for a desktop computer, and using the Bluetooth (Switch) mode you can even get gyro. Unfortunately, it has the same problem nearly every other controller on PC has: XInput.

I am no expert, but from my understanding XInput is what's limiting controllers in exactly the ways I care about. XInput is seemingly limited to just what an Xbox controller has, which means no gyro and no extra inputs. If an XInput controller adds extra buttons, they must be mapped to one of the standard buttons. Any controller that wants to add what I want needs special software; Steam gets around this via Steam Input. Of course, XInput isn't the only protocol. Switch controllers (seemingly) use a separate protocol, but it's also annoyingly limited: it supports gyro, but gives up analog triggers and has the same button limitation that XInput has.

I have yet to use a DualSense controller, but I've heard they'd check off more of my wants than most controllers on the market. Of course, they're pretty expensive and still aren't perfect. The DualSense Edge is a bit closer, but it's $200. It also doesn't have capacitive sticks or 2.4 GHz connectivity. It's definitely the closest as far as I can tell, but it's still not exactly what I want, and the cost has kept me away.

Enter the Horipad

Hori announced in June 2024 that they were going to release a Horipad for Steam with capacitive thumbsticks, gyro, back buttons, and analog triggers, all configurable via Steam Input. I was understandably excited. Unfortunately, it initially released only in Japan, so it wasn't until recently that I got my hands on one, only to be immediately disappointed.

Unboxing

Every 8BitDo controller I've bought has come packaged in a nice, sturdy box; even their cheaper controllers get a premium unboxing experience. The Horipad was put into a very thin foam pouch and stuck in a cheap-feeling cardboard box with plenty of room to move around. For $60 I definitely expected better. There's also a thick instruction manual, and inside the same pouch as the controller is a 3 meter USB-A to USB-C cable (probably the best thing in the box, lol). I should have known immediately how little care was put into this product.

The Controller Itself

The first thing I noticed was just how light the controller is. In fact, I think the instruction manual (which is admittedly in 7 languages) is about the same weight, and the included cable seems to be heavier. Weight doesn't necessarily mean quality, but it's not a bad indicator. Then I picked it up and knew just how cheap this controller truly is.

It's hard to quantify, but most people can tell when plastic is cheap, and in this case I can tell you that the plastics that make up the body are very cheap. Along with the cheap plastics comes a classic: the top and bottom halves of the shell don't quite line up, meaning the seam is a bit sharp and a bit uncomfortable. And then I saw the "analog" triggers.

The "analog" triggers have basically zero travel and, in fact, have a tactile bump at the top, and the tiny bit of travel after the bump requires a ton of force to actuate. Technically they're analog, but in reality they are just digital triggers. No racing games for this controller.

I decided to give it a go; maybe in use it wouldn't be that bad... right?

It's worse.

The thumbsticks suck; the tension on them is inconsistent. Both sticks have a section that feels fine, but then a section with noticeably higher tension; on the right stick, pushing up is fine, but pushing down is significantly harder near the edge.

For some stupid reason, Hori decided it would be a good idea to make every face button a different height; this makes switching between buttons and hitting multiple buttons feel unnecessarily bad.

The polling rate is noticeably bad over Bluetooth; when using gyro on Bluetooth there is a very obvious choppiness to the movement. To make sure this wasn't just an inherent Bluetooth issue, I put my Steam Controller into Bluetooth mode and tested it in the same game with no noticeable choppiness.

As a general note, everything feels bad and cheap. Everything works, but nothing works well.

This controller is $60, and I would be disappointed in it if it were $30. This is truly unacceptable, and Hori should be ashamed of the controller they made. Guess I'll just have to wait for the Steam Controller 2.

About the author:

Caleb Gardner

I love anything to do with computers; from building them to programming them, it's been a passion since I was a child. My first foray into programming was on my Casio fx-9750GII graphing calculator in 5th grade after reading the user manual. Somehow, it would take me years to realize that I was programming.

Some thoughts about AI

By Caleb Gardner

Written on: 2025-01-01
Updated on: 2025-01-15

AI, just like Crypto & NFTs before it, is the new corporate obsession. Everywhere you look, someone is bragging about this or that having AI, or this AI being better than that AI. Unlike Crypto & NFTs, AI has genuinely already changed the world around us in a way that cannot be undone. The question: is that change for the best, or is it ultimately the beginning of the end of humankind? I'm not full doom-and-gloom like some people, but I genuinely think the world we are entering is worse than the one we are leaving. This is going to be almost entirely negative, but just know I think there is potential for a good outcome. That being said, I'm tired of the hype train and my annoyance is only growing. I need to get this off my chest. Sorry for the huge block of text.

Disclaimer: These are my personal observations and opinions. I am no expert on AI, machine learning, LLMs, or the law, but I believe I have done enough research to know more than the average person. This is meant to be fairly general, but will have examples particular to programming, as that's where a lot of my experience with LLMs comes from.

Ethics & Legality

If I were to download a movie through questionable sources, I would be breaking the law. It's as simple as that, and yet Google, OpenAI, and every other AI company is doing this at a massive scale with (so far) zero consequences. Based on the current state of the various lawsuits, there's no clear indication that these companies will actually be punished for the mass piracy they've been committing.

Some will say that the original source material isn't retained, that it gets fully integrated into the model. By those same rules, if I were to illegally watch a movie and then delete it, I did not break the law. Their argument is garbage and I genuinely think it should be punished.

Why did they do this in the first place? Any sort of machine learning (such as LLMs) is only as good as its data, and (for the most part) the more data, the better the model gets. If it weren't for this mass piracy, the models available today would (probably) be barely functional. With that being said, the cat's out of the bag, and whether or not AI companies are punished doesn't change the fact that these models are out in the world and a metric ton of content has already been generated. The world, and especially the internet, has been forever changed, whether we like it or not.

I think AI has genuine uses, but using it will always feel bad until these issues are properly resolved (if they ever are).

Edit: Something I forgot to mention is that an interesting side effect of generative AI is that its output is not copyrighted, due to the output not being made by a human. If you're interested in some details, including the legal precedent, check out this video by DougDoug.

Image/Video AI

I'm largely ignoring image and video generation AIs. Not only have I not used them, but they are even more slimy. LLMs have the potential to mislead and take jobs, but image generation only has the use cases of taking jobs and misleading. I have not seen a single use of image/video generation that I would consider ethical, or a real alternative to actual artists. LLMs of course have similar issues, but they at least have real uses.

A General Lack of Benefit

Every product and their dog has integrated AI, and yet the vast majority of them gain zero benefit from said integration. No, your fridge does not, in fact, need AI. No, you are not going crazy: that feature you've been using for years now has AI in its title, acting like it's new. AI is more of a marketing term these days, and whether it's backed by something advanced like an LLM or just a dozen if statements is largely opaque.

The term AI has been used for a very long time to mean a wide variety of things. It's not wrong to call these features AI; it's that they went years without being called AI, and are only called AI now because of the recent hype cycle. And that's not even considering the features powered by LLMs, which are their own can of worms.

LLM-powered AI features are largely problematic, whether it's a chatbot that can easily give wrong information or be jailbroken, or a summary feature that ignores nuance. Very few implementations of LLMs have actually been a good idea. You only need to look at the case of Air Canada's lying chatbot to see why LLM chatbots are problematic.

An Alternate for Google?

Gemini and ChatGPT are both trying to position their LLMs as an alternative to a Google search. This is one use case for an LLM where there is some real benefit; I've found myself more than once using an LLM instead of a Google search. But, for right now, you have to be very careful and double-check the results. Hallucinations are very real and, worryingly, LLMs are confident. They don't really say no or seem unsure, so when one gives wrong info it's very hard to catch unless you do your own research in addition, in which case, why are you using the LLM in the first place?

This, though, is a surmountable problem, and they could become the default for searching for information. That being said, I doubt we'll see this for another 5-10 years, and by that time other problems will have fully reared their ugly heads...

The Power Problem

I'm not referring to the power imbalance between individual users and the giant tech companies (though that is a problem); I'm referring to the massive amount of energy that AIs use on a daily basis. All you need to look at is the large tech companies' sudden interest in owning their own nuclear power plants to understand the scale of the problem. Crypto and NFTs got tons of crap for their energy usage, but it seems the general population either doesn't understand or doesn't care about the exact same thing when it comes to AI. And that's not the only problem.

The Money Problem

LLMs are, largely, used for free (or fairly cheap). You just need to look at nearly every VC-funded tech company (Uber, Doordash, Netflix) to know what's going to happen next. Over the next few years the free ride is going to end; prices are going to go up, and they're going to go up hard. You can already start to see this with OpenAI's $200/month tier.

Right now companies are trying to make AI an integral part of your life before they start trying to make a profit, just like how Uber used to be a cheap alternative to taxis. Now taxis are largely gone and Uber has steadily increased its prices. These companies operate at a loss for years to build up a user base and become necessary in people's lives, only for the backers to start asking for their money back, causing a sudden and steep increase in prices. This is the Silicon Valley VC-funded way, and it sucks.

Edit: Seems like I'm being pretty accurate: The $200 tier is losing OpenAI money.

The Mediocrity Problem

Every time OpenAI releases a new model, Twitter is flooded with people proclaiming the end of software engineers; this new model is so good at programming that they are no longer needed. Nearly all of these takes are from people who have an AI business, or from non-programmers. Simply put, they're from people who don't know how to code, or how to code well.

LLMs are trained on all the code. That means that, yes, there's a lot of good, high-quality code in there, but there's far more bad, garbage code. At best, you're getting exactly average code, and if you take a look at the number of very avoidable security breaches and massive failures (I'm looking at you, CrowdStrike), you'll know that average code isn't always good enough. Of course, that's nothing compared to the hallucinations I've personally encountered when trying to use it for programming.

My Experiences

Back before it was public, I had access to the GitHub Copilot beta; then, once it released and I would have needed to pay for it, I used Codeium for a while. I use no AI assistant now. 99% of the code suggestions were garbage, useless; all they did was take time away from me as I read through each suggestion to see if it would actually be useful. The couple of times I tried to use ChatGPT to help with issues I was having, it would spit out garbage code; I'd try to correct it, only for it to spit out more garbage. I ended up dropping AI for coding entirely.

Lately I was using a part of the Go standard library I'd never used before, html/template. I was having an issue where some HTML was getting escaped when I didn't want it to be. I've been using Zed lately, which has a built-in AI chat, so I decided to give it a go instead of looking at the docs (my usual go-to). Zed has a free (for now) option to use Claude 3.5 Sonnet, so I described my problem and it gave 3 solutions. None of them worked. I told it that none of them worked and to try again. Again, 3 solutions, 3 failures.

This won't be everyone's experience, but I have never had AI solve a complex issue, and it often fails at simple ones. I've had success using the Zed chat for simple CSS and JS questions (though some of those still fail), but nothing that couldn't be solved only a little more slowly with a Google search. It's something I'm not upset to have quickly available in my code editor, but it's by no means a necessity, and it definitely doesn't make me a better coder.

The Learning vs Implementation Problem

One of the things I think AI is very good at is teaching. You can quickly learn many things via LLMs and, more importantly, since it's interactive, you can ask questions when you don't fully understand something, or ask it to explain things in a different way. All of this is great, but it has one fatal flaw: most of the time when you ask it a question to learn, it will not only give you the information, but will additionally give you the implementation. Suddenly, it becomes very easy to just copy the AI's implementation instead of writing your own.

As an example, if you ask it how to do a merge sort, sure, it'll probably tell you how it works, but it will almost always also give you the code. Now we have ended up with people who have learned that they don't need to learn as long as they have an LLM on call. Why write your own merge sort when ChatGPT can do it for you? In case it isn't obvious, this is a bad mentality and will result in a lot of people who will confidently say they know what they're doing, when in reality they're just copying and pasting. Think about all the people who think they're experts after a single Google search and multiply that by 10. It's a very real issue that does not have a clean solution.

I implore anyone regularly using AI while coding to give it a break for a month. Read the docs instead of asking an AI. You may decide at the end that AI is worth it for you, but I think you'd be shocked by either how much knowledge you're lacking that is being filled in by the AI, or how little you actually need it.

A Parting Thought

Anyone who is being reasonable will say that AI is over-hyped and problematic. It has the potential to change the world, but I have not seen anything that convinces me that the world will be better. This is all done for greed, not for the betterment of mankind.

Stop making stupid benchmarks

By Caleb Gardner

Written on: 2024-11-26
Updated on: 2025-01-12

Edit: Between writing this and fixing my markdown converter, ThePrimeagen and Casey Muratori did a fantastic breakdown of this particular benchmark and how it's not actually benchmarking for loops, but the modulo function. You can find the video here. If you don't want to watch the nearly hour-and-a-half video, just know that Casey Muratori was able to improve the performance of the C code by 3.5x fairly easily.

Edit 2: The same person made another micro-benchmark, this time on Levenshtein distance, and it had a HUGE issue that caused Fortran to be significantly faster than it should have been. See Casey Muratori show why, once again, these benchmarks are garbage: Yes, It's Really Just That Bad.

I've recently gotten into the bad habit of looking at software dev Twitter (no, I'm never calling it X) and have been constantly annoyed at the number of artificial benchmarks people share. The latest one to draw my ire (and spawn this post) is a bad "benchmark" that's basically just 1 BILLION iterations of a for loop.

Now, I can't speak to most of the languages shown, but I have significant experience in Go and have spent a not-insignificant amount of time optimizing Go code (in particular my squashfs library). The second I opened up the code for this "benchmark" I knew that whoever wrote it has never tried to write optimized Go. First, let's start with the results without any changes. For simplicity I'll only show the results for C and Go.

C = 1.29s
C = 1.29s
C = 1.29s

Go = 1.51s
Go = 1.51s
Go = 1.51s

This is fairly expected, as it's in line with the post and with what's logical: Go's structure is fairly low-level and similar to C, but it is garbage collected, meaning it will be slower in real-world applications. Now let's look at the results of my optimized code:

C = 1.29s
C = 1.29s
C = 1.29s

Go = 1.29s
Go = 1.30s
Go = 1.29s

Suddenly, C's lead is gone! What black magic is this??? Well, if you actually look at the original code and you know Go, you'll probably notice it immediately: the "benchmark" is using int. That's right, my optimizations boiled down to making every int an int32. I'm honestly a bit surprised it basically ties C, but I suspect that, since this isn't a real-world benchmark, the garbage collector never actually has to do anything, meaning Go's primary disadvantage is non-existent.
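The shape of the change was roughly this. It's a sketch I wrote to illustrate, not the benchmark's actual code; the only relevant fact is that Go's int is 64 bits wide on 64-bit platforms, while int32 is always 32:

```go
package main

import (
	"fmt"
	"time"
)

// sumInt does loop-plus-modulo busywork with Go's default int,
// which is 64 bits wide on 64-bit platforms.
func sumInt(n int) int {
	total := 0
	for i := 0; i < n; i++ {
		total += i % 100
	}
	return total
}

// sumInt32 is the identical loop with every int swapped for int32.
func sumInt32(n int32) int32 {
	var total int32
	for i := int32(0); i < n; i++ {
		total += i % 100
	}
	return total
}

func main() {
	start := time.Now()
	_ = sumInt(1_000_000_000)
	fmt.Println("int:  ", time.Since(start))

	start = time.Now()
	_ = sumInt32(1_000_000_000)
	fmt.Println("int32:", time.Since(start))
}
```

Same loop, same work, different integer width; that alone was enough to close the gap on my machine.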

My gaps in knowledge

Let me be clear: I am no expert. I do not actually know why int32 is faster than int, I just know that it is (I have theories, but that's all they are). Though I know many of the other languages, I haven't done any research on how to optimize them. It's possible all the other languages are perfectly optimized, but the fact that such a simple optimization was overlooked invalidates the entire test in my mind.

The Point

Let me be clear: benchmarks are important and useful, but the most useful benchmarks I've seen are between code in the same language, as that removes a lot of the compiler magic and skill issues. Funnily enough, my comparison between the int and int32 code is a useful benchmark. The problem arises when you try to benchmark fundamentally different languages (or even frameworks) but do not give them all the same amount of time and attention. As an example, if I were the one to write the C code for this test, we'd probably see Go with a lead, not because Go is faster, but because I know how to write optimized Go.

The real world is messy, and between DB calls, API requests, and IO, the actual performance gains/failures of any particular language become a lot more complex, and their performance will largely depend on your needs. The vast majority of the time, your time would be far better spent optimizing your code than rewriting it in a different language. The only time I'd actually recommend switching languages is when you've already optimized and are still running into performance constraints, or when you want to learn. Let me be clear: Ben Dicken is a better engineer than me, but that doesn't mean he can't be misled and mislead others.
