So, like everyone on the planet I’ve been playing around with artificial “intelligence” for a bit. It sat uneasily with me – it’s not actually “intelligent” but rather a glorified autocomplete, it has lots of issues with intellectual property, and – as anyone with a smartphone knows – autocomplete can really f*ck it up out of the ballpark.
But this week, late one evening, fuelled by copious amounts of white grape juice, I realised what my actual beef with it is.
I’m a tech-savvy guy. Heck, I’m pretty smart all-round. I usually figure out how to get stuff done, sometimes with a little help from Google (or some other source of wisdom). But every now and then, I run into something strange, something bizarre, something the general help can’t explain. So: I call the supplier to ask for help.
Now, what usually happens is, you reach the help desk (well, you’re lucky to do so these days, but assuming you do) and get to speak to a first-tier monkey. These people (bless ’em) are underpaid grunt workers who just rattle off a script. Basically, The IT Crowd’s “have you tried turning it off and on again?”. I’m sure that for 99% or more of the callers this works, but trust me: I’m not that guy. When I’m calling you, I’ve officially Admitted Defeat. Well, no problem, the grunt monkey will transfer me to second or third tier, or offer to send out a technician.
But here’s the point: AI is basically that first-tier monkey. It’s trained on the common denominator, and will regurgitate that. It’s not going to offer valuable new insights. So to me, it’s essentially as useful as the first 10 or so minutes on a support call: getting the first-tier monkey to transfer me to someone who can actually do more than just rattle off a script. And after that? AI falls silent (or starts hallucinating, which might be amusing but is certainly not particularly helpful).
Does it have its uses? Sure – generate boilerplate code to your heart’s content (though I’ve been using scripts for that since, I dunno, 2018 or so). But even for the most trivial code, you cannot trust its output. For example, from what I’ve seen it insists on using TailwindCSS to style stuff. For the love of all that is holy: do not use TailwindCSS. It is the devil’s tool.
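For the curious: the kind of boilerplate script I mean is nothing fancy. A hypothetical sketch (the template and names are made up, not from any real project) – a few lines of Python stamping out a component skeleton:

```python
from string import Template

# Made-up template for a minimal JS component -- the sort of
# boilerplate a dumb script can generate without any AI involved.
COMPONENT_TEMPLATE = Template("""\
export class $name {
    constructor() {
        this.el = document.createElement("$tag");
        this.el.className = "$css_class";
    }
}
""")

def render_component(name: str, tag: str = "div") -> str:
    """Fill in the template; the CSS class is just the lowercased name."""
    return COMPONENT_TEMPLATE.substitute(
        name=name, tag=tag, css_class=name.lower()
    )

print(render_component("SidebarWidget"))
```

Deterministic, auditable, and it never hallucinates a utility class.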
Anyway, I digress. My point is: use AI as a search engine with superpowers, because that’s all it is. The first result on Google has never necessarily been the correct one. And since these models are trained on basically the same data as, ehm, Google et al., neither is their answer going to be correct per se.
I did a fun experiment last week: I asked ChatGPT what it thought of my own (niche, but available on GitHub) framework. It had no clue, so it just started making stuff up. When I pointed that out, at least it had the common courtesy to admit that it was out of its depth. But it took an actual human to realise it was going bonkers in the first place. Not very “intelligent” if you ask me.