Some thoughts on AI

Let's cut through the hype

I've been writing software professionally for nearly 40 years. Here are my thoughts:

  1. Large language models (LLMs) like ChatGPT are basically very large predictive text boxes. Instead of just guessing the next word, they can guess whole documents while burning enough energy to keep a family warm for a year.
  2. As a consequence of 1, and this cannot be stressed enough, they are not databases. You can ask questions, sure, but there are no facts as such. They will make things up. For example, they've been known to cite academic papers that perhaps should exist, but don't.
  3. Machine learning more broadly relies on things like discriminator functions, which can say whether a piece of data is likely to fall in a particular mathematical space. We now have the computing power to make millions of those calculations very quickly. That's where the self-driving car comes from: a very mechanical sifting of data, very fast. There's nothing intelligent about it except the algorithms, and those are an entirely human creation. These machines can drive very successfully in dry conditions in California - and as far as I know, nowhere else.
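To make point 3 concrete, here's a toy sketch of what a discriminator function boils down to. Everything here is made up for illustration - the weights `w` and bias `b` stand in for parameters a training procedure would have fitted - but the shape of the computation is the same: a weighted sum squashed into a score between 0 and 1, with no understanding anywhere in sight.

```python
import math

def discriminator(x, w, b):
    """Toy discriminator: the probability that point x falls on the
    'positive' side of a linear boundary. In a real system, w and b
    would come from training; here they are invented values."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-score))  # logistic squash to (0, 1)

# Invented boundary: roughly "is x[0] + x[1] greater than 1?"
w, b = [1.0, 1.0], -1.0
print(discriminator([2.0, 2.0], w, b))  # well inside the region: close to 1
print(discriminator([0.0, 0.0], w, b))  # well outside: close to 0
```

A self-driving stack runs millions of calculations of roughly this character per second. It's fast arithmetic, not thought.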

To illustrate point 2, the Dilbert creator, Scott Adams, recently had a "conversation" with ChatGPT in which it convinced him of various things that upset him. Well, it simply reflected his biases back at him, because of course it did. That's what it does. It's not a conversation: it's selecting the next thing in the predictive text box, tuned by your own questions towards your own biases. It doesn't know anything. It's an industrial finder and repeater of patterns.
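The "finder and repeater of patterns" idea can be sketched in a few lines. This is a crude bigram model, not an LLM - the corpus and the example are made up - but the principle is the same: count which word tends to follow which, then regurgitate the most common continuation. Scale the table up to billions of parameters and you get something that sounds fluent while still containing no facts.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """The whole 'model' is a table of observed word pairs, nothing more."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def predict_next(table, word):
    """Return the most frequently observed follower, or None if unseen."""
    followers = table.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Tiny invented corpus. Ask it about words it has never seen and it has
# nothing to say; ask it about words it has seen and it parrots patterns.
corpus = "the cat sat on the mat and the cat slept"
table = train_bigrams(corpus)
print(predict_next(table, "the"))  # 'cat' - the most frequent follower here
print(predict_next(table, "dog"))  # None - no pattern, no answer
```

Notice there is no notion of truth anywhere in that code, only frequency. That's the sense in which point 2 holds: fluent output, zero facts.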

The other thing about point 2 is that the current tech should be nowhere near anything that involves people dying, or any kind of life-critical decision. It makes things up, and it doesn't understand causality. You may have heard the phrase "the operation was a success, but the patient died." That will happen without a lot of safeguards in place, and deploying this stuff there is a really stupid idea.

Point 3 - this means that these systems can't improvise in novel situations, which you really don't want when they're in charge of vehicles that weigh several tons and will squash you flat.

One AI startup made a system that looked at photographs and tried to identify "criminals". Yes, the fools reinvented phrenology and didn't have the wit to realise it. Also ... are you more "criminal" if the photo was taken in dark lighting conditions? Who knows?

There's an interesting book called Weapons of Math Destruction by Cathy O'Neil that's worth a read. It looks in some detail at how some things in the carceral state have become self-fulfilling prophecies. It discusses the biases of training data really well, if I recall.