
The AI hype is clearly harmful

Today, I think we’d do well to distance ourselves from the AI hype. The slop is real, it has already created massive problems, and it is likely to continue to do so. I don’t want to associate myself with something this damaging. Do you?

Shallow ethics

The issues are many.

For one, AI is fundamentally hostile to anyone contributing to the digital commons: AI companies are massively freeloading on published source code, articles, images and other creative content, without any regard for license constraints and without contributing anything back. If AI companies were, for example, funding the open source ecosystems they are DoS’ing, or paying the artists they are copying, the situation might be marginally better. They’re not. The training material these companies are misappropriating should be enough to consider their models ethically tainted.

Next, we have to remember that these models are “grown” on whatever data is fed to them. If this input contains bias, lies, inaccuracies or omissions, the resulting model will reflect them. Garbage in, garbage out.

And even worse, the resulting model is opaque by design. Any rules, corrections, filters or other efforts to compensate for “weaknesses” are under the full control of the entity growing the model. This puts a massive amount of leverage into their hands; they can color, censor or emphasize any political, social, cultural or even religious agenda they wish! The only choice we have is to accept the models as they are delivered, or to try to polish these turds so that they are a little better for some narrow use cases. At the core, it is still a turd.

Lock-in economics

And then there’s the economic aspect. Let’s keep in mind that there are massive investments in AI companies (on the order of hundreds of billions of USD announced), all of which are expected to turn a profit at some point.

We know how expensive it is to train a model, and how error-prone its inferred output is. Even if some of this can be compensated for by spending ever more energy (e.g. by “agentifying” the products or adding manual rules to catch the worst output), these expenses WILL ultimately be passed on to the end user. This is where the return on investment is extracted.

How this happens is not a secret:

  1. Create a product and hype it up.
  2. Get as many customers as possible into a “lock-in” situation, where the cost of migration is as large as possible.
  3. When you have enough locked-in customers, start increasing the price.

The problem with this picture is that we know the initial investments have been insane, and are scheduled to increase. We also know that enormous portions of the costs of these models have been externalized (e.g. in the form of the excessive use of water and fossil fuels needed to power these systems, the societal damage that happens when people are replaced by LLMs, the opportunity cost IT students pay when they realize they won’t find work in the field they studied for, the lost business for artists, or the time wasted second-guessing output that may contain hallucinations).

We also know who is going to pay in the end: the users and businesses who decide to go “all-in”. At some point, these people will have to ask themselves:

How much am I – or my customers – willing to pay for this slop?

– Random Hapless Rube

Hostile rhetoric

AI proponents also tend to use cheap rhetoric to convince others to buy into their message. Why is that necessary? Pushing panic-like FOMO messages onto unsuspecting techno-optimists is cruel and unnecessary. There’s no need for manipulative language like “Embrace it or get out”. People with good intentions don’t have to resort to hostile language like this!

The AI hype is clearly cruel, irrational and ignorant of the real consequences it creates, and therefore needs to be shut down, or at a minimum, put on pause.

This particular lemon is NOT worth the squeeze.

If we continue to encourage this insanity, we’re complicit in the waste of resources, attention, life and humanity. THIS IS NOT OKAY.
