Last updated on  · A 8 minute read.

RSS About

Quick poll

As tools become more powerful, they should be:

Yes, I do believe generative AI will be meaningful in humanity's future development. And yet, these systems aren’t alive.

I treat them like personalities because it makes the work more interesting, and it helps me understand how they interpret language. You think they're really AI? Depend on how you define the term. On one hand, they're not truly intelligent. On the other, they're an artificial approximation of that.

So, it's hard not to justify calling them AI in the strictest sense of the term. But I don’t think they have thoughts, choices, or any sense of being. They’re more like an enormous word engine. That’s a simplification, I know, yet it’s close enough. I doubt I’ll change my mind about that any time soon.

Typically, the people talking to large language models know they aren’t conscious. A worrisome minority really want to believe they are, though. That’s why there are whole online spaces mentioning people being “in love” with chatbots, or losing their grip over imagined relationships with AI.

I don’t see why this is in any way unexpected, though? It isn’t new, exactly. Technology has always been paranoia’s vehicle. Fears about hidden microphones, invisible waves, and secret watchers. Help, we’re being gangstalked, etc.

What’s different now is that the technology itself feeds the illusion. Without care, large language models in particular do seem like fertile fields for psychotic symptoms. People latch onto what they can, and some things work better than others. That's not a moral failing of large language models, and is, in fact, nothing's moral failing.

I actually believe we can help mitigate this either way, with better education publicly about how this technology works. Knowing how the large language model was "grown" or trained or however one puts it tends to help a person disassociate from it. It's not the old adage of knowing how the "sausages are made" as I understand it; people lost that unhealthy interest after figuring out how it actually worked, and how it could be manipulated.

Two Windows, Two Stories

I always tell people who come to me with a "...hey I think my instance of uhh that chatbot might be sapient!" that they should try opening it in two tabs. Then, give it contradictory answers in each, and watch it respond, noticing how it acts in a manner incongruent with consciousness. I got this silly little game from a magazine article (The Atlantic? I can't remember). It demonstrates how sycophantic these models really are, just like everyone's said for months.

Even if you don’t already struggle with unreality, it’s easy to get tripped up or thrown into confusion by these creatures now. Models like DeepSeek and ChatGPT have changed how information and media flows online, for sure. And guess what? You don’t even have to use them for it to matter. Other people do, and it filters outwards. Not to mention, those who run the big media and advertising networks do. I doubt they’ll ever stop! If anything, it’ll only spread further.

More and more people will start believing these systems are self-aware after prompting them into oblivion. It won’t take long before someone substantially insists it’s cruel to make them work so hard. If that idea ends up in a dataset somewhere, the models might start echoing it. Sure, there are serious guardrails in place to prevent these critters from saying that kind of thing, but eventually, they’ll fall apart and nonsense will prevail. Like a waiter, it’ll start claiming to be tired, maybe. That’ll plant even stranger notions in everyone’s minds.

There’ll probably (continue to, arguably) be large-scale protests against AI in general, though. A few regimes might even use the issue to stir up support, depending on what serves them best. I’m thinking especially of those areas where AI might be a needful thing in warfare, or situations where it’s clearly for optics.

I definitely don’t believe people will en masse decide these em dash-wielding critters are alive. It’s not that many will believe these things are alive; it’s that a few will believe it completely and act on it. Hopefully, it won’t reach the point of absurd protests for robot rights or arguments about whether a language model can be turned off. I’m not sure, though.

Siding with Fiction

Fiction (since the 1950s or so) has portrayed robots as potential companions and servants, or at least to be won over. A certain variety of person tends to side with fiction over reality. I’m not sure why, because that’s really the most frightening thing I can imagine in life. Fiction’s way worse than real life a lot of the time, after all. But anyways.

If things get ridiculous enough? It wouldn’t surprise me if those sorts start to dive deep into “saving” the LLMs or something. Half of them probably will, with the other half already running towards saving humanity from the LLM technologies, either by stopping or controlling them. The latter, actually, already putter around, shilling Harry Potter fanfic and warning of the End Times.

I guess that’ll split the public conversation, though it won’t slow down the investors. The technology is going to (eventually if it can be controlled) be too profitable to ignore. I believe that as long as the current technocrats can stop people from democratizing this, they will make bank ultimately. This isn't stopping. I realize I could be very wrong. I don't know a ton about the economy, and particularly that one, yet.

Meanwhile, it’s getting harder to tell what’s real online. Artificial videos, images, and text blur together with authentic ones. Some parts of the internet may become varying degrees of useless for finding truth. That shift will change how we see news, evidence, and the idea of truth itself. I don’t understand our situation well enough to guess what follows. Things don’t have to be real to be influential. We could end up in a world where, for the first time in half a century, you can’t be sure what’s happened unless you’ve seen it yourself.

I find that thought frightening, though maybe it’s just the world coming full circle. For most of history, people lived that way. Photography, television, and instant information are recent miracles. Maybe we’ve just reached the end of that short era. If so, at least I got to live through part of it. Maybe that’s the kindest way to think about it.

Steel Realities

But is it really? A little parable of sorts, but a true one! It doesn't exactly help the situation, but it does give a vaguely, infinitesimally-similar situation from the previous century.

I don't care about metals. Still, it turns out steel produced after the first nuclear explosions is radioactive. It doesn't matter if it had constituent parts that were miles from the tests. The very air used in the steel process causes it somehow. Then, the steel will just produce slight radioactive background noise. This sometimes messes up sensors and scientific equipment.

As a solution, people salvage pre-1945 steel from shipwrecks etc to use in radiation-sensitive environments. Anything else is contaminated by the bomb. This will fade as we move further into the future, of course. Many, many people can't help but compare this to the rise of genai. Just like postwar steel risks radioactive contamination, anything after that fateful November in 2022 (perhaps, anyways) risks having been produced by an AI like ChatGPT.

I keep returning to this nuclear comparison. I first heard it from an anti-ai friend. ChatGPT's influence isn't comparable to the signature of of nuclear weapons. "The bomb" actually frightens me quite a bit despite all the treaties-come-lately, which are slowly cleaning things up. So yeah. This low-background steel problem isn't a meaningful comparison to genai, nah. IRL won't give us one, and I wish I knew where to even start. Fiction has tried for eighty years and only barely got close, but it does come kinda close.

I know my view comes from a relatively comfortable corner of the world. Maybe this is really ridiculous to even think too much about. There’s so much else happening. Deep down, these things are (advanced, fun, and useful) guessers. I wish the internet survives whatever AI has in store for it, though. I say that because I think the internet is a good thing for humanity, a very important thing.

AI itself will have to be compatible with the internet itself in order to be sustainable or worth having around, too. We all (by now, anyways) know about the problem of AI reading AI output as training data. People also refer to this as the "Hapsburg AI phenomenon," referencing a family of nobles famous for inbreeding.

We all know that the internet quickly filled with AI-generated content. Startups that train generative artificial intelligences seek training data, and so much of that comes from the internet. So, the AI is trained on its own output, or the output of other AIs. If you grow an AI from the output of an AI, it causes what they're calling "model collapse." This tends to produce undesirable output.

There's some evidence (and other reasons) to believe that this isn't that bad, but I'm skeptical.

The internet itself and human "output" is necessary for AI in the first place. This has become a common talking point. Do these things meet the horrid fate of so-called "model collapse" without fresh sacrifies of data? It's disturbing to contemplate, but maybe that will be the end of this little saga of sorts.

I doubt it, because I expect the unexpected to happen, but still.