Guess what? I'll likely update this article frequently, because the topic is so crucial! Expect more information, links, and thoughts as I learn more!
I’ve considered myself a transhumanist since roughly the late 1990s. If you want a much less personal, more comprehensive perspective on the movement, read the Wikipedia page.
Transhumanism begins with the assertion that we, as humans, ought not let biology stop us. In other words, the human race should embrace technology, merging with it figuratively or literally. Transhumanists tend to be (comparatively) optimistic and hopeful because of technology. Most transhumanists I’ve met agree that humanity is bound for positive things as a species; we almost all disagree to some degree about how we will or should get there. Pathways are harder to predict than outcomes.
In some ways, this might sound normal; after all, technology does seem to bring us neat improvements every so often, and medicine keeps advancing in some areas. Yet merging with technology definitely sounds strange. How would it even work? The truth is, that idea means a lot of things to a lot of different people. Some believe it's already begun; others think it's very far off, or that it's a metaphor, but an extremely important one. Anyways...
There's a good chance you've never heard of transhumanism before. The concept is, unfortunately, still a bit obscure. As a philosophical movement, it doesn’t receive much (positive) media coverage. Truth be told, transhumanism gets mentioned more in fiction than anywhere else, I'd say, and that's kind of a shame. By "gets mentioned" I mean in terms of general worldview, though, rather than the exact phrasing.
I swear it's not (just) Star Trek...
The science fiction narratives that chew on transhumanism can take many different forms. It's a really rich subject to explore, but a lot of narratives just tend towards the bleak. Such a long time spent fearing the Bomb certainly did a number on the West's concept of the future. But some stories do promote a hopeful, transhumanist vision of the future.
We do have a lot to choose from when it comes to future prognostications. I like to purposefully ignore the "religious" ones here; while I am religious, I believe any endtimes scenario will play out just as if it were secular, so my beliefs hardly matter here. Anyways.
Let's not assume every bit of science fiction is meant as a secular Revelation, though. Dystopias are warnings, and if you fight them, they might never come true, right? Besides, fiction exists to entertain more than anything else. I still can't see why so many narratives meant purely as entertainment get taken so seriously; why get traumatized by some of these "evil robot" movies, for example?
It gets more complicated when transhumanism moves beyond the bounds of fiction, which it does. The word itself frequently shows up in right-wing lies about chemtrails and vaccines, in which transhumanists are the villains, of course. In these narratives, we’re (usually) trying to kill a bunch of people and usher in some kind of technocratic dystopia. I’m not saying certain transhumanist agendas wouldn’t lead to something bad, but that’s typically because they’re stupid, not necessarily evil. update in April 2026: yes, they're evil, nevermind. their version of transhumanist ideology is evil. fuck them. I guess we really might slide towards a transhumanist dystopia if we don't work against it...
Most people prefer excitement, so dystopias fit into popular media more readily than utopias. When the latter does show up, though, I think it's uniquely transhumanist. I can’t count the number of transhumanists who've mentioned Star Trek and things with a similar post-scarcity flavor. Colloquially, noblebright tends to be the genre that fits best. I see the transhumanist influence there more than anywhere else in fiction, even though it's often a fantasy genre.
Even if you’ve never heard the word transhumanism, the concept had diffused into our air supply by 2015 at the latest, I’d say. In the late 2000s, people began using the internet as the main course in their information diet, and this shifted the way they felt about the future, I guess? Either way, people became a shade more optimistic (in America especially) around that time, particularly about technology itself. This might be in part because we'd clearly "survived" things like the millennium bug and the dotcom crash.
As I was saying before, though, transhumanism still gets little positive media coverage. It has always been one of those things that rest on the tip of everyone's tongue; not everyone knows it's a (quasi-)codified school of philosophy. Right now, it feels like everyone is being pushed to pick a side, or at least a spot somewhere along the spectrum.
Optimistic about humanity?
My interest in transhumanism grew alongside technology, and it certainly increased with the appearance of artificial intelligence. Even with little context, I realized that transhumanism had always anticipated something resembling it. Many transhumanists have been thinking about the implications of artificial intelligence of all kinds for a long time.
I ended up considering myself a transhumanist because I believe we should and must embrace technology as a species. I’ve always allowed myself to remain fuzzy on the details. I hope my beliefs become more refined over time, especially things being what they are. I’ve only interacted minimally with other transhumanists, and who knows if I ultimately agree with their interpretations? Does it matter? Either way, I believe in the most general concepts of transhumanism fairly strongly.
I’m optimistic about humanity’s future, one way or another. I embrace technological progress as part of that, and am even open to exploring all kinds of new technologies myself. Before last Autumn (September 2025, I think?) I wasn't involved, publicly or privately, with transhumanism as a movement. No particular strain of transhumanism drew me in, either. Some authors from years ago seemed a little neat, but beyond that, I knew very little.
I never really thought to pin down the specifics of my own transhumanist beliefs. They seemed about as presently irrelevant as believing SETI might someday find something: you might support SETI, but too much layperson involvement would seem kind of pointless, right? In the mid-2020s, though, AI shoved transhumanism into personal and public relevance. People are discussing (read: fighting over) these topics outside of those niche mailing lists that I’m possibly still banned from.
I get that something resembling AI exists now, and I definitely get that it's changing the world a lot. The situation is much more complicated than that, though, so I'll do my best to understand it.
Transhumanism has grown deeply entwined with artificial intelligence, both as it currently stands and as it might actually exist someday. Many important topics transhumanists discussed over the past twenty years were kind of niche cocktail-party things; now, suddenly, thanks to ChatGPT, they’re in mainstream dialogue. Other models appear, and suddenly the whole world has a veritable chorus of the things. This is a situation with more variables than anyone could've predicted, and it's not like things will ever truly "settle down," if that's what you hope for. And you have to worry about the people at the helm of all this.
I don't believe it's possible for them to truly "crash" the ship or "burst" the Bubble, leaving us all near-dead. Technology doesn't work that way. Just as the Dotcom Bubble bursting didn't spell the end of the World Wide Web, large language models et al. (lol) will not vanish or become completely untenable if that happens. We should still be trying to prevent those asses from causing something like that, and steering towards more sustainable ways forward.
Dear Dead Ends
Plenty of transhumanists seem to be saying that these current artificial intelligence paradigms are a “dead end” as far as the search for machine life goes.
I kind of think so, too, though I grant a small chance they're not, and I do believe research coming out of them might be really fruitful. My own opinion is that it doesn’t really matter, though, because they’re not going away. If real artificial intelligence (of the expected sort) develops, it will have access to LLMs et al. either way. They are now part of our aggregate technological landscape in a big way. They have to figure in any discussion of transhumanism, and should figure in any technological plan for the future. That doesn't always mean a plan for using these current AI, but I don't see 'em going anywhere anytime soon.
To complicate things, people now associate the phrase “artificial intelligence” with them almost completely. This makes any alternate path towards it difficult to fund and follow through on, especially as generative AI raises controversies. There are a lot of valid controversies around it, but we all know that by now. I will mention that the typical large language model’s lust for data centers objectively worsens the energy crisis. So do many other things, but the dialogue surrounding this one remains important, too. Modern so-called artificial intelligence has done a great deal of harm, and you can’t deny it!
If we ever really make an actual self-sustaining artificial intelligence, it’ll be like we’ve created an alien: a species with silicon instead of flesh, but still almost like something bioengineered. But guess what? That wouldn’t be any more or less frightening than any other alien “first contact” scenario, in my opinion! I’m also skeptical, yes, very skeptical, that this would be possible any time soon. I know for sure that LLMs aren’t this hypothetical new silicon species. Isn’t that actually a bit disappointing to a transhumanist? It’s also a relief, so I don't know...