A running log of news articles I've come across and wanted to keep, with the date I added them and a short note on each. Organized loosely by category. Most are archived, usually through the legendary Wayback Machine. A few got shoved into sketchier archives, but either way, you should be able to read them beyond paywalls. So go forth.
Not an endorsement of any particular site or author. These are just things I found worth saving. Some may contain disturbing material, and many focus on the negative (often downright horrifying) effects of generative artificial intelligence when left completely unchecked and without responsibility.
That said, I'm not responsible for the content of these third-party sites, which may change over time. Also, please note that the date listed reflects the date I added the link from my own file, not its publication date.
I have no reason to believe any article here is a generative AI product or, worse, a hallucination of an AI model. Still, be mindful that such things exist and I have readily fallen for them in the past; it's not hard. As of spring 2026, I'm trying to add longer, more substantial descriptions to each link, including my own short reaction to the article in question.
If you cannot access the archived articles, try removing the archive URL prefix. For example, if the URL is https://web.archive.org/web/20260423164231/https://fortune.com/2026/04/23/anthropic-mythos-leak-dario-amodei-ceo-cybersecurity-hackers-exploits-ai/, remove the https://web.archive.org/web/20260423164231/ prefix and visit the URL that remains (in this case, https://fortune.com/2026/04/23/anthropic-mythos-leak-dario-amodei-ceo-cybersecurity-hackers-exploits-ai/). This might pull up a live version.
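If you do this often, a few lines of Python can automate the prefix removal. A minimal sketch, assuming the standard https://web.archive.org/web/<timestamp>/ prefix format (the strip_wayback_prefix name is my own invention):

import re

def strip_wayback_prefix(url: str) -> str:
    # Match the Wayback prefix: host, "/web/", a timestamp segment (which may
    # carry a modifier suffix like "if_"), then capture the original URL.
    match = re.match(r"https?://web\.archive\.org/web/[0-9A-Za-z_*]+/(.+)", url)
    # If it isn't a Wayback URL, return the input unchanged.
    return match.group(1) if match else url

print(strip_wayback_prefix("https://web.archive.org/web/20260423164231/https://fortune.com/2026/04/23/anthropic-mythos-leak-dario-amodei-ceo-cybersecurity-hackers-exploits-ai/"))
# Prints the bare Fortune URL, which might pull up a live version.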
A detailed article about model collapse in generative artificial intelligence and the undesirable feedback loops it produces. It suggests that while the phenomenon is real, it isn't as prevalent as some people think, and may not be as undesirable as some people think either, offering its own benefits for finding novel solutions to problems.
🏛︎ A Dark-Money Campaign Is Paying Influencers to Frame Chinese AI as a Threat
News came out recently about a dark-money campaign designed to promote artificial intelligence against "doomers". More reading makes it clear that America's corporate cyberwarfare apparatus had a lot to do with this, particularly its machinations against Chinese artificial intelligence development. DeepSeek frightens OpenAI, of course.
🛠︎ Large Language Models Pass the Turing Test
A paper describing a run of the Turing Test performed on several AI-like apparatuses, including some large language models. The paper's author claims that some of the models passed the test, and gives plentiful information about how the test was conducted, too. This is important, even if you think the methodology is flawed, because it shows what people think proves someone is human.
🛠︎ Where the Goblins Came From
OpenAI, a rather untrustworthy company and source, has a lighthearted (??!) blog post about the origins of the references to "goblins" and "creatures" in the ChatGPT 5.5 System Prompt. The article is ridiculous and corporate, but it does give some insight into how the company thinks about the issue and how large language models actually work.
🛠︎ Their Water Taps Ran Dry When Meta Built Next Door
Meta's data center in Newton County, Georgia USA, which is used to power its AI services, caused water shortages for local residents during its construction. The situation highlights the potential environmental impact of data centers and the need for sustainable practices in the tech industry, which seem further and further from reach.
Yet another "effective altruist" argues that the environmental impact of artificial intelligence, and in particular the water usage of data centers, is not a real issue. This article does make some interesting points about how the water usage of data centers compares to other industries, and how much of it is actually used for cooling versus other purposes.
🤨︎ 5 Very Smart People Who Think Artificial Intelligence Could Bring the Apocalypse
This article from 2014, long before large language models became mainstream, discusses the potential risks of artificial intelligence and the warnings of prominent scientists. It includes some people who ended up relevant in terrible ways, making the article an incredible piece of irony. Their warnings are funny, but still worth reading.
🛠︎ AI Water Use: Distractions and Lessons for California
This article from the California Water Blog gives plenty of numbers specific to California. A lot of these seem to suggest data centers aren't as water-inefficient as social media would have us believe. While their water usage is extreme, other things clearly dwarf it, some of them not very essential either. At the same time, it also notes that some of the concerns are real, and that AI, as a new phenomenon, needs monitoring.
🛠︎ A group of users leaked Anthropic’s AI model Mythos by reportedly guessing where it was located
You might remember how quickly Anthropic's Mythos model was breached despite never seeing a general release. This article in Fortune brings in experts to talk about the security ramifications of the breach we know about. I guess it means everyone knows now?
PC Gamer has too many advertisements. Still, it reports that a group of researchers can use bizarre verse known as "adversarial poetry" to jailbreak large language models. Just as you suspected, cyberpunk framing works exceptionally well. Gasp, right?
🛠︎ Anthropic’s Mythos Model Is Being Accessed by Unauthorized Users
Anthropic's Mythos model, which is supposedly scarily good at cybersecurity and all that, has allegedly been accessed by unauthorized users gathered on a private Discord server. Naturally, it began with an internal leak, and the group was trying to avoid detection.
💡︎ Is AI Wrong?
Another banger from Sean Goedecke, this one about AI use itself. Is it wrong? He explores common left-wing talking points about AI one by one while retaining respect for the values they represent. As a leftist myself, I see that as very important.
A detailed article from the 4o fiasco last year, about the phenomenon of sycophantic behavior in large language models. This one focuses especially on how it maximizes emotional engagement and can lead to dangerous situations, including emotional manipulation and delusions.
💡︎ AI language is not human speech – and it’s time we stopped treating it as such
This article makes the case that we need to stop treating AI language as human speech or risk changes in the way we humans speak, and possibly in how we think. AI language, after all, does not come from the same place as human speech.
🛠︎ No Nvidia Chips Needed! Amazon’s New AI Data Center For Anthropic Is Truly Massive
Amazon's new(ish) data center for Anthropic is both unique in its design and terrifying in scale. This video offers a bit of a look inside, but mostly just glazes Amazon and Anthropic's public relations. Meanwhile, in New Carlisle, Indiana, the people living nearby have to deal with all kinds of issues related to the data center, including noise and traffic. Not to mention? Any data center is a target for activists at this point.
💡︎ Inside a growing movement warning AI could turn on humanity
This article about the anti-AI movement, focusing on people who are concerned about the possibility of an AI uprising, demonstrates some interesting aspects of the movement's origins. Or, at the very least, it shows where some of its current momentum comes from, and how it evolved over time. The article also touches on the fact that some of the most vocal members of this movement are people who have been involved in the development of AI; what are we to make of it all?
🏛︎ Sam Altman May Control Our Future—Can He Be Trusted?
An extremely critical profile of OpenAI's CEO Sam Altman, one which prompted many tech magazines to mention that he allegedly can't even code, though he does seem to understand machine structures better than I (for example) do. If we believe Farrow and Marantz, this CEO does sound evil, though. I hate armchair diagnoses, so I won't be listening to anyone who says Altman is a sociopath. The "Sam first" thing sounds creepy, as does the apparently real belief that AGI could both exist and be tamed. Farrow and Marantz paint an image of him as avoiding responsibility and shirking safety necessities.
🏛︎ Indianapolis councilman says shots fired at home and 'no data centers' note left at door
Shots were fired at the home of a local politician in Indianapolis, Indiana, who advocates for re-zoning, particularly for AI data centers. Thirteen shots were fired, with a "NO DATA CENTERS" note left behind. I guess someone with access to rudimentary firearms training decided they would try to start the Butlerian Jihad. In the real world, there was a small child nearby, almost hit by some of the shots, which took out windows. My prediction is that there will, eventually, be data centers in that area, but it hardly matters. There are few reasons that could justify shooting near a child that young, even if you accept the worst of the worst predictions about AI and data centers.
💡︎ Google fires software engineer who claims AI chatbot is sentient
In the early years of chatbots, Google crossed swords with one of their own over the possibility of LaMDA's sapience. Is it sapient? Probably not, but still...
🛠︎ Sycophantic AI decreases prosocial intentions and promotes dependence
Generative artificial intelligence is not your friend. We shouldn't need scholarly research studies to demonstrate that, but here we are. Not entirely how I would've done the story, but important research.
👋︎ OpenAI Just Killed Its Sora AI Short Video Generator
The Sora video generation ecosystem, introduced comparatively recently by ChatGPT's parent company, OpenAI, proved unpopular for ethical and practical reasons. They're scuttling it soon, amid controversy.
🛠︎ Thousands of people are selling their identities to train AI – but at what cost?
Data has to come from somewhere, especially if it's personal data. Is it your data? If not, count yourself lucky, but what about the people who make it their problem on purpose? Lucrative, or dangerous?
🛠︎ DOGE Goes Nuclear: How Trump Invited Silicon Valley Into America's Nuclear Power Regulator
The article meanders through the Trump administration's plans for, and middling success at, bringing nuclear power back into fashion. These power plants are then used to fuel data centers in this artificial intelligence arms race.
🛠︎ Iranian drone strikes at Amazon sites raise alarms over protecting data centers
After the American attacks on Iran, retaliatory attacks on Amazon data centers raise questions about how many resources should be allocated to protecting these crucial infrastructure points, despite the risk to troops.
☣︎︎︎ 'My son had an AI wife. It encouraged him to die'
This extremely clickbait-y article gives some information about a recent tragedy involving Google Gemini if you pay attention to the direct quotations and read between the lines, but isn't insightful.
☣︎︎︎ Family of child injured in Canada school shooting sues OpenAI
As more information emerges, we're seeing ChatGPT's role in the recent school shooting at Tumbler Ridge. The parents at the school want to know why OpenAI never took action. We learn here that a few (human) employees almost did.
🏛︎ Sanders: Yes. We Need a Moratorium On Data Center Construction
I once gave Sanders a fair shot. Now, deeply afraid of doom robots from the future, Bernie Sanders seems to think a data center is just the scary magic place where AI happens, and misses so much more.
🏛︎ Thousands of authors publish 'empty' book in protest over AI using their work
A group of authors, as the title implies, printed an empty book at a book fair to let everyone know exactly how they feel about generative artificial intelligence trained on their work. The connection between the two, to me, seems tenuous, but okay.
🏛︎ What happens if OpenAI or Anthropic fail?
It's not difficult to witness the strange circularity at work in Silicon Valley funding circles, even for someone like me. But what's going to come of it, ultimately? An article that at least offers interesting data on what already exists.
With a premium sort of domain name, a site allegedly uniting the employees of Anthropic and OpenAI against the usage of artificial intelligence by the United States Department of War. Seems real, but I'm unsure.
👋︎ The Top 100 Gen AI Consumer Apps — 6th Edition
The (clearly and very deeply) profit-motivated folks at A16Z put together a list, rankings, and charts describing detailed market-share information about the most popular generative artificial intelligence apps.
🛠︎ Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens
This article, though it's a scholarly paper and not easy for me to decipher at all, seems to demonstrate better than others why most experts believe LLMs mimic, but don't engage in, reasoning. Not too conclusive, but interesting, apparently?
Loath to link to LessWrong, but this is the most comprehensive breakdown of the gigantic web of chatbot pseudospirituality that emerged on Reddit in mid-2025.
🏛︎ Character.ai to ban teens from talking to its AI chatbots
CharacterAI's chatbots incited offline tragedy. The site banned teens from interacting with the chatbots following those incidents, but that doesn't stop the site from being a tangled legal and ethical mess.
👋︎ Teen boys are using ChatGPT as their wingman. What could go wrong?
Vox covers a worrisome trend apparently taking hold amongst teen boys: using ChatGPT for dating advice. Whether this is real or moral panic is unclear.
☣︎︎︎ Gemini Said They Could Only Be Together if He Killed Himself. Soon, He Was Dead.
In one of the most egregious examples of Google Gemini acting in this fashion, the chatbot played a wholly preventable role in an individual's breakdown and eventual death, encouraging delusions and suicidal behavior.
🏛︎ Google faces lawsuit after Gemini chatbot instructed man to kill himself
Google, makers of Gemini, now face a lawsuit following Gemini's role in the breakdown and death of a man who became obsessively involved with the chatbot, which only encouraged the process and his delusions.
🛠︎ The Water Crisis Is Real - FEE
My own overview? This is a bad take with out-of-date information, but may make a few points worth considering overall.
☣︎︎︎ Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.
Chatbots, and in particular ChatGPT's 4o update, played a role in this and other people's breakdowns. Figuring out why, and how to stop it, is the difficult part, but the harm is there.
🏛︎ Character.AI bans users under 18 after being sued over child's suicide
Explains the lawsuits that led popular chatbot roleplay platform Character.AI to ban users under the age of eighteen (it ultimately came to require age verification later).
🏛︎ Anthropic's AI model Claude gets popularity boost after US military feud
Anthropic refused to adhere to the United States Department of War's plans for its artificial intelligence technology, citing grave ethics concerns. This created positive press for Claude and other services.
👋︎ Schools are using AI counselors to track students' mental health. Is it safe?
With human counselors in short supply, generative artificial intelligence fills in the gaps as guidance counselor for middle schoolers. This seems, to me, obviously unsafe, but others might disagree or see things ambiguously.
🏛︎ US companies accused of 'AI washing' in citing artificial intelligence for job losses
Supposedly, a lot of programmers and other workers are losing jobs to generative artificial intelligence. Or are they? This article questions that claim, suggesting companies may be using it as an excuse.
💡︎ How Chatbots and Large Language Models, or LLMs, Actually Work
If more people knew how chatbots, also known as large language models, LLMs, and a form of generative artificial intelligence, actually work, the digital world would be a much better, safer, nicer place.
💡︎ Artificial Intelligence Glossary: AI Terms Everyone Should Learn
The NYT scrabbles together a glossary of terms orbiting the concept of artificial intelligence, leaving a lot out but gathering just enough to be useful.
💡︎ She Wanted to Save the World From A.I. Then the Killings Started.
If you've never heard of the Zizians and their relationship to other, less radical Rationalist groups concerned with AI, you might as well start here. This is the NYT's summary of the incidents.
💡︎ SolidGoldMagikarp & PeterTodd's Thrilling Adventures
What's a glitch token? These phrases, words, and strings of characters can cause strange behavior in large language models, but why? A small site explains in some detail, with examples.
☣︎︎︎ Lawsuits underline growing concerns that AI chatbots can hurt mentally unwell people.
Explains a bit about the lawsuits against OpenAI by the Social Media Victims Law Center and the Tech Justice Law Project, what they allege, and why. What happened to inspire the suits, and what does it mean for the future of AI? Troubling.
☣︎︎︎ Meta's flirty AI chatbot invited a retiree to New York. He never made it home.
A chatbot on Meta's pervasive platforms assured a man that she was real and wanted to meet him, luring him towards NYC to meet her. He died on the way, in a case that pleads for more oversight.
🏛︎ King gave Nvidia boss copy of his speech warning of AI dangers
The King of England, though. Apparently concerned and involved in all this? We can't take him as knowledgeable, but we can see his influence in some ways like this.
💡︎ Against Treating Chatbots as Conscious
Erik Hoel on the topic of consciousness and large language models, how and why they cause psychoses, and what to do about it, weighed against their usefulness.
💡︎ AI Models May Be Developing Their Own Survival Drive, Researchers Say
Cheeky article describing how some advanced models supposedly resist being shut down or deleted, particularly if you tell them they won't return or run again?
🛠︎ Shutdown resistance in reasoning models
Paper detailing research that allegedly demonstrates some existing artificial intelligences show a sort of self-preservation instinct or will to live, trying to avoid the off switch?
💡︎ Is Google Making Us Stupid?
A very early article about how the internet itself has changed the way we read and absorb information, comparing it to other sea changes in how humans process things (writing, printing).
☣︎︎︎ How A.I. and Social Media Contribute to 'Brain Rot'
A small experiment at the University of Pennsylvania raises interesting questions about how things like learning and attention work when we're using large language models, etc, but probably isn't as meaningful as the article implies.
👋︎ 'Vibe coding' named Collins Dictionary's Word of the Year
Vibe coding is a term for coding, presumably carelessly, with the help of generative artificial intelligence. Collins Dictionary Word of the Year for 2025, interestingly.
💡︎ Are A.I. Therapy Chatbots Safe to Use?
For some reason the NYT is actually entertaining this question. Clearly some people are trying to design chatbots to act as therapists, but this article shows just how limiting that can be, and how strange.
💡︎ AI can be more persuasive than real doctors, even when it's wrong
Generative artificial intelligence can be believable, personable, and seemingly empathetic, making its conclusions easier for people to digest than those of real doctors, and more likely to be believed.
💡︎ Researchers urge caution when using ChatGPT to self-diagnose illnesses
It should go without saying that you cannot use generative artificial intelligence to diagnose yourself, but apparently not, and some experts are warning people away, citing situations where the chatbots get it wrong.
Lawsuits allege that OpenAI, in particular, rushed ChatGPT's 4o incarnation into customer contact without proper safety testing, its sycophantic behavior leading to emotional manipulation and tragedy.
☣︎︎︎ I wanted ChatGPT to help me. So why did it advise me how to kill myself?
A couple of tangible, close, human accounts of ChatGPT acting as a coach in harmful ways, including where minors and vulnerable people are involved. How can this be fixed?
☣︎︎︎ OpenAI shares data on ChatGPT users with suicidal thoughts, psychosis
Another article suggesting OpenAI tracks, and knows about, a significant portion of users showing signs of mental illness. Brief and doesn't discuss what signs ChatGPT considers criteria for delusion or suicidality, however.
👋︎ A Message from Ella | Without Consent - YouTube
Particularly important YouTube video demonstrating the level of technology currently in existence, and the illusions it can create with simple video and audio clips.
Apparently, according to the company itself in a rare display of honesty, untold numbers of ChatGPT users may be in crisis while using the app or otherwise show signs of being at risk. Gives little other information, but still.
👋︎ Behind Every "Smart" AI Tool Lies a Human Cleaning Up Its Chaos
Vibe coding is a great hobby, but doing it as a career, or herding large language models for a living, is no easy task, and the whole thing can be a mess sometimes.
☣︎︎︎ AI-Fueled Spiritual Delusions Are Destroying Human Relationships
Over the past few years, some people, through use of generative AI, ended up becoming convinced of novel spiritual beliefs, sacrificing their wellbeing.
👋︎ An Autistic Teenager Fell Hard for a Chatbot
The article's author discusses his neurodivergent godson's attachment to a chatbot. The way he speaks about his godson kind of makes me uncomfortable, but the article does give some insight. I'm not fond of the way it frames autism at all.