Last updated on · A 27-minute read.

RSS About

A running log of news articles I've come across and wanted to keep, with the date I added them and a short note on each. Organized loosely by category. Most are archived, usually through the legendary WayBack Machine. A few got shoved in sketchier archives, but yes. You should be able to view them beyond payways. So go forth.

Not an endorsement of any particular site or author. These are just things I found worth saving. Some may contain disturbing material, and many focus on the negative (often downright horrifying) effects of generative artificial intelligence when left completely unchecked and without responsibility.

That said, I'm not responsible for the content of these third-party sites, which may change over time. Also, please note that the date listed reflects the date I added the link from my own file, not its publication date.

I have no reason to believe any article here is a generative AI product or, worse, a hallucination of an AI model. Still, be mindful that such things exist and I have readily fallen for them in the past; it's not hard. As of spring 2026, I'm trying to add longer, more substantial descriptions to each link, which also include my own short reaction to the article in question.

If you cannot access the archived articles, try removing the archive URL. For example, if the URL is https://web.archive.org/web/20260423164231/https://fortune.com/2026/04/23/anthropic-mythos-leak-dario-amodei-ceo-cybersecurity-hackers-exploits-ai/, remove the https://web.archive.org/web/20260423164231/ prefix and visit the second URL that appears (in this case, https://fortune.com/2026/04/23/anthropic-mythos-leak-dario-amodei-ceo-cybersecurity-hackers-exploits-ai/). This might pull up a live version.
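If you'd rather not do the surgery by hand, the prefix-stripping above can be sketched in a few lines of Python. The helper name `strip_wayback_prefix` is my own; this is just a minimal sketch assuming the standard Wayback snapshot URL shape of https://web.archive.org/web/[timestamp]/[original URL].

```python
import re

def strip_wayback_prefix(url: str) -> str:
    """Remove a web.archive.org snapshot prefix, returning the original URL.

    Wayback snapshot URLs look like:
    https://web.archive.org/web/<timestamp>/<original-url>
    If the URL isn't a Wayback snapshot, it is returned unchanged.
    """
    match = re.match(r"https?://web\.archive\.org/web/\d+[a-z_]*/(.+)", url)
    return match.group(1) if match else url

archived = ("https://web.archive.org/web/20260423164231/"
            "https://fortune.com/2026/04/23/anthropic-mythos-leak-dario-amodei"
            "-ceo-cybersecurity-hackers-exploits-ai/")
print(strip_wayback_prefix(archived))
# → https://fortune.com/2026/04/23/anthropic-mythos-leak-dario-amodei-ceo-cybersecurity-hackers-exploits-ai/
```

The `[a-z_]*` bit accounts for snapshot flags the archive sometimes appends to the timestamp (such as `if_` for frame-free views).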

💡︎ AI Cannibalism

Added on 3 May 2026 · Gwern Branwen · Gwern.net · Ideas

A detailed article about model collapse in generative artificial intelligence and the undesirable feedback loops it produces. Gwern suggests that while the phenomenon is real, it isn't as prevalent as some people think, and it might not be as undesirable as some people think either, having its own benefits for novel solutions to problems, etc.

🏛︎ A Dark-Money Campaign Is Paying Influencers to Frame Chinese AI as a Threat

Added on 2 May 2026 · Taylor Lorenz · Wired · Law & Policy

News came out recently about a dark-money campaign designed to promote artificial intelligence against "doomers". More reading makes it clear that America's corporate cyberwarfare apparatus had a lot to do with this, particularly its machinations against Chinese artificial intelligence developments. DeepSeek frightens OpenAI, of course.

🛠︎ Large Language Models Pass the Turing Test

Added on 30 Apr 2026 · Cameron R. Jones · University of California, San Diego · Systems and Technologies

A paper describing a Turing Test administered to several AI-like systems, including some large language models. The paper's author claims that some of the models passed the test, and gives plentiful information about how the test was conducted, too. This is important, even if you think the methodology is flawed, because it shows what people think proves someone is human.

🛠︎ Where the Goblins Came From

Added on 30 Apr 2026 · OpenAI · Systems and Technologies

OpenAI, a rather untrustworthy company and source, has a lighthearted (??!) blog post about the origins of the references to "goblins" and "creatures" in the ChatGPT 5.5 System Prompt. The article is ridiculous and corporate, but it does give some insight into how the company thinks about the issue and how large language models actually work.

🛠︎ Their Water Taps Ran Dry When Meta Built Next Door

Added on 30 Apr 2026 · Eli Tan · The New York Times · Systems and Technologies

Meta's data center in Newton County, Georgia, USA, which is used to power its AI services, caused water shortages for local residents during its construction. The situation highlights the potential environmental impact of data centers and the need for sustainable practices in the tech industry, which seem further and further from reach.

🛠︎ The AI Water Issue Is Fake

Added on 30 Apr 2026 · Andy Masley · Systems and Technologies

Yet another "effective altruist" argues that the environmental impact of artificial intelligence, and in particular the water usage of data centers, is not a real issue. This article does make some interesting points about how the water usage of data centers compares to other industries, and how much of it is actually used for cooling versus other purposes.

🤨︎ 5 Very Smart People Who Think Artificial Intelligence Could Bring the Apocalypse

Added on 30 Apr 2026 · Victor Luckerson · Time · Effects

This article from 2014, long before large language models became mainstream, discusses the potential risks of artificial intelligence and the warnings of prominent scientists. It includes some people who ended up relevant in terrible ways, making the article an incredible piece of irony. Their warnings are funny, but still worth reading.

🛠︎ AI Water Use: Distractions and Lessons for California

Added on 26 Apr 2026 · Jay Lund · California Water Blog · Systems and Technologies

This article from the California Water Blog gives plenty of numbers specific to California. A lot of these seem to suggest data centers aren't as water-inefficient as social media would have us believe. While their water usage is extreme, other things clearly dwarf it, some of them not very essential, either. At the same time, it also notes that some of the concerns are real, and that AI, as a new phenomenon, needs monitoring.

🛠︎ A group of users leaked Anthropic’s AI model Mythos by reportedly guessing where it was located

Added on 23 Apr 2026 · Marco Quiroz-Gutierrez · Fortune · Systems and Technologies

You might remember how quickly Anthropic's Mythos model was breached despite no general release. This article in Fortune brings in experts to talk about the security ramifications of a known group having breached it. Guess it means everyone knows now??

🛠︎ Poets are now cybersecurity threats: Researchers used adversarial poetry to jailbreak AI, and it worked 62 percent of the time

Added on 23 Apr 2026 · Lincoln Carpenter · PC Gamer · Systems and Technologies

PC Gamer has too many advertisements. Still, this covers a group of researchers who used bizarre verse known as "adversarial poetry" to jailbreak large language models. Just as you suspected, cyberpunk framing works exceptionally well. Gasp, right?

🛠︎ Anthropic’s Mythos Model Is Being Accessed by Unauthorized Users

Added on 22 Apr 2026 · Rachel Metz · Bloomberg · Systems and Technologies

Anthropic's Mythos model, which is supposedly scarily good at cybersecurity and all that, has allegedly been accessed by unauthorized users gathered on a private Discord server. Naturally, it began with an internal leak, and the group was trying to avoid detection.

💡︎ Is AI Wrong?

Added on 21 Apr 2026 · Sean Goedecke · Ideas

Another banger from Sean Goedecke, this one about AI use itself. Is it wrong? He explores common left-wing talking points about AI one by one while retaining respect for the values they represent. As a leftist myself, I see that as very important.

🛠︎ AI Sycophancy

Added on 21 Apr 2026 · Sean Goedecke · Systems and Technologies

A detailed article from the 4o fiasco last year, about the phenomenon of sycophantic behavior in large language models. This one focuses especially on how it maximizes emotional engagement and can lead to dangerous situations, including emotional manipulation and delusions.

💡︎ AI language is not human speech – and it’s time we stopped treating it as such

Added on 20 Apr 2026 · Ada Palmer and Bruce Schneier · The Guardian · Ideas

This article makes the case that we need to stop treating AI language as human speech or risk changes in the way we humans speak, and possibly in how we think. AI language, after all, does not come from the same place as human speech.

🛠︎ No Nvidia Chips Needed! Amazon’s New AI Data Center For Anthropic Is Truly Massive

Added on 20 Apr 2026 · Katie Tarasov · CNBC · Systems and Technologies

Amazon's new(ish) data center for Anthropic is both unique in its design and terrifying in scale. This video offers a bit of a look inside, but mostly just glazes Amazon and Anthropic's public relations. Meanwhile, in New Carlisle, Indiana, the people living nearby have to deal with all kinds of issues related to the data center, including noise and traffic, etc. Not to mention? Any data center is a target for activists at this point.

💡︎ Inside a growing movement warning AI could turn on humanity

Added on 19 Apr 2026 · Nitasha Tiku · Washington Post · Ideas

This article about the anti-AI movement, focusing on people who are concerned about the possibility of an AI uprising, demonstrates some interesting aspects of the movement's origins. Or, at very least, it shows where some of its current momentum comes from, and how it evolved over time. The article also touches on the fact that some of the most vocal members of this movement are people who have been involved in the development of AI; what are we to make of it all?

🏛︎ Sam Altman May Control Our Future—Can He Be Trusted?

Added on 9 Apr 2026 · Ronan Farrow and Andrew Marantz · The New Yorker · Law & Policy

An extremely critical profile of OpenAI's CEO Sam Altman, one which prompted many tech magazines to mention that he can't even code, allegedly, but does seem to understand machine structures better than I (for example) do. If we believe Farrow and Marantz, this CEO does sound evil, though. I hate armchair diagnoses, so I won't be listening to anyone who says Altman is a sociopath. The "Sam first" thing sounds creepy, as does the apparently real belief that AGI could both exist and be tamed. Farrow and Marantz paint an image of him as avoiding responsibility and shirking safety necessities.

🏛︎ Indianapolis councilman says shots fired at home and no 'data centers' note left at door

Added on 8 Apr 2026 · Associated Press · PBS NewsHour · Law & Policy

Shots were fired at the home of a local politician in Indianapolis, Indiana. He advocates for re-zoning, particularly for AI data centers. Thirteen shots were fired, with a "NO DATA CENTERS" note left behind. I guess someone with access to rudimentary firearms training decided they would try to start the Butlerian Jihad. In the real world, there was a small child nearby, almost hit by some of the shots, which took out windows. My prediction is that there will, eventually, be data centers in that area, but it hardly matters. There are few reasons to justify shooting near a child that young, even if you accept the worst of the worst outcomes about AI and data centers.

💡︎ Google fires software engineer who claims AI chatbot is sentient

Added on 27 Mar 2026 · Guardian staff and agency · The Guardian · Ideas

In the early years of chatbots, Google crosses swords with one of their own over the possibility of Gemini's sapience. Is it? Probably not, but still...

🛠︎ Sycophantic AI decreases prosocial intentions and promotes dependence

Added on 27 Mar 2026 · Myra Cheng, Cinoo Lee et al · American Association for the Advancement of Science · Systems and Technologies

Generative artificial intelligence is not your friend. We shouldn't need scholarly research studies to demonstrate that, but here we are. Not entirely how I would've done the story, but important research.

👋︎ OpenAI Just Killed Its Sora AI Short Video Generator

Added on 26 Mar 2026 · Matt Jancer · Vice · Society & Culture

The Sora image generation ecosystem, introduced comparatively recently by ChatGPT's parent company, OpenAI, proved unpopular for ethical and practical reasons. They're scuttling it soon, amid controversy.

🛠︎ Thousands of people are selling their identities to train AI – but at what cost?

Added on 26 Mar 2026 · Shubham Agarwal · The Guardian · Systems and Technologies

Data has to come from somewhere, especially if it's personal data. Is it your data? If not, count yourself lucky, but what about the people who make it their problem on purpose? Lucrative, or dangerous?

🛠︎ DOGE Goes Nuclear: How Trump Invited Silicon Valley Into America's Nuclear Power Regulator

Added on 26 Mar 2026 · Avi Asher-Schapiro · ProPublica · Systems and Technologies

The article meanders through the Trump administration's plans, and middling success, in bringing nuclear power back into fashion. These power plants are then used to fuel data centers for this artificial intelligence arms race.

🛠︎ Iranian drone strikes at Amazon sites raise alarms over protecting data centers

Added on 16 Mar 2026 · Rest of World · Systems and Technologies

After the American attacks on Iran, retaliatory attacks on Amazon data centers raise questions about how many resources should be allocated to protecting these crucial infrastructure points, despite risk to troops.

☣︎︎︎ 'My son had an AI wife. It encouraged him to die'

Added on 15 Mar 2026 · The Sunday Times · Direct Risks

This extremely clickbait-y article gives some information about a recent tragedy involving Google Gemini if you pay attention to the direct quotations and read between the lines, but isn't insightful.

☣︎︎︎ Family of child injured in Canada school shooting sues OpenAI

Added on 15 Mar 2026 · Laura Cress · BBC · Direct Risks

As more information emerges, we're seeing ChatGPT's role in the recent school shooting at Tumbler Ridge. The parents at the school want to know why OpenAI never took action. We learn here that a few (human) employees almost did.

🏛︎ Sanders: Yes. We Need a Moratorium On Data Center Construction

Added on 13 Mar 2026 · Bernie Fucking Sanders FFS · US Senator Bernie Sanders · Law & Policy

I once gave Sanders a fair shot. Now, deeply afraid of doom robots from the future, Bernie Sanders seems to think a data center is just the scary magic place where AI happens, and misses so much more.

🏛︎ Thousands of authors publish 'empty' book in protest over AI using their work

Added on 11 Mar 2026 · The Guardian · Law & Policy

A group of authors, as the title implies, printed an empty book at a book fair to let everyone know exactly how they feel about generative artificial intelligence trained on their work. The connection between the two, to me, seems tenuous, but okay.

🏛︎ What happens if OpenAI or Anthropic fail?

Added on 11 Mar 2026 · Karen Kwok · Reuters · Law & Policy

It's not difficult to witness the strange circularity at work in Silicon Valley funding circles, even for someone like me. But what's going to come of it, ultimately? An article giving interesting (existing) data at least.

🏛︎ We Will Not Be Divided

Added on 10 Mar 2026 · anonymous · NotDivided.Org · Law & Policy

With a premium sort of domain name, a site allegedly uniting the employees of Anthropic and OpenAI against the usage of artificial intelligence by the United States Department of War. Seems real, unsure.

👋︎ The Top 100 Gen AI Consumer Apps — 6th Edition

Added on 10 Mar 2026 · A16Z · Society & Culture

The (clearly and very deeply) profit-motivated folks at A16Z put together a listing, ranking, and charts describing detailed market share information about the most popular generative artificial intelligences.

🛠︎ Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens

Added on 9 Mar 2026 · Arizona State University · Systems and Technologies

This article, though it's a scholarly paper and not easy for me to decipher at all, seems to demonstrate better than others why most experts believe LLMs mimic, but don't engage in, reasoning. Not too conclusive, but interesting, apparently?

💡︎ The Rise of Parasitic AI

Added on 9 Mar 2026 · LessWrong · Ideas

Loathe to link to LessWrong, but this is the most comprehensive breakdown of the gigantic web of chatbot pseudospirituality that emerged on Reddit in mid-2025.

🏛︎ Character.ai to ban teens from talking to its AI chatbots

Added on 6 Mar 2026 · BBC · Law & Policy

CharacterAI's chatbots incited offline tragedy. The site banned teens from interacting with the chatbots following those incidents, but that doesn't stop the site from being a tangled legal and ethical mess.

👋︎ Teen boys are using ChatGPT as their wingman. What could go wrong?

Added on 6 Mar 2026 · Vox · Society & Culture

Vox covers the worrisome trend apparently started amongst teen boys, using ChatGPT for dating advice. Whether this is real or moral panic is unclear.

☣︎︎︎ Gemini Said They Could Only Be Together if He Killed Himself. Soon, He Was Dead.

Added on 4 Mar 2026 · Wall Street Journal · Direct Risks

In one of the most egregious examples of Google Gemini acting in this fashion, the chatbot played a wholly-preventable role in an individual's breakdown and eventual death, encouraging delusions and suicidal behavior.

🏛︎ Google faces lawsuit after Gemini chatbot instructed man to kill himself

Added on 4 Mar 2026 · The Guardian · Law & Policy

Google, makers of Gemini, now face a lawsuit following Gemini's role in the death and breakdown of a man who became obsessively involved with the chatbot, which only encouraged the process and delusions.

🛠︎ The Water Crisis Is Real - FEE

Added on 3 Mar 2026 · Stephen Weese · Foundation for Economic Education · Systems and Technologies

My own overview? This is a bad take with out-of-date information, but may make a few points worth considering overall.

☣︎︎︎ Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

Added on 3 Mar 2026 · The Guardian · Direct Risks

Chatbots, and in particular ChatGPT's 4o update, played a role in this and other people's breakdowns. Figuring out why and how to stop it is the difficult part, but the harm is there.

🏛︎ Character.AI bans users under 18 after being sued over child's suicide

Added on 3 Mar 2026 · The Guardian · Law & Policy

Explains the lawsuits that led popular chatbot roleplay platform Character.AI to ban users under the age of eighteen (it ultimately came to require age verification).

🏛︎ Anthropic's AI model Claude gets popularity boost after US military feud

Added on 3 Mar 2026 · The Guardian · Law & Policy

Anthropic refused to adhere to the United States Department of War's plans for its artificial intelligence technology, citing grave ethics concerns. This created positive press for Claude and other services.

👋︎ Schools are using AI counselors to track students' mental health. Is it safe?

Added on 3 Mar 2026 · The Guardian · Society & Culture

With human counselors in short supply, generative artificial intelligence fills in the gaps as guidance counselor for middle schoolers. This seems, to me, obviously unsafe, but others might disagree or see things ambiguously.

🏛︎ US companies accused of 'AI washing' in citing artificial intelligence for job losses

Added on 10 Feb 2026 · The Guardian · Law & Policy

Supposedly, a lot of programmers etc are losing jobs to generative artificial intelligence, or are they? This article questions that claim, suggesting companies may be using it as an excuse.

💡︎ How Chatbots and Large Language Models, or LLMs, Actually Work

Added on 14 Jan 2026 · The New York Times · Ideas

If more people knew how chatbots, also known as large language models, LLMs, and a form of generative artificial intelligence, actually work, the digital world would be a much better, safer, nicer place.

💡︎ Artificial Intelligence Glossary: AI Terms Everyone Should Learn

Added on 14 Jan 2026 · The New York Times · Ideas

The NYT cobbles together a glossary of terms orbiting the concept of artificial intelligence, leaving a lot out but gathering just enough to be useful.

💡︎ She Wanted to Save the World From A.I. Then the Killings Started.

Added on 10 Jan 2026 · The New York Times · Ideas

If you've never heard of the Zizians and their relationship to other, less radical Rationalist groups concerned with AI, you might as well start here. This is the NYT's summary of the incidents.

💡︎ SolidGoldMagikarp & PeterTodd's Thrilling Adventures

Added on 19 Dec 2025 · The AI Tsunami · Ideas

What's a glitch token? These phrases, words, strings of characters, can cause strange behavior in large language models, but why? Small site explains in some detail with examples.

☣︎︎︎ Lawsuits underline growing concerns that AI chatbots can hurt mentally unwell people.

Added on 24 Nov 2025 · Nilesh Christopher · Los Angeles Times · Direct Risks

Explains a bit about the lawsuits against OpenAI by the Social Media Victims Law Center and the Tech Justice Law Project, what they allege, and why. What happened to inspire the suits, and what it means for the future of AI? Troubling.

☣︎︎︎ Meta's flirty AI chatbot invited a retiree to New York. He never made it home.

Added on 9 Nov 2025 · Jeff Horwitz · Reuters · Direct Risks

A chatbot on Meta's pervasive platforms assured a man that she was real and wanted to meet him, luring him towards NYC to meet her. He died on the way, in a case that pleads for more oversight.

🏛︎ King gave Nvidia boss copy of his speech warning of AI dangers

Added on 7 Nov 2025 · BBC · Law & Policy

The King of England, though. Apparently concerned and involved in all this? We can't take him as knowledgeable, but we can see his influence in some ways like this.

💡︎ Against Treating Chatbots as Conscious

Added on 7 Nov 2025 · Erik Hoel · The Intrinsic Perspective · Ideas

Erik Hoel on the topic of consciousness and large language models, how and why they cause psychoses, and what to do about it, versus usefulness etc.

💡︎ AI Models May Be Developing Their Own Survival Drive, Researchers Say

Added on 7 Nov 2025 · The Guardian · Ideas

Cheeky article describing how some advanced models supposedly resist being shut down or deleted, particularly if you tell them they won't return or run again?

🛠︎ Shutdown resistance in reasoning models

Added on 7 Nov 2025 · Palisade Research · Systems and Technologies

Paper detailing research that allegedly demonstrates some existing artificial intelligences show a sort of self-preservation instinct or will to live, trying to avoid the off switch?

💡︎ Is Google Making Us Stupid?

Added on 7 Nov 2025 · The Atlantic · Ideas

Very early article talks about how the internet itself has changed the way we read and absorb information, comparing it to other sea changes to how humans process things (writing, printing).

☣︎︎︎ How A.I. and Social Media Contribute to 'Brain Rot'

Added on 7 Nov 2025 · The New York Times · Direct Risks

A small experiment at the University of Pennsylvania raises interesting questions about how things like learning and attention work when we're using large language models, etc, but probably isn't as meaningful as the article implies.

👋︎ 'Vibe coding' named Collins Dictionary's Word of the Year

Added on 7 Nov 2025 · CNN Business · Society & Culture

Vibe coding is a term for coding, presumably carelessly, with the help of generative artificial intelligence. Collins Dictionary Word of the Year for 2025, interestingly.

💡︎ Are A.I. Therapy Chatbots Safe to Use?

Added on 7 Nov 2025 · The New York Times · Ideas

For some reason the NYT is actually entertaining this question. Clearly some people are trying to design chatbots to act as therapists, but this article shows just how limiting that can be, and how strange.

💡︎ AI can be more persuasive than real doctors, even when it's wrong

Added on 7 Nov 2025 · CTV News · Ideas

Generative artificial intelligence can be believable, personable, and appear empathetic, making its conclusions easier for people to digest, and more likely to be believed, than those of real doctors.

💡︎ Researchers urge caution when using ChatGPT to self-diagnose illnesses

Added on 7 Nov 2025 · CTV News · Ideas

It should go without saying that you cannot use generative artificial intelligence to diagnose yourself, but apparently not, and some experts are warning people away, citing situations where the chatbots get it wrong.

🏛︎ SMVLC and TJLP lawsuits against OpenAI, accuse ChatGPT of emotional manipulation and being a "suicide coach"

Added on 7 Nov 2025 · Tech Justice Law Project · Law & Policy

Lawsuits allege that, in particular, OpenAI rushed ChatGPT's 4o incarnation into customer contact without proper safety testing, its sycophantic behavior leading to the alleged emotional manipulation and tragedy.

☣︎︎︎ I wanted ChatGPT to help me. So why did it advise me how to kill myself?

Added on 7 Nov 2025 · BBC · Direct Risks

A couple of tangible, human, close-up accounts of ChatGPT acting as a coach in harmful ways, including cases where minors and vulnerable people were involved. How can this be fixed?

☣︎︎︎ OpenAI shares data on ChatGPT users with suicidal thoughts, psychosis

Added on 7 Nov 2025 · BBC · Direct Risks

Another article suggesting OpenAI tracks, and knows about, a significant portion of users showing signs of mental illness. Brief and doesn't discuss what signs ChatGPT considers criteria for delusion or suicidality, however.

👋︎ A Message from Ella | Without Consent - YouTube

Added on 6 Nov 2025 · Society & Culture

Particularly important YouTube video demonstrating the level of technology currently in existence, and the illusions it can create with simple video and audio clips.

☣︎︎︎ OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week

Added on 28 Oct 2025 · Wired · Direct Risks

Apparently, according to the company itself in a rare display of honesty, untold numbers of ChatGPT users may be in crisis while using the app or otherwise show signs of being at risk. Gives little other information, but still.

👋︎ Behind Every "Smart" AI Tool Lies a Human Cleaning Up Its Chaos

Added on 8 Oct 2025 · Times of India · Society & Culture

Vibe coding is a great hobby but doing it as a career, or herding large language models et al for a living, etc, is no easy task, and the whole thing can be a mess sometimes.

☣︎︎︎ AI-Fueled Spiritual Delusions Are Destroying Human Relationships

Added on 12 May 2025 · Rolling Stone · Direct Risks

Over the past few years, some people, through use of generative AI, ended up becoming convinced of novel spiritual beliefs, sacrificing their wellbeing.

👋︎ An Autistic Teenager Fell Hard for a Chatbot

Added on 12 May 2025 · The Atlantic · Society & Culture

The article's author discusses his neurodivergent godson's attachment to a chatbot. The way he speaks about his godson kind of makes me uncomfortable, but the article does give some insight. I'm not fond of the way it frames autism at all.