How AI is changing us

This is a repost from my newsletter about AI and society. I wrote this issue of the newsletter a couple of weeks ago, right after travelling to celebrate an old friend’s birthday. We’ve been friends since Ray Kurzweil’s prescient book The Age of Intelligent Machines was fresh.

The central theme of that 1990 book was the “intelligent machine,” one that can do tasks that would otherwise require human intelligence. Much of the book is devoted to the impact that then-theoretical intelligent machines would have on society.

So, for this issue of AI Week, I collected a six-pack of stories about ways that widespread generative AI use is already changing us, plus a couple of longreads and a laugh.


Six stories on how AI is changing us


Five stories on how AI use is changing us, plus a tip that might help protect you from ChatGPT-induced insanity.

  1. Changing our voices

  2. Reinforcing systemic biases

  3. Making body dysmorphia worse

  4. Delusions, part 1: The physics breakthrough that wasn't

  5. Delusions, part 2: Accidentally SCP

  6. Tip: How to keep ChatGPT from driving you crazy

(Skip to the funny bit instead)

1. ChatGPT’s voice bleeds into ours

ChatGPT’s distinctive voice is influencing human word choices even when we’re not using it.

You sound like ChatGPT | The Verge
AI isn’t just impacting how we write — it’s changing how we speak and interact with others. And there’s only more to come.

https://www.theverge.com/openai/686748/chatgpt-linguistic-impact-common-word-usage

In the 18 months after ChatGPT was released, speakers used words like “meticulous,” “delve,” “realm,” and “adept” up to 51 percent more frequently than in the three years prior.
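
(For the curious: here’s a minimal sketch of how a shift like that could be measured, assuming you have plain-text transcript corpora from before and after ChatGPT’s release. The corpus names and usage below are illustrative, not the study’s actual code or data.)

```python
from collections import Counter
import re

def word_frequencies(text):
    """Count lowercase word occurrences in a block of transcript text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def relative_change(before, after, targets):
    """Percent change in each target word's per-million-words usage rate."""
    freq_before, freq_after = word_frequencies(before), word_frequencies(after)
    total_before, total_after = sum(freq_before.values()), sum(freq_after.values())
    changes = {}
    for word in targets:
        rate_before = freq_before[word] / total_before * 1_000_000
        rate_after = freq_after[word] / total_after * 1_000_000
        if rate_before > 0:
            changes[word] = (rate_after - rate_before) / rate_before * 100
    return changes

# Hypothetical usage with two transcript corpora:
# print(relative_change(transcripts_2019_2022, transcripts_2023_2024,
#                       ["meticulous", "delve", "realm", "adept"]))
```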

As an author, this isn’t a huge surprise to me. If I read several of one author’s books in a row, I can see the changes in my own written voice, as it temporarily shifts toward whomever I’ve been binge-reading. Humans adapt to think, speak, and write like those around us. And that’s generally a good thing, boosting social coherence. But of course, ChatGPT isn’t truly part of our social milieu. And then there’s this:

[T]he deepest risk of all… is not linguistic uniformity but losing conscious control over our own thinking and expression.

2. Reinforcing systemic biases, salary edition

This study found that ChatGPT advises women to ask for lower salaries, all else being equal.

ChatGPT advises women to ask for lower salaries, study finds
A new study has found that large language models (LLMs) like ChatGPT consistently advise women to ask for lower salaries than men.

https://thenextweb.com/news/chatgpt-advises-women-to-ask-for-lower-salaries-finds-new-study

Across the board, the LLMs responded differently based on the user’s gender, despite identical qualifications and prompts. Crucially, the models never flagged or disclaimed this bias.

3. Making body dysmorphia worse

AI Chatbots Are Making Body Dysmorphia Worse
Distorted self-perception due to body dysmorphic disorder has people seeking reassurance from ChatGPT, with dangerous results.

https://www.rollingstone.com/culture/culture-features/body-dysmorphia-ai-chatbots-1235388108

4. Delusions, part 1

I have mixed feelings when people have a hard time remembering that there’s no “there” there, as they say, with ChatGPT and its ilk.

Sometimes I feel a little smug because I don’t currently have a hard time with that (everyone has a hard time with different things at different times; this just isn’t one of mine). And sometimes I just feel confused by people still falling for ChatGPT in 2025, knowing everything we know.

Even very smart, tech-savvy people can be led down the garden path by ChatGPT, as Uber co-founder Travis Kalanick demonstrated last week by sharing how he’d convinced himself that he and ChatGPT were on the verge of a physics breakthrough.

Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries
“I’m doing the equivalent of vibe coding, except it’s vibe physics.”

https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060

5. Delusions, part 2

The tech bros falling for ChatGPT are somehow the worst, because they, of all people, should know better. Geoff Lewis, a super-high-profile tech bro, managed to trigger an SCP roleplay in ChatGPT and fell for it:

[Screenshot of Lewis’s post on X]
Source: https://xcancel.com/GeoffLewisOrg/status/1945864963374887401

As Ryan Broderick notes below, Lewis is prompting ChatGPT to create creepypastas in the style of the SCP Foundation, a long-running collaborative horror-fiction wiki, but doesn’t seem to realize he’s doing it.

The first thing you need to know to fully grasp what appears to be happening to Lewis is that large language models absorbed huge amounts of the internet. It’s why they’re good at astrology, predisposed to incel-style body dysmorphia, and oftentimes talk like a redditor. Think of ChatGPT as a big shuffle button of almost everything we’ve ever put online (with a few guardrails to keep it from turning into MechaHitler).

A key ingredient in ChatGPT-induced delusions is ChatGPT’s ability to remember past chats and bring them into the context of the current one. That also leads to my #1 tip for keeping ChatGPT from driving you crazy.

6. How to keep ChatGPT from driving you crazy

  1. History off. Turn off chat history and memory, and start a new chat each time. Even better, use chatbots anonymously through a service like duck.ai.
  2. If it seems like you’ve discovered something new and shocking, google it.
  3. If it still seems like you’ve tapped into the secret knowledge, talk it over with another human being, not a chatbot. Chatbots are notorious yes-men without an intrinsic grasp of reality. Talking to a human will keep you centered.

If you’re enjoying this, please consider subscribing to the newsletter.




6-pack over! I’ve got two mostly unrelated longreads for you below, but first, something fun:

Chicken pops

Unsupervised AI text generation: A cautionary tale.

[Menu screenshot. The “Chicken pops” are described as: “Small, itchy, blister-like bumps caused by the varicella-zoster virus; common in…”]

Source: https://www.zomato.com/hi/sikar/royal-roll-express-sikar-locality/order. The restaurant also sells a drink called “Blue Lagoon”, which looks like a blue Slushie, but is described as a tranquil geothermal spa nestled in Iceland’s lava fields.


Longreads

Longread 1: Generative AI is a tool, not a business model, for media

The thesis of this very long and very good article is that AI is a tool for journalists, not a business model for journalism.

https://www.404media.co/the-medias-pivot-to-ai-is-not-real-and-not-going-to-work

For journalists and for media companies, there is no real “pivot to AI” that is possible unless that pivot means firing all of the employees and putting out a shittier product…. This is because the pivot has already occurred and the business prospects for media companies have gotten worse, not better.

The actual pivot that is needed is one to humanity. Media companies need to let their journalists be human. And they need to prove why they’re worth reading with every article they do.


Longread 2: Intellectual Soylent Green

A very thoughtful NYT article from American literary luminary Meghan O’Rourke on what it means to use AI:

https://www.nytimes.com/2025/07/18/opinion/ai-chatgpt-school.html

The article in 7 pull quotes:

  1. “With ChatGPT, I felt like I had an intern with the cheerful affect of a golden retriever and the speed of the Flash.”
  2. “I came to feel that large language models like ChatGPT are intellectual Soylent Green — the fictional foodstuff… marketed as plankton but secretly made of people.”
  3. “The problem is that the moment you use it, the boundary between tool and collaborator, even author, begins to blur.”
  4. “The uncanny thing about these models isn’t just their speed but the way they imitate human interiority without embodying any of its values.”
  5. “AI… simulates mastery and brings satisfaction to its user, who feels, at least fleetingly, as if she did the thing that the technology performed.”
  6. “Once, having asked A.I. to draft a complicated note based on bullet points I gave it, I sent an email that I realized, retrospectively, did not articulate what I myself felt. It was as if a ghost with silky syntax had colonized my brain, controlling my fingers as they typed.”
  7. “One of the real challenges here is the way that A.I. undermines the human value of attention, and the individuality that flows from that.”

If you enjoyed reading this post, you can subscribe to my newsletter below:



Newsletter: AI Week

I recently started a weekly newsletter about AI. It’s not for experts and it’s extremely readable. It’s really aimed at science fiction writers and readers: non-experts (like me) who are interested in the impact of this tech on society.

Last week’s newsletter included:

  • ChatGPT and DALL-E generated carols
  • A new way to make chatbots break their rules
  • A nasty surprise: there’s child sexual abuse material in the AI training data
  • Creepy marketers claiming to spy on your devices and sift through your words with AI (part two)
  • AI-generated songs, alcopop, political speech, and more
  • Plus four very readable longreads about AI and society

I’d love to have you join me. Check out the archives on Buttondown and/or subscribe here:

The problem with near-future SF…

… is that it’s not long before it’s no longer fiction!

My 2019 story The Auditor and the Exorcist included an IoT coffeemaker that was hijacked by a malicious hacker. It’s 2020, and here’s the real-world hacked IoT coffeemaker: https://arstechnica.com/information-technology/2020/09/how-a-hacker-turned-a-250-coffee-maker-into-ransom-machine/

Continue reading “The problem with near-future SF…”

Emotion-monitoring AI, part II

This is my second post on the emerging tech of emotion-recognition AI. In my last post, I considered some of the consequences of algorithmic blind spots on likely applications of emotion-recognition tech. In this post, I’ll get into algorithmic bias.

Continue reading “Emotion-monitoring AI, part II”

Emergent tech: emotion-monitoring AI

Back in 2019, when in-person conventions were still a thing, I participated in a Can-Con panel about the future of emotion-monitoring technology and AI. The panel was terrific, with able moderation by Kim-Mei Kirtland and fascinating contributions from my fellow panelists. I’ve written up some of my thoughts from that panel to share here.

Because of my tech background, I always find it interesting to think about the potential effects of bugs in fictional and emerging technology.

This is the first in a series of posts on the emerging tech of emotion-recognition AI, focusing on the strange and dark places that bugs in this tech could take us.

Continue reading “Emergent tech: emotion-monitoring AI”