The way we talk could quietly be reshaped by tools that were never trained on how we actually speak, warn Ada Palmer and Bruce Schneier in a piece for the Guardian. Large language models like ChatGPT learn mostly from polished writing and scripted dialogue, not the messy, half-finished conversations that dominate real life. That skew, they argue, could nudge people toward clipped, command-style phrasing and a smaller, blander vocabulary that lacks the quirks and interruptions through which we often convey feeling and figure out what we think. They share a potent example:
- "When told 'I hate Beth!,' ChatGPT replies with an [uninterruptible] three-part formula of affirmation ('That's completely valid'), invitation ('I'm here to listen'), and inquiry ('What's going on?') far longer than any reply plausible in face-to-face [dialogue]. 'What's Beth's deal?!' elicits a bullet point list of queries that reads like a multiple-choice exam question ('Is Beth * a celebrity? * a friend from school? * a fictitious character?'). No human speaks that way, at least not yet. But meeting such formulas repeatedly in a speechlike context may teach us to accept and use them, much as a child absorbs new speech patterns from spending time with a new person."
Palmer and Schneier also worry about a worsening cycle in which LLMs are increasingly trained on writing that LLMs themselves produced, "creating a feedback loop in which they imitate their own inhuman patterns." The authors don't offer a neat fix, but they say one challenge is figuring out how to train AI on authentic, informal human speech without trampling privacy. Read their full piece here.