I’ve never claimed to be a good writer, but I’ve always enjoyed writing, so it actually pissed me off recently when I noticed how my last few blog posts were turning into AI slop. It sounds weird even to me now to say that I “noticed” it – as if it was super difficult to see or something. It’s not once you know what to look for. The last half a dozen posts on this blog are a bit shit…
Content vs Quality

It’s not the content; I actually do get quite excited about AI agents, enhancing their ‘memory’ and making AI feel even more magic than it already is. I’ve put a lot of effort into the various tools and systems I use to improve my productivity. It was the overall quality of the writing, and how much of the structure I’d farmed out to ol’ mate Claude, that was the issue.

The difference is stark if you go and read a few of my old posts (like 10-15 years ago) and compare that to the more recent series on AI. So you might be thinking to yourself “Well duh, if you’re going to be a lazy sod and get AI to write the whole thing then obviously it’ll be AI slop” – but the kicker is that I didn’t get AI to write it (well, not all of it).
My process
What I typically did was discuss the idea with Claude (via Claude Desktop). Then I’d perhaps hop into Claude Code to get it to generate some summaries of what I’d done in various code bases, and hop back into the desktop app to have Claude create a suggested outline and highlight useful facts to include. Only then would I write the first full draft, leaning heavily on the supplied outline.
I’d go through a couple of rounds of revision with Claude again; tweaking structure, adding or removing content, etc. It would point out where my sentences ran on, were too long or too short, or were poorly structured, and where I’d gone off on tangents. I thought it was awesome because I knew I could go down rabbit holes a bit, so I appreciated the “helpful” feedback.
Perfect… ly shitty
What it meant though was that I was removing everything that made the writing mine, and replacing it with middle-of-the-road yet safe structure. More headers, more bullet points, clear opening and closing paragraphs – all the crap you’re supposed to do, but few humans do consistently.
Why does it matter?
What actually started me being critical of my own writing was being annoyed by how much of LinkedIn, newsletters, and even YouTube content these days is so obviously perfectly crafted AI shit. It makes me feel gross, like I need a shower. It lacks authenticity and credibility.
It’s not that AI-assisted writing is bad – I think some parts of my recent posts are actually reasonably good. The issue is in the “how” of the assistance. It’s a crutch that I too easily began relying on.
I started actively reading up on the problem. Before this I knew in my gut when something smelt AI-fishy, but after reading various papers and studies, the light went on in my head! I could identify the exact patterns. My gut was proven correct (i.e. being pissed off with AI slop on LinkedIn, then reading this, which shows that >50% of all long-form content on there is AI-generated!).
At the end of the day it matters because even though I don’t have a massive reader base (in fact it’s probably just me and my partner. Hi babe! 👋), people still notice, and more importantly even I noticed it about my own writing (and it made me grumpy).
Going deeper
I still like understanding the nuts and bolts of these things, so of course I wanted to dig into the “why” a little more. Rather than blaming AI, or abandoning the tools entirely, I wanted to take a measured approach to understanding how I could combat this. But since I’m fundamentally lazy, I need to combat it in a repeatable way. We’ll get to that part later though… see, without AI we’re all over the shop!
Regression to the Mean
Warning, below are some bullet points and fancy words. Rest assured I wrote this. I do sometimes use bigger words, and I’m a database guy so I like structure. Sue me.
- LLMs (large language models like Claude, GPT, Gemini, etc.) are prediction machines. They’re optimised to predict the most likely next word (token, actually) in a sequence of words.
- They get trained on massive sets of data, which exposes them to all sorts of content, but the training process smooths away the extremes, so you’re left with the ‘most average’ behaviours (for lack of a better word).
- Human writers choose low-probability words with high informational or emotional weight.
- AI avoids these “risky” choices because they’re not what it has learned earns the most consistent reward.
This is very simplified, and I hope none of the actually smart people I work with in GenAI at Canva read this because it’ll give them eye-twitches!
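To make that concrete, here’s a toy sketch of temperature sampling – the knob that controls how ‘safe’ each next-token pick is. The distribution is completely made up by me (a real model is choosing between ~100k tokens, not five), but the effect is the real one:

```python
import random

# Toy next-token distribution for the prompt "The sunset was ..."
# (numbers are completely invented, purely for illustration)
probs = {
    "beautiful": 0.40,   # the safe, high-probability choice
    "stunning":  0.25,
    "orange":    0.20,
    "a bruise":  0.10,   # the interesting, low-probability choice
    "palpable":  0.05,
}

def sample(dist, temperature=1.0):
    """Sample a token; low temperature collapses towards the most likely one."""
    # Equivalent to softmax(logits / T) when dist came from softmax(logits).
    weights = {tok: p ** (1.0 / temperature) for tok, p in dist.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point safety net

# Near-greedy decoding: "beautiful" wins almost every time. Regression to the mean.
print([sample(probs, temperature=0.1) for _ in range(5)])
# Hotter sampling: the risky choices actually get a look-in.
print([sample(probs, temperature=1.5) for _ in range(5)])
```

Low temperature gives you reliable-but-boring; high gives you interesting-but-occasionally-unhinged. ‘Most average’ wins by default.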
This is a cool quote that stood out to me on the topic:
“While the literary author aims to transcend the probabilistic structures of common meaning, the LLM is optimized to reproduce them.”
Eryk Salvaggio
The RLHF factor
RLHF stands for “Reinforcement Learning from Human Feedback” – which, for a change in this field, is refreshingly self-explanatory and simple! The idea is that human raters look at model outputs and rank them. The model is then trained to produce whatever gets high marks.
Like all of us, human raters are biased and so tend to prefer certain types of output, like (and see if you can spot the patterns here):
- well structured content with lists
- lots of headers
- clear transitions 😉
We also prefer confident answers, because we equate confidence with authority or “correctness”, so you get the “Certainly! I can absolutely help with that!” type of response being rewarded. And of course we all like to be spoken to nicely, so the models get more and more polite and helpful.
This is called “mode collapse”: the model fixates on a narrow set of patterns that earn it the most reward. It prioritises being nice and predictable over being original. The irony is that RLHF doesn’t make AI write badly, just recognisably.
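If you want to see what “rewarded” means mechanically, here’s a back-of-the-envelope sketch. Real RLHF trains a neural reward model on piles of human rankings; this toy just shows the standard Bradley-Terry trick of turning a score difference into a preference probability (all numbers invented by me):

```python
import math

# Two candidate replies to the same prompt (toy example).
safe  = "Certainly! Here are three key points, clearly structured."
risky = "Honestly? It depends, and here's a messy tangent about why."

# Toy scalar scores standing in for a reward model. In real RLHF these
# come from a network trained on thousands of human rankings.
score = {safe: 2.1, risky: 0.4}

# Bradley-Terry: turn a score difference into a preference probability.
p_safe_wins = 1 / (1 + math.exp(score[risky] - score[safe]))
print(f"P(rater prefers the safe reply) = {p_safe_wins:.2f}")  # ~0.85

# Training then nudges the policy towards whatever maximises reward, so the
# confident, listy, polite style keeps winning. That's mode collapse.
```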
The Specific Tells
This part was really interesting to me: seeing examples of (and learning the names for) the various patterns that give you that “ick” feeling when something feels AI-sloppy.
Some common examples:
- Title patterns: “I Did X. Then Everything Changed!”, “X (And Y)”, “Blah blah: My story of Foo”
- The promises: “In this post, I’ll cover:” followed by numbered items
- Fake humility: “I’m going to be honest with you…”
- Bullet-point overload (it’s alright, this is a numbered list I’ll have you know!)
- Bold topic sentences
- Perfect three-act structure (problem, journey, resolution)
Lexical attractors
(No, I didn’t know what this meant; I learned it by reading this.) 😂
- “Tapestry” appearing 100x more frequently than in human writing
- “Delve”, “vibrant”, “palpable”, “intricate”, “camaraderie”, “amidst”
- Conjunctive phrases: “Moreover”, “Furthermore”, “In addition”
Structural tells
- “It’s not X, it’s Y” (negative parallelism) – ooh this one really pisses me off!
- Rule of three everywhere
- Uniform paragraph length
- Every aside serving the narrative (no genuine tangents, which is my specialty!)
A couple of broader tells that made sense to me once I read about them are “perplexity” (how unpredictable the writing is) and “burstiness” (how variable it is; humans are all over the place, while AI has a more uniform cadence). I would cite this but I’m a forgetful human who can’t remember… go ask your AI to tell you. 😛
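Since I can’t cite it, I’ll at least demo it. Here’s a quick toy illustration of both ideas – nothing like a real detector, numbers invented, but it shows what’s actually being measured:

```python
import math
import statistics

# "Burstiness": variation in sentence length. (Toy calculation, not a detector.)
def burstiness(text):
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    return statistics.stdev(lengths) / statistics.mean(lengths)

human = ("I rambled. Then I wrote one absurdly long sentence that went on and on, "
         "doubling back on itself twice before it got anywhere. Short again.")
ai = ("The process was efficient. The results were consistent and clear. "
      "The conclusion followed naturally from these points.")

print(f"human burstiness: {burstiness(human):.2f}")  # noticeably higher
print(f"ai burstiness:    {burstiness(ai):.2f}")     # lower, more uniform

# "Perplexity" is how surprised a language model is by the text:
# exp(mean negative log-probability of each token). With made-up
# per-token probabilities, one surprising word spikes the number:
token_probs = [0.9, 0.8, 0.05, 0.7]
perplexity = math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))
print(f"toy perplexity: {perplexity:.2f}")  # ~2.5; all 0.9s would give ~1.1
```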
So what does this mean for writing?
Where’s the line between the “tool” and the “author”? Not from a moral or, I dunno, “cultural” point of view (I’ll leave that to others to argue) – just from a practical one.
The reality
- Most writers now use AI somewhere in the process
- Detection tools claim up to 99% accuracy on raw AI text, but that drops a lot on human-edited content
In the reading I did, I came across a regularly repeated approach to “dealing” with this issue: use the AI as an analyzer. Have it flag clarity issues and grammar problems in your own writing, and suggest a few alternatives that address each one. You still ultimately pick and massage the fix into your writing, so you’re filtering the AI’s input and shaping the output directly. (There’s a rough sketch of what that looks like below.)
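For fellow lazy people, here’s roughly what that looks like as a script using the Anthropic Python SDK. The file name, prompt wording and model id are just placeholders I picked – a sketch, not gospel:

```python
# Sketch only: "analyzer, not writer" via the Anthropic Python SDK
# (pip install anthropic). Swap in whatever file/model you actually use.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

draft = open("post-draft.md").read()  # hypothetical draft file

response = client.messages.create(
    model="claude-opus-4-5-20251101",  # example model id; use your own
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "You are an analyzer, not a writer. Do NOT rewrite this draft. "
            "Flag grammar and spelling errors and genuine clarity problems, "
            "and for each one suggest two or three alternative fixes. "
            "I'll pick one and rewrite it in my own words.\n\n" + draft
        ),
    }],
)
print(response.content[0].text)
```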
The original slippery slope came from treating AI as the writer (which I was guilty of too). You generate an idea, get AI to write it, you perhaps give it a light edit, add in some anecdotes, and publish it. This works for some people, clearly – perhaps when you have to crank out huge volumes of work, which is likely where most LinkedIn content comes from – but it has a smell, and more and more people are noticing and getting turned off by it.
What am I going to do about it?
Fuck knows. I’m figuring this out too, starting with researching for and then writing this post, more for myself than anyone else (so selfish I know!).
Keep on keeping on


I work in the AI field, plus as we established earlier I am lazy, so you can bet I’m still going to use AI all over the place! 😅
But, as an example, for this post I discussed the premise, then got it to recommend reading for me to do (which I did… most of it anyway, there was a lot), but I wrote 100% of this shit-show, which I’m quietly chuffed about actually. In fact I deserve another whisky – brb.
Before I published it, I did still run it by my buddy Claude to check for blatant errors (grammar, spelling, factual, links, etc). As a side effect of all the reading, I also gave Claude a long list of URLs and told it to read the papers and articles, then generate a comprehensive document to act as a set of rules it can check against, to identify and avoid AI-slop patterns in the writing I do still get AI to draft first (yes, I still do that for day-to-day stuff at work – look, not everything can be the masterpiece this is turning into, okay!?).
Putting a bow on it
This approach seems to be working better, at least. It’s funny when it confidently identifies some of my own writing as “definitely AI generated”, so maybe I am part robot underneath it all.
I’m taking the view here that transparency about my process matters more than “purity”. I don’t have anything else to say, and apparently an AI “tell” is endings that are too neatly tied up, so this is my concerted effort to revolt against that.
Oh, I did want to finish with this closing quote, which I 100% stole from Claude, but I thought it hit the nail right on the head:
“The question isn’t whether to use AI – it’s how to not lose yourself in it.”
claude-opus-4-5-20251101
Cheers,
Dave