AI is killing creativity
And it's industrializing mediocrity
I’m currently vibe-coding a fitness app, which also comes with a website. I generated the website in about an hour of work. Because I’m not a designer, I cheated and got AI to design it for me.
The result?
Instantly average.
That sounds bad, but think about it. That’s actually a big win for me … average is way better than I could achieve on my own feeble artistic merits. But let’s be honest: it’s a net negative on human creativity.
To understand why, I chatted with Saeema Ahmed-Kristensen, the former head of design engineering research at Imperial College London's Dyson School of Design Engineering, who now oversees a £24 million research portfolio at the University of Exeter.
Her take?
AI is killing creativity. Or starving it, if you let it.
Check out our convo below. You can also get a full transcript.
Fluency ≠ creativity
Generative AI is very good at one thing: fluency. In creativity research, fluency is the term for how many ideas you can generate, and raw idea count correlates well with how many good ideas you end up with.
Of course, LLMs can spit out 200 ideas before your coffee cools.
If you’re staring at a blank page, that’s magic.
But creativity has a second dimension: novelty. And this is where things get uncomfortable. Saeema and her team studied 600 humans and compared their output to 12,000 ideas generated by large language models. The result is that humans produced significantly more diverse ideas.
Not more. Different. Diverging.
AI output clustered around the same conceptual neighborhoods, while human ideas wandered all over the conceptual landscape. This is the core difference between incremental optimization and breakthrough thinking. LLMs interpolate from training data while humans misinterpret, collide concepts, make serendipitous mistakes, and build on each other’s weirdness.
That weirdness matters.
If you want incremental options, AI can be fantastic. If you want a massive leap, you still need good old-fashioned meatbag humans.
The blank page problem (and how I use AI)
Let me be honest about how I use it. After every podcast, I:
Drop the transcript into ChatGPT
Ask for title ideas
Ask for YouTube chapters
Ask for summaries
It gives me 5–6 title options. I almost never use one, but they spark something. A word. An angle. A hook. This solves the blank page problem, but it isn’t often truly creative or out-of-the-box.
Saeema does the opposite of what many people do. She writes first, then uses AI to condense, summarize, or refine tone. As an academic, she says she can instantly see when students let AI do the thinking … and she hates it.
Lots of fluff, lots of padding, thin on novel ideas and deep substance.
That’s a pattern I’ve seen everywhere … including, increasingly, Substack, where ideally we hear somewhat unvarnished human voices.
Homogenization is the threat
The real risk here is sameness.
When everyone uses the same tools, trained on the same data, prompted with the same “write 5 catchy headlines” instructions, we get:
Same rhythm
Same structure
Same sanitized tone
Same Canva microphone podcast covers
I mean, look at LinkedIn. Look at Substack. You’ve seen it. I’ve seen it. We’re seeing more and more of it. It’s homogenizing human writing and maybe — scary version — even our thinking.
Interesting: the more recognizable AI-generated sameness becomes, the more valuable true originality becomes. In that way, ironically, AI might just be increasing the premium we place on human differentiation.
Which would be super cool, in a way.
So what do we do about it?
Here are the key takeaways from this episode:
Use AI to accelerate, not originate
Let AI kill friction, not originality
Think and write first, then refine
Suck it up. Do the hard work. Don’t outsource the quantum leap of creativity.
Develop taste
Ouch … but if you can’t recognize homogenization, it’s pretty hard to avoid it
Build expertise
AI is powerful in the hands of experts, but risky in the hands of novices
Reward divergence
Resist the generic … be a burst of color in a field of grey


