People ask why I don't use AI to write my essays.
The question usually comes with an assumption baked in: that writing is a means to an end. That the goal is the published artifact. That anything that produces the artifact faster is obviously better.
I understand the logic. I reject the premise.
What AI Writing Tools I Actually Use (And Don't)
Let me be specific about what I do and don't use.
I use Grammarly. It catches typos. It flags passive voice. It notices when I've written "their" instead of "there". This is mechanical. It's the equivalent of a spell-checker with opinions.
I use xAI (Grok) for research, for exploring ideas, for pressure-testing arguments. When I'm writing about Snowflake internals or ontology theory, I'll ask questions, probe edge cases, request counterarguments. This is conversation. It's what I'd do with a knowledgeable colleague if one happened to be sitting next to me at midnight.
What I don't do is ask AI to write my essays. Not the first draft. Not the structure. Not the sentences. Not the voice.
This is not a productivity hack I'm missing. It's a choice about what writing is for.
Why I Create My Own Illustrations (No AI Image Generators)
The same principle applies to visuals.
Every diagram, illustration, and visual in my deep dives and technical essays is created by me, with support from my team (shoutout to Catalina!). We don't use Midjourney. We don't use Nano Banana. We don't prompt an image generator and call it done.
This surprises people. AI image generation is fast. It's cheap. It produces visuals that look polished enough for a blog post. Why spend hours on custom illustrations when you could generate something "good enough" in seconds?
Because "good enough" isn't the point.
When I create an illustration for a piece about semantic drift or knowledge graph architecture, I'm not decorating the essay. I'm thinking through the concept visually. The act of drawing forces decisions: What's the core relationship here? What's primary, what's secondary? How do I show causality without cluttering the frame?
These decisions are part of the creative work. They clarify my own understanding. They shape how the reader will encounter the idea. A generated image can't do that. It can only produce a plausible visual that matches the statistical average of "what diagrams about this topic usually look like".
My illustrations aren't average. They're mine. They encode a specific perspective on the problem. That specificity is the value.
Martin Scorsese on Creativity: "The Most Personal Is the Most Creative"
Martin Scorsese once said something that I think about constantly:
"The most personal is the most creative."
He was talking about film. But the principle applies to any creative work, including the kind of technical writing I do here.
Bong Joon-ho cited it after winning the Oscar for Best Directing in 2020: "That quote was from our great Martin Scorsese." The crowd gave Scorsese a standing ovation.
The insight is counterintuitive. You'd think creativity means inventing something nobody has seen before. Something universal. Something that transcends the individual. But Scorsese argues the opposite. The more specific you get, the more you draw from your own particular experience and perspective, the more creative the work becomes.
When I write about semantic drift, I'm not summarizing what others have said. I'm drawing on 15 years of watching data systems decay. On specific conversations with executives who didn't understand what I was telling them. On the particular frustration of seeing the same problem ignored across company after company. On my time at RelationalAI, where I saw what's possible when you treat knowledge representation seriously.
That specificity is the work. Strip it out, and you're left with a Wikipedia summary. Accurate, perhaps. But not creative. Not personal. Not mine.
How LLMs and AI Writing Tools Actually Work
Here's the technical reality that most AI discourse ignores: large language models are trained on averages.
That's not a criticism. It's a description of the architecture. LLMs learn statistical patterns across enormous corpora of text. They predict the most likely next token given what came before. When they generate text, they're producing something that resembles the aggregate of what they've seen.
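To make "averaging" concrete, here is a deliberately tiny sketch: a bigram model that counts which word most often follows each word in a made-up corpus (the corpus sentences are invented for illustration), then generates greedily by always emitting the most frequent successor. Modern LLMs are transformers over subword tokens, not word bigrams, but the objective has the same shape.

```python
from collections import Counter, defaultdict

# Invented toy corpus: three sentences that share a majority pattern
# ("the pipeline failed because the ...") but diverge in their specifics.
corpus = (
    "the pipeline failed because the schema changed . "
    "the pipeline failed because the job timed out . "
    "the pipeline failed because the upstream data drifted ."
).split()

# Count, for each word, how often each other word follows it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-frequency successor of `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

# Greedy generation: start from a prompt word and always take the argmax.
word, generated = "the", ["the"]
for _ in range(4):
    word = most_likely_next(word)
    generated.append(word)

print(" ".join(generated))  # → the pipeline failed because the
```

Notice what greedy generation does: it reproduces the majority pattern and discards every specific detail, including the schema change, the timeout, and the drift. Each of those appeared only once, so none of them ever wins the argmax. That is the averaging in miniature.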
The result is prose that sounds like everything and nothing. It has the cadence of competent writing. It hits the expected beats. It uses the vocabulary that typically appears in this type of content. But it lacks the specific gravity of a particular mind working through a particular problem.
I've read AI-generated essays about data engineering. They're fine. They cover the topics. They use the right terminology. They could pass an exam. But they don't surprise me. They don't reveal a point of view I haven't encountered. They don't make me feel like I'm in conversation with someone who has struggled with these problems and formed hard-won opinions.
They're averaging.
Why Writing Is a Creative Process, Not Just Communication
Here's what most people get wrong about technical writing: they think it's communication. You have ideas in your head, you translate them into words, the reader receives the ideas. Writing as transmission.
Writing is thinking. The essay doesn't exist before I write it. I don't have a fully formed argument that I then transcribe. I have intuitions, fragments, half-formed contrarian takes. The act of writing is the act of discovering what I actually believe.
When I wrote about semantic drift, I didn't start knowing the distinction between syntactic and semantic problems was the key frame. I started with a vague frustration: something is broken in how we handle data quality, and nobody is talking about it. The writing process forced clarity. Sentence by sentence, I had to commit. I had to choose words. Each choice constrained the next. By the end, I had an argument I couldn't have articulated at the beginning.
This is creative work. Not creative in the sense of fiction or poetry (though it shares more with those than most technical writers admit). Creative in the sense that something new comes into existence through the process. The output is not a rearrangement of existing pieces. It's a synthesis that didn't exist before.
If I outsource that process to a model trained on averages, I don't get my thinking. I get average thinking. I get the median take, polished to sound authoritative. I lose the thing that makes the work worth doing.
Engineers Are Artists: Why Technical Writing Is Creative Work
I've written before that engineers are artists who often don't realize it (in the "The Work" section). We compose systems the way composers arrange sound. We make choices about elegance, tension, resolution. The best code has rhythm. The best architectures have a point of view.
The same applies to technical writing. When I'm structuring an essay, I'm making aesthetic choices. Where do I put the turn? How long do I let tension build before releasing it? Which metaphor will make the abstract concrete? How do I earn the reader's attention for another paragraph?
I write to discover what I think. I write to impose order on chaos. I write because the discipline of putting words in sequence forces a clarity that thinking alone cannot achieve. These are personal acts. Outsourcing them defeats the purpose.
Will AI Writing Tools Get Better? The Future Is Uncertain
I want to be honest about uncertainty. LLMs in 2025 are not LLMs in 2030. The architecture might evolve. The training approaches might change. We might develop models that do something other than statistical averaging, that genuinely create rather than interpolate.
What I know is that today, the tools produce competent mediocrity. They're useful for the mechanical parts of writing: grammar, consistency, catching errors. They're useful for research and exploration. They're not useful for the part that matters: the personal, specific, creative act of a particular mind working through a particular problem.
Maybe that changes. I'll pay attention. I'll experiment. I'm not dogmatic about the tools.
But I am dogmatic about the goal. The work I care about is work that could only come from me. My experience. My frustrations. My specific angle on problems I've lived with for years. If a tool helps me express that more clearly, I'll use it. If a tool replaces that with average thinking, I won't.
Authenticity Over Efficiency: The Discipline of Personal Work
There's a connection here to everything else I write about. Entropy is the tendency toward disorder, toward averaging, toward the loss of specific structure. Against Entropy is not just a newsletter name. It's a stance.
The easy path is to let AI write your content. It's faster. It's cheaper. It produces something that looks professional. But it's also a surrender to averaging. You become one more voice in the statistical distribution, indistinguishable from the median.
The hard path is to do the work yourself. To sit with the blank page. To struggle through the unclear thinking until it becomes clear. To make choices that are yours, that reflect your particular experience, that could not have been produced by anyone else.
I choose the hard path. Not because I'm a purist. Not because I have something against efficiency. Because the hard path is the only one that produces work worth reading.
The most personal is the most creative. And the most creative is the only thing that cuts through the noise.
This is the kind of thinking I bring to Against Entropy. If you want essays about engineering craft, data architecture, and the problems no one else is willing to name, subscribe below.