PickyFox


When NOT to Use AI (A List Nobody Wants to Write)

February 19, 2026

Everyone's writing about where to use AI. Almost nobody's writing about where to stop. Here are the situations where AI makes things worse, not better.

Photo by Google DeepMind / Unsplash

I use AI every day. I’ve written about how it fits into my workflow and which tools actually deliver for freelancers. I’m not anti-AI. I’m anti-using-AI-badly.

And there are a surprising number of situations where reaching for AI makes things worse. Not theoretically worse — measurably, practically worse. I’ve learned most of these the hard way.


When you need to actually learn the thing

If I use AI to skip the learning process, I get an answer I can’t defend, adapt, or build on. I had a client ask me to explain a strategy I’d had AI draft for a proposal. I couldn’t. Not because the strategy was bad — it was fine — but because I hadn’t done the thinking myself. I was presenting someone else’s homework.

Now I have a rule: if I’ll need to explain it, defend it, or build on it later, I do the thinking myself. AI can help me iterate after I’ve done the initial work, but it can’t replace the understanding.


When the relationship matters more than the output

AI-drafted emails to close friends feel wrong because they are wrong. The same applies to sensitive client conversations, difficult feedback, condolence notes, and anything where the recipient would care whether a human wrote it.

This isn’t a quality issue — AI can produce perfectly adequate text for these situations. It’s a trust issue. If a client discovers you AI-generated the message where you apologized for missing a deadline, you’ve lost something that no tool can rebuild.

I keep a hard boundary: anything emotionally loaded gets written by me. Period.


When you’re avoiding the hard part

This is the one that bites most often. AI is phenomenal at helping you feel productive while avoiding the actual work. Need to make a difficult decision? Let me ask ChatGPT for a pros and cons list first. Need to have a tough conversation with a client? Let me draft it in AI and revise it seventeen times instead.

The tool becomes a procrastination device dressed as productivity. If you notice yourself running the same prompt three times with slightly different wording, you’re probably avoiding something. I wrote about this pattern in how to build momentum when everything feels stuck — the block is rarely about the words.


When accuracy is non-negotiable

AI hallucinates. Confidently. With footnotes that don’t exist and statistics that sound perfect but are fabricated. For casual research or first-draft brainstorming, that’s manageable — you verify before you publish.

But for legal documents, financial calculations, medical information, contract terms, or anything where being wrong has real consequences? AI shouldn’t be in the driver’s seat. It can be a starting point, but the verification burden falls entirely on you — and if the verification takes as long as doing it yourself, you haven’t saved any time.


When originality is the point

AI is trained on existing content. It produces competent recombinations of things that already exist. For blog posts that synthesize known ideas, that’s fine. For work where the whole value is original thinking — a unique strategic angle, a creative concept, a perspective nobody’s articulated yet — AI actively pulls you toward the generic.

I’ve caught myself accepting “good enough” AI suggestions that smoothed out the interesting edges of my thinking. The weird, specific, slightly uncomfortable take I would have written myself got replaced by something polished and forgettable. That’s not assistance. That’s regression to the mean.


When your client is paying for you

Some clients hire you for your expertise. Some hire you for your voice. Some hire you because they trust your judgment. In all these cases, they’re paying for you — not for your ability to operate a chat interface.

This doesn’t mean you can’t use AI at all. But there’s a line between “AI helped me work faster” and “AI did the work and I put my name on it.” Where that line falls depends on the client, the project, and your own integrity threshold.

I don’t have a universal rule here. But I have a test: if the client asked “did you use AI for this?” would I feel comfortable saying yes? If not, I probably shouldn’t have.


The point isn’t to avoid AI

The point is to use it deliberately instead of reflexively. The best AI users I know aren’t the ones who use it for everything — they’re the ones who know exactly where it helps and where it doesn’t, and they switch between AI-assisted and human-only mode as the work demands.

That discernment — knowing when to reach for the tool and when to put it down — is itself a skill. And like most skills worth having, it only comes from getting it wrong a few times first.