How we write with AI

With LLMs, the marginal cost of creating docs has dropped to zero. The “shitty first draft”, a term Anne Lamott coined to encourage aspiring authors to write just to get started, doesn’t seem to apply anymore. Any text you dump into an LLM will get you a “This is a great idea” response plus a ‘shitty first draft’. The polished shape and confident tone of AI drafts, combined with (let's be honest) laziness, fool the author into letting bad drafts out into the world. Unfortunately, at this point in the AI timeline, most AI-generated content is bad by default: the much-hated AI slop.

Most of my writing centers on product briefs: documents that synthesize user research, technical constraints, and strategy to align teams around what we're building and why. Getting them right matters because unclear product thinking cascades into months of confused execution. Getting them done quickly matters because product decisions have narrow windows. But there's also the simple reality that writing is hard work: translating messy thoughts into clear prose, word by word. LLMs promised to eliminate that friction: why struggle with each sentence when AI can generate entire sections? But this creates a different problem: my reduced effort in creating documents can translate into increased effort for my teammates, who end up reviewing drafts that shouldn't have left my desk. This tension between the desire for easier writing and the need for quality became urgent once I started trying to generate product briefs. I quickly learned that making AI work well requires its own discipline.

The pitfalls of writing with AI

Reviewers have a visceral hatred for docs that read like they were AI-generated. The response is one of anger: “you’re sending me this garbage that you didn’t bother to read yourself, and now I get to do the reading for you?!” It is much stronger than the response to an early (often shitty) human-written draft, even though the shit-level is the same. Why? Because it signals carelessness: the time the author saved generating slop becomes the pain in your head trying to parse it.

To make things worse, you’re now caught in a pickle: the slopgen ‘author’ sent you this ‘early draft’ for review, not an LLM. You don’t know whether this is human- or AI-generated slop! Maybe there are clues that point to AI (excessive em-dashes, weird title emojis, repeated bullet points saying the same thing). In that case, your rage is justified. But maybe the author typed each keystroke with sweat and tears. If you say “this looks like AI" to the author, your feedback is unactionable and unhelpful, something we desperately try to avoid as good colleagues.

Bound by these existing social norms of ‘being helpful’, you grumpily resort to treating the entire document as human-written content, pointing out inaccuracies here and there as expected. You then make a mental note to avoid this Slop Generator like the plague in the future.

Learning to write well with AI

As I experimented with AI writing, I quickly discovered why LLM use is so seductive, and why that makes slop generation likely. LLM use puts you on a variable reward schedule, like a slot machine: sometimes you jackpot a one-shot great result, other times you get slop. In operant conditioning, this reward schedule builds robust behaviors that are addictive and resistant to change. The occasional perfect draft keeps you coming back, making it easy to rush through the process or skip quality checks when you're chasing that next great prompt to fix it all. Through many abandoned threads and late-night pivots back to writing by hand, here are a few practices that helped me:

  1. Capture enough context: I think of the LLM as a contract writer hired to write for me. Its effectiveness is capped by default by the completeness of its context: if I didn’t relay the takeaways from a meeting, it will not know them. Worse than a human contractor, today’s LLM will generate slop instead of asking for clarification. This mental model helps in two ways: if context is not captured in writing, I use speech-to-text tools (e.g. Superwhisper) to capture it. If the material is too complex and requires high precision (e.g. a screen-by-screen user experience table, incident comms), I just give up and write it myself.
  2. Steer early and often: AI slop is annoying to the person prompting too: it is long, the badness often hides in the details, and giving comment-style feedback isn't possible[^1]. To minimize the likelihood of being served slop, I try to ensure correct understanding before asking for a draft: after sending context, I ask the model to repeat its understanding back. I ask for an outline and check whether it makes sense. Then I ask what additional context it needs to write a good draft. Once I have a draft, I ask it to ‘roast’ the draft.
  3. Be explicit about prompting brain vs editing brain: when you’re prompting, your brain is on a variable reward schedule, like at a slot machine: more impulsive, on auto-pilot, unable to stop. I’ve found it hard to switch from that mode into the flow state I need for editing in the couple of seconds it takes an LLM to generate a draft. What helped me was creating some distance between the initial LLM draft and the editing pass by inserting a break or a different task in between. I need to be in a “writing” state of mind to properly evaluate whether the draft is good or not. Whenever I rushed that, it slopped on my face.

Even with these techniques, I knew they weren't foolproof. I was haunted by the fear of becoming that person: the Slop Generator that colleagues learn to avoid. Individual discipline could only get me so far; I needed to address the social dynamics of AI-assisted collaboration.

Writing with AI as a team

To avoid becoming ‘that person’, I came up with some cultural practices to prevent my early reviewers from hating me:

  1. Share my thread with the LLM. This creates transparency and allows the reviewer to engage with the content at many layers. When my reviewers thought the doc was good, the thread helped them see and learn how to better leverage LLMs themselves. When they thought something was missing, they could skim the thread to see where and what I had missed. Anecdotally, after finishing a review of a doc that didn’t slop, some of my reviewers read through the thread and felt more encouraged to try LLM doc generation themselves.
  2. The ‘it slops’ feedback. Tell your reviewers that the doc is LLM-generated, so that they can tell you when it reads like AI slop. Constructive feedback is inherently hard to give: no one wants to give it, and people hate receiving it. To overcome the barrier, we made the :it-slops: emoji. When a doc starts to read like slop, the reviewer is expected to stop reviewing and respond with :it-slops:. It minimizes the damage to the reviewer and fills me with the appropriate amount of slop shame to do much better next time.
  3. Collaborative context generation. Context is that which is scarce. My reviewers may have been in meetings that I wasn't part of, or have a deeper appreciation of technical nuances than I do. If the doc passes the slop sniff test, incorporating their context via traditional feedback mechanisms like comments and edit suggestions is sufficient. But when the framing itself contributes to sloppiness, it's usually a context problem. The solution is collaborative: either work together in real time to steer the AI's direction through joint prompting sessions, or capture the missing context systematically through meeting transcripts. Zoom transcripts are particularly valuable because they give the AI all the conversational context I received, with no additional work from me. The key is recognizing that context is scarce, and your collaborators often hold pieces you're missing.

Parting thoughts

My best estimate is that writing product briefs with an LLM is not necessarily faster; it is much easier for the primary author (me), and it sometimes creates more work for collaborators. The work is more distributed than in the past: what used to be done solely by the PM is now shared among the core team.

This isn't just about writing; it's about expertise itself becoming more fluid. When I can prompt a prototype in minutes, suddenly I'm doing design work. When my engineer can generate user research questions, they're doing product work. We're all becoming AI-powered generalists with specialist moments. The traditional boundaries where you stayed in your lane because crossing over required domain expertise have collapsed.

But this creates a new kind of dependency: we're more reliant on each other's judgment about what constitutes quality work, since we can all now produce passable versions of the work with an LLM. Writing used to be how we evaluated clarity of mind. Now that AI can generate coherent-sounding text, trust becomes the scarce resource.

To pull this off, you need collaborators willing to invest in this new dynamic: people who trust that you're not just farming out your thinking to them. I try to earn my slop credits by writing SQL queries or prompting UI prototypes, essentially trading my time on their tasks for their time reviewing my LLM-generated drafts. Maybe this is the future of working with AI? It doesn't always make us individually more productive, but it makes collaboration mandatory in entirely new ways. We dabble a bit in each other’s work, while together trying to keep slop at bay.