So I’ve written about AI-generated content before, most memorably in a Medium post about why the proliferation of AI-generated content feels to me like we’re shitting in the pool of human knowledge.
I feel like the internet pool of knowledge is becoming progressively more stinky because AI writing tools make up fake facts. These ‘facts’ get published by idiot humans and then become facts for the next large language model to package up as information. Sigh.
I’ve wrestled with the tradeoffs between fact-checking and time-saving when using AI tools for research for my Substack publications. Spoiler alert: it takes me more time to fact-check AI’s research for science topics than it does for me to do my own research.
Earlier this year, I switched away from ChatGPT for research for my science publication.
My new AI research assistant gives me fewer fake ‘facts’, but they are still prevalent enough that I have to fact-check every source to make sure it actually says what the AI tool says it says. It’s scary how often the source itself is authoritative, but doesn’t actually contain the ‘fact’ that’s attributed to it.
My new assistant performs just fine when providing source attributions for generic facts like “Is Salmonella a bacterium or a virus?”, but it is far less accurate with sources for rarely published facts, like how long Salmonella can persist in peanut butter (yes, that’s the type of stuff I research every week!).
ChatGPT once discovered a super-cool fact about a weird type of food poisoning for an article I was writing. It was amazing, it was going to make the article sing, I was so grateful to have learned about it.
The ‘fact’ turned out to be untrue (drats!), but I didn’t know that until I had wasted 30 minutes trying - and failing - to find a genuine, peer-reviewed source for it.
During that debacle, at no point did ChatGPT admit that the ‘fact’ had never been published in a reputable source (or, in fact, anywhere as far as I could tell). It just kept coming up with progressively more arcane science papers which it falsely claimed were the source of the (non)fact (“🤖 I apologise for any confusion I might have caused. This is the correct source”).
Never again, ChatGPT, never again.
What is an AI policy?
Before we go any further, let’s check. What (exactly) is an AI policy? I asked an AI writing tool to tell me (natch!). Here’s what it said:
« 🤖 An AI policy is a set of guidelines that defines how artificial intelligence tools are used responsibly and ethically within an organization or platform. It ensures transparency, human oversight, data privacy, and compliance with legal and quality standards in AI-assisted content creation or decision-making »
Anyone spot anything wrong with this description?
Since when did a policy ensure anything? Policies describe the way organizations intend to do things and provide frameworks for actions, but they aren’t rock-solid guarantees of anything.
You can see where I’m going with this, right? AI-speak sounds all fine and dandy until you look closely. When you do, you discover much of it is meaningless jargon-rubble.
So what is an AI policy for a newsletter creator (an actual human newsletter creator)?
It’s a way to tell your readers how you use AI in your newsletter business.
That’s it.
Why write an AI policy?
Readers are getting more savvy about AI-written content, and some of your readers will be curious about whether you use AI to write your posts.
They might even have seen other content that looks suspiciously similar to yours and want to find out who the original source is.
An AI policy exists to give readers information, of course, but its most important purpose is to enhance trust in you and your work.
Newsflash: You already have an AI policy
If you’re a content creator and you haven’t just landed in 2025 in the DeLorean, you’ve probably already tried AI for content creation. You might be using it regularly, or you might have decided, like me, that it takes more time to rewrite AI content than it does to pull words straight out of your personal version of the most powerful supercomputer ever discovered (the human brain).
Maybe you use AI for brainstorming, or headline tweaks or grammar checking.
Maybe you use it to create VBA code for your nerdy Excel spreadsheet projects and get cranky if the code doesn’t work perfectly the first time (guilty!).
Maybe you swing wildly between loving and hating your AI-powered writing buddy. I hear you.
No matter how you’ve chosen to use it, or whether you’ve chosen to use it at all, you have a policy. Your choices are your policy.
To create an AI policy for your newsletter, it’s simply a matter of writing those choices down.
Steps to writing your AI policy
Step 1: Quick audit
Start by doing a mental walkthrough of the tools, apps, websites and workflows you use for your newsletter. You’re looking for AI-powered and AI-assisted tools.
In addition to the obvious ones like ChatGPT, also include less obviously AI-powered tools like Grammarly, Canva’s image generator, Unsplash, and Substack’s built-in image generator.
List them.
Step 2: Describe how you use the tools
Next to each tool, describe how you use it in your newsletter business.
Step 3: Write an intro
Introduce your AI policy in one or two sentences, paying attention to the scope.
Scope means being clear about which part of your business or newsletter the policy refers to. Is it just about the content you publish in your newsletter, or will the policy also cover other activities like how you edit video, do your bookkeeping or respond to emails?
Step 4: Explain what you do with AI in plain language
Use your list of tools and how you use them to craft a simple explanation about how you use AI. You can list the tasks you use AI for but there is no need to name the tools.
Step 5: Write a conclusion
Create a three-sentence conclusion summing up your policy. Here’s what my AI writing assistant produced for me, verbatim:
« 🤖 AI is a tool I use to enhance, not replace, my work. You can count on transparency, originality, and ethical standards in every issue of Pubstack Success. Thank you for trusting me as your guide on Substack. If you have questions or suggestions about this policy, please reach out: I value your input.»
Step 6: Publish your policy
Publish your policy in your Substack publication.
I haven’t decided where I’ll put my policy yet. I’ll probably make it a custom page, rather than a post, and link it on my About page.
Reminder: you can make a custom page from the Website section of your publication settings.
Publication dashboard > Settings > Website > Pages and navigation > Custom pages > Add [button]
Step 7: Set a reminder
AI-assisted content creation tools are evolving super fast. The way you use them is likely changing quickly too.
Set a calendar reminder to check in with your policy a couple of times per year to make sure it’s still a true reflection of how you use AI for your newsletter.
Final thoughts
As the swimming pool of the internet fills with murky garbage writing and starts to stink, readers get more and more excited about genuinely original and insightful content.
Like your content. Like mine.
A documented AI policy helps readers see how you use AI and sets you apart from your smelly competitors at the brownish end of the pool. It creates an environment of transparency and enhances your readers’ trust in you as a creator. And trust is the foundation of a successful newsletter.
So, will you publish an AI policy? And - more interestingly - will you use an AI language model like ChatGPT to write it for you? Let me know in the comments.
Karen
Update June 2025: I just published mine. You can see it here. What do you think?
I never use AI for research or for writing in any way and do not anticipate ever doing so, for all the reasons you discuss and then some. I cherish my own cognitive development and my carefully honed judgment and evaluative abilities, as well as my choice to constantly work on honing my thinking and writing skills. For goodness’ sake, I refuse to use spellcheck or grammar check, both of which have flaws. I do my own editing and proofreading based on my knowledge (which I also work hard to refine), so any remaining errors of style, punctuation, and grammar are entirely my own. Using AI would only rob me of all that. Perhaps I should state on my newsletter that *that* is my policy: no AI at all.
I’m among those who use AI as a tool…best described as a brainstorming partner (because I am a solopreneur and otherwise have no one to bounce ideas around with) to ensure I haven’t overlooked anything and to suggest other angles or ways to improve something I’ve written myself. I have been frustrated with ChatGPT too and am preferring Perplexity.ai because it provides citations for everything. I appreciate your insight, and you’ve made a great case for having an AI policy.