AI Policy
This page outlines how I use AI tools in making Reviewer, too, and my general outlook on the use of AI tools in science journalism. It is updated regularly as I add tools to (or drop them from) my workflows and as my thinking on AI changes.
The golden rules
All of my decisions regarding AI use, not just on this site but in my work and life more broadly, are made in light of the following four rules:
Rule 1: Creative work is human work
None of my creative work (writing, images, video, etc.) intended for public consumption, nor any creative work by others published on my websites, may be conceptualized or drafted using generative AI.
Rule 2: Complement, don't replace
I do not use AI tools to replace anything I would realistically a) do myself or b) pay someone else to do in a world in which these tools did not exist. I use AI as a complement, not a replacement.
Rule 3: Time is respect
If I'm not willing to spend my time to communicate with someone else, then I can't expect them to spend their time on whatever I have to say. I therefore do not use AI tools to write emails or otherwise communicate with other people.
Rule 4: The absurdity exception
If a system or task disrespects my humanity, it does not necessarily deserve the respect of a human response. I may use generative AI tools to do "creative work" (writing) for absurd tasks, especially if the entity demanding and/or evaluating the task is itself an AI (e.g. hiring systems). Absurd tasks are those whose point is not to produce something high-quality, creative, meaningful, or valuable but rather to satisfy some arbitrary requirement or check a box. This also includes highly repetitive, non-creative tasks like re-formatting a list of citations.
How I use AI for Reviewer, too
What follows is an exhaustive list of how I am currently using AI in the production of content on this website. It is updated whenever I add a new tool to my workflow:
Note-taking
I use Claude Code to manage an Obsidian vault where I keep all of my reporting notes. It keeps the vault tidy, with standardized metadata and templates applied consistently throughout, and keeps my records of meetings with researchers and other sources up-to-date. (These are tasks I tried to do myself for years and just kept failing at; I am not an organized, methodical person). Claude Code also generates draft notes based on voice memos, interview transcripts, and chaotic "raw" notes I write by hand. AI-generated notes are always marked as unreviewed; before using them for anything serious, I fact-check them manually.
Importantly, all of the AI-generated content in my vault is aggressively and consistently labeled as such and kept separate from writing I produce myself. This ensures that no generative AI content sneaks into my writing, even though I do use the summaries Claude generates as a resource and reference when I'm writing my own notes. I treat Claude's output the way I would someone else's writing: I can read it, internalize it, and take inspiration from it, but I cannot directly copy or otherwise plagiarize it.
Voice transcription
I use Otter.ai and OpenAI's free, open-source tool Whisper to transcribe voice memos and interviews. Transcripts intended for public consumption are checked and cleaned manually before publication.
Learning
I use Google's Gemini-powered tool NotebookLM to turn research papers into podcast episodes so I can learn about the niche topics I'm interested in while doing chores and walking. I don't treat this as a replacement for reporting and research — any document that I write about on this site, I have read myself in full.
A trend across my uses of AI for learning and note-taking is that I treat AI-generated content as a secondary source that requires verification and cross-checking against a primary source if I'm going to use it for anything important.
Hobby programming
I use Claude Code to generate personal utility programs and scripts for small tasks like scraping data from the web and organizing files on my computer. None of this code is intended for anyone else to use. If it ever gets to that point, or if I ever base reporting on code generated with Claude, I will have a software-savvy person help me make sure it didn't produce nonsense. If I write blog posts based on data I collect using web scrapers coded by Claude, I'll note that and post the scraper and the data to GitHub so anyone can review it if they care to.
Fact-checking
I use Claude as a backup fact-checker on my newsletter posts. I check my own facts, but having an outside sanity check is very helpful for catching errors you become blind to as you write. I know this isn't perfect, but it is better than having no fact-check at all, which is what I would have without AI.
Research
I use Claude Code to help me with research tasks like screening preprints and abstracts for potentially relevant stories, identifying possible experts for stories (especially when trying to increase source diversity), and filling in the basic biographical details of researchers in the "written-by-Claude" sections of my Obsidian vault. I'm actively experimenting with ways to shape Claude into a useful research assistant. I plan to write more about where AI does and does not help with my reporting and research, in the hopes of helping others get the most out of these tools — an article on using Claude Code to screen 20,000 abstracts from the EGU conference is in the works!
Highly repetitive tasks and absurd nonsense
Look. I am just not going to spend hours re-formatting and cleaning up lists of citations or figuring out how to write a shell script that'll get a text file formatted juuuussssst the way I need it for some project. For highly repetitive or programmatic tasks, I just have Claude do it. Ask my tax guy, who I used to torture with poorly-formatted spreadsheets of invoices and expenses every year: this is for the best. For this kind of repetitive stuff, I am more error-prone than an AI (and handling it manually frustrates and stresses me out).
On the use of AI in creative work
As outlined above, I do not use generative AI to conceptualize or draft creative work and I hold work by others published on this site to the same standard. Here's what I mean by that:
Conceptualizing: Prompting a generative AI to brainstorm or come up with ideas for creative projects, outline or plan a creative project (e.g. creating a story outline from notes), etc. The spirit of this rule is to ensure that the ideas featured here sprang from human minds, not from machines.
Drafting: Prompting an AI to produce the bulk or essential part of a creative work, especially as a starting point. Asking an AI tool to generate an image, text, video, website, or other creative project flat-out is clearly a violation.
However, I do allow limited use of generative AI to support, refine, or enhance human-conceptualized, human-drafted creative work.
Examples of appropriate uses:
- Using an AI coding agent to quickly make changes to a data visualization, Scrollytelling story, or interactive dashboard
- Using an AI proofreading tool to ensure compliance with a style guide or simply check for grammar and spelling
- Using generative AI to quickly clean up a long, messy list of links or citations to make it consistent and readable
- Using a generative AI tool to fill in a simple background or repetitive texture in a digital painting that the artist otherwise conceptualized and drafted themselves
- Using a phone camera to take a picture or video — basically all modern phones use generative AI to make images taken with tiny cameras look reasonable
- Using an AI-assisted audio editor to clean up messy recordings
- Using AI-assisted video editing software to quickly cut and edit a simple, formulaic video format like a talking-head interview
- Using an AI transcription tool to transcribe an interview for your notes
- Using an AI translation tool to read a document in a language you don't speak
- Using AI to collect, present, or summarize information that might spark ideas (Example: asking an AI to collect and summarize all of your existing notes on frogs would be fine, but asking an AI to brainstorm ideas for stories on frogs would not, since that counts as conceptualization).
I commit to replacing AI with humans
Rule 2 states that I do not use AI to replace anything I would realistically do myself or pay someone else for. What "realistic" means is relative, though, and it will change as this newsletter grows.
Currently, R2 costs more to run than I earn writing it. I can only afford to pay a truly pitiful $25 honorarium for one-time contributions of art or writing to the site, and I can't afford ongoing help from a research assistant, copyeditor, fact-checker, or editorial assistant at all. So I use AI to fill in, doing some tasks that I wouldn't realistically pay a human to do in a world without AI (in that alternate world, those tasks just wouldn't get done and the newsletter would be worse).
But if R2 grows enough, what is "realistic" will change. I therefore commit to replacing AI with humans if the site becomes profitable enough for me to pay people.
I'm not yet sure what that will look like, but I will update this section once I get a better feel for how this newsletter fits into the rest of my freelance work and which tasks it would be most constructive to hire someone for first. I anticipate research assistance and editorial art (which I don't use AI for anyway) will be among the first things I hire people for.
Ok, but isn't AI an ethical nightmare?
It certainly has its problems. But frankly, so do most things I do regularly, like eating chicken wings and using a cell phone and wearing clothes manufactured in a faraway sweatshop. Already being bad is no excuse for adding another bad thing to my portfolio, of course. I don't claim to be making the best moral choices, or even ethically or logically consistent ones.
However, given the things that matter most to me personally, I'm satisfied with the policy above for the time being. My primary goal with this AI policy is to avoid degrading human creativity and replacing the work of the people AI companies stole from to create LLMs and other generative models (including journalists like me). At the same time, I want to use AI to bring more value to readers and to persist in this job as the world changes and freelance journalism gets weirder and harder. I also write about AI, and using AI tools provides very helpful context for that work.