// TRANSPARENCY

How We Write

AInxiety uses AI tools in its content creation process, and we think you should know that. Not buried in a footer. Not hedged in legalese. Right here, at the top.

Content Tiers

Not all content on this site is made the same way. We organize our output into three tiers, and each post is labeled so you always know what you're reading.

Tier 01

Fully AI-Generated

News roundups, listicles, and SEO-driven evergreen content. These pieces are generated with minimal human intervention. They are reviewed for accuracy before publishing, but the writing itself is machine output. Every post in this tier is labeled clearly.

News Roundups · Listicles · SEO Content
Tier 02

AI-Drafted, Human-Edited

Pillar articles and cultural commentary. AI drafts the structure and initial prose. A human then rewrites, restructures, and adds perspective before anything goes live. The final voice is human, even when the scaffolding is not.

Pillar Articles · Cultural Commentary · Analysis
Tier 03

Human-Written

First-person essays, the brand manifesto, and editorial opinions. These are written by humans, from scratch, without AI assistance. When a person puts their name on something here, it came from them.

Essays · Manifesto · Editorial Opinion
the_irony.txt

$ cat the_irony.txt


Yes, a brand about AI anxiety uses AI to write.

That is not a contradiction. That is the whole point.


The economic pressures that make AI tools attractive for content creation are the same pressures that fuel the anxiety we write about.


We are living the thesis.

The Irony, Addressed Directly

A reasonable person might look at this site and raise an eyebrow. You write about AI anxiety, and you use AI to do it? That seems like a problem.

We disagree. The tension is the point. Small publishers face a brutal attention economy. The tools that make it possible to compete at scale are the same tools displacing writers, designers, and knowledge workers across every industry. Using those tools while writing about them is not hypocrisy. It is honest participation in a system we are also critiquing.

The alternative is to write about AI anxiety from a position of artificial purity, pretending the pressures that created these tools do not apply to us. That would be the actual lie.

We are not above this. We are in it. The work is to think clearly about what that means.

Quality Standards

Transparency is not a substitute for quality. These standards apply to everything we publish, regardless of tier.

  • Citations verified by a human. Every AI-generated reference, statistic, or quote is checked against the source before publication. If we cannot verify it, it does not run.
  • No fabricated statistics. Hallucinated numbers are a known failure mode of large language models. We treat all AI-generated data claims as unverified until a human confirms them.
  • No fake bylines or invented personas. If no human wrote it, no human name is attached to it. AI-generated content is attributed to AInxiety, not to a fictional person.
  • Every post carries a disclosure note. The tier label appears on every article so readers know the production method before they start reading, not after.

Why This Matters

Trust is earned, not assumed. Readers who come here for honest thinking about AI deserve to know how the content they're reading was made. Anything less would be a contradiction we could not defend.

The conversation about AI's role in creative work is more productive when it starts with honesty. There is no version of that conversation in which we get to leave out the part where we are also doing it.

So: this is how we write. We'll update this page if our process changes.

disclosure_policy.sh

$ ./disclose --format=human


TIER 01 — AI Generated — human-verified

TIER 02 — AI Drafted, Human Edited

TIER 03 — Human Written


Disclosure appears at the top of every post.

No exceptions.
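For the curious, the `./disclose` output above could be sketched as a small shell function. This is purely illustrative: the actual script behind that prompt is not shown on this page, and everything here except the label text (quoted verbatim from the tier list) is our invention for the sketch.

```shell
#!/bin/sh
# Hypothetical sketch of a disclosure labeler. The label strings are
# quoted from the TIER list above; the function itself is illustrative.
disclose() {
  case "$1" in
    1) echo "TIER 01 — AI Generated — human-verified" ;;
    2) echo "TIER 02 — AI Drafted, Human Edited" ;;
    3) echo "TIER 03 — Human Written" ;;
    *) echo "unknown tier: $1" >&2; return 1 ;;
  esac
}

# Example: print the label that would sit at the top of a Tier 02 post.
disclose 2
```

The point of the sketch is the constraint it encodes: every post maps to exactly one tier, and an unknown tier is an error, not a silent default.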

// Last updated March 2026

Questions about our editorial process? Reach out at editorial@ainxiety.dev.