The AI Agents Gap in PR Isn't Adoption. It's Ambition.
- Apr 3

Every comms professional I know uses AI now. Meltwater's 2026 State of PR Report put the adoption figure at 90%. They use it to draft press releases, clean up pitches, brainstorm headlines. That's fine. But it's also the least interesting thing AI can do for a communications practice.
The real shift isn't using AI to write faster. It's building AI agents that handle the operational work that takes time away from strategic planning and relationship building. And building them is more accessible than most people assume.
The Misconception About "Building" AI Agents
When I tell people I built AI agents for my PR practice, they assume I learned to code. While it's true I can code, I would characterize my skill as that of a hobbyist, not a professional. I use Claude Code, Anthropic's coding environment, to build tools by describing what I want in plain English. The AI writes the implementation. I describe the workflow, test the output, and refine until it matches what I actually need.
Any PR professional with a clear understanding of their own workflows can build custom tools. You don't need a developer on retainer. You don't need a five-figure SaaS budget. You need to know what you actually do all day, and most communicators know that better than they think.
The tools I've built aren't replacements for judgment. They're systems that handle the repetitive, structured parts of my work so I can spend my time on the parts that require a human read.
What I Actually Built (And What Each One Does)
I'll walk through what's running in my practice right now. Not as a product list, but because specifics are more useful than abstractions.
A prospect research agent
Evaluating whether a company is a good fit for my firm used to take 30 to 45 minutes of manual work per prospect: reading their site, scanning coverage, checking messaging consistency, looking for signals of brand drift. I built an agent that takes a company name, walks through my entire workflow, and returns a structured brief with qualification signals, positioning analysis, and coverage gaps.
It runs in about two minutes. I do a QA pass by spot-checking before crafting my outreach. Time saved: ~35 minutes per prospect.
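To make "agent" concrete, here's a stripped-down sketch of the shape using Anthropic's Python SDK. It is not my production code: the workflow prompt is a stand-in for my documented process, and the real version also fetches the prospect's site and coverage itself, which I've left out here.

```python
# Stripped-down sketch of a prospect research agent, not the production version.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stand-in for my documented workflow; yours should be far more specific.
WORKFLOW = """For the company below:
1. Summarize its stated positioning.
2. Flag inconsistencies between its messaging and its coverage (brand drift).
3. List coverage gaps: topics its peers own that it does not.
4. Score fit 1-5 with one sentence of reasoning.
Return the brief as labeled sections."""

def research_prospect(company: str) -> str:
    """Run the documented workflow for one company; return a structured brief."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute whatever current model you use
        max_tokens=1500,
        messages=[{"role": "user", "content": f"{WORKFLOW}\n\nCompany: {company}"}],
    )
    return response.content[0].text

print(research_prospect("Example Robotics Co."))
```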
A brand voice enforcement engine
I built a Brand Fidelity Engine that checks any piece of content against a client's documented voice, messaging guardrails, and brand standards. It doesn't just flag "this doesn't sound right." It identifies which specific rule a sentence violates and suggests a fix that stays within the guardrails. The first working version took a weekend. It now runs as a deployed API that any member of a client's team can use. Time saved: ~15-45 minutes per asset.
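For a sense of what "deployed API" means in practice, here's a minimal sketch of the shape. FastAPI is my choice for illustration, the two rules are toys, and the fix-suggestion step (an LLM call against the guardrails) is omitted; the real engine loads a client's full documented standards.

```python
# Minimal sketch of the Brand Fidelity Engine's API shape, not the real thing.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Each rule pairs a citation into the brand documentation with a checker.
BRAND_RULES = {
    "no-exclamations": ("Voice guide 2.1: no exclamation points",
                        lambda s: "!" in s),
    "banned-word-synergy": ("Messaging guardrail 4: never say 'synergy'",
                            lambda s: "synergy" in s.lower()),
}

class Draft(BaseModel):
    content: str

@app.post("/check")
def check(draft: Draft):
    """Return each offending sentence and the documented rule it violates."""
    violations = []
    for sentence in draft.content.split(". "):
        for rule_id, (citation, violates) in BRAND_RULES.items():
            if violates(sentence):
                violations.append({"sentence": sentence, "rule": rule_id,
                                   "citation": citation})
    return {"violations": violations}
```

Run it locally with `uvicorn main:app` (assuming the file is named main.py), or host it so the whole team can POST drafts to one URL.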
A copy quality gate
Every outreach message, blog draft, and client deliverable runs through an automated quality check before I send it. The gate enforces rules I've codified from my own mistakes: no stacked modifiers, no prescriptive advice in cold outreach, no boilerplate language pasted into a context where it reads as canned. It catches the things I want to avoid 100% of the time but might miss at 11 PM on a Tuesday. More usefully, it catches the things I'd miss at 2 PM, when I'm sure I've already proofread carefully enough. Time saved: ~5-15 minutes per asset.
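The gate itself is unglamorous: pattern rules codified from past mistakes. A toy version, with example patterns standing in for my much longer list:

```python
import re

# Toy version of the quality gate; the patterns are examples, not my real list.
GATE_RULES = [
    ("stacked modifiers", re.compile(r"\b\w+ly\s+\w+ly\b")),
    ("canned boilerplate", re.compile(r"i hope this email finds you well", re.I)),
    ("prescriptive cold-outreach advice", re.compile(r"\byou should\b", re.I)),
]

def run_gate(text: str) -> list[str]:
    """Return human-readable failures; an empty list means the draft passes."""
    failures = []
    for label, pattern in GATE_RULES:
        match = pattern.search(text)
        if match:
            failures.append(f"{label}: '{match.group(0)}'")
    return failures

draft = "I hope this email finds you well. You should really seriously rethink your messaging."
for failure in run_gate(draft):
    print("BLOCKED:", failure)
```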
A daily operational briefing
Each morning, an agent compiles my prospect pipeline status, surfaces overdue follow-ups, and checks what's on the calendar. It pulls the latest articles from my sector niche and the PR ecosystem and summarizes them, with links so I can read anything particularly interesting or relevant for myself. It also summarizes my emails and DMs, flags anything tied to an ongoing task or carrying an action item for me, and helps prioritize my day. It replaced a 20-minute ritual of cross-referencing multiple spreadsheets, multiple browser tabs, the sticky notes stuck around the edges of my monitor, and my own memory. Not flashy. Just useful. Time saved: ~30-45 minutes per day.
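Structurally this is the simplest of the five: a scheduled script with one collector per source, assembled into a single note. In outline, with every fetch_* function below a stub standing in for an integration with your own CRM, calendar, inbox, and feeds:

```python
# Outline of the morning briefing script; every fetch_* function is a stub.
import datetime

def fetch_pipeline() -> str:
    return "stub: prospect statuses from the CRM"

def fetch_overdue() -> str:
    return "stub: follow-ups past their due date"

def fetch_calendar() -> str:
    return "stub: today's meetings"

def fetch_news() -> str:
    return "stub: sector and PR-ecosystem articles, with links"

def fetch_messages() -> str:
    return "stub: emails and DMs tied to open tasks or action items"

def build_briefing() -> str:
    sections = {
        "Pipeline": fetch_pipeline(),
        "Overdue follow-ups": fetch_overdue(),
        "Calendar": fetch_calendar(),
        "Reading": fetch_news(),
        "Messages needing action": fetch_messages(),
    }
    header = f"Briefing for {datetime.date.today():%A, %B %d}"
    body = "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())
    # Last step in my version: hand the assembled text to a model and ask it
    # to rank the day's priorities.
    return f"{header}\n\n{body}"

print(build_briefing())
```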
A content intelligence engine
This one monitors sector conversations and maps what's already been published on a topic. That has two benefits: first, it keeps me current on the sectors I need to follow. Second, it identifies positioning gaps and helps me draft content that fills a specific hole in the conversation rather than restating what already exists. It's the difference between writing a blog post and writing a blog post that earns its place in a crowded feed. Time saved: ~30-90 minutes per day.
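The gap-mapping step is the part worth sketching. My version hands the comparison to a model; crude keyword overlap stands in for it below, but the logic is the same: list the angles you could take, subtract what's already been covered, and write toward what's left. All the titles are invented.

```python
# Sketch of the gap-mapping step; keyword overlap stands in for a model call.
published = [
    "Why every PR team needs an AI policy",
    "AI press release tools compared",
    "How newsrooms are using AI for monitoring",
]
candidate_angles = [
    "AI policy for PR teams",
    "Measuring earned media without impressions",
    "Building custom agents for a solo practice",
]

def covered(angle: str, corpus: list[str]) -> bool:
    """Crude proxy: covered if half an angle's keywords appear in one piece."""
    keywords = {w.lower() for w in angle.split() if len(w) > 3}
    return any(
        len(keywords & {w.lower() for w in piece.split()}) >= len(keywords) / 2
        for piece in corpus
    )

gaps = [angle for angle in candidate_angles if not covered(angle, published)]
print("Uncovered angles:", gaps)  # the holes worth writing toward
```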
What Works, What Doesn't, and What Takes Longer Than an Afternoon
The AI hype cycle has earned people's skepticism. Here's what months of daily use has actually taught me.
Where agents deliver
Structured research with clear inputs and outputs. Prospect research, competitive audits, coverage analysis. These tasks follow patterns, and agents handle patterns well. The time savings are measured in hours per week, not minutes.
Quality enforcement against documented rules. AI is better than most writers I've worked with at checking copy against a style guide, because it doesn't get tired and doesn't develop blind spots for its own writing. Applied faithfully across a team of writers, the quality gate catches a volume of violations that no manual process could efficiently match.
Compounding operational gains. The daily briefing sounds trivial: thirty-odd minutes saved each morning. But that adds up to one or two full working days recovered per month. And you don't have to do the math on the rest to see that the total is substantial.
What could you do with that time? Take on another client or two. Write two more pieces of thought leadership. Call it early on Friday and spend time with the family. Small automations stack.
Where agents fall short
Strategic judgment. An agent can surface what a prospect's messaging looks like and where the drift is. It can't tell you whether that founder is actually ready to invest in strategic communications. It can't tell you how your staff will react to a new workflow, or what they'll need to become comfortable with it. That's a human read, built on pattern recognition that isn't easy to articulate as rules.
Relationship nuance. I experimented with follow-up messages drafted by AI after initial conversations. The output was consistently off. Too polished. Too structured. Missing the specific callback to something they said in passing. I draft those by hand now, and the difference is obvious.
Anything requiring taste. Which story angle for which journalist. Whether a piece of content is good or just technically correct. When to push back on a client's instinct versus when to follow it. Agents surface options. The decisions stay with me.
Everyone reading this has experienced the wild confidence with which an LLM can deliver bad information or poor advice. Agents can free you from tedium, but not from the need to exercise good judgment. You become a manager, and a manager with bad judgment still ships suboptimal work.
The Setup Is Simpler Than You Expect
If you're wondering whether this is actually within reach, here's the breakdown.
Map one workflow
Pick something you do repeatedly that follows a predictable pattern. Prospect research is ideal because the inputs (company name) and outputs (structured brief) are well-defined.
Document it in writing
Write out what you actually do, step by step. What do you look at? What do you evaluate? What makes a good output versus a bad one? This step is where most people stall. Not because it's hard, but because they've never been forced to articulate their own process. It's also the most valuable step, with or without AI.
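If it helps to see the target, here's the level of specificity I mean, written the way I'd hand it to a coding tool. The steps are an example, not a template:

```python
# An example of a documented workflow, specific enough for a tool to build from.
PROSPECT_RESEARCH_WORKFLOW = {
    "input": "company name",
    "output": "structured brief: fit signals, positioning analysis, coverage gaps",
    "steps": [
        "Read the homepage and About page; note the stated positioning",
        "Scan the last 90 days of coverage; note what the press actually repeats",
        "Compare the two and flag any drift",
        "List topics competitors earn coverage for that this company does not",
        "Score fit 1-5 against my ideal-client criteria",
    ],
    "good_output": "every claim tied to a source; drift stated concretely",
    "bad_output": "a generic company summary with no judgment calls",
}
```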
Build it with an AI coding tool
Claude Code, Cursor, or similar. You describe what you want, the tool writes the implementation, you test it, find the gaps, and iterate. The first version will be rough; the third will surprise you.
Deploy it if you want persistence
I use Railway to host my tools as APIs so they're always available. You can start with agents that run locally. Don't let infrastructure decisions delay the first experiment.
My prospect research agent, the first tool I built, took an afternoon. Not a week. An afternoon. I've refined it through dozens of iterations since, but the initial working version was usable the same day I started.
Not sure how to get started? Open up a Claude session, tell it what you're trying to build, and it can guide you step-by-step: which tools to download, which permissions and dependencies to set up, even the prompts to paste into Claude Code.
The Real Divide
Every PR industry blog I've read on this topic is a product roundup. "Here are 15 AI tools for PR professionals." Subscribe to this, onboard to that. Those tools are fine. Some of them are genuinely useful. But the conversation has stalled at the subscription layer.
The shift that matters isn't subscribing to better monitoring software. It's recognizing that communications professionals can now build AI agents shaped to their exact practice, their specific clients, their specific quality standards, their specific way of working. Off-the-shelf platforms give you what a product team thinks PR professionals need. Custom agents give you what you actually need.
The "how" has been made trivial. If you can meaningfully articulate what you need and why it matters, the how writes itself.
The communicators who will set the standard for the next era of this industry aren't necessarily the ones who adopted AI agents the earliest. They're the ones who stopped treating AI as a writing shortcut and started treating it as infrastructure, infrastructure that reflects how they actually think about their work.
That's not a technology insight. It's a communications one. The question isn't whether AI belongs in your practice. It's whether you're going to shape it, or let someone else's product team shape it for you.


