Communicating AI risk: full case study
How an AI safety organization turned dense research into usable power for policymakers, journalists, and the public.
This case study shows how an AI safety organization improved its AI risk communication through audience mapping, message frameworks, media training, and rapid-response infrastructure.
Context and challenge
An AI safety organization wanted to establish itself as a movement leader: a trusted voice that could explain AI risks clearly to policymakers, journalists, and the broader public. The research was strong. The policy analysis was rigorous. But rigor alone doesn't build movements.
The organization was positioned as a research institution delivering findings, not a force creating pressure for action. Briefings were accurate and nuanced, but the framing kept AI risk at arm's length. Technical. Abstract. Easy to defer. Policymakers left meetings informed but unconvinced this belonged at the top of the pile. Journalists struggled to pull out a sharp quote that connected AI risk to visible issues like infrastructure, elections, or the economy.
The central challenge was the same one facing the broader field: AI risk competes with crises that feel more immediate. The timeline can seem speculative. Technical framing creates distance instead of urgency. The organization needed to shift from delivering findings to building the kind of resonance that moves people to act.
Audience and narrative mapping
Who needs to hear what, and why.
I began by mapping priority audiences and what "resonance" meant for each. For each audience, I defined what "success" looked like (for example, invitations to testify, quotes in stories, campaign partnerships), which misconceptions or sticking points kept recurring (for example, "AI safety is sci-fi" or "regulation will kill innovation"), and how much technical detail was helpful versus overwhelming.
This mapping grounded everything that followed. Instead of trying to "simplify AI safety" in the abstract, I focused on what each audience needed to understand well enough to make decisions in their own context.
Audience-message matrix
Who the organization was speaking to, and what "resonance" meant for each audience.
For policymakers:
- What resonates: clear asks and trade-offs, with links to existing policy priorities and timelines.
- Sticking points: limited time, competing crises, political incentives, and technical jargon that obscures the point.
- What worked: briefings with concrete actions and options, plus a one-page "asks and options" sheet that staffers could reuse internally.
- Example message: "We're asking for three things: basic safety tests for frontier systems, incident reporting when things go wrong, and clear responsibility when AI is used in critical infrastructure."
Message houses and case-led storytelling
Structure for clarity and reuse.
Core message houses
I built message houses for recurring moments and themes, for example:
- Frontier-model safety: evaluations and capability thresholds
- Critical infrastructure: systemic risk and cascading failures
- Election integrity: AI incidents in information ecosystems
Each message house contained a core message at the top, a small set of supporting pillars beneath it, and the evidence and proof points that backed each pillar.
This structure allowed spokespeople to stay on message while flexing to the context: a parliamentary briefing, a media interview, a panel, or a podcast.
Case-led narratives
I moved away from abstract risk alone and toward case-led storytelling: concrete scenarios grounded in settings people already care about, such as infrastructure, elections, and information ecosystems.
These narratives did not replace technical detail, but they created an on-ramp for non-specialists, making it easier to grasp why frontier risks and systemic issues matter.
Media training and rapid-response infrastructure
Delivery and speed.
Media and spokesperson training
I ran training sessions with key spokespeople, focused on delivering clear, quotable answers without over-hedging or losing the audience in technical detail.
I practiced with them using realistic interview formats: live hits, pre-recorded segments, podcast conversations, and panel Q&As. Each spokesperson left with two or three "anchor narratives" they could return to in different contexts.
Journalist mapping and rapid-context notes
I curated a focused list of policy, technology, and science journalists tracking AI safety and adjacent beats. For each, I noted their recent coverage patterns, typical framing, and likely angles.
Then I created "rapid-context notes" for major moments, such as significant AI research releases, global policy developments or international declarations, and high-profile AI incidents or public debates.
Each note included the key facts of the moment, the organization's position, and quotable framing matched to the likely angles of the journalists on the list.
This infrastructure meant the organization could respond in hours, not weeks, when windows opened.
From messaging to presence
Making it stick in everything the organization ships.
Aligning outputs
I reviewed and refreshed the organization's core outward-facing materials.
The goal was not to overhaul everything at once, but to make sure the most visible and frequently used materials reflected the new clarity and structure.
Building habits
I helped the team adopt a few simple habits for keeping the new messaging consistent across everything they shipped.
Outcomes
Within a relatively short period, the organization saw qualitative and directional shifts:
Clearer coverage
Journalists began quoting spokespeople on concrete policy actions and governance steps, rather than only on abstract scenarios.
More usable briefings
Policymaker briefings moved from dense slide decks to crisp materials that staffers could use directly in their internal work.
Narrative uptake
Phrases and framings introduced by the organization started appearing in consultations, hearings, and media debates.
Greater confidence
Spokespeople reported feeling more prepared and less likely to "over-hedge" or lose their audience in technical detail.
Rigor doesn't build movements. Resonance does.