AI strategy & implementation
From communicating about AI to implementing AI, with responsible governance every step of the way.
My work sits at the intersection of three domains: helping organizations communicate about AI, using AI to expand what communications teams can create and test, and building the governance and literacy infrastructure that makes responsible AI adoption possible.
Communicating AI risk
An AI safety organization needed to translate technical research into clear advocacy for government, media, and the public. The analysis was rigorous, but rigor doesn't build movements. The organization was positioned as a research shop delivering findings, not a force creating pressure for action. I helped shift the posture: working with spokespeople on presence and pacing, building message house frameworks for rapid response, and connecting AI risk to priorities policymakers already had. The framing started showing up in policy discussions and media coverage.
Read full case study →
Rigor doesn't build movements. Resonance does.
Blueprint for responsible AI adoption
An international advocacy organization needed to automate sensitive intake workflows without putting protected individuals at risk. For organizations adopting AI, the sequence matters: literacy before governance, governance before tools. I led the organization through a four-phase framework: Discover (map pain points, surface shadow AI, audit data flows), Establish (set interim guardrails, build a cross-functional governance group, run role-specific AI literacy training), Deploy (design intake automation that keeps sensitive data inside a secured database while routing tasks and notifications to the right teams), and Evolve (put oversight routines in place so guardrails and workflows keep improving).
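The Deploy phase's core design principle, that sensitive data stays inside a secured internal store while only routing metadata leaves it, can be sketched in a few lines. Everything below is illustrative: the field names, the in-memory store standing in for the secured database, and the notification shape are assumptions for demonstration, not the organization's actual system.

```python
import uuid

# Fields treated as protected; these names are illustrative assumptions.
SENSITIVE_FIELDS = {"name", "contact", "location"}

# Stand-in for the secured internal database described in the case study.
secure_store = {}

def process_intake(submission: dict, team: str) -> dict:
    """Store the full submission internally; return a PII-free notification."""
    case_id = str(uuid.uuid4())
    secure_store[case_id] = submission  # sensitive data never leaves this store
    # The notification carries only routing metadata, never protected fields.
    return {
        "case_id": case_id,
        "team": team,
        "fields_received": sorted(k for k in submission if k not in SENSITIVE_FIELDS),
    }

notification = process_intake(
    {"name": "A. Person", "contact": "a@example.org", "category": "urgent"},
    team="case-managers",
)
assert "name" not in notification and "contact" not in notification
```

The point of the split is auditability: every notification can be logged freely because it contains no protected data, while lookups by `case_id` happen only inside the governed boundary.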
Shadow-to-steward AI framework
From shadow AI to steward AI: literacy, governance, and implementation in one system.
Problem before tools.
Literacy before governance.
Governance before implementation.
DISCOVER
Problem diagnosis
Identify the core challenges AI could address in your organization.
Sentiment mapping
Understand staff attitudes, concerns, and readiness for AI adoption.
Shadow discovery
Uncover existing unauthorized AI use and data exposure risks.
Landscape audit
Map current tools, workflows, and integration opportunities.
ESTABLISH
Interim guardrails
Set immediate boundaries while full governance develops.
Governance architecture
Build cross-functional oversight and decision-making structures.
Literacy building
Develop role-specific training and reference materials.
DEPLOY
Workflow integration
Embed approved tools into existing processes and systems.
Pilot & iterate
Test with defined success criteria before organization-wide rollout.
EVOLVE
Sustained oversight
Monitor incidents, assess effectiveness, update guidelines.
Next phase readiness
Prepare for emerging capabilities and evolving organizational needs.
Intake processing dropped from roughly 25 minutes to under five minutes per submission. Case managers now receive instant notifications instead of periodically checking inboxes. Every step produces an audit trail, and no data incidents have been detected since implementation. Governance was not the obstacle to efficiency. It was the prerequisite.
Read full case study →
Governance is not the obstacle to efficiency. It is the prerequisite.
Scaling institutional knowledge
A boutique consulting firm needed to turn years of scattered client knowledge into a system staff could actually use. Staff were making daily decisions about which clients to connect and where engagement was cooling, but the answers lived in individual heads and scattered documents. An internal survey revealed staff were already experimenting with tools like ChatGPT, Perplexity, Grammarly, and Canva AI. The question wasn't whether AI was in use; it was whether it was governed. What tools were in play? What data was flowing into external systems? What was the gap between what leadership assumed and what was actually happening?
I built governance first: acceptable use policy, approved tools, data boundaries, and human oversight requirements. Then I designed a strategic insights assistant that could surface patterns across client records, flag engagement risks, and generate briefings ahead of conversations, all within the boundaries the policy had established. The pilot demonstrated that AI could enhance relationship work without compromising client trust. Staff saved hours previously spent searching for context.
Institutional memory should live in systems, not individual heads.
AI-powered strategic intelligence
A tech-for-good organization needed to understand its competitive position and communications effectiveness before a major pivot. Leadership wanted answers: How does our digital presence compare to peers? What content resonates and what doesn't? Where are the gaps? A comprehensive audit would typically require weeks of analyst time. I used AI to compress that timeline dramatically, analyzing 150+ website pages, 200+ social media posts across six platforms, and 100+ news mentions. Sentiment analysis identified which framing approaches drove engagement and which fell flat. Competitive benchmarking against nine peer organizations revealed positioning gaps and market white space. I developed audience personas, mapped the gap between how the organization described itself and how media covered it, and delivered implementation frameworks ready for immediate application. Analysis that would have taken a team weeks was completed in days.
Analysis that would have taken a team weeks was completed in days.
Select credentials
AI governance
AGI strategy
AI safety
Building an AI-first organization
AI for communicators
AI & agentic workflows