Blueprint for responsible AI adoption
How an advocacy organization moved from shadow AI to governed automation
This case study shows how an international advocacy organization moved from unstructured AI use to governed automation through the Shadow-to-Steward framework: problem diagnosis, interim guardrails, governance architecture, literacy building, workflow integration, and sustained oversight.
Context and challenge
An international advocacy organization needed to automate intake workflows that handled highly sensitive personal data. Staff were managing submissions manually, copying information across spreadsheets, with no audit trail and no consistent data handling standards. The pressure was to move fast: rising volume, limited staff time, and growing expectations from partners and funders.
The risk was clear. This organization worked with protected individuals whose personal information, if mishandled, could cause real harm. Before any automation could happen, leadership needed to understand what it was protecting, where data actually flowed, and what staff were already doing with AI.
The organization asked for a way to reduce manual work and latency in intake without increasing risk, and without asking staff to pause operations for months while a policy was written.
The Shadow-to-Steward AI framework
The Shadow-to-Steward AI framework follows a deliberate sequence: problem before tools, literacy before governance, governance before implementation.
From shadow AI to steward AI: literacy, governance, and implementation in one system.
DISCOVER
- Problem diagnosis: Identify the core challenges AI could address in your organization.
- Sentiment mapping: Understand staff attitudes, concerns, and readiness for AI adoption.
- Shadow discovery: Uncover existing unauthorized AI use and data-exposure risks.
- Landscape audit: Map current tools, workflows, and integration opportunities.

ESTABLISH
- Interim guardrails: Set immediate boundaries while full governance develops.
- Governance architecture: Build cross-functional oversight and decision-making structures.
- Literacy building: Develop role-specific training and reference materials.

DEPLOY
- Workflow integration: Embed approved tools into existing processes and systems.
- Pilot & iterate: Test with defined success criteria before organization-wide rollout.

EVOLVE
- Sustained oversight: Monitor incidents, assess effectiveness, update guidelines.
- Next-phase readiness: Prepare for emerging capabilities and evolving organizational needs.
DISCOVER
Listen before you leap
Problem diagnosis: I began by mapping the specific pain points in the intake process: how long submissions sat in shared inboxes, how many manual copy-paste steps existed, where errors were most likely to occur, and which parts of the process staff found most frustrating or risky. The goal was to define "success" in operational terms before naming any tool.
Sentiment mapping: Through interviews and short surveys, I assessed how staff felt about AI and automation. Some were enthusiastic about reducing drudge work; others were worried about job security, loss of judgment, or data exposure. I documented these sentiments and the language people used. This became the foundation for internal communications and training.
Shadow discovery: Staff were already experimenting. Some were using ChatGPT for drafting responses, AI writing tools for editing, and AI-enabled features inside existing SaaS tools. I catalogued which tools were in use, what kinds of data were being pasted into them, and where that clashed with the organization's obligations to protect personal information.
Landscape audit: I mapped the full ecosystem around intake: the website forms, email inboxes, spreadsheets, internal messaging tools, case-management systems, and handoffs between teams. I used AI to accelerate parts of this audit, analyzing existing documentation and identifying where sensitive fields and free-text content flowed.
ESTABLISH
Foundation before tools
Interim guardrails: Because operations could not pause, I created immediate, plain-language guardrails. These covered: which AI tools could be used for low-risk tasks, a clear prohibition on entering personal or case-identifiable data into external tools, and an escalation path for questions. This interim guidance reduced risk quickly while the fuller framework was still being designed.
Governance architecture: I convened a cross-functional AI governance group that included leadership, IT/security, program leads, HR, and safeguarding. Together with leadership, I drafted AI principles grounded in the organization's duty of care, defined risk tiers for different kinds of data and use cases, and set expectations for human oversight.
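The risk tiers described above lend themselves to being written down as plain configuration that staff and reviewers can read directly. The sketch below is purely illustrative: the tier names, example data categories, and tool classes are hypothetical placeholders, not the organization's actual policy.

```python
# Illustrative sketch only: tier names, data categories, and tool classes
# are hypothetical stand-ins, not the organization's actual policy.
RISK_TIERS = {
    "restricted": {  # case-identifiable or protected personal data
        "examples": ["names of protected individuals", "case narratives"],
        "allowed_tools": ["secured internal systems only"],
        "human_oversight": "mandatory review of every automated step",
    },
    "sensitive": {  # operational data that could be linked to cases
        "examples": ["intake timestamps", "routing metadata"],
        "allowed_tools": ["approved SaaS with a data-processing agreement"],
        "human_oversight": "spot checks by the governance group",
    },
    "low_risk": {  # content with no personal or case data
        "examples": ["public-facing copy", "internal templates"],
        "allowed_tools": ["approved external AI tools"],
        "human_oversight": "author review before publication",
    },
}

def allowed_tools(tier: str) -> list[str]:
    """Look up which tool classes a given risk tier permits."""
    return RISK_TIERS[tier]["allowed_tools"]
```

Keeping the tiers in one declarative structure means training materials, guardrail documents, and any future tooling can all reference the same source of truth.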
Literacy building: Before selecting or configuring tools, I designed and delivered AI literacy training. Training was tailored to roles: intake staff, managers, IT, and leadership. Each session used realistic scenarios drawn from actual workflows. The focus was on practical judgment: understanding why certain data must stay within secured systems and how to use AI safely for low-risk tasks.
DEPLOY
Build with purpose, within guardrails
Workflow integration: With boundaries defined, I designed an automated intake flow that respected them. Website submissions now trigger a series of controlled steps: sensitive fields written directly to a secured database behind multi-factor authentication, operational metadata flowing into task-management and notification tools, and case managers receiving structured notifications without unnecessary personal details.
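The core of that flow is a split: identifying fields go to the secured store, everything else becomes operational metadata. A minimal sketch of that routing logic, assuming hypothetical field names and stand-in `store_secured` / `notify_case_manager` helpers (the real integration was a configured workflow, not this code):

```python
# Hypothetical sketch of the intake routing split: field names and the
# store/notify helpers are illustrative, not the production integration.
SENSITIVE_FIELDS = {"full_name", "contact", "case_details"}

def store_secured(fields: dict) -> str:
    """Stand-in for writing sensitive fields to the secured, MFA-protected database."""
    return "rec-001"  # the secured system's record identifier

def notify_case_manager(meta: dict) -> None:
    """Stand-in for a structured notification carrying no personal details."""
    print(f"New intake {meta['record_id']}: {meta.get('category', 'uncategorized')}")

def route_submission(submission: dict) -> dict:
    """Split a web submission into a secured record and safe metadata."""
    sensitive = {k: v for k, v in submission.items() if k in SENSITIVE_FIELDS}
    metadata = {k: v for k, v in submission.items() if k not in SENSITIVE_FIELDS}

    record_id = store_secured(sensitive)   # sensitive data never leaves the secured path
    metadata["record_id"] = record_id      # link the two without exposing personal data
    notify_case_manager(metadata)          # case managers see structure, not details
    return metadata
```

The design point is that the notification path only ever sees the metadata dictionary, so a misconfigured downstream tool cannot leak what it was never given.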
Pilot and iterate: I piloted the new workflow with a defined group of staff over a set period. Success criteria included: reduction in processing time per submission, error rates in data entry, responsiveness to new intake, and staff satisfaction. Feedback loops were built in: weekly check-ins, a simple way to flag issues, and quick adjustments to routing rules.
EVOLVE
Sustain and advance
Sustained oversight: I formalized routines for the governance group: regular reviews of incidents or near-misses, checks on data flows and access, and updates to guardrails as tools and needs evolved. I also established a simple reporting mechanism so staff could raise concerns or suggest improvements without friction.
Next-phase readiness: With the intake flow stabilized and governed, I identified potential next steps: deeper analytics on intake trends, AI-assisted triage within the secured environment, and improved knowledge management around cases. The idea was not to rush into new automation, but to confirm the organization had the mindset and structures to expand responsibly.
"Governance is not the obstacle to efficiency. It is the prerequisite."
Outcomes
The combination of governance and automation produced both efficiency and safety gains:
Gains were tracked across four metrics: processing time, notifications, detected data incidents, and audit coverage.