On a recent episode of Ctrl + Alt + Regulate, I had the incredible opportunity to sit down with Olivia Gambelin, a leading AI ethicist and responsible AI practitioner. Known for her work on ethics-by-design, her book Responsible AI, and the innovative Values Canvas framework, Olivia offered a masterclass on how organizations can responsibly build, govern, and scale AI solutions.
The Dual Role of an AI Ethicist and Practitioner
Olivia opened the conversation by explaining the two hats she wears: as an AI ethicist and as a responsible AI practitioner. As an ethicist, she focuses on embedding human values into AI development, using ethics as a driver for innovation rather than an afterthought. As a practitioner, she designs the operational structures—governance, training, processes—that support the implementation of AI within companies.
Her dual expertise is critical: one side shapes what AI should do, and the other side ensures AI actually operates safely and ethically at scale.
The Reality Behind Responsible AI Frameworks
Many companies have hired consultants to build responsible AI frameworks, often modeled on the NIST AI Risk Management Framework or the EU AI Act. Yet many end up with a checklist and little guidance on how to operationalize it. Olivia explained that Responsible AI frameworks aren't one-size-fits-all: they must be customized to an organization's size, industry, use cases, and maturity level.
Frameworks often focus on different aspects, such as:
Fairness and bias mitigation
Risk and use case assessment
Post-deployment monitoring for model drift and ethical drift
The key is to identify where friction or confusion exists and build frameworks that fit those real-world needs.
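To make the monitoring item concrete, here is a minimal sketch of what post-deployment drift detection can look like, assuming scored baseline data and a window of recent production scores are already being logged; the statistical test, threshold, and stand-in data are illustrative assumptions, not something prescribed in the episode.

```python
# Minimal drift-monitoring sketch (illustrative only): compare a model's
# baseline score distribution against recent production scores with a
# two-sample Kolmogorov-Smirnov test and flag drift past a chosen threshold.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(baseline_scores: np.ndarray,
                recent_scores: np.ndarray,
                p_value_threshold: float = 0.01) -> bool:
    """Return True if recent scores differ significantly from the baseline,
    signalling possible model drift that warrants review."""
    statistic, p_value = ks_2samp(baseline_scores, recent_scores)
    return p_value < p_value_threshold

# Hypothetical usage: scores captured at deployment vs. last week's traffic.
baseline = np.random.default_rng(0).beta(2, 5, size=5_000)  # stand-in data
recent = np.random.default_rng(1).beta(2, 3, size=1_000)    # stand-in data
if check_drift(baseline, recent):
    print("Drift detected: escalate the model for review per the RAI framework.")
```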
Who Should Be on the AI Committee?
Building a Responsible AI framework isn't just a technical problem—it’s a human one. Olivia emphasized that a successful AI committee must include a diverse set of stakeholders:
Chief AI Officer or Chief Technology Officer
Representatives from Legal and Risk Management
Engineers with technical expertise
A dedicated Responsible AI professional to guide best practices
A committee composed only of leadership or only of engineers will have critical blind spots. Balance is essential.
Balancing Innovation and Accountability
A common question among CIOs and CTOs is: How do we innovate quickly without compromising on compliance or risk management?
Olivia's advice: Start with the use case.
Before bringing in the technology, leaders must deeply understand the business value, user needs, and required safeguards. Without that, companies risk investing heavily in projects that don't drive measurable outcomes, or worse, introduce ethical and security risks.
The Values Canvas: Bridging People, Process, and Technology
To help leaders map out these complexities, Olivia created the Values Canvas. Much like a business model canvas, the Values Canvas identifies nine critical impact points across three pillars:
People
Process
Technology
Each element prompts questions around accountability, trust, and value generation—ensuring AI development remains grounded in real-world outcomes, not just technical capabilities.
Why Enterprises Are Stuck at AI Maturity Level 2 or 3
Despite all the hype around AI, many Fortune 500 companies remain stuck in the early phases of the AI maturity model. They’ve experimented with pilots and prototypes but haven’t scaled successfully.
According to Olivia, the missing link is Responsible AI. Responsible AI practices help organizations plan for scaling from day one—setting up strong data governance, documentation, MLOps processes, and adoption pathways to drive real business value.
The Battle of LLMs: Governance and Policy Are Essential
We also discussed the new challenges presented by large language models (LLMs) such as OpenAI's GPT models, Anthropic's Claude, and DeepSeek. Engineers are downloading open models, feeding them sensitive data, and inadvertently leaking company IP.
Olivia’s recommendation:
Implement company-wide LLM usage policies immediately.
Educate employees on what can and cannot be shared with external models (a minimal policy-check sketch follows this list).
Vet LLM platforms for security and privacy risks before use.
Prefer closed, enterprise-controlled environments whenever possible.
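As a concrete illustration of the "what can and cannot be shared" point, here is a minimal pre-submission check a company might run before a prompt leaves its boundary; the patterns, blocked terms, and function names are hypothetical examples, not a specific tool Olivia endorsed, and a real deployment would pair this with proper data-loss-prevention tooling.

```python
# Minimal pre-submission policy check (illustrative only): screen a prompt
# for obviously sensitive content before it is sent to an external LLM.
# The patterns and blocked terms below are hypothetical examples.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API key-like string": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}
BLOCKED_TERMS = {"project aurora", "customer ssn"}  # hypothetical internal terms

def violates_llm_policy(prompt: str) -> list[str]:
    """Return a list of policy findings; an empty list means the prompt may pass."""
    findings = [label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]
    lowered = prompt.lower()
    findings += [f"blocked term: {term}" for term in BLOCKED_TERMS if term in lowered]
    return findings

# Hypothetical usage before forwarding a prompt to an external model.
issues = violates_llm_policy("Summarize the Project Aurora roadmap for jane@corp.com")
if issues:
    print("Prompt blocked by LLM usage policy:", issues)
```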
She pointed out that DeepSeek’s controversies highlight a larger cultural problem: innovation without accountability. Without a strong Responsible AI culture, even the best technical controls will eventually fail.
A Personal Story: When Responsible AI Became Training Data
In a moment of ironic humor, Olivia shared that her own book Responsible AI was pirated and included in Meta’s LLaMA training dataset. While she found the situation amusing rather than upsetting, it perfectly illustrated the importance of ethical practices in AI development—responsibility must be baked into the culture, not added as an afterthought.
Final Thoughts
Responsible AI is not a roadblock to innovation—it’s the bridge that turns pilots into production, experiments into value, and risk into trust.
If you want to connect with Olivia for conferences, workshops, or consulting, visit oliviagambelin.com or reach out to her via LinkedIn.