From Policy to Practice: How to Operationalize AI Governance

by Kristie Grinnell

Artificial intelligence is the defining platform shift of our time. For partners across the technology ecosystem, it represents a tremendous opportunity to drive innovation, unlock operational efficiencies and create new streams of recurring revenue.  

However, in our fourth annual Direction of Technology report, nearly half of partners shared that customer skepticism around transparency, security and accuracy is a major hurdle to AI adoption. 

IT leaders are experiencing similar challenges within their own organizations: Recent research from Gartner found that only 23% of IT leaders are very confident in their organization’s ability to manage security and governance components when rolling out GenAI tools. 

Without guardrails, AI can undermine long-term value and introduce legal, ethical and security risks. Strong day-to-day AI governance is essential, and the organizations that thrive will treat risk management as an ongoing operational discipline, not just a one-time effort. 

With that in mind, here are insights IT leaders can use to build and strengthen their own AI governance models. 

Pressing Start: Your Responsible AI Policy 

Responsible AI refers to using AI in a way that serves beneficial purposes within business operations while carefully identifying and mitigating associated risks. A strong responsible AI policy should guide how AI is used across your organization and include core sections that cover the following principles: 

  1. Values-Driven Approach to AI: Align AI use to organizational values, prioritizing ethical practices and human oversight. 
  2. AI Core Principles: Outline ethical standards for AI use to ensure secure and consistent performance. 
  3. Applying Organizational Values to AI Use: Set expectations for day-to-day AI use, including data protection, approved technologies and compliance with internal and external guidelines. 
  4. Evaluating and Managing AI Risks: Define how AI risks are assessed, mitigated and monitored. 
  5. AI Development Environment: Provide a secure environment to test and build GenAI solutions. 

Structuring a policy this way allows it to be scalable while remaining flexible enough to evolve as AI innovation accelerates.  

It’s essential to remember that responsible AI is every employee’s responsibility, starting at the top with visible executive sponsorship of policy initiatives. At TD SYNNEX, our CEO and I lead this charge by sharing with co-workers actionable ways we use AI and how we manage its risks. To build true AI fluency across an organization, employees should have a clear understanding of the policy and be required to acknowledge that they can apply its guardrails in their work. 

RAMPing Up: Policy in Action 

With clear expectations outlined in the policy, the foundation for responsible AI is in place. To make those expectations operational, it’s essential to have a mechanism for evaluating which AI tools, solutions, platforms and LLMs align with the policy.  

At TD SYNNEX, we have our AI Risk Assessment Management Process (RAMP) to do just that. 

The AI RAMP involves a group of subject-matter experts from across the business, including representatives from IT infrastructure, cybersecurity, data privacy, legal, ethics and compliance, and other relevant risk areas, so AI resources are evaluated holistically. 

To initiate review, co-workers submit a risk assessment form that captures use case details and assigns an initial risk level, which determines next steps: 

  • Low-risk initiatives are approved automatically to keep safe innovation moving quickly.  
  • Medium-risk initiatives are reviewed by IT, which confirms the risk level and determines appropriate mitigations. 
  • High-risk initiatives are taken to the full group of cross-functional experts, who evaluate the tool or solution against company requirements in their respective areas. 

No matter the outcome, all evaluated tools and solutions are added to an internal AI catalog, creating clarity and transparency for all co-workers around each one’s approval status. 

By pairing a robust AI policy with a consistent review process, expectations around AI use are standardized while experimentation is still encouraged. 

The Test Drive: What to Keep in Mind When Guardrails Meet Creativity 

Responsible AI governance should be consistent enough to follow, flexible enough to support experimentation and adaptable to changes in technology and regulation. 

To strike this balance, we’re investing in security to keep experimentation safe as adoption scales. Our practical approach is to establish a clear “test-to-production” pathway that includes controlled environments where teams can evaluate emerging LLMs and AI tools before deploying them widely. One way we’re doing this is through our internal AI Lab environments, which are available to all TD SYNNEX co-workers and allow them to compare models side by side for performance, cost and capabilities with minimal resources and time invested. 
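A lab-style side-by-side comparison can be sketched generically. This is a hypothetical harness, not the AI Lab itself: each model is represented by a plain callable so the example stays self-contained, the word-count token estimate is deliberately crude, and the per-1K-token prices are made-up illustrative figures.

```python
import time
from typing import Callable

def compare(models: dict[str, Callable[[str], str]],
            prompt: str,
            price_per_1k_tokens: dict[str, float]) -> list[dict]:
    """Run one prompt through each model, recording latency and rough cost."""
    results = []
    for name, generate in models.items():
        start = time.perf_counter()
        output = generate(prompt)
        latency = time.perf_counter() - start
        tokens = len(output.split())  # crude token estimate for illustration
        results.append({
            "model": name,
            "latency_s": round(latency, 4),
            "est_cost": round(tokens / 1000 * price_per_1k_tokens[name], 6),
            "output": output,
        })
    # Cheapest-first ordering makes the cost trade-off easy to scan.
    return sorted(results, key=lambda r: r["est_cost"])

# Stand-in "models" (a real lab would call actual LLM endpoints here).
models = {
    "model-a": lambda p: "short answer",
    "model-b": lambda p: "a much longer and more detailed answer to " + p,
}
report = compare(models, "summarize our AI policy",
                 {"model-a": 0.5, "model-b": 2.0})
print(report[0]["model"])  # model-a
```

Even this toy version captures the value of a controlled test environment: the same prompt, measured the same way, across every candidate, before anything reaches production.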

As technology and regulation evolve, AI governance has to evolve with them, and it’s a challenge we’re tackling together as an industry. It’s essential to stay adaptable and remember the strongest outcomes are achieved when governance is built into everyday decisions and processes, empowering organizations to move faster without sacrificing trust or security. 
