Just as a ship’s captain must navigate shifting currents, unpredictable weather, and complex routes to reach a safe harbor, AI governance is about steering AI initiatives through evolving ethical, legal, and operational challenges. More than technical expertise, it requires clear leadership, strong communication, and coordinated collaboration. Much like guiding a vessel across the open sea, governing AI calls for foresight, responsibility, and trust to ensure safe, ethical, and effective outcomes.
Defining AI Governance
The term “governance” comes from the Greek word “κυβερνάω” (kybernáō), which means exactly that: “to steer (a boat).” So, in the context of business, governance is about intentionally guiding an organization toward a certain destination.
When we apply this to AI, or technology more broadly, we can answer the question “What is AI governance?”: it is the coordinated actions an organization takes to responsibly and ethically guide and control the development and use of AI technologies.
And that covers a lot of ground.
AI governance isn’t just risk management, compliance, rule-making, or the implementation of a single project or standard. Instead, it involves:
- Setting a clear direction for your AI initiatives;
- Making informed decisions and actively steering those initiatives;
- Ensuring that everyone is accountable for their role;
- Providing clarity through well-defined policies and rules;
- Systematically and structurally managing potential risks;
- Viewing governance as an ongoing, dynamic process;
- Recognizing that regulations might necessitate specific governance practices; and
- Using standards to make implementation practical.
Foundational Elements of AI Governance
No matter which approach or framework you use, effective AI governance is built on these essential elements:
Organizational Context
- Your AI strategy: What are you trying to achieve with AI?
- Your AI objectives: What are your specific goals?
- Relevant regulations: What laws and rules apply to you?
- Ethical guidelines: What are your principles for responsible AI?
Inventory and Status
- A clear inventory of your AI systems: What AI tools are you using or developing?
- Data sources: Where is your data coming from?
- Information assets: What data do you have?
- AI-related activities: What are you doing with AI?
- Data flows: How is data moving through your systems?
- Categorization: How are you classifying your AI systems and data?
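To make the inventory element concrete, here is a minimal sketch of what a single inventory record might capture, assuming a simple Python data structure. The field names and risk categories are illustrative assumptions, not a prescribed schema; adapt them to your own classification scheme and the frameworks you follow.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    # Illustrative categories only; align these with your own
    # classification scheme (e.g., the EU AI Act risk levels).
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative fields)."""
    name: str                    # e.g., a short, recognizable system name
    owner: str                   # accountable team or role
    purpose: str                 # what the system is used for
    data_sources: list[str] = field(default_factory=list)  # where the data comes from
    data_flows: list[str] = field(default_factory=list)    # how data moves between systems
    risk_category: RiskCategory = RiskCategory.MINIMAL     # your categorization of the system
    in_production: bool = False


# Example usage: registering one system in the inventory.
inventory = [
    AISystemRecord(
        name="Customer support chatbot",
        owner="Customer Experience team",
        purpose="Answer routine support questions",
        data_sources=["support ticket history", "product documentation"],
        data_flows=["CRM -> chatbot", "chatbot -> analytics warehouse"],
        risk_category=RiskCategory.LIMITED,
        in_production=True,
    )
]
```

Even a lightweight record like this makes the later elements easier: roles, risk assessments, and monitoring can all point back to a single, categorized entry per system.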
Roles and Responsibilities
- Accountability: Who is responsible for what?
- Governance roles: Who is in charge of AI governance?
- Internal stakeholders: Who within the organization needs to be involved?
- External stakeholders: Are there external parties you need to consider?
- Communication: How will you keep everyone informed?
- Training and awareness: How will you educate your staff about AI governance?
Policies and Procedures
- Transparency: How open and understandable are your AI systems?
- Rules and guidelines: What are the dos and don’ts of AI usage?
- Development and usage protocols: How should AI systems be built and used?
- Incident response plans: What will you do if something goes wrong?
- Terminology management: How will you ensure everyone is using the same language?
- Governance processes: How will you manage and oversee AI governance?
- Operational processes: How will AI governance fit into your daily operations?
- Documentation practices: How will you keep records of everything?
Risk and Opportunity Management
- Identifying and evaluating potential risks and opportunities: What could go wrong, and what are the potential benefits?
- Risk mitigation: How will you prevent or reduce risks?
- Opportunity anticipation: How can you maximize the benefits of AI?
- Planning: How will you plan your risk and opportunity management activities?
- Execution: How will you carry out your plans?
- Monitoring and updating: How will you track your progress and make changes as needed?
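As a lightweight illustration of how identified risks might be tracked, a risk register entry could look like the sketch below. The likelihood-times-impact scoring and the field names are assumptions made for illustration; use whatever scoring method and review cadence your risk framework prescribes.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class RiskEntry:
    """One row in an AI risk register (illustrative fields only)."""
    description: str   # what could go wrong
    likelihood: int    # 1 (rare) to 5 (almost certain)
    impact: int        # 1 (negligible) to 5 (severe)
    mitigation: str    # how the risk is prevented or reduced
    owner: str         # who is accountable for the mitigation
    next_review: date  # when the entry is re-evaluated

    @property
    def score(self) -> int:
        # A simple likelihood x impact score; replace this with whatever
        # scoring method your risk framework prescribes.
        return self.likelihood * self.impact


# Example usage: sort the register so the highest-scoring risks come first.
register = [
    RiskEntry(
        description="Chatbot exposes personal data in its responses",
        likelihood=2,
        impact=5,
        mitigation="Output filtering and periodic red-team testing",
        owner="Data Protection Officer",
        next_review=date(2026, 6, 1),
    ),
]
register.sort(key=lambda entry: entry.score, reverse=True)
```

The same structure can be mirrored for opportunities: swap the mitigation field for an action plan and use the score to prioritize where AI can deliver the most benefit.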
Monitoring, Evaluation, and Adjustment
- Governance oversight: Who is overseeing the AI governance process?
- Performance tracking: How will you measure the effectiveness of your AI systems?
- Impact assessment of AI systems: What are the effects of your AI systems?
- Auditing: How will you check that everything is being done correctly?
- Continuous improvement processes: How will you make your AI governance better over time?
Choosing the Right Framework
There are many frameworks available to help you implement AI governance and ensure ethical and responsible AI. Here are a few examples:
- EC HLEG Ethics Guidelines (2019): These are the Ethics Guidelines for Trustworthy Artificial Intelligence, developed by the European Commission’s High-Level Expert Group on AI.
- ISO 42001 (2023): This is an international standard that provides requirements for an Artificial Intelligence Management System (AIMS), created by the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC).
- NIST AI RMF (2023): This is the AI Risk Management Framework, developed by the National Institute of Standards and Technology (USA). Tip: Be sure to check out the Playbook.
- OECD AI Principles (2019, updated in 2024): These are principles for trustworthy AI, provided by the Organisation for Economic Co-operation and Development (OECD).
The best framework for you will depend on your specific needs; some are deliberately flexible and can be drawn on as a menu of practices rather than followed end to end. When selecting a standard like ISO 42001 or NIST’s AI RMF, consider factors such as attestation, recognition, and legal implications. You can also combine both frameworks, as we explain in our ISO 42001 vs NIST AI RMF resource.
The EU AI Act and Harmonized Standards
What is AI governance under the EU AI Act, you may ask? It’s crucial to understand that AI governance matters even if you have no direct obligations under the European AI Act. That said, the AI Act may impose some governance requirements on you (check our EU AI Act Key Takeaways resource). Your responsibilities under the EU AI Act will vary depending on your role, e.g., whether you provide AI products or use/deploy them, and on the risk level associated with a particular AI system and its use (check our blog post to decide if your system qualifies as an AI system).
For example, if you provide a high-risk AI system (we’ll have an article on AI system risk classification soon!), you’ll need to demonstrate compliance with the requirements from Chapter III, Section 2:
- Art. 9 Risk management system
- Art. 10 Data and data governance
- Art. 11 Technical documentation
- Art. 12 Record-keeping
- Art. 13 Transparency and provision of information to deployers
- Art. 14 Human oversight
- Art. 15 Accuracy, robustness, and cybersecurity
In this case, the AI Act mandates governance, but it doesn’t dictate a specific approach or framework. However, under Article 40, high-risk AI systems that conform to harmonized standards (which CEN/CENELEC is still developing) are presumed to comply with the requirements of Chapter III, Section 2, to the extent those standards cover them.
It is generally understood that the work that went into ISO 42001 feeds into the upcoming harmonized standard, and implementing it is expected to address a significant portion of the requirements.
Ultimately, AI governance is about taking a proactive and strategic approach to AI. It’s about building a foundation for trust, accountability, and innovation. Organizations that prioritize AI governance are better positioned to not only avoid potential pitfalls but also to unlock the full potential of AI in a sustainable and ethical manner.
Ready to start your AI governance journey?
Contact us today to learn more about AI governance.