Article by Stacey Varsani
Founder & Principal Consultant, Hadouken Consulting

I’ve recently published several articles that mention the critical importance of AI governance. Now I would like to address what this looks like in practice, drawing on guidance from ISO/IEC 42001:2023—the international standard for AI management systems.
AI projects hold immense promise, but they also come with unique challenges that need to be managed carefully. As organizations increasingly integrate AI into their workflows—whether to enhance efficiency or drive transformative change—it’s essential to recognize potential pitfalls and address them proactively.
1. Inappropriate Algorithm or System
One of the most critical decisions in an AI project is choosing the right algorithm or system. An ill-suited choice can lead to inefficiencies, misaligned outputs, or even complete project derailment.
Actions: Collaborate closely with both AI specialists and domain experts to thoroughly evaluate your options. This interdisciplinary approach ensures that the chosen AI system aligns with your organization’s needs and operational realities. ISO/IEC 42001 emphasizes the importance of clearly defining the objectives of the AI system and performing comprehensive risk and impact assessments, including ethical, regulatory, scalability and adaptability considerations.
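One lightweight way to operationalize such a risk and impact assessment is a simple risk register ranked by likelihood and impact. The sketch below is purely illustrative (the risk names and 1–5 scales are assumptions, not part of ISO/IEC 42001 itself):

```python
def prioritize_risks(risks):
    """Rank AI project risks by likelihood x impact (1-5 scales), highest first.

    Each risk is a dict with 'name', 'likelihood', and 'impact' keys.
    """
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

# Example register with invented entries covering ethical, regulatory,
# and scalability considerations
register = [
    {"name": "biased outputs",        "likelihood": 4, "impact": 5},
    {"name": "regulatory gap",        "likelihood": 2, "impact": 5},
    {"name": "cannot scale to load",  "likelihood": 3, "impact": 3},
]
ranked = prioritize_risks(register)
```

Even a basic ranking like this forces the interdisciplinary team to make its assumptions explicit and agree on which risks deserve mitigation first.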
2. Poor Data Quality
AI systems are only as good as the data they are trained on. Poor-quality or insufficient data can lead to biased or unreliable outputs, undermining the entire project.
Actions: Conduct a thorough analysis of your data needs before embarking on the project. Ensure that your data is not only abundant but also diverse, clean, and representative of the problem you aim to solve. This aligns with ISO/IEC 42001’s recommendation for robust data quality management practices, which are foundational to effective AI deployment.
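As a rough illustration of what a pre-project data audit can start with, the sketch below checks three of the basics—completeness, duplication, and class balance. Field names and thresholds are hypothetical; real audits would go much further (bias, provenance, labeling accuracy):

```python
from collections import Counter

def data_quality_report(records, required_fields, label_field):
    """Basic pre-project data checks: completeness, duplicates, label balance."""
    report = {}
    # Completeness: share of records with every required field present
    complete = [r for r in records
                if all(r.get(f) is not None for f in required_fields)]
    report["completeness"] = len(complete) / len(records)
    # Duplicates: identical records inflate apparent data volume
    unique = {tuple(sorted(r.items())) for r in records}
    report["duplicate_rate"] = 1 - len(unique) / len(records)
    # Representativeness proxy: distribution of the target label
    report["label_distribution"] = dict(Counter(r[label_field] for r in complete))
    return report
```

Running a report like this before committing to a project surfaces data gaps while they are still cheap to fix.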
3. Data Security and Compliance Issues
Ensuring that data used in AI systems is secure and compliant with regulations is vital to avoid legal and reputational risks.
Actions: Establish strong data governance practices, including encryption, access controls, and compliance checks. ISO/IEC 42001:2023 emphasizes the importance of protecting sensitive information and maintaining regulatory compliance throughout the AI project lifecycle.
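To make "access controls" concrete, here is a minimal sketch (with invented roles and permissions) of the kind of role-based access check and pseudonymized audit trail such governance implies—production systems would use a proper identity provider rather than hand-rolled code:

```python
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = []

# Hypothetical role-to-permission mapping
ROLE_PERMISSIONS = {
    "data_scientist": {"read_training_data"},
    "ml_engineer":    {"read_training_data", "deploy_model"},
}

def check_access(user, role, action, resource):
    """Allow or deny an action, recording every attempt in an audit log."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        # Pseudonymize the user identifier before logging it
        "user": hashlib.sha256(user.encode()).hexdigest()[:12],
        "action": action,
        "resource": resource,
        "allowed": allowed,
    })
    return allowed
```

The audit log—with pseudonymized identities—gives compliance reviewers a trail of who attempted what, without the log itself becoming a store of sensitive data.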
4. Fears and False Expectations
AI is often surrounded by a mix of fear and overblown expectations. Employees may worry about job displacement, while leaders might expect AI to deliver instant, groundbreaking results.
Actions: Transparent communication is key. Educate your teams about the scope and limitations of AI, and position it as a tool to enhance, not replace, human roles. By fostering a realistic view of AI’s capabilities, you can mitigate fear and prevent the disillusionment that often accompanies unmet expectations. ISO/IEC 42001 underscores the importance of stakeholder engagement, particularly in relation to the functionality and limitations of AI systems.
5. Lack of Expertise
A gap in expertise—whether in AI, domain knowledge, or project management—is a recipe for failure. Projects often flounder when teams work in silos without cross-functional collaboration.
Actions: Build a multidisciplinary team where AI specialists and domain experts work hand in hand. Their collaboration ensures that technical capabilities are grounded in real-world relevance. ISO/IEC 42001 highlights the importance of team competency and continuous skill development, encouraging organizations to invest in training and knowledge sharing.
6. Performance Degradation
AI projects require ongoing oversight and management to ensure that they remain relevant, ethical, and effective over time. Neglecting this can lead to performance degradation or ethical lapses.
Actions: Implement a robust lifecycle management framework, as recommended by ISO/IEC 42001. Regularly monitor system performance and manage changes in AI behaviour through reviews, impact assessments, output validation, and updates to safety controls. Ensure the system stays aligned with evolving business goals.
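One common building block for this kind of monitoring is a drift check that compares the live distribution of model inputs or outputs against a baseline. The sketch below uses the Population Stability Index (PSI), a standard drift metric; the 0.2 review threshold is a conventional rule of thumb, not an ISO/IEC 42001 requirement:

```python
import math

def psi(expected, observed, eps=1e-6):
    """Population Stability Index between two binned distributions.

    Both inputs are lists of bin proportions that each sum to ~1.0.
    """
    score = 0.0
    for e, o in zip(expected, observed):
        e, o = max(e, eps), max(o, eps)  # avoid log(0) on empty bins
        score += (o - e) * math.log(o / e)
    return score

def needs_review(baseline_bins, live_bins, threshold=0.2):
    """Flag the model for human review when drift exceeds the threshold."""
    return psi(baseline_bins, live_bins) > threshold
```

Wiring a check like this into scheduled reviews turns "monitor system performance" from a good intention into a repeatable control.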
7. Lack of Alignment with Sustainability Goals
Organizations must address the impact of AI on the environment and ensure that systems align with broader sustainability goals. Neglecting the environmental impact of AI projects can lead to regulatory non-compliance, reputational damage, and long-term viability risks.
Actions: ISO/IEC 42001 encourages organizations to assess the resource consumption of AI systems, including energy and materials, and implement strategies to mitigate negative environmental impacts, such as optimizing algorithms for energy efficiency and disposing of AI hardware in a sustainable way.
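Assessing resource consumption can start with a back-of-the-envelope estimate of training energy. The sketch below is deliberately rough—the default wattage, datacentre overhead (PUE), and grid carbon intensity are illustrative assumptions that should be replaced with figures from your own infrastructure and energy provider:

```python
def training_footprint(gpu_hours, gpu_watts=300, pue=1.5,
                       grid_kg_co2_per_kwh=0.4):
    """Rough estimate of training energy and emissions.

    gpu_hours            -- total accelerator hours for the training run
    gpu_watts            -- assumed average power draw per accelerator
    pue                  -- datacentre Power Usage Effectiveness overhead
    grid_kg_co2_per_kwh  -- assumed carbon intensity of the local grid
    """
    energy_kwh = gpu_hours * gpu_watts / 1000 * pue
    return {"energy_kwh": energy_kwh,
            "co2_kg": energy_kwh * grid_kg_co2_per_kwh}
```

Even a crude estimate like this lets teams compare candidate architectures on environmental cost, not just accuracy, before committing compute.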
Key Takeaways for Success
To set your AI project up for success, keep these principles in mind:
Prioritize thorough planning, including risk and impact assessments.
Focus on data quality and relevance.
Secure your data and maintain compliance with relevant regulations.
Foster open communication to manage expectations and alleviate fears.
Build a collaborative, skilled team that bridges technical and functional expertise.
Implement lifecycle management and continuous monitoring.
Proactively address the environmental impact of AI systems.
AI is a powerful tool, but its success depends on thoughtful implementation and adherence to best practices. By following the guidelines above and leveraging insights from ISO/IEC 42001:2023, you can turn potential pitfalls into success factors for a transformative project.
If your organization is embarking on an AI journey and needs expert guidance to navigate these challenges, we’re here to help. Reach out today to discuss how we can work together to make your AI initiatives deliver maximum value.