Article by Stacey Varsani
Founder & Principal Consultant, Hadouken Consulting
In today’s rapidly evolving technological landscape, the reliance on Large Language Models (LLMs) presents both opportunities and challenges. While these models can generate impressive outputs, they are not infallible; they often "hallucinate," producing patterns or information that do not exist or are imperceptible to humans. For instance, I recently asked AI to provide me with a few business case studies, along with sources. The results appeared credible at first glance but, when I looked at the provided sources, I couldn’t find the case studies described and the sources themselves were not well-known or reliable.
This underscores the importance of specifying the need for verifiable information when engaging with AI tools. Without clear instructions, these models may generate content that sounds authoritative but lacks factual accuracy. Even with specified requests, it is crucial to verify and cross-reference the information obtained.
Customer service AI chatbots, in particular, represent a high-risk area. A notable incident occurred in February when an Air Canada chatbot incorrectly informed a customer about a bereavement discount. The airline contended that it bore no responsibility for the erroneous information provided by its chatbot. However, a tribunal ruled against this assertion, emphasizing that the airline is accountable for all content on its website, regardless of whether the information is static or from chatbots. Although this was a singular event, consider the impact on the bottom line if an unauthorised discount was offered to hundreds or thousands of customers before being identified.
These examples highlight the critical importance of governance and risk management as organizations embrace AI technologies. The pace of AI adoption is unprecedented; as of January, ChatGPT’s website had received nearly half a billion visits worldwide. Poe, a platform launched by Quora which offers access to a range of AI chatbots, also has millions of users. The rapid advancement of AI has introduced a range of new security risks, such as data poisoning (the manipulation of model training data) and injection of commands that cause models to execute unauthorised tasks.
To navigate this landscape responsibly, organizations must take proactive steps. Accountability is paramount; companies need a thorough understanding of how their AI models function and make decisions, which can be particularly challenging when utilizing third-party tools. It is essential to be aware of high-risk areas and develop comprehensive action plans. Conducting due diligence on AI suppliers is vital to ascertain how models are trained, the data sources utilized, and measures in place to prevent copyright infringement.
Organizations can exercise greater control over data sources and outputs by creating their own AI models and "grounding" them by training them on specific, controlled datasets. Additionally, employing prompt libraries, which are curated sets of queries tailored to various user groups, can enhance the reliability of AI interactions. Ultimately, human oversight remains crucial in challenging the accuracy of AI-generated outputs.
As companies embark on their AI adoption journeys, they should consider integrating international standards such as ISO 42001 (AI Management System), ISO 23894 (AI Risk Management), and ISO 23053 (AI Systems Using Machine Learning). These standards provide a framework for the responsible and ethical use of AI technologies, addressing areas such as privacy, bias, transparency, and accountability. By adopting these standards, organizations can ensure that their AI systems operate fairly and transparently while upholding ethical principles. Furthermore, these standards facilitate interoperability and compatibility among AI systems, ensuring that technologies can work together seamlessly and exchange data effectively—a critical consideration as AI becomes increasingly embedded across various industries and applications.
Adoption of international standards can also help ensure compliance with regulations such as the EU’s AI Act. CEN and CENELEC, the European Standardization Organizations, have stated that companies that comply with international standards, such as ISO 42001, will be able to leverage their compliance to meet the relevant provisions of the AI Act.
It is important that AI suppliers themselves also integrate standards in their journey to expand globally and build trust. Certification to international standards demonstrates commitment to responsible AI development, which is crucial for gaining the confidence of consumers, regulators, and partners. Competing with tech giants in the AI space is challenging, but international standards are a way for smaller companies to differentiate themselves and build a reputation for reliability and integrity.
In conclusion, the integration of AI into business operations offers significant potential for innovation and efficiency. However, it also demands a rigorous approach to governance and risk management. By prioritizing accountability, conducting thorough due diligence, and adhering to international standards, organizations can navigate the complexities of AI adoption. This proactive stance not only mitigates risks but also positions companies to leverage AI technologies responsibly, fostering trust and enhancing their competitive edge.
To learn more about how your organization can effectively integrate AI while ensuring compliance and building trust, get in touch with us today. If you are an AI supplier, we can also help you differentiate your business through adoption of standards. Our team of experts is ready to help you implement best practices tailored to your needs.
Comments