Compliance doesn’t happen overnight. As technology evolves, it often brings new risks and opportunities, in turn necessitating new standards, as is currently the case with artificial intelligence (AI). As organizations look to demonstrate their responsible use of AI, customers are asking how they can validate the checks and balances that the vendor organization has put into place. While some current reporting criteria offer ways to highlight the steps companies have taken toward the responsible implementation and use of AI, those criteria alone may not fully address AI-related risks.
For example, the new ISO 42001 certification is in its infancy, and organizations may not have reached the necessary maturity level to attain full compliance. This in-between stage can leave organizational leaders in a quandary when deciding how best to inform stakeholders and regulators about the current state of AI within their company. Still, even without attaining full maturity related to AI governance, companies can benefit from demonstrating to stakeholders how they are engaging in responsible AI development, deployment, and use.
The Need for Effective AI Controls
The rate of AI proliferation has prompted organizations to evaluate the risks that come with the technology, as well as ways to develop, deploy, and use it responsibly. In turn, this has led companies to assess the areas that AI directly affects and to implement guardrails around it. The Trust Services Criteria underlying SOC 2 reports could be a starting point for companies considering their AI controls; however, these categories (security, confidentiality, availability, processing integrity, and privacy) provide only a high-level view of AI.
To better understand the necessary controls and how to properly establish guardrails around AI, organizations must dig deeper and ask pertinent questions about the technology’s use. Some of these questions include:
- Where is the AI being used in the organization, and does the organization have a full understanding of its exposure to AI?
- What risks are associated with the AI systems that are deployed and in use?
- What is the state of the organization’s AI data governance?
- How is the organization measuring the accuracy of AI outputs relative to their risk and impact?
- How is the organization monitoring model drift? (A brief illustration follows this list.)
- How does the organization guard against confidentiality and privacy risks?
- How is logical access to AI systems managed to ensure permissions are limited to the appropriate people, services, and technologies?
- Who in the organization owns responsibility for compliance, and who owns responsibility for risk mitigation?
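To make the drift question above concrete, the sketch below shows one common monitoring technique, the Population Stability Index (PSI), which compares the distribution of a model’s live inputs or scores against a training-time baseline. This is a minimal illustration in Python assuming NumPy; the function name, thresholds, and synthetic data are illustrative assumptions, not requirements of ISO 42001 or SOC 2.

```python
import numpy as np

def population_stability_index(reference, live, bins=10):
    """Population Stability Index between a baseline sample and a live sample.

    A common rule of thumb: PSI < 0.1 suggests a stable population,
    0.1-0.25 a moderate shift, and > 0.25 significant drift worth
    investigating under the organization's AI governance policy.
    """
    # Fix the bin edges from the reference data so both samples are
    # measured against the same baseline.
    edges = np.histogram_bin_edges(reference, bins=bins)

    # Clip live values into the reference range so out-of-range
    # observations count toward the edge bins instead of being dropped.
    live = np.clip(live, edges[0], edges[-1])

    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)

    # Convert counts to proportions; a small floor avoids division by
    # zero in sparsely populated bins.
    eps = 1e-6
    ref_pct = np.maximum(ref_counts / ref_counts.sum(), eps)
    live_pct = np.maximum(live_counts / live_counts.sum(), eps)

    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    reference = rng.normal(0.0, 1.0, size=10_000)  # e.g., training-time model scores
    live = rng.normal(0.7, 1.2, size=2_000)        # shifted production scores

    psi = population_stability_index(reference, live)
    print(f"PSI = {psi:.3f}")
    if psi > 0.25:
        # Hypothetical escalation hook; the actual response belongs in
        # the organization's documented AI governance procedures.
        print("Significant drift detected; escalate per the AI governance policy.")
```

In practice, a check like this would run on a schedule against production telemetry, with thresholds and escalation paths documented as part of the controls a SOC 2+ report can describe.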
It’s equally imperative to assess the impact that an organization’s AI has on people and society at large. ISO 42001 addresses these issues by examining more nuanced criteria, such as bias within AI models, fairness of outputs, safety concerns, and the ethical use of AI.
Showing Responsible AI Use
Working toward ISO 42001 certification is a process that many organizations may not be prepared for yet. In many cases, companies are already on their way to meeting the new standard’s requirements, but those efforts aren’t reflected in current reporting. Rather than taking an all-or-nothing approach and waiting until the company is fully ISO 42001 compliant, organizations can instead produce a SOC 2+ report, which follows the same broad categories as SOC 2 but allows additional information about AI controls to be presented in a more focused manner.
This can include information regarding data governance, privacy standards and impact assessments, and related controls the company has established for AI use. Additionally, the company can use elements of other frameworks as a guide in SOC 2+ reports to demonstrate its advances in responsible AI development, deployment, or use. For instance, if the organization has not yet reached a mature state of ISO 42001 compliance, it can list the criteria it has met when producing the report. Doing so sends a clear message to stakeholders and customers that the company is actively engaged in building and maintaining leading AI practices.
Building Toward Compliance
Generative AI technology is itself in its early stages, and staying current with it will require ongoing education as enterprise adoption grows. AI will bring new challenges and concerns that enterprises and standard-setting bodies will need to address. As such, organizations should expect customers and stakeholders to hold them accountable through more formal responses in the form of certifications and/or attestation reports.
Learn how BDO can help your organization display its responsible use of AI and address any questions about ISO 42001, SOC, and other compliance frameworks.