Understanding the Risks of AI
As with any new technology, adopting AI carries inherent risks, which makes effective governance a vital component of any AI strategy. Planning now for how you will address those risks can help your function prepare for the future.
Here are three ethical concerns related to emerging generative AI that internal audit teams must consider.
Bias and Discrimination
While AI itself has no biases, it can absorb any implicit or explicit biases present in its training data, including those related to race, sex, gender, age, religion, geographic location, and more. A model cannot account for perspectives it has never been exposed to. Training an AI on diverse data sets is therefore critical and can help reduce the risk of producing a biased algorithm.
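As a first step, a team might simply measure how well each group is represented in the training data before any model is built. The sketch below is a minimal illustration using pandas; the file name, column name, and benchmark shares are all hypothetical.

```python
import pandas as pd

# Hypothetical training set with a demographic column; the file and column
# names are placeholders for illustration.
df = pd.read_csv("training_data.csv")

# Assumed benchmark shares for the population the model will serve.
reference = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

# Share of each group actually present in the training data.
observed = df["demographic_group"].value_counts(normalize=True)

for group, expected in reference.items():
    share = observed.get(group, 0.0)
    flag = "UNDERREPRESENTED" if share < 0.8 * expected else "ok"
    print(f"{group}: {share:.1%} in training data vs {expected:.1%} expected -> {flag}")
```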
To further manage bias and discrimination risks while maximizing the benefits of incorporating AI, organizations should keep several approaches in mind:
- Because AI bias originates from people, businesses can address it just as they address human bias — through training. Expanding existing employee bias training and awareness initiatives to include the potential impact of AI could go a long way in preventing AI bias.
- Routine testing of data sources and trained models can help identify undetected patterns that could cause AI bias (see the sketch after this list).
- There is no set-and-forget solution. AI bias can be introduced at any point in the creation or integration process, and companies must establish a regular cadence of review as models are updated and inputs change.
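One concrete form that routine testing can take is a disparate-impact check on model outcomes. The sketch below applies the widely used four-fifths screening heuristic to a hypothetical set of decisions; the data, column names, and threshold are illustrative, not a compliance standard.

```python
import pandas as pd

# Hypothetical model decisions: one row per case, with the subject's group
# and whether the model's recommendation was favorable.
results = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Selection rate per group, then each rate as a ratio of the highest rate.
rates = results.groupby("group")["approved"].mean()
impact_ratios = rates / rates.max()

# Common screening heuristic: impact ratios below 0.8 merit human review.
for group, ratio in impact_ratios.items():
    status = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rates[group]:.0%}, impact ratio {ratio:.2f} -> {status}")
```

A check like this cannot prove or disprove bias on its own, but running it on a regular cadence makes drift visible as models are updated and inputs change.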
Responsibility and Accountability
When using AI to help make decisions for your organization, whom do you hold accountable if something goes wrong? Is the person who trained the AI responsible for the decisions it makes? The internal audit team of the future will inevitably need to answer these questions.
Accountability considerations also encompass questions about the rationale behind an AI’s decisions. Generative LLMs are known to produce “hallucinations”: plausible-sounding output that is false, nonsensical, or fabricated.
Internal audit teams should approach these issues proactively by setting clear rules for who is responsible for what the AI does as part of the internal audit process. Implementing an AI governance framework that defines each stakeholder's roles and responsibilities helps create a transparent, accountable system, and teams should keep the key areas of Responsible AI in mind when shaping that framework.
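To make the idea concrete, one lightweight accountability mechanism is a decision record that ties every AI-assisted output to a specific model version and a named human reviewer. The sketch below is a minimal illustration; the record fields are assumptions, not part of any standard framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative record of a single AI-assisted decision. Field names are
# hypothetical; the point is that accountability metadata is captured every time.
@dataclass
class AIDecisionRecord:
    model_name: str        # which model produced the output
    model_version: str     # exact version, so the result can be reproduced
    prompt_summary: str    # what the model was asked to do
    output_summary: str    # what it returned
    human_reviewer: str    # the named person accountable for accepting the output
    accepted: bool         # whether the reviewer signed off
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = AIDecisionRecord(
    model_name="audit-assistant",
    model_version="2024.1",
    prompt_summary="Summarize Q3 expense anomalies",
    output_summary="Flagged 14 transactions above threshold",
    human_reviewer="j.smith",
    accepted=True,
)
print(record)
```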
Privacy and Data Security
Because integrating AI requires enormous quantities of user data, it raises legitimate concerns about the security of that information. Internal audit teams must establish robust security measures that prevent sensitive personal data from ending up in the wrong hands.
By building privacy protections into the AI development process from the start, you can safeguard data at every stage. This precaution is especially important for applications like payroll, where user data cannot be anonymized.
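One practical protection is to redact obvious identifiers before any text reaches a model. The sketch below is a minimal example using regular expressions; the patterns are illustrative, and a real deployment would use a far more thorough PII detector.

```python
import re

# Illustrative patterns only; these catch a few obvious identifier formats
# and are nowhere near an exhaustive PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Employee jane.doe@example.com (SSN 123-45-6789) reported a payroll discrepancy."
print(redact(prompt))
# -> Employee [EMAIL] (SSN [SSN]) reported a payroll discrepancy.
```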
By leveraging an in-house GPT system, you keep AI workloads within your existing, secure corporate IT environment, helping to safeguard against risk. Some organizations may also consider creating separate LLMs by department to keep data private.
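In practice, “in-house” often means pointing a standard client library at a self-hosted endpoint rather than a public service; many self-hosted LLM servers expose an OpenAI-compatible API. The URL, key, and model name below are placeholders for whatever your own deployment provides.

```python
from openai import OpenAI

# Assumes a self-hosted, OpenAI-compatible endpoint inside the corporate
# network. All values below are placeholders, not real services.
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # internal endpoint, not a public API
    api_key="internal-key",                          # credential managed by corporate IT
)

response = client.chat.completions.create(
    model="internal-audit-llm",
    messages=[{"role": "user", "content": "Summarize the control gaps in this process narrative: ..."}],
)
print(response.choices[0].message.content)
```

Because requests never leave the corporate network, prompts containing sensitive audit material are not shared with an external provider.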