The Top AI Risks in the Nonprofit Sector

Nonprofits are increasingly turning to artificial intelligence (AI) to help support understaffed teams, fill resource gaps, and streamline processes. While AI has demonstrated its value in back-office applications, such as finance, accounting, and HR, operational and programmatic initiatives are emerging areas for nonprofit AI. As more nonprofits explore AI’s capabilities, it’s imperative to balance AI innovation with careful risk management. Failure to do so can jeopardize an organization’s operations, stakeholder relationships, reputation, and overall mission.

Whether you’re in the early stages of adoption or have mature AI capabilities, it’s critical to understand the top AI risks facing the nonprofit sector, as well as how to use AI in a way that is ethical and mission-aligned. Nonprofits need to be aware of common vulnerabilities and how to protect beneficiaries, donors, and their organizations. 


Data Security

More nonprofits are leveraging AI for donor management and engagement. These use cases typically involve feeding donor data into AI tools that analyze first-party and third-party data sources to identify which donor profiles to target and to surface donor communication patterns and preferences.

While nonprofits may be eager to leverage free or low-cost AI tools to support their teams, these tools often lack robust security protocols. To protect donors’ personal information, nonprofits need to familiarize themselves with the data security policies and procedures of any third-party tool and obtain informed consent for any personal donor information they feed into these systems. Establishing a formal review policy for third-party tools, one that vets vendors’ data security practices, ensures compliance with privacy laws, and provides regular staff and volunteer training, is key to maintaining the organization’s data security.

Nonprofits that do not have access to dedicated AI fundraising tools must ensure their employees are not experimenting on their own by entering donor information into publicly available AI programs. Nonprofits must also maintain an up-to-date inventory of all AI tools in use and communicate clear guidelines about what data can and cannot be shared.
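As one way to operationalize such an inventory, the short Python sketch below models a simple AI tool register. The tool names, vendors, and fields are hypothetical assumptions for illustration, not a prescribed standard, and a shared spreadsheet can serve the same purpose.

  # A minimal sketch of an AI tool inventory. All tool names, vendors,
  # and field choices are hypothetical; adapt them to your organization.
  from dataclasses import dataclass, field

  @dataclass
  class AITool:
      name: str
      vendor: str
      approved: bool  # passed the formal vendor review?
      allowed_data: list[str] = field(default_factory=list)     # data staff may enter
      prohibited_data: list[str] = field(default_factory=list)  # data that must never be entered

  inventory = [
      AITool("Example Donor Insights", "ExampleVendor Inc.", approved=True,
             allowed_data=["aggregated giving history"],
             prohibited_data=["donor names", "contact details", "payment data"]),
      AITool("Free Chat Assistant", "Unknown", approved=False,
             prohibited_data=["any donor or beneficiary information"]),
  ]

  # Surface unapproved tools so IT and leadership can review or retire them.
  for tool in inventory:
      if not tool.approved:
          print(f"REVIEW NEEDED: {tool.name} ({tool.vendor})")

Even a simple register like this gives IT and leadership a single place to see which tools are approved and what data each one may receive.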

Remember: While more than half of nonprofits use generative AI daily, free, publicly available AI tools present significant security vulnerabilities. Providers may use any information a user enters to train the underlying large language model (LLM), and these tools often produce unreliable analytics that can compromise data security and decision-making.

Because many nonprofits lack dedicated cybersecurity and data governance personnel due to bandwidth constraints, it can be challenging to stop data leaks quickly when they occur. These vulnerabilities can expose confidential information about beneficiaries, volunteers, and other stakeholders, and once that information leaks, it cannot be retrieved.

It’s imperative that nonprofits prioritize data security and AI oversight at the executive and board levels. Regularly revisit and update policies about how your organization uses AI, communicate those guidelines to stakeholders, and ensure all policy updates comply with applicable data privacy laws.


Data Security & AI Oversight Questions for the Boardroom: A Checklist

  • Are our data security and AI policies up to date?
  • Do we regularly review our data security and AI policies?
  • Do we have clear guidelines for how we select, implement, and monitor our AI tools?
  • Are we compliant with all relevant data privacy laws and regulations?
  • How do we ensure informed consent when collecting and using personal data in AI systems?
  • What processes do we have in place to vet and review third-party vendors and their data management practices?
  • Do we regularly train our staff and volunteers on data security and responsible AI use?
  • How do we communicate AI and data security policies to stakeholders?
  • How do we handle updates and incidents related to AI and data security?
  • Do we have a designated individual or committee responsible for our AI oversight and data governance?


Bias

Mission-driven nonprofits must be diligent about where and how they are using AI to avoid introducing bias risk. 

Organizations that use AI to screen grant applications are especially at risk. Depending on how, and on what data, the AI was trained, it could favor certain individuals, groups, or institutions, reproducing historical patterns or embedded bias. For example, if a first-time organization applies for a grant, the grantor may be using an AI program that inadvertently screens for applicants that have previously received funding. These outputs can perpetuate unfair funding outcomes and reinforce systemic bias.

To mitigate these risks, it’s critical for organizations to keep their data clean, complete, and accurate. Nonprofits must also prioritize employee training on how to responsibly use AI, including identifying, monitoring, and correcting biases. While AI tools can aid employees in their day-to-day tasks, the technology still requires human oversight. 

Practical Steps to Reduce AI Bias

  • Audit and Clean Data Regularly: Routinely review datasets for missing, outdated, or skewed information that could introduce bias. Remove or correct flawed data before using it to train or inform AI models.
  • Include Diverse Data Sources: Ensure training data represents the full spectrum of communities and stakeholders served by the organization, not just historical or majority groups.
  • Test for Bias: Run regular tests on AI outputs to identify patterns of unfairness or disparate impact, and use statistical methods to compare outcomes across different groups (a minimal sketch follows this list).
  • Ensure Human Review of Decisions: Require that key decisions, especially those affecting funding or services, are reviewed by staff to catch and correct potential bias before finalizing.
  • Document and Disclose AI Criteria: Maintain transparency by documenting how AI models make decisions and what criteria are used. Share this information with stakeholders when appropriate.
  • Update AI Models Frequently: Retrain AI models with new, more representative data as it becomes available to prevent outdated biases from persisting.
  • Consult External Advisors: When possible, consult third-party professionals with deep expertise in responsible AI who can use open-source bias detection tools to independently assess and validate your AI systems.
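To make the “Test for Bias” step concrete, here is a minimal Python sketch of one common statistical check: comparing AI approval rates across applicant groups and computing a disparate impact ratio. The group labels, sample decisions, and the 0.8 (“four-fifths rule”) threshold are illustrative assumptions, not outputs of any particular tool.

  # A minimal disparate-impact check on hypothetical AI screening decisions.
  from collections import defaultdict

  def selection_rates(decisions):
      """Approval rate per group, from (group, approved) pairs."""
      totals, approvals = defaultdict(int), defaultdict(int)
      for group, approved in decisions:
          totals[group] += 1
          approvals[group] += int(approved)
      return {g: approvals[g] / totals[g] for g in totals}

  # Hypothetical results: (applicant group, did the AI recommend approval?)
  results = [
      ("first_time_applicant", False), ("first_time_applicant", True),
      ("first_time_applicant", False), ("first_time_applicant", False),
      ("prior_grantee", True), ("prior_grantee", True),
      ("prior_grantee", False), ("prior_grantee", True),
  ]

  rates = selection_rates(results)
  # Disparate impact ratio: lowest group approval rate divided by highest.
  ratio = min(rates.values()) / max(rates.values())
  print(rates)                                   # {'first_time_applicant': 0.25, 'prior_grantee': 0.75}
  print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33, well below the 0.8 heuristic

A ratio well below 0.8 does not prove bias on its own, but it is a clear signal that the screening results deserve human review before any funding decisions are finalized.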

Ethical Risk and Stakeholder Trust

Ethical concerns surrounding AI are top of mind for stakeholders. Employees, boards, donors, and beneficiaries want to make sure that nonprofits are using AI safely and responsibly. Those that don’t may jeopardize these relationships and cause irreparable damage to the organization. 

Questions to Inform an AI Policy: 

  • Does our AI use align with our mission and values? 
  • What measures do we have in place to safeguard personally identifiable information (PII)?
  • How do we prevent AI bias?  

Organizations should proactively address these questions with their stakeholders to assuage concerns about whether, when, where, and how they implement AI. Be open and transparent, and communicate any changes to your AI policies and guidelines to stakeholders as soon as possible.


Vendor Sprawl

From automating expense tracking and reporting to recording meetings and processing invoices, it’s increasingly common for employees to adopt multiple AI tools to fit various needs across teams. However, many are doing so in haste, without any prior vetting or oversight from organizational leadership.

This sprawl makes it difficult for the IT department to maintain a full inventory of the AI tools used across the organization. Without that oversight, IT teams may not be aware of the full scope of security and privacy threats. Depending on the information entered into each system, there may also be an increased risk of noncompliance with privacy laws.

While nonprofits should harness this enthusiasm for testing new technology, they should do so in a controlled and secure way. Establish organization-wide guidelines that define how tools are vetted and approved and what information can be shared with AI and other technology programs. Ensure relevant parties, including employees and volunteers, understand these guidelines. For nonprofits using multiple AI systems, identify a clear purpose for each one or consider consolidating. Understand how AI systems interoperate with the organization’s tech ecosystem, and conduct frequent employee training and upskilling sessions.

By putting people at the center of their AI strategy, nonprofits can set the stage for successful technology adoption. As with any new tool or program, organizations should always ask: How will this help us champion our mission? 

AI Academy

Want to help your organization leverage AI safely, effectively, and confidently? Learn more about BDO’s AI Academy.