The Promise and Pitfalls of AI in Philanthropy

Artificial intelligence, particularly large language models (LLMs) such as ChatGPT or Claude, which are trained on large amounts of text to generate written responses to prompts, is becoming an increasingly common part of nonprofit and philanthropic work. These tools offer new possibilities for research, communication, and efficiency, while also raising important questions about accuracy, bias, data privacy, and the role of human judgment.

These questions were front and center at Exponent Philanthropy’s 2025 Annual Conference, where attendees gathered for an interactive session exploring how AI can support philanthropic work and where funders should proceed with caution.

Instead of a traditional panel, participants worked in small groups, mapped real-world examples on flip charts, and debated both the opportunities and risks. What emerged was a grounded, practical picture of AI’s role in philanthropy, balancing optimism with reflection, and efficiency with humanity.

The Promise: Where AI Shows Potential

Funders and nonprofits are beginning to experiment with AI and LLMs in ways that can expand capacity and lighten workloads. Early conversations point to broad areas of promise, including:

  • Improving clarity and efficiency in day-to-day work
  • Helping organizations make sense of information more quickly
  • Offering a head start on tasks that often slow teams down
  • Supporting internal planning and learning in new ways

These early benefits are especially meaningful for lean organizations balancing many priorities with limited staff time.

The Pitfalls: Where We Need Caution

Alongside these opportunities, participants also raised thoughtful concerns. Questions consistently surfaced around:

  • The accuracy and reliability of AI-generated content
  • How tools handle sensitive, confidential, or contextual information
  • The impact on relationships, voice, and human judgment
  • The long-term implications for staff development and organizational culture

These concerns highlight the need for careful oversight and intentional use, rather than assuming AI is a simple solution.

Inside Organizations: Balancing Efficiency with Responsibility

As foundations explore using AI to support internal operations, early experimentation points to opportunities to streamline processes and improve access to information. Yet many organizations are also grappling with the operational, cultural, and ethical implications of relying on rapidly evolving tools.

One Funder’s Takeaways

Ahead of the conference, we asked funders how they are using AI to support foundation operations. Jennifer Manise, Executive Director of the Longview Foundation, shared several lessons from Longview’s early experimentation with AI:

  1. Establish clear guidelines and revisit them often. Define when and how AI can be used and update your guidance regularly as tools and risks evolve.
  2. Be intentional about data access. Build an internal library of documents you are comfortable sharing with AI tools and use AI to analyze this information for internal reporting, while protecting sensitive and confidential data.
  3. Set firm boundaries and enforce them. Be explicit about what AI is and is not allowed to do, and actively guard against misuse or over-reliance on these tools.
  4. Ground AI use in your values. Ask critical questions: Is your AI use human-centered? Are you protecting your foundation’s data and your grantee partners? Does your use of AI, including large language models, align with your commitments, such as climate justice?
  5. Treat AI governance as ongoing work. AI policies and practices should be regularly revisited and refreshed to keep pace with rapidly evolving technology and emerging risks.

These takeaways reflect Longview’s thoughtful approach to using AI safely and ethically, encouraging experimentation while prioritizing transparency, responsibility, and alignment with organizational values. To support this work, Longview has shared its Artificial Intelligence Guidelines for other foundations to learn from and adapt.

Where to Go from Here

Across all conversations, one theme stood out: AI can strengthen philanthropy, but it cannot replace the people, values, and relationships at the heart of the work. As Jennifer emphasized, this balance is especially important for small, lean foundations, where AI can help address everyday challenges and expand capacity when used thoughtfully, without replacing human judgment or care. She encourages funders to approach AI with curiosity and intention by:

  • Learning together through peer or learning communities where AI is part of ongoing professional development
  • Experimenting safely with large language models using non-confidential information to understand what they do well and where they fall short
  • Understanding your tools by learning the difference between public and private LLMs, including who owns your data and how it may be used

These steps offer a practical and responsible way for funders to embrace innovation while keeping humanity, accountability, and mission at the center of philanthropic work. To dive deeper into the insights, real-world examples, and considerations that emerged from the conference session, read the full member article: How Funders and Nonprofits Are Using AI.

