Why asking better questions at the top is the key to unlocking organizational intelligence
I spoke this past weekend to the board of the Jane Goodall Institute of Canada about AI and fundraising. An absolute tip of the cap to their leadership for creating space for the conversation (and for inviting me to join).
I arrived a few minutes early and had some time to sit on the edges of the meeting before my scheduled session began.
As I sat in the room, I couldn't help but overhear a number of active conversations between senior leadership and board members on the topic of AI and how individuals were using it.
I overheard one person explaining a method they had discovered and how they were using AI to solve a specific problem. Over the course of just a few minutes, I observed several conversations like this: one person sharing a personal use case with a small group of colleagues who were listening intently.
It became apparent that I was watching something important happen in real time.
Because here’s the truth most organizations haven’t said out loud yet:
AI is already in the building.
It’s just not being used in a way that builds organizational intelligence.
Staff across the nonprofit sector are quietly using AI to brainstorm, draft, summarize, edit, analyze, and accelerate work. They’re doing it because they’re overloaded and want to be helpful. They’re doing it because it makes their work faster and sometimes better.
But almost none of this usage is visible, coordinated, or cumulative.
It’s a thousand individual shortcuts that never become shared capacity.
And this is the real problem.
This is where boards and executive leadership come in – not to mandate technology, but to elevate the questions that turn scattered experimentation into mission-aligned capability.
AI is happening informally. The risk is what organizations don’t see.
One of the most striking insights from my work with nonprofits is how normalized AI already is among staff, even when leadership believes the organization is “not really using AI yet.”
They are.
They’re using ChatGPT to revise donor emails.
They’re using Gemini to summarize reports.
They’re using Claude to help draft grant narratives.
They’re using AI inside Microsoft, Google Workspace, Canva, fundraising CRMs, and even their phones – often without realizing it.
But because AI use is happening privately, the organization gains no collective benefit:
No shared prompts.
No standardized workflows.
No visibility into risks or errors.
No accumulation of knowledge.
No improvement in organizational memory.
Everyone is learning in isolation, and the organization stays exactly as stretched as before.
Boards need to understand that the real risk is not misuse – it’s missed opportunity.
Senior leadership holds the key: normalize AI use, make it safe, make it shared.
If staff are going to use AI openly (instead of hiding it in browser tabs), senior leadership must create the conditions for that success.
People will not contribute to a shared AI practice unless they feel psychologically safe. And psychological safety comes from the very top.
Leaders need to say explicitly:
We want you to use AI.
We expect you to learn.
We expect you to take appropriate precautions.
We expect you to share what you learn.
We will build this capability together.
When leaders model that behaviour, by sharing their own prompts, by talking openly about where AI helped, by normalizing experimentation over perfection, it creates organizational permission.
And permission is the gateway to transformation.
Without it, AI remains a collection of quiet hacks.
With it, AI becomes a shared asset.
Can the intranet “dream” finally become reality?
Nonprofits have long chased the idea of an intranet, a central hub of knowledge that people will actually use.
Most, if not all, fail because they require extra effort, extra documentation, and extra time from teams already stretched thin.
It’s layering on, instead of taking layers away.
AI changes the equation.
AI reduces the friction required to log, store, and access organizational knowledge.
When staff are empowered and encouraged to use AI openly:
- Insights become assets
- Workflows become shareable
- Templates become standard
- Lessons become institutional memory
- Guardrails become clearer and more ethical
- Organizational intelligence compounds
- Knowledge becomes democratized
AI doesn't just live in the intranet; it becomes the reason the intranet finally works.
It speeds up access to information and democratizes organizational knowledge.
Because what gets shared is not static documentation, but the real, evolving ways people actually do their work.
This is the foundation of a centralized AI knowledge repository: a living system of organizational data, decisions, and insights that grows stronger every time someone contributes.
The role of the board: ask the questions that turn scattered usage into strategy.
Boards do not need to be experts in AI to lead well in this moment.
They only need to ask questions that create clarity, accountability, and alignment.
Questions like:
Where is AI already being used across the organization?
How do we ensure that learning becomes shared, not siloed?
What cultural signals are senior leaders sending about AI?
How are we capturing institutional knowledge as it emerges?
What guardrails (ethical, privacy, quality) guide our use?
How do we measure whether AI is actually improving mission outcomes?
These questions shift AI from a technical conversation to a strategic one – where it belongs.
They also reinforce something essential: AI adoption is not about tools. It’s about leadership.
When boards ask thoughtful questions, they create room for leadership to act. When leadership models openness, staff follow. And when staff collaborate, the organization builds something rare: internal intelligence that compounds over time.
The opportunity ahead
AI will reshape how nonprofits fundraise, communicate, steward donors, analyze data, and measure impact. But the real transformation will happen inside organizations in how they learn, collaborate, and memorialize knowledge.
Most nonprofits are already using AI.
Very few are learning from it intentionally.
Almost none are capturing that learning in a shared way.
Boards and leadership can change that – not by choosing tools, but by guiding culture.
In this moment, governance is not about restricting AI.
It’s about enabling the conditions for responsible, mission-aligned innovation.
And if nonprofits get this right, they will build something extraordinary: a smarter, more resilient organization where every insight, every experiment, and every bit of AI-enabled learning strengthens the whole.