Charting the Uncharted: A Realistic Approach to AI Strategy
Three tools to help your organization be future-ready
Some of my colleagues have declared a loathing of artificial intelligence to rival Kendrick Lamar’s feelings towards Drake. Understandably so: we work in an industry that values creativity, human expression, and authoritative information, so the threat of rapidly spreading disinformation and dehumanized creations sparks valid concern across the information profession.
The result is a hesitancy to dig in and prepare. In this article, I want to acknowledge that hesitancy as valid but also stress that, like every wave of change before it, AI will not stop coming. I then offer tools to help you build and implement a strong AI strategy for your library so that we can move forward responsibly, confidently, and resiliently.
History has proven time and time again that change and revolutions move forward despite resistance.
The Luddite movement, other anti-industrial movements, and even the renowned poets of the 18th and 19th centuries did not stop the spread of the so-called “dark Satanic mills” that accompanied industrialization.
Arguably, overall, the Industrial Revolution improved the quality of life for many people, and AI has the potential to do the same. However, the Industrial Revolution was not without its growing pains. These advancements came with many environmental, economic and humanitarian costs. We were shoving children down chimneys, for goodness' sake!
We can hate it all we like, but the horrendous growing pains of this next revolution are coming our way (some are already here). We won’t avoid them all, but we can adjust and prepare so we’re not, at best, in a “who moved my cheese” situation or, at worst, completely lost.
Another reason for resistance to embracing AI is that our profession—and possibly all of society—is constantly stressed out. We honestly cannot handle one more thing. I often quote our national treasure, Moira Rose: “The world is falling apart around me, and I’m dying inside.” So we may not feel we have the capacity, or we may feel there are more imminent issues in libraries that need handling right now.
Unfortunately, we are painfully overdue and no longer have time to place AI strategy on the back burner.
I’ve mentioned the importance of understanding the risks [insert link]; it is just as essential to have internal governance and rules around AI. Privacy, cybersecurity, unchecked biases, and the unintentional spreading of misinformation are just the tip of the iceberg. We haven’t even considered the issues that will blindside us if we’re not monitoring the landscape.
All these tangible reasons aside, we obviously can’t regulate what we don’t understand, and we can’t prepare for a future we’re ignoring.
All of that should be enough to stress how much we need to be future-ready. Even though the waters are uncharted, we must set sail and just do it.
Unfortunately, we don’t have an AI-specific Indiana Jones to guide us, unless it’s AI itself; and given how quickly it’s evolving, I don’t think today’s AI knows where tomorrow’s AI is heading.
We must establish fundamental tools and structures to help us plan and move: tools to start with, and then to grow from.
I’ve buried the lede enough with why we need to act; I’ll now dig into how we should act.
AI Governance
An organization needs to determine who is responsible for monitoring AI usage and updating internal guidelines.
Start by developing a focused, cross-functional team with the authority and knowledge to lead this work—whether it's an official committee, a task force, or a working group. Ideally, it should include people in leadership or strategic planning roles alongside IT or digital services leads.
I recommend keeping the group lean and specialized—at least initially—to stay agile and respond confidently to new developments.
If you work in a smaller library or organization, consider partnering with a nearby expert, relying on guidance from a professional association, or adapting the work of others. Your “task force” might consist of just one person, but having a designated point of contact is essential.
This group should lead on drafting internal documentation, such as policies, usage guidelines, or best practices. They’ll also be responsible for staying current with emerging tools, risks, and opportunities, recommending staff training, and helping evaluate which AI technologies to test, implement, or avoid.
Think of them as your organization’s navigators: they will get to know the landscape and help the rest of the team move forward with clarity and care.
Internal Documentation
Once your team is identified and its mandate declared, it should establish guidelines for the responsible, ethical, and informed use of AI tools within the organization. These guidelines set the framework for how AI is used to enhance services and operations while safeguarding customer and staff privacy, ensuring ethical usage, and promoting equitable access to technology.
I personally like guidelines for frontiers we’re not entirely familiar with. They allow flexibility and can exist as living documents if we suddenly need to change course. Some organizations adopt a broader but firmer policy; ultimately, choose whatever is best for your organization.
Your internal AI guidelines might include:
A statement of your organization’s principles or values around AI
Clear governance and accountability (who is responsible for what)
Privacy and data protection standards
Cybersecurity considerations
Transparency around AI-generated content and decision-making
There is no need to overcomplicate it—a one- or two-page document is a solid place to start.
Training & Education
The governance group should take the lead in keeping the organization informed. That starts with familiarizing themselves with any existing or emerging regulations related to AI. While different governments are moving at different paces, it’s essential to understand what applies to your sector, region, and community.
Building foundational understanding across your organization is just as important. Offer simple, non-technical education for leaders, board members, staff, and other stakeholders. A high-level introduction to AI—what it is, what it isn’t, and how it might impact your work—can go a long way in reducing fear, sparking ideas, and supporting thoughtful decision-making.
This education might take the form of:
A short “AI 101” resource list (videos, articles, or webinars) that can become part of the organization’s ongoing training materials
A short session or staff meeting presentation on how staff should or should not use AI
FAQs that address common myths and risks
Training on any implemented AI technology
The goal isn’t to turn everyone into an expert but to ensure everyone feels equipped to ask good questions, recognize potential, and raise concerns.
Not everyone has to be thrilled or knowledgeable about AI to take it seriously. We can acknowledge the discomfort, the overwhelm, even the existential dread—and still move forward. Because it’s coming, whether we’re ready or not. By taking a few thoughtful steps—assigning responsibility, setting some basic guidelines, and building shared understanding—we can face this new reality with clarity, confidence, and just enough structure to adapt. Change is inevitable. Resistance is human, but so is resilience.