Unveiling AI: Simple Definitions to Demystify Today's Tech Buzzword
A Beginner's Guide to Artificial Intelligence Definitions
Artificial intelligence has become increasingly prominent in headlines across all industries over the last decade. It existed long before it became a buzzword in the 2010s, dating back to the 1950s and its founding father, Alan Turing. You’re probably familiar with Turing from 2014’s The Imitation Game. The mathematician, inventor, and computer scientist theorized that the workings of the human brain could be replicated by machines.
Many of his visions are quickly coming to fruition.
Fast-forward to January 2024. While attending the Ontario Library Association Superconference, Helen Kula of McMaster University surveyed her session attendees to gauge their engagement with AI. Of 71 respondents, a full third (33%) reported being "Not at All" engaged with AI, a close 28% felt they were "a little engaged," and 29% considered themselves "somewhat engaged." Notably, no one felt their organization was "all in."
While this discussion around AI offers more anecdotal evidence than scientific data, it highlights a significant engagement gap. AI is already established as a technology that is changing our world, much like its predecessors, the Internet and social media. Yet despite the pressing need and AI's potential impact on our field, many library and information professionals remain on the periphery of this technological revolution. It's time for libraries to deepen their engagement with AI and AI literacy. This isn't just about technology's impact on our workflows; it's about enhancing our capabilities to serve our communities in an increasingly digital and rapidly changing world.
As stated in the introduction to this newsletter, this space is meant to spark discussion and community around the topic. I don’t have all the answers, but I'm using my librarian/information professional skills to direct us closer to solutions.
Today, we'll focus on definitions to ease us into the discussion and the demystification of AI. By understanding the uses, capabilities, and limitations of each area of AI, we can better appreciate its value in our workflows and operations, identify further applications for our profession, and prepare ourselves for the potential risks involved in adopting AI.
Narrow Artificial Intelligence
Narrow AI, also known as weak AI, includes tools powered by learning algorithms designed to perform specific tasks. Unlike the broader, more adaptable general AI, narrow AI focuses on singular functions, making it less "intelligent" in the traditional sense, since it operates on instructions. These systems are primarily rule-based, meaning they follow programmed instructions to deliver reliable, albeit sometimes imperfect, results.
We have used narrow AI in our daily applications for quite some time. For instance, when you ask Siri for the weather forecast or command Alexa to play your favourite song, you are using narrow AI. The same is true when Netflix recommends a series you might like or Spotify picks the next song on your playlist. Even more mundane tasks, such as filtering spam out of your inbox or correcting typos, are all managed by narrow AI.
Developers program these systems, and while they’re becoming more sophisticated, they often need supervision. For example, have you ever been expecting an email that got sent to your junk mail? That’s an error on the part of narrow AI. The same goes if you’ve ever texted someone, “What the duck?” You have narrow AI to thank for that. Narrow AI follows the rules and usually does so well (with the odd autocorrect misjudgement, which Apple has reportedly resolved).
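To make "rule-based" concrete, here is a minimal Python sketch of a keyword spam filter. The keywords and messages are invented for illustration; no real email client works from a list this simple.

```python
# A minimal sketch of a rule-based "narrow AI" spam filter.
# The keywords and messages below are invented for illustration.
SPAM_KEYWORDS = {"free money", "act now", "winner"}

def is_spam(subject: str) -> bool:
    """Flag a message whose subject matches any hard-coded rule."""
    subject = subject.lower()
    return any(keyword in subject for keyword in SPAM_KEYWORDS)

print(is_spam("You're a WINNER, act now!"))       # True
print(is_spam("Minutes from Tuesday's meeting"))  # False
# Like any rule-based system, it can misfire on edge cases:
print(is_spam("Book club winner announcement"))   # True, a false positive
```

Every decision here traces back to a rule a developer wrote by hand, which is exactly why narrow AI sometimes sends a legitimate email to your junk folder.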
Machine Learning
Machine learning is more flexible and advanced than narrow AI, offering a more dynamic and versatile approach to drawing conclusions from data. Unlike narrow AI, which is designed to execute specific, pre-defined tasks, machine learning encompasses algorithms that adapt and improve over time. It achieves this by continuously analyzing labelled and unlabelled data and learning from successes and mistakes through feedback loops.
This process allows machine learning systems to draw conclusions and make predictions or recommendations based on large and diverse datasets.
For a plain language analogy, imagine visiting a new city for the first time. You likely rely on your phone’s map app to navigate your way. Each time you head out to explore, you learn more about the streets and start needing your phone less and less. Machine learning is similar—it starts with guidance through data and algorithms, then learns over time and can make decisions and predictions independently.
Practical applications of machine learning range from detecting fraudulent transactions in finance to diagnosing diseases in healthcare to optimizing your route using GPS. In each case, machine learning systems observe patterns, learn from them, and apply this knowledge to real-world problems.
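For the curious, here is a minimal sketch of that pattern-learning idea, assuming the scikit-learn Python library. The tiny labelled dataset is invented; a real spam filter would learn from millions of messages.

```python
# A minimal sketch of machine learning with scikit-learn
# (pip install scikit-learn). The tiny labelled dataset is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labelled examples: the model infers patterns from these,
# rather than following rules a developer wrote by hand.
messages = [
    "You are a winner, claim your free prize now",
    "Act now for this limited offer on free money",
    "Agenda attached for Tuesday's staff meeting",
    "Your interlibrary loan is ready for pickup",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn text into word counts, then fit a simple classifier.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
model = MultinomialNB()
model.fit(features, labels)

# The model generalizes to a message it has never seen before.
print(model.predict(vectorizer.transform(["Claim your free prize today"])))  # likely ['spam']
```

Notice that no one wrote a "free prize means spam" rule; the model inferred the pattern from the labelled examples, which is the core difference from the rule-based filter above.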
Deep Learning
Deep learning is a specific type of machine learning—though not all machine learning is deep learning. It relies on big data and applies layered algorithms called “artificial neural networks,” which power more advanced applications such as image recognition and fraud detection. It is also a key step as the field progresses from narrow AI toward general AI, whose aim is machines that can think autonomously.
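As a rough illustration, here is a toy neural network, again assuming scikit-learn. Real deep learning stacks many more layers over far larger datasets; this sketch only shows a small network learning a pattern (XOR) that no single linear rule can capture.

```python
# A minimal sketch of an artificial neural network with scikit-learn.
# Real deep learning uses many more layers and far bigger data;
# this toy example only illustrates the idea of learned, layered weights.
from sklearn.neural_network import MLPClassifier

# XOR: output 1 only when the two inputs differ. No single
# straight-line rule separates these cases.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# One small hidden layer of 8 "neurons" sits between input and output;
# deep networks stack many such layers.
net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=1)
net.fit(X, y)
print(net.predict(X))  # ideally [0 1 1 0]
```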
Generative AI
Generative AI, in contrast, is intended to think on its own and to match or surpass the highest human accuracy in task resolution. One of the major delineators between narrow AI and GenAI is that GenAI models are meant to generate content. Widely known examples include the Generative Pre-trained Transformer (ChatGPT), the DaVinci models, and DALL-E by OpenAI; Bidirectional Encoder Representations from Transformers (BERT) by Google; and Copilot and the Azure OpenAI Service by Microsoft, to name a few in a rapidly expanding market.
These models have rapidly advanced over the past couple of years. ChatGPT, released in November 2022, is now on its fourth-generation model (GPT-4), which can understand highly complex questions, keep track of context, and correct its mistakes.
Beyond text, GenAI can create images, videos, stories, and songs. It can also code and solve mathematical problems. It can plan itineraries, brainstorm with you, and even give you financial advice (which I would take with a grain of salt). This is barely scratching the surface; if you’re interested in discovering more, check out There’s an AI for That.
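If you're curious what using one of these models looks like in code, here is a minimal sketch assuming the OpenAI Python SDK (the v1-style interface) and an API key stored in the OPENAI_API_KEY environment variable; the model name is a placeholder, since available models change quickly.

```python
# A minimal sketch of prompting a generative model, assuming the OpenAI
# Python SDK (pip install openai) and an API key set in the
# OPENAI_API_KEY environment variable. "gpt-4" is a placeholder;
# swap in whatever model your account offers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Explain generative AI to a librarian in two sentences."}],
)
print(response.choices[0].message.content)
```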
From auto-correct to what may seem like a magical oracle, AI has come a long way, and it will only gain more momentum and capacity over the next eighteen months. (Yes, eighteen months, not five years.)
The next post is about how we can incorporate GenAI into our lives and work, with examples of how I’ve used it personally and professionally.
I’ll provide a few helpful links to complement each post. If you want to dive deeper, many online modules from LinkedIn Learning, Universal Class, and Coursera will help you gain a deeper understanding. Above all, I recommend chatting with ChatGPT and prompting it to explain things differently and give different analogies.