Algorithmic Oversight: The Need for Ethical Guardrails in AI Implementation
Delving into the tangible risks of AI
If we still needed proof that GenAI is a ubiquitous presence in our world, Meta’s recent announcement that it is pushing its AI chatbot onto billions of Facebook and Instagram users should have settled things.
While some are annoyed and others will avoid it, it is now front and centre for anyone with a social media account.
My previous articles have outlined some of GenAI's positive and helpful features, and I stand firm that it offers notable benefits. However, it remains unregulated and flawed.
Much like social media, GenAI is a new frontier with a full-on race to develop the technology, prioritizing speed over safety and ethics. While governments have learned from the runaway issues caused by social media, and public policies are slowly but surely being developed, public leaders will no doubt struggle to keep pace.
As librarians and information professionals, we have a role to play in literacy and awareness. Here are the top three reasons why we should approach GenAI with trepidation.
Hallucinations or Hard Facts
In a world where humans spread misinformation and disinformation en masse, we now have the opportunity to add machine-generated content to the wave.
There is a risk of large language models (LLMs) spreading disinformation via synthetic content. They are continuously fed both good and bad data, so they will sometimes draw incorrect conclusions from the data they have been trained on.
These are called hallucinations. The chatbot is not lying, nor is it malicious. We are not living in a Terminator or Space Odyssey reality; it only has the brain of a worm. The LLM is simply making things up based on the data it has been trained with.
This leaves us at risk of machine-generated misleading content, which is already particularly prevalent on social media. Chatbot output is not fact-checked, and if users place unwavering trust in it, it will widen the gap between general technology users and accurate information.
Despite these challenges, LLMs' capabilities are advancing rapidly, and they can produce remarkably sophisticated content when used properly. All the while, fact-checking, AI literacy, and awareness remain in high demand.
AI’s Unconscious Bias Problem
Anyone using AI should anticipate legal and ethical controversies, including its biases. Just as it can hallucinate facts, it can hallucinate stereotypes. GenAI is biased because it is built by human developers and data scientists who may carry their own unconscious biases. It will therefore not always offer a neutral, objective assessment and should not be used in certain circumstances.
Polishing a simple letter, generating event-themed ideas, or creating a content calendar are excellent uses of GenAI, as long as one reviews and edits the output.
However, sensitive, nuanced tasks like recruitment should not yet be handed over to AI.
For organizations that use algorithms to recruit, issues have arisen at every stage of hiring. A Harvard Business Review article explains how early-stage tools, for instance, may unintentionally skew how a job ad is distributed based on superficial data, potentially reinforcing gender or racial stereotypes. During screening, algorithms can automatically exclude candidates based on past hiring patterns, which may not align with current diversity, equity, and inclusion goals. And a Bloomberg experiment revealed biases in OpenAI’s ChatGPT when evaluating job candidates: it favoured certain demographics based on signals such as names and addresses.
The hiring process is ultimately too important to an organization's culture and finances to be left to machines.
Privacy Concerns
According to a 2024 Cisco survey, more than a quarter of corporations have temporarily banned the use of GenAI because of privacy concerns. Others have established strict policies on what data may be entered. Clear control over how you and your organization enter data into GenAI tools is key to maintaining confidentiality.
Because GenAI learns from the data users input, it can reuse this sensitive information in responses to other users. Further, this data might be stored in places you don’t control. For example, Canadian public libraries have strict data governance policies that demand that customer data not be stored outside of Canada. Entering customer data into any GenAI bot puts that data at risk and makes it fair game for anyone who wants to access it.
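As one illustration of what "clear control" over entered data can look like in practice, here is a minimal Python sketch that scrubs obvious personal identifiers from text before it is ever pasted into a GenAI tool. The function name and the two patterns (email addresses and North American phone numbers) are hypothetical choices made for this example; a real data governance policy would cover far more identifier types and still rely on human review.

import re

# Illustrative patterns only -- a real policy would also cover names,
# addresses, library card numbers, and other identifiers.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def scrub_before_prompting(text: str) -> str:
    """Replace obvious personal identifiers with placeholders
    before the text is submitted to any GenAI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

draft = "Please email Jane at jane.doe@example.com or call 555-867-5309."
print(scrub_before_prompting(draft))
# Prints: Please email Jane at [EMAIL REDACTED] or call [PHONE REDACTED].

The point of the sketch is the workflow, not the code itself: sensitive details are removed on your side, before anything leaves systems you control.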
In this rapidly evolving landscape, it is important to remain informed, engaged, and critical of AI's various uses. As librarians and caretakers of information integrity, we have a role in promoting AI literacy and ensuring that our communities, colleagues, and customers navigate this new technology with skepticism and awareness.
Useful Links
Google - What are AI hallucinations
Government of Canada - Responsible use of artificial intelligence (AI)
Government of Canada - Guide on the use of generative AI
President Biden issues executive order on safe, secure, and trustworthy artificial intelligence