AI Bias Examples and Strategies for Mitigating Bias
Whether you work in HR or marketing, the last thing you want to hear is someone saying (or proving) you’re biased. In fact, we know that 24% of U.S. consumers have stopped shopping at some stores due to politics, hinting at a new trend of customers voting with their wallets.
And yet, over 70% of enterprises across continents and industries have invited a new colleague into their ranks. He’s a hard worker who never takes time off, and we give him more responsibilities than any sane employee could ever stand. He only has one weakness, and that’s his known bias. You guessed it. I’m talking about the almighty AI algorithm.
Now, given that we all use this catchy acronym to describe a sea of applications broader than a marketer’s definition of “engagement,” let’s objectively look at different types of AI bias and strategies you can implement to avoid it.
The Numbers Don’t Lie (But Sometimes, Generative AI Does)
At this point, even an AI model couldn’t keep up with changes in industry applications, regulations and customer sentiment. Yes, it’s true that more and more enterprises are using AI applications like ChatGPT for anything from content creation to data science.
And yes, those who approach any machine learning model with a healthy level of criticism already know that you need to consider adequate training phases, privacy or data sourcing issues.
But when we engage with technology in a way that mimics human interaction, some switch in our monkey brain seems to flip. So when we talk about biased AI, there are the hard facts related to discriminatory outcomes, and then there are users’ beliefs (a.k.a. human bias).
First, let’s discuss what we know about harmful bias in model performance. There’s no doubt that every AI system suffers from some level of implicit bias — be it related to a lack of fairness metrics, biased algorithms or user behavior and contextual factors.
Researchers have shown repeatedly that language models only act as mirrors, except we don’t always realize whose values they show us — as in the case of a study proving ChatGPT’s inherent bias toward US Democrats and the UK Labour Party. It’s not by chance, then, that governments around the globe are reacting.
In the EU, for instance, the recently enforced Artificial Intelligence Act addresses bias by prohibiting AI systems used for social scoring, for compiling facial recognition databases or for exploiting vulnerabilities related to demographics, among other practices.
And yet, nearly half (47%) of North Americans predict that gen AI will be less biased than humans in the future.
Now, unless you’re a crystal ball manufacturer, you won’t know any better than those folks. So let’s look at the damage AI outputs can cause today, so your business can take precautions.
Picture Imperfect: Visual Examples of Algorithm Bias
If you’re active on social networks like LinkedIn, you know that brands aren’t just generating images. Their creations kick off entire waves, like the most recent action figure trend. And excited as your inner child may get about creating digital toys on autopilot, especially when it seems to relate to your marketing strategy, there’s a catch. Well, several, actually.
Besides the fact that a strategy based on generative AI tools alone will make it impossible for your brand to stick out, the bias pendulum can swing in directions you didn’t predict.
As many examples illustrate, you can’t always assume a model’s bias (or beliefs) will stay coherent the way a human’s would. One moment, you’ll get images loaded with sexual or racial bias. Next, you’re dealing with historical bias at the other end of the spectrum, like overcompensating models putting people of color in historical scenes without regard for factual accuracy.
No matter which model you’re using, cases like these show that you simply can’t use a generative AI model for image generation in a professional setting without proper guardrails and AI ethics training in place. But the problems don’t stop there.
The Human Cost of Automating Fairness: When AI Gets Personal
Maybe you’re already riding the AI development wave beyond marketing campaigns and storytelling. But when you’re baking automation into your core business offerings and services, the risk of unconscious bias is even higher. The problem is, even explainable AI can only go so far in catching biased outcomes in your niche.
These days, our robotic friends can feel like electricity or tap water — they’re everywhere. Sadly, that also means that stories of measurement bias are surfacing in a million different facets.
Examples range from algorithms discriminating against millions of patients because of racial bias and recruiting tools categorically rejecting female applicants to tenant screening tools causing a new kind of housing crisis.
As bad as each of these may seem on an organizational level or from a PR perspective, the level of pain imposed on the individual client (or unwitting recipient of AI decisions) is even worse. It’s no surprise, then, that the first AI-related settlements are already emerging.
And if you think that those sweet, sweet productivity gains are all worth it and we just need to plow through until AI overcomes its bias, do the math. If just one settlement costs you six figures, as it did for iTutorGroup and many others, that marketing ROI chart gets a big dent, not to mention your brand reputation.
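To see why "plowing through" rarely pencils out, here is a back-of-the-envelope sketch. All figures are hypothetical assumptions for illustration, not numbers from any real case:

```python
def net_ai_benefit(annual_savings, settlement_cost, num_settlements):
    """Return the net annual benefit after subtracting settlement costs."""
    return annual_savings - settlement_cost * num_settlements

# Hypothetical figures: $500k in annual productivity savings from AI,
# and one six-figure discrimination settlement at $250k.
savings = 500_000
per_settlement = 250_000

net = net_ai_benefit(savings, per_settlement, 1)       # half the gain is gone
wiped_out = net_ai_benefit(savings, per_settlement, 2) # two cases erase it all
```

And that simple subtraction ignores legal fees and reputational damage, which only make the dent deeper.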
When the AI Tool Gets It Wrong: Service Failures
It’s also worth noting how quickly AI systems can spiral out of control if your organization doesn’t have the proper guidelines or quality checks in place. Multiply the average settlement by the size of your customer base, and you’ll get a sense of the problem’s scale.
Just over the last year, we, the public’s lab rats, have sued airlines over their chatbot misapplying fee policies, yelled angrily at ordering systems that were a bit too creative and received legal advice from information portals that was … let’s say questionable.
The takeaway here is not that AI is bad or that we should all go back to our trusted weaving looms. It’s that productivity scales right alongside errors, bias and unintentional discrimination. So if you do adopt AI, especially if it’s on an operational level, you better make sure that quality control matches the speed and scale of your running system.
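In practice, that quality control can be as simple as never letting a generated answer leave your system without checking it against a human-maintained source of truth. Here is a minimal sketch in the spirit of the chatbot fee fiasco above; the fee table, amounts and function names are all hypothetical:

```python
# Source of truth, maintained by humans -- not by the model.
OFFICIAL_FEES = {
    "checked_bag": 30.00,
    "rebooking": 75.00,
    "cancellation": 0.00,  # e.g., free within 24 hours of booking
}

def validate_fee_claim(fee_type, quoted_amount):
    """Reject any chatbot fee quote that drifts from official policy."""
    if fee_type not in OFFICIAL_FEES:
        return False, "Unknown fee type -- escalate to a human agent."
    official = OFFICIAL_FEES[fee_type]
    if abs(quoted_amount - official) > 0.005:
        return False, f"Quote ${quoted_amount:.2f} contradicts policy ${official:.2f}."
    return True, "Quote matches policy."

# The model hallucinates a discount; the guardrail catches it before the customer does.
ok, msg = validate_fee_claim("rebooking", 50.00)
```

The point isn’t this exact check; it’s that the validation layer runs at the same speed as the system it polices, so errors are caught at scale, too.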
With AI bleeding into every part of your business, separating reputation management and marketing from the business is a luxury you can’t afford.
The Balance Sheet: Business Impact of Algorithmic Bias and Solutions
In case all of this does make you pine for the golden days of grandma’s trusted loom, worry thee not, brave soul. Our scroll of wisdom awaits, penned in the ink of experience (and maybe a touch of caffeine).
Among all the sad, frustrating or hilarious stories of AI getting it wrong, we can actually find a glimmer of hope — like the fact that AI is improving access to microfinance services for female entrepreneurs. Some critics will say that those are the exceptions to the rule and that we’re using AI without truly understanding the benefits it delivers.
Either way, you or I likely won’t make AI go away, and at least we can rest assured that some companies like JP Morgan are investing millions to fight bias.
But for the time being, what can you do?
- Educate yourself about the nature of bias: Really, this blog post should only be your starting point. With so many factors affecting potential bias, regulations changing daily and your industry relying on different data than the next, it’s clear that bias mitigation has to be part of healthy business management now.
- Understand how your AI tools operate: As we mentioned, AI is not inherently evil. Some tools, like Google NotebookLM, will mostly hand you a synthesized commentary of the documents you feed into the model. Others might pull from recent data, and others again might struggle with their latest algorithm update. Understanding the different origins of AI bias and discrimination is the first step in addressing it.
- Use AI to fight back: That’s right, you can actually use bias mitigation tools to identify and address biases in datasets and algorithms. Counterintuitive as it may seem, you have to consider the scale of the problem you’re trying to solve. You can’t hold up an avalanche with a folding spade, either, so it’s advisable to research tools that can help different departments address bias in your business.
- Ask around for cross-collaboration: In most cases, AI changes the very nature of the task it’s solving. For you, that means you need fresh expertise to watch over it. That can mean rethinking team structures to enable more regular exchanges between your departments. It can also mean bringing in expertise from outside ethicists, sociologists or other domain experts to help you develop solutions that do more good than harm.
- Pick a solution that matches your risk tolerance: If you’ve fallen in love with the Wild West of AI adoption, go for it. But if you can’t afford that risk, simply because of the industry in which you’re operating, put the right mitigation strategies in place, whether that’s a digital rights management system, anonymization techniques or user access controls. For content marketing, implement fact-checking workflows and promote digital literacy across your team to stop bias before it reaches your audience.
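To make the "use AI to fight back" point concrete, here is a minimal sketch of one common fairness check: the demographic parity difference, i.e., the gap in positive-outcome rates between groups. Toolkits such as IBM’s AIF360 and Microsoft’s Fairlearn compute this and many other metrics out of the box; the decision data below is made up purely for illustration:

```python
def selection_rate(decisions):
    """Share of positive outcomes (1s) in a list of decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in selection rates across groups; 0.0 means parity."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical hiring decisions (1 = advanced to interview).
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 = 0.625
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 2/8 = 0.25
}
gap = demographic_parity_difference(decisions)  # 0.375 -- a sizable disparity
```

A single number like this won’t tell you *why* the gap exists, but tracking it over time gives your team an early-warning signal long before a biased outcome becomes a settlement.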
If this all seems overwhelming, allow me to provide a last word of consolation and some perspective.
From Plato’s suspicion of writing to Gutenberg’s critics wringing their hands over the end of control, to Kodak shutterbugs being branded as societal pests — we’ve always been a little dramatic about new tools.
But here we are — still thinking, reading, snapping photos of our dogs. AI’s just the next guest at the table. Let’s teach it some manners.
Note: This article was originally published on contentmarketing.ai.