Future Friday: Do algorithms actually reduce biases?

by Michael Haberman on August 10, 2018


Is AI decision making good or bad?

If you read anything about Artificial Intelligence (AI) and its use in the world of HR, you quickly discover that there are two schools of thought. The first is that AI is biased and dehumanizing; the second is that it makes better decisions than humans and is therefore an improvement.

The negative: Biased algorithms

Representative of the first point of view is Will Knight, author of an article in MIT Technology Review called Biased Algorithms Are Everywhere, and No One Seems to Care. Knight describes the camp that thinks algorithms go too far and are trusted too much. This group believes that the widespread use of algorithms may disadvantage candidates, parolees, teachers, neighborhoods, loan applicants and more. Knight quotes Cathy O’Neil, a mathematician and the author of Weapons of Math Destruction, a book that highlights the risk of algorithmic bias in many contexts, who says people are often too willing to trust mathematical models because they believe doing so will remove human bias. “[Algorithms] replace human processes, but they’re not held to the same standards,” she says. “People trust them too much.” The American Civil Liberties Union has teamed up with experts and launched an effort to identify and highlight algorithmic bias.

The positive: Better than humans

The other side of the argument says that the use of algorithms, while it may be flawed, still produces better results than humans. Alex Miller, writing in the Harvard Business Review (HBR), says that the negative point of view asks the wrong question. Rather than focusing on whether or not AI or algorithms are biased, we should focus on how much worse human decision-making is compared to the algorithms. Miller says that “Algorithms are less biased and more accurate than the humans they are replacing” and cites numerous studies and articles to prove his point. Click the link above to get to these studies.

Miller makes the point: “A not-so-hidden secret behind the algorithms mentioned above is that they actually are biased. But the humans they are replacing are significantly more biased. After all, where do institutional biases come from if not the humans who have traditionally been in charge?” He goes on to say:

Unfortunately, decades of psychological research in judgment and decision making has demonstrated time and time again that humans are remarkably bad judges of quality in a wide range of contexts… the humans who used to make decisions were so remarkably bad that replacing them with algorithms both increased accuracy and reduced institutional biases.
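
Miller’s framing invites a concrete question: measured the same way, is the algorithm’s bias smaller than the humans’ bias? The sketch below is mine, not Miller’s, and uses entirely hypothetical data and names. It shows one way an HR analyst could put a number on that comparison, scoring a set of human hiring decisions and a set of model hiring decisions with the same familiar four-fifths (adverse impact) rule.

```python
# Illustrative sketch only (not from Miller's article): compare bias in human
# vs. algorithmic hiring decisions using the adverse impact ("four-fifths")
# ratio common in HR analytics. All data and names here are hypothetical.

def selection_rate(decisions, groups, target_group):
    """Share of applicants in target_group who received a positive decision (1)."""
    hits = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(hits) / len(hits) if hits else 0.0

def adverse_impact_ratio(decisions, groups, protected, reference):
    """Protected group's selection rate divided by the reference group's.
    A value below 0.8 is the conventional red flag under the four-fifths rule."""
    ref_rate = selection_rate(decisions, groups, reference)
    return selection_rate(decisions, groups, protected) / ref_rate if ref_rate else 0.0

# Hypothetical applicant pool: 1 = hired, 0 = rejected, same applicants for both.
groups          = ["A", "A", "A", "A", "B", "B", "B", "B"]
human_decisions = [1,   1,   1,   0,   1,   0,   0,   0]   # humans hired 3/4 of A, 1/4 of B
model_decisions = [1,   1,   0,   0,   1,   1,   0,   0]   # model hired 2/4 of A, 2/4 of B

print("Human AIR:", adverse_impact_ratio(human_decisions, groups, "B", "A"))  # ~0.33
print("Model AIR:", adverse_impact_ratio(model_decisions, groups, "B", "A"))  # 1.0
```

The point of the sketch is the comparison, not the specific metric: both sets of decisions are held to the same standard, which is exactly what O’Neil argues is often missing and what Miller argues algorithms can win.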

Awareness

The point I am making by offering this comparison is that the average HR person needs to be aware that algorithms are not necessarily the be-all and end-all of methodologies in HR. Awareness that there are issues with both human decision-making and decisions made by algorithms will help improve both. Do not follow blindly. As Miller concludes:

This is not an argument for algorithmic absolutism or blind faith in the power of statistics. If we find in some instances that algorithms have an unacceptably high degree of bias in comparison with current decision-making processes, then there is no harm done by following the evidence and maintaining the existing paradigm. But a commitment to following the evidence cuts both ways, and we should be willing to accept that — in some instances — algorithms will be part of the solution for reducing institutional biases.




