AI Tech for Workforce Advances – But There’s No Substitute for the Human Touch


Modern technology drives how we work and live. AI, industrial process automation, high-speed internet, big data, cloud computing, blockchain, and scads of other emerging technologies are applied to challenges—business and otherwise—to improve, in theory, on the results yielded by imperfect human efforts. Much has been made of how artificial intelligence (AI) can help an organization govern staffing levels and identify the skill types needed (or not needed) to compete effectively. Before relinquishing direct control of your talent acquisition strategy to the robots, consider the serious limitations on what technology can actually do to assist in that effort.

A fascinating article from the Human Capital Institute (HCI) asks, “Are machines better at discrimination than human beings?” The piece casts a critical eye toward the acclaim being heaped upon artificial intelligence as a supposedly more neutral and unbiased arbiter when applied to traditionally human-directed workforce management tasks like hiring and performance management. It questions whether the breathless marketing of AI solutions for HR overstates the benefits of applying technology in pursuit of equal opportunity.

While automation technologies, cloud computing, big data practices and other tech tools are undoubtedly making workforce management more efficient, effective and predictive, there may be limits to what technology can ultimately deliver. Trust in AI to avoid the risks associated with bias in workforce management—discrimination claims based on race, gender, orientation, age or any other marker—may be misplaced. Here’s why.

A recent study cited in the HCI article found that the algorithms used by internet search giant Google more frequently served search results for salaried executive-level positions to search engine users identified as male. The article also identified numerous other instances of AI and automation tools making seemingly biased choices in the complete absence of human intervention. Why is this?

HCI’s Ankita Poddar suggests, “One doesn’t have to look too hard for instances when AI has shown discriminatory trends. After all, AI is designed by human beings, it’s trained on algorithms that aren’t always public, and AI is modeled on past data that is likely highly biased.” Any technology produced by human minds and hands is bound to reflect the biases of those who wrote the programming. Even AI, with its vast potential to avoid the biased thinking that can plague humanity, is still bound to follow the broad directives of its makers.
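To see how a model can inherit bias from past data even when no protected attribute is fed to it, consider the following minimal, hypothetical sketch in Python using scikit-learn. All data, feature names, and numbers here are invented for illustration only; this is not how any particular HR vendor’s product works, just a demonstration of the general mechanism Poddar describes.

```python
# Hypothetical sketch: a model trained on biased historical hiring decisions
# reproduces that bias, even though the protected attribute is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: two groups, equally qualified on average.
group = rng.integers(0, 2, n)        # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)          # true qualification, same distribution for both

# A "neutral" resume feature that happens to correlate with group membership
# (e.g., a keyword or hobby) leaks the protected attribute into the inputs.
proxy = group + rng.normal(0, 0.5, n)

# Historical hiring decisions were biased: group 1 effectively faced a higher bar.
hired = (skill - 0.8 * group + rng.normal(0, 0.3, n)) > 0

# Train only on the ostensibly neutral features; group itself is excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model still recommends group 0 candidates far more often, because the
# proxy feature encodes the bias baked into the historical outcomes.
preds = model.predict(X)
print("Recommended hire rate, group 0:", preds[group == 0].mean())
print("Recommended hire rate, group 1:", preds[group == 1].mean())
```

Running this toy example shows a large gap in recommendation rates between the two groups, despite identical underlying qualifications, which is exactly the kind of outcome that no amount of “removing the sensitive field” fixes on its own.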

Some futurists predict that one day, once computing power has grown sufficiently, artificial intelligence will achieve a form of sentience of its own. At that point, AI may be able to break free of the confines of human bias and discrimination that allow injustice to be perpetrated in the workplace and in broader society. Until that time, the better bet is to avoid leaving such risky decision-making to the machines. Prejudice and bias are hard to uncover and remedy within your HR and workforce management personnel, but strong diversity policies, effectively enforced, remain your best defense against workforce discrimination, and that enforcement is best accomplished by actual human beings.

Organizations will still, for the foreseeable future, depend on human discernment and experience: at the micro level, in areas such as candidate screening, and at the macro level, in business planning and the like. This is why it remains important to engage the proven, effective guidance of a provider like nextSource.

To read more on this subject, turn to nextSource for expert guidance and visit our Managed Service Provider page.
