Global corporate investment in AI is projected to double to around $110 billion over the next two years. One of the main areas of application for organizational AI is recruitment (staffing, hiring, and employee selection), not least because human recruiters waste a great deal of time entering data, sorting through resumes, and making imprecise inferences about candidates’ talent and potential. Across countries and industries, large firms such as IKEA, Unilever, Intel, and Vodafone rely on algorithmic decision-making in their recruitment processes.
It is noteworthy that many of the most popular traditional hiring methods have very low accuracy, and attempts to evaluate other people’s job suitability are typically contaminated by the usual stereotypes and prejudices that undermine human objectivity and reduce our ability to understand others. Unsurprisingly, many recruiters see technological innovations, such as machine-learning algorithms, as a big time saver, and emerging academic research suggests that AI can also improve organizations’ ability to accurately predict employee job performance and select the right person for the right role, as well as increase fairness, transparency, and consistency.
However, there is also a great deal of public concern and distrust regarding the use of AI for hiring decisions, particularly compared to traditional hiring methods, just as there is considerable fear that AI will eliminate human jobs or enable governments to create a surveillance state. These worries make it even more crucial to ensure that hiring decisions aided or influenced by AI are a force for good, with benevolent effects on candidates and job seekers.
To be sure, not everything ethical is trustworthy, just as not everything trustworthy is actually ethical (for example, many democratically elected politicians are unethical but trustworthy, a combination that tends to have tragic consequences). But any employer interested in leveraging AI or other emerging technologies to optimize their hiring and recruitment processes should be aware of the rising scrutiny – and underlying public, media, and regulatory skepticism – so as to mitigate the risks and unintended consequences of misusing AI. As a recent academic review summarized: “there is a discrepancy between the enthusiasm about algorithmic decision-making as a panacea for inefficiencies and labor shortages on one hand and the threat of discrimination and unfairness of algorithmic decision-making on the other side”.
Fortunately, the last few years have seen a proliferation of guidelines and frameworks to help organizations make ethical use of AI, including in hiring and staffing decisions. On top of these, there are well-established scientific principles for vetting assessment and selection tools, which remain relevant and valid for inspecting the ethics of AI and any emerging innovation (alongside EEOC and related rules). While the average practitioner may be somewhat perplexed and overwhelmed by this vast literature and volume of recommendations, the good news is that there is a great deal of convergence around the foundational principles for ethical AI in hiring, which can be boiled down to six main points:
(1) Candidate benefit: Are job seekers and candidates better off when this method is used? Here, we want to see evidence that historically overlooked and neglected candidates can be evaluated correctly and have their potential properly examined. Examples of how AI may help include sourcing talent outside the usual pools, and the ability to examine relevant signals of talent while ignoring irrelevant indicators, such as those that typically fuel unfair, prejudiced, and discriminatory decisions (age, gender, race, attractiveness, and social class). Companies can measure the degree to which deploying AI increases representation and contributes to their diversity and inclusion initiatives (see the first sketch after this list).
(2) Informed consent: Are job seekers and candidates entering into an informed and transparent transaction with recruiters and employers, with sufficient understanding that their potential job fit is being evaluated? Informed consent has always been a cornerstone of personnel selection and assessment, as well as of scientific research ethics. In fact, even if there are questions around candidate benefit (point 1), it is reasonable to let candidates and job seekers opt in to the process of being considered for a job or role, but the hope is that they do so only after being fully briefed and informed about the process, so that they can make a rational cost-benefit analysis and, in turn, an intelligent decision.
(3) Improved accuracy: Any new hiring tool or method, including AI, should demonstrate incremental validity over established and proven tools. Ideally, this should take into account not just predictive accuracy (i.e., how well the new screening method predicts desired future outcomes relative to existing, known, and validated tools), but overall utility, which also includes time, cost, and overall ROI. Needless to say, it is not possible to make ethical use of any tool or method (including AI) unless it is accurate. If AI or any other tool fails to accurately evaluate a candidate, or simply provides a less precise measure of a person’s talent or potential, it is reasonable to demand an ethical justification for using it instead of better, more accurate, existing tools. In short, an important ethical requirement is that any new tool reduce measurement error, increase predictive accuracy, and be less biased than the alternatives (see the second sketch after this list).
(4) Explainability: It is not sufficient that AI or any new hiring method accurately predicts an outcome – we also want to explain the nature of that prediction. Indeed, it is always preferable from an ethical standpoint not just to infer that selecting person X increases the probability of outcome Y, but also to enhance our understanding of why this should occur. AI is mostly a pattern-matching machine, with the capability to improve the speed and accuracy of classifying an event as a desired outcome or not. However, pattern-matching alone is insufficient to explain or understand what goes on in a recruitment process. And while AI can learn and improve from data, we should ideally have theory to translate our data into insights (science is always data + theory). For example, an algorithm that merely predicts that certain patterns of language use or word frequency, or certain physical attributes of someone’s voice or speech, are relevant markers of future job fit is ethically less defensible than an algorithm that explains that this specific pattern of communication or speech is indicative of higher anxiety, lower stress-resilience, or antisocial personality traits. As a general rule, explainable AI models are more ethical than black-box models (see the third sketch after this list).
(5) Feedback: If we are able not just to predict someone’s future behavior, but also to understand their talent and potential, we should feel ethically compelled to explain this to them, in order to increase their career self-awareness. In that sense, feedback is a critical ingredient of any ethical use of AI for hiring. Even when we reject certain candidates and job applicants – or perhaps especially when we do – we should help them understand why. Historically, it was very expensive and time-consuming to debrief rejected job applicants, so even from a cost perspective it was simply unfeasible for employers to explain to candidates why they were not selected. While some of this still stands, we can certainly automate the process of debriefing applicants and even pre-hire candidates about the AI portion of the assessment, which may also strengthen their future job searches (see the fourth sketch after this list). We live in a world in which most people are feedback-deprived about their talents and potential, which explains the large number of people who end up in the wrong careers.
(6) Privacy and anonymity: The final principle is usually covered by legal and regulatory frameworks, including cybersecurity laws, but it is also integral to making ethical use of AI or any hiring assessment tool. Candidates’ personal data should be protected and kept confidential, and handled in ways that maximize anonymity (see the final sketch below). The right to privacy is a fundamental human right, and any system or method undermining it is ethically questionable.
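To make point (1) measurable, one common check in US employment practice is the “four-fifths rule”: the selection rate for any group should be at least 80% of the rate for the highest-selected group. The sketch below is a minimal illustration of that check; the group labels and counts are hypothetical, not drawn from any real hiring data.

```python
# Illustrative sketch: checking a screening step (AI-based or not) against
# the "four-fifths rule" for adverse impact. All figures are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who pass the screening step."""
    return selected / applicants

# Hypothetical outcomes of an AI-assisted resume screen, by group.
outcomes = {
    "group_a": {"applicants": 400, "selected": 120},
    "group_b": {"applicants": 250, "selected": 60},
}

rates = {g: selection_rate(o["selected"], o["applicants"]) for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    # The four-fifths rule flags ratios below 0.8 as potential adverse impact.
    flag = "review for adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

Tracking this ratio before and after deploying an AI screen gives a concrete measure of whether the tool is widening or narrowing representation.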
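For point (3), incremental validity can be estimated by comparing the cross-validated accuracy of a model built on established predictors with and without the new AI-derived score. The sketch below uses synthetic data and hypothetical predictor names purely to illustrate the comparison; it is not a claim about any real selection battery.

```python
# Illustrative sketch of an incremental-validity check: does an AI-derived
# score improve prediction of job performance over established predictors?
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical established predictors (e.g., a structured-interview score and
# a cognitive-ability test) plus a new AI-derived score with partly redundant signal.
interview = rng.normal(size=n)
cognitive = rng.normal(size=n)
ai_score = 0.5 * cognitive + rng.normal(scale=0.8, size=n)
performance = 0.4 * interview + 0.3 * cognitive + 0.2 * ai_score + rng.normal(size=n)

baseline = np.column_stack([interview, cognitive])
augmented = np.column_stack([interview, cognitive, ai_score])

r2_baseline = cross_val_score(LinearRegression(), baseline, performance, cv=5, scoring="r2").mean()
r2_augmented = cross_val_score(LinearRegression(), augmented, performance, cv=5, scoring="r2").mean()

# Incremental validity here is the gain in cross-validated R^2 from adding the AI score.
print(f"baseline R^2:  {r2_baseline:.3f}")
print(f"augmented R^2: {r2_augmented:.3f}")
print(f"incremental validity (delta R^2): {r2_augmented - r2_baseline:.3f}")
```

If the delta is negligible, the ethical case for the new tool has to rest on something other than accuracy, such as cost or speed, and that case should be made explicitly.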
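For point (4), one way to favor explainability is to use models whose weights can be read directly, so recruiters can verify that irrelevant signals carry little weight in a recommendation. The sketch below illustrates the idea with a simple logistic regression on synthetic data; the feature names are hypothetical.

```python
# Illustrative sketch of the explainability contrast: an interpretable model
# exposes which inputs drive a hiring recommendation, while a black box only
# emits a score. Features and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
features = ["structured_interview", "work_sample", "speech_pace"]  # hypothetical
X = rng.normal(size=(n, 3))
# Synthetic ground truth: only the first two features actually matter for job fit.
y = (0.9 * X[:, 0] + 0.7 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# An interpretable model lets us state *why* a prediction was made:
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
# A near-zero weight on 'speech_pace' is evidence the model is not leaning
# on an irrelevant (and potentially biased) signal.
```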
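For point (5), even a simple templated debrief can be generated automatically from assessment scores, which removes most of the historical cost of giving rejected candidates feedback. The sketch below is deliberately minimal; the score dimensions, thresholds, and wording are all hypothetical.

```python
# Illustrative sketch of automating candidate feedback after an AI-assisted
# assessment: scores are turned into a short, plain-language debrief.

FEEDBACK_TEMPLATES = {
    "high": "was a clear strength in your application",
    "mid": "was in line with the typical applicant pool",
    "low": "is an area where further development could strengthen future applications",
}

def band(score: float) -> str:
    """Map a 0-1 score onto a feedback band (thresholds are placeholders)."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "mid"
    return "low"

def build_debrief(candidate_name: str, scores: dict) -> str:
    lines = [f"Dear {candidate_name}, thank you for completing the assessment."]
    for dimension, score in scores.items():
        lines.append(f"- Your {dimension} result {FEEDBACK_TEMPLATES[band(score)]}.")
    return "\n".join(lines)

print(build_debrief("Jane", {"numerical reasoning": 0.82, "situational judgment": 0.35}))
```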
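Finally, for point (6), a basic building block is pseudonymization: replacing direct identifiers with a keyed hash before records enter the scoring pipeline, so results can be re-linked only by whoever holds the key. The sketch below is illustrative; the field names and key handling are assumptions, and any real deployment should follow applicable data-protection law.

```python
# Illustrative sketch of pseudonymizing candidate records before they reach
# an AI scoring pipeline: direct identifiers are replaced with a salted hash.
import hashlib
import hmac

SECRET_SALT = b"store-and-rotate-this-in-a-secrets-manager"  # placeholder key

def pseudonymize(candidate_id: str) -> str:
    """Deterministic, non-reversible token for linking assessment records."""
    return hmac.new(SECRET_SALT, candidate_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {
    "candidate_id": "jane.doe@example.com",
    "name": "Jane Doe",
    "assessment_scores": {"numerical": 0.82, "situational": 0.74},
}

# Strip direct identifiers; keep only the token and the assessment data.
anonymized = {
    "token": pseudonymize(record["candidate_id"]),
    "assessment_scores": record["assessment_scores"],
}
print(anonymized)
```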
Finally, we should remember that for all the talk of “ethical AI” (or “unethical AI”), AI, just like any other technology, can never itself be ethical or unethical. It is only the humans who deploy it who can be deemed ethical or unethical, and who ought to be held accountable – just as they are capable of acting ethically or unethically without any help from AI.