Reducing bias and the resulting adverse impact in candidate selection and hiring has been an ongoing challenge for talent acquisition professionals. With advances in artificial intelligence (AI), recruiting teams can gain unprecedented insight into their hiring process and reduce or eliminate adverse impact.
What is adverse impact in hiring?
In general terms, adverse impact is an unintended negative outcome of a seemingly neutral practice. The Society for Human Resource Management (SHRM) describes adverse impact in the context of hiring this way:
“Adverse impact refers to employment practices that appear neutral but have a discriminatory effect on a protected group. Adverse impact may occur in hiring, promotion, training and development, transfer, layoff, and even performance appraisals. It may be found in an overall procedure or in any step in the overall procedure.”
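In practice, adverse impact is often screened for with the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: if a group's selection rate is less than 80% of the highest group's rate, adverse impact may be indicated. A minimal sketch (the group names and counts below are hypothetical, for illustration only):

```python
def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(group_rates):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate (the EEOC four-fifths heuristic)."""
    top = max(group_rates.values())
    return {group: rate / top < 0.8 for group, rate in group_rates.items()}

# Hypothetical applicant pools
rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}
flags = four_fifths_check(rates)
# group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged
```

The four-fifths rule is a screening heuristic, not a legal verdict; flagged processes warrant closer statistical and procedural review.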
How are protected groups defined?
Protected groups in the United States are defined by legislation starting with Title VII of the Civil Rights Act of 1964 and continuing through the most recent legislation, the Americans with Disabilities Act Amendments Act of 2008. Under federal law, employers cannot discriminate on the basis of race, color, national origin, religion, sex, age, or disability. These are protected groups or classes.
How can the use of AI reduce adverse impact in hiring?
When AI technologies are applied to scientifically and legally designed pre-hire assessments, AI can effectively eliminate both protected-class adverse impact and other types of everyday human decision-making bias in the candidate selection process.
There are two vital aspects to this. The first is the design and validation of the pre-hire assessments themselves. To eliminate bias, assessments must be built on informed and exacting methodology. They must be validated by multiple studies to ensure they accurately and fairly predict candidates’ performance on the job.
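Validation studies of this kind commonly report a criterion-related validity coefficient: the correlation between assessment scores and later job-performance ratings. A minimal sketch (the scores and ratings below are hypothetical):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation, a common validity coefficient relating
    assessment scores to a job-performance criterion."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: pre-hire assessment scores vs. on-the-job ratings
scores = [62, 75, 80, 90, 55]
ratings = [3.1, 3.6, 3.9, 4.4, 2.8]
validity = pearson(scores, ratings)
# A coefficient near 1.0 indicates the assessment strongly predicts
# the performance criterion; values near 0 indicate no predictive value
```

A study would compute this on a much larger sample and pair it with fairness analyses across groups; the point here is only what "validated to predict performance" means operationally.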
Part and parcel of this is the type of data the assessments collect and analyze. Pre-hire assessments that focus on the core competencies shown to be drivers of performance for a specific role can accurately predict candidates’ performance in an unbiased manner.
Second, the assessments must be thoroughly documented to ensure defensibility. Too often, with today’s AI-enabled hiring technology, recruiters and TA leaders can’t see or understand how the algorithms are making decisions. Modern Hire takes a different approach that provides visibility into how data is collected and used, so recruiting teams can understand and explain how their selection process reduces or eliminates adverse impact.
Are there applications where AI doesn’t reduce adverse impact?
Yes. Specific uses of AI, such as evaluating facial features or scraping social media profiles, are unreliable and potentially unfair, not to mention invasive for candidates. TA leaders should look for the validation, documentation, and defensibility discussed above to ensure the pre-hire assessments they deploy will indeed reduce adverse impact. They should also examine the AI technology provider’s position on the ethical use of AI in hiring. As AI technologies advance, it is incumbent upon the organizations creating and using them to define and adhere to principles guiding the ethical use of AI in hiring. View Modern Hire’s Standards for the Ethical Application of AI in Hiring.
What else is essential to know about adverse impact in hiring?
The use of AI in validated pre-hire assessments can help recruiting teams adhere to the letter of employment discrimination laws and the spirit of eliminating bias and adverse impact in hiring. For instance, military veterans are not currently a protected class under federal law. However, research exploring the barriers veterans face during the selection process demonstrates that stereotypes about veterans may increase their selection for certain types of positions, such as field technician jobs, and decrease it for others, such as customer service jobs. Read more about this study. Scientifically designed and validated pre-hire assessments that use AI may help combat these stereotypes’ influence on the rate at which veterans are hired. When data inform decision-making, bias and adverse impact in hiring can be reduced.