By AI Trends Staff
While AI in hiring is now widely used for writing job descriptions, screening candidates, and automating interviews, it poses a risk of widespread discrimination if not implemented carefully.

That was the message from Keith Sonderling, Commissioner with the US Equal Employment Opportunity Commission, speaking at the AI World Government event held live and virtually in Alexandria, Va., last week. Sonderling is responsible for enforcing federal laws that prohibit discrimination against job applicants because of race, color, religion, sex, national origin, age, or disability.
"The thought that AI would become mainstream in HR departments was closer to science fiction two years ago, but the pandemic has accelerated the rate at which AI is being used by employers," he said. "Virtual recruiting is now here to stay."
It's a busy time for HR professionals. "The great resignation is leading to the great rehiring, and AI will play a role in that like we have not seen before," Sonderling said.
AI has been employed for years in hiring ("It did not happen overnight," he said) for tasks including chatting with applicants, predicting whether a candidate would take the job, projecting what type of employee they would be, and mapping out upskilling and reskilling opportunities. "In short, AI is now making all the decisions once made by HR personnel," he said, a development he did not characterize as good or bad.
"Carefully designed and properly used, AI has the potential to make the workplace more fair," Sonderling said. "But carelessly implemented, AI could discriminate on a scale we have never seen before by an HR professional."
Training Datasets for AI Models Used for Hiring Need to Reflect Diversity
This is because AI models rely on training data. If the company's current workforce is used as the basis for training, "it will replicate the status quo. If it's one gender or one race primarily, it will replicate that," he said. Conversely, AI can help mitigate the risks of hiring bias by race, ethnic background, or disability status. "I want to see AI improve on workplace discrimination," he said.
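The dynamic Sonderling describes can be seen in a toy sketch. The numbers below are invented for illustration, not drawn from the article: a model fit only on a company's historical hiring decisions simply learns the historical selection rate for each group, so a skewed past becomes a skewed policy.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired) pairs.
# The 80%-vs-30% skew is invented purely for illustration.
history = ([("M", True)] * 80 + [("M", False)] * 20 +
           [("F", True)] * 30 + [("F", False)] * 70)

def fit_hire_rates(records):
    """A stand-in for any model trained on past outcomes: it
    learns the observed hire rate for each group in the data."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

model = fit_hire_rates(history)
print(model)  # the learned "policy" mirrors the skewed past: {'M': 0.8, 'F': 0.3}
```

Nothing in the fitting step distinguishes a legitimate signal from a historical bias; both are just patterns in the outcomes.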
Amazon began building a hiring application in 2014 and found over time that it discriminated against women in its recommendations, because the AI model was trained on a dataset of the company's own hiring records from the previous 10 years, which were predominantly male. Amazon developers tried to correct the bias but ultimately scrapped the system in 2017.
Facebook recently agreed to pay $14.25 million to settle civil claims by the US government that the social media company discriminated against American workers and violated federal recruitment rules, according to an account from Reuters. The case centered on Facebook's use of what it called its PERM program for labor certification. The government found that Facebook refused to hire American workers for jobs that had been reserved for temporary visa holders under the PERM program.
"Excluding people from the hiring pool is a violation," Sonderling said. If the AI program "withholds the existence of the job opportunity to that class, so they cannot exercise their rights, or if it downgrades a protected class, it is within our domain," he said.
Employment assessments, which became more common after World War II, have provided high value to HR managers, and with help from AI they have the potential to minimize bias in hiring. "At the same time, they are vulnerable to claims of discrimination, so employers need to be careful and cannot take a hands-off approach," Sonderling said. "Inaccurate data will amplify bias in decision-making. Employers must be vigilant against discriminatory outcomes."
He recommended researching solutions from vendors who vet data for risks of bias on the basis of race, sex, and other factors.
One example is from HireVue of South Jordan, Utah, which has built a hiring platform predicated on the US Equal Employment Opportunity Commission's Uniform Guidelines, designed specifically to mitigate unfair hiring practices, according to an account from allWork.
A post on AI ethical principles on its website states in part, "Because HireVue uses AI technology in our products, we actively work to prevent the introduction or propagation of bias against any group or individual. We will continue to carefully review the datasets we use in our work and ensure that they are as accurate and diverse as possible. We also continue to advance our abilities to monitor, detect, and mitigate bias. We strive to build teams from diverse backgrounds with diverse knowledge, experiences, and perspectives to best represent the people our systems serve."
Also, "Our data scientists and IO psychologists build HireVue Assessment algorithms in a way that removes data from consideration by the algorithm that contributes to adverse impact without significantly impacting the assessment's predictive accuracy. The result is a highly valid, bias-mitigated assessment that helps to enhance human decision making while actively promoting diversity and equal opportunity regardless of gender, ethnicity, age, or disability status."
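The Uniform Guidelines referenced above include the "four-fifths rule": a selection rate for any group that is less than 80 percent of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. A minimal sketch of that screening check follows; the function name and sample rates are illustrative, not taken from HireVue's product.

```python
def adverse_impact_flags(selection_rates, threshold=0.8):
    """Apply the four-fifths rule: flag any group whose selection
    rate falls below `threshold` times the highest group's rate."""
    top = max(selection_rates.values())
    return {group: (rate / top) < threshold
            for group, rate in selection_rates.items()}

# Illustrative selection rates (fraction of applicants advanced, per group).
rates = {"group_a": 0.50, "group_b": 0.35}
flags = adverse_impact_flags(rates)
print(flags)  # {'group_a': False, 'group_b': True} -- 0.35/0.50 = 0.7 < 0.8
```

A flag under this rule is a screening signal rather than a legal finding, but running it over a model's outputs is one concrete way an employer or vendor can monitor for the discriminatory outcomes Sonderling warns about.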

The issue of bias in datasets used to train AI models is not confined to hiring. Dr. Ed Ikeguchi, CEO of AiCure, an AI analytics company working in the life sciences industry, stated in a recent account in HealthcareITNews, "AI is only as strong as the data it's fed, and lately that data backbone's credibility is being increasingly called into question. Today's AI developers lack access to large, diverse data sets on which to train and validate new tools."
He added, "They often need to leverage open-source datasets, but many of these were trained using computer programmer volunteers, which is a predominantly white population. Because algorithms are often trained on single-origin data samples with limited diversity, when applied in real-world scenarios to a broader population of different races, genders, ages, and more, tech that appeared highly accurate in research may prove unreliable."
Also, "There needs to be an element of governance and peer review for all algorithms, as even the most solid and tested algorithm is bound to have unexpected results arise. An algorithm is never done learning; it must be constantly developed and fed more data to improve."
And, "As an industry, we need to become more skeptical of AI's conclusions and encourage transparency in the industry. Companies should readily answer basic questions, such as 'How was the algorithm trained? On what basis did it draw this conclusion?'"
Read the source articles and information at AI World Government, from Reuters and from HealthcareITNews.