When technology aids in personnel selection
“Reduce 500 or more applications to a top 10 selection in minutes!” This promise should sound enticing to recruiters. The question, however, is how effective such software tools actually are, how fair the rankings they propose are, and whether diversity, a goal to which many companies have now committed themselves, is actually taken into account. Prof. Dr. Helena Mihaljevic is an expert in machine learning. She has long studied technology-supported personnel selection, most recently in the context of the “FairTecHR” project. Her sobering conclusion: “There are currently no explicit legal or practical standards for fairness assessments, and a comprehensive evaluation of the discriminatory potential of these technologies is still lacking.” In this interview, she discusses the creation of a data trust model on the basis of which the overdue standards could be developed.
How widespread is the use of technology in personnel selection?
Prof. Dr. Helena Mihaljevic: There are no precise surveys on how many companies use such technologies or which ones they prefer. However, the number of tools on the market is growing constantly. Many are used to make the selection process more efficient. Others enrich existing information in a targeted manner, for instance to create psychological profiles. Still others help to make the pool of applicants more diverse by supporting recruiters in writing job advertisements in such a way that people of all genders feel addressed and no hidden age discrimination creeps in.
In which contexts are the tools used?
They can be used at all stages of the selection process. I have already mentioned the tools that aim to make job advertisements more inclusive. AI-supported technologies can also be used for pre-selection: some tools search CVs for skills, degrees and certifications and then create rankings, while others make degrees comparable, which is important for internationally operating companies. AI-supported technologies can also help to put applicants’ work experience into perspective and compensate for social disadvantages, such as the career breaks caused by parental leave, which still predominantly affect women. For certain positions, psychological AI tools are used as well. These are supposed to encode a person’s character so that their soft skills match those of the existing team as closely as possible.
How effective are the tools?
Some studies have shown that AI-supported HR technologies can reinforce existing social inequalities and thus run counter to self-imposed diversity goals. However, it is difficult for HR departments to assess this in advance: when procuring the software, they usually depend on information from sales meetings. Unfortunately, there is no equivalent of the “Stiftung Warentest” product-testing foundation for these tools (laughs), though of course you cannot compare personnel selection technologies to hoovers... The fact is that fairness evaluations are difficult to implement in practice due to legal, conceptual and organisational challenges, even for the manufacturers of such software. We realised this during several focus group discussions in our project “Fairness auditing in technology-supported personnel selection”, or “FairTecHR” for short.
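To make concrete what such a fairness evaluation might check, here is a minimal sketch of one widely used criterion, the “four-fifths rule” on selection rates across groups. The group labels, data and threshold are illustrative assumptions, not results or methods from the FairTecHR project:

```python
from collections import defaultdict

def selection_rates(candidates, selected):
    """Selection rate per group, given all candidates and the shortlisted IDs."""
    totals, picks = defaultdict(int), defaultdict(int)
    for cand_id, group in candidates:
        totals[group] += 1
        if cand_id in selected:
            picks[group] += 1
    return {g: picks[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag whether each group's selection rate reaches 80% of the best group's rate."""
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Illustrative data: six applicants in two groups; the tool shortlists three.
candidates = [(1, "A"), (2, "A"), (3, "A"), (4, "B"), (5, "B"), (6, "B")]
selected = {1, 2, 4}

rates = selection_rates(candidates, selected)
print(rates)                     # {'A': 0.666..., 'B': 0.333...}
print(four_fifths_check(rates))  # {'A': True, 'B': False}
```

Even a computation this simple runs into the obstacles mentioned above: the protected group attributes it requires are often not lawfully available to the software provider at all.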
How transparent are the technologies?
Many technologies are developed by commercial providers who keep their cards close to their chests. At the same time, more and more start-ups on the market are incorporating scientific findings, e.g. from psychology and economics, into their tools. Many of these are ethically minded and make every effort to ensure scientific scrutiny and transparency. This is a clear trend.
But they all face the same problem: there are currently no explicit legal or practical standards for fairness assessment, and there is no comprehensive evaluation of discriminatory potential either. The current draft of the European Union’s Artificial Intelligence Act does categorise numerous applications in personnel selection as high-risk technologies and calls for high quality standards to protect the fundamental rights of those affected, particularly during development. However, the draft law gives no concrete indication of how this is to be ensured, especially once the technologies are commercially available.
How could greater transparency be achieved in terms of fairness?
I understand the EU’s AI draft to mean that companies should be required to apply best practice in the development phase. But that is insufficient. In the past, many large companies with all the necessary resources have developed products with substantial weaknesses in terms of fairness. If it had been that easy to avoid the mistakes, they would have done so.
In my opinion, it is up to the legislator to set binding standards in this area. The basis for evaluating the technologies could be a kind of data cooperative or data trust model, the structures of which we outlined in the “FairTecHR” project. The data relevant for fairness analyses, such as gender, age and migration background, would be managed and stored independently and on a fiduciary basis. Technology providers and their customers, as well as advocacy groups and the scientific community, should all be involved in the professional review of the tools.
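To illustrate the fiduciary arrangement just described, here is a minimal sketch of how a data trustee could mediate a fairness audit. Every class and method name is a hypothetical assumption for illustration, not a specification from the project:

```python
from collections import defaultdict

class DataTrustee:
    """Hypothetical trustee that holds protected attributes on a fiduciary basis."""

    def __init__(self):
        # pseudonym -> protected attributes; visible to the trustee only
        self._attributes = {}

    def register(self, pseudonym, gender, age_band, migration_background):
        """Applicants deposit sensitive data with the trustee, not with the employer."""
        self._attributes[pseudonym] = {
            "gender": gender,
            "age_band": age_band,
            "migration_background": migration_background,
        }

    def audit(self, outcomes, attribute, min_group_size=5):
        """Return selection rates per group of `attribute`.

        `outcomes` maps pseudonym -> bool (shortlisted or not), as reported
        by the provider. Groups below `min_group_size` are suppressed so
        that individuals cannot be re-identified from the aggregates.
        """
        totals, picks = defaultdict(int), defaultdict(int)
        for pseudonym, shortlisted in outcomes.items():
            attrs = self._attributes.get(pseudonym)
            if attrs is None:
                continue  # no attribute data deposited for this pseudonym
            group = attrs[attribute]
            totals[group] += 1
            picks[group] += int(shortlisted)
        return {g: picks[g] / totals[g]
                for g in totals if totals[g] >= min_group_size}
```

The design choice mirrors the model outlined above: neither the provider nor the employer ever sees the protected attributes; only group-level aggregates, with small groups suppressed, leave the trustee.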
Incidentally, I am quite confident that things will move in this direction. For instance, I regard the latest call for proposals from the Federal Ministry of Education and Research in this area as a positive signal. Perhaps we will see some decent prototypes from German developers in the next five years.