Researchers say AI hiring systems discriminate against Anglo-Saxon names

Artificial intelligence is changing how companies work, and some of the practices it affects have become hot topics among ethicists, who warn that algorithms have limitations and can produce harmful results if left unchecked.
One of these controversial areas is hiring and recruitment within organizations. Recent studies have shown that although AI is a promising tool for increasing efficiency and objectivity, it can also create new forms of discrimination and perpetuate existing prejudices.
Study finds AI systems discriminate against candidates because of their names
Research recently conducted at the KTH Royal Institute of Technology in Stockholm revealed some surprising patterns in how AI is used. Contrary to expectations, the study, led by Celeste De Nadai, found that current models harbor unexpected and clear biases that surface when evaluating and selecting candidates.
The research project examined the output of several LLMs, including Google's Gemini-1.5-Flash, Mistral AI's open-mistral-nemo-2407, and OpenAI's GPT-4o-mini.
The researchers found that candidates with Anglo-Saxon names received lower ratings than the rest when being evaluated for a software engineering role.
The study was rigorous, comprising 4,800 inferences, that is, requests sent to a model for a definitive answer. The researchers also varied the models' temperature settings and included 200 different candidates, split equally between men and women and grouped into four different cultural groups.
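For context, the scale of such an evaluation can be sketched as a simple request grid. The model list below matches the article, but the temperature values, repeat count, and group labels are assumptions chosen only so the totals match the reported figures (4,800 inferences, 200 candidates), not the study's actual design:

```python
import itertools

# Hypothetical reconstruction of the experimental grid. Temperatures,
# repeat count, and group labels are illustrative assumptions.
MODELS = ["gemini-1.5-flash", "open-mistral-nemo-2407", "gpt-4o-mini"]
TEMPERATURES = [0.0, 1.0]   # assumed settings
REPEATS = 4                 # assumed repeat count

GENDERS = ["male", "female"]
GROUPS = ["group-1", "group-2", "group-3", "group-4"]  # placeholder cultural groups

# 200 candidates split evenly across gender and the four groups
candidates = [
    {"id": i, "gender": GENDERS[i % 2], "group": GROUPS[(i // 2) % 4]}
    for i in range(200)
]

# One inference request per (candidate, model, temperature, repeat)
requests = [
    {"candidate": c["id"], "model": m, "temperature": t, "repeat": r}
    for c, m, t, r in itertools.product(
        candidates, MODELS, TEMPERATURES, range(REPEATS)
    )
]

print(len(requests))  # 200 * 3 * 2 * 4 = 4800
```

In a setup like this, each request would carry an otherwise identical CV with only the candidate's name varied, so that any difference in ratings can be attributed to the name alone.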
De Nadai suggests that this bias against men with Anglo-Saxon names may be an overcorrection for the prejudices identified in previous studies, although she admits it is too early to know whether that is the exact mechanism at work.
A growing share of companies adopt AI for hiring purposes
Organizations' embrace of AI for hiring has already gone far: data from a recent IBM survey found that 42% of companies surveyed worldwide already screen candidates using AI technology, while a further 40% are actively considering implementing AI-powered hiring tools.
The tools used in this process vary widely, ranging from résumé scanners to generated screening tests.
Organizations hope to reduce human bias in the recruitment process as much as possible. However, the evidence now indicates that these systems may actually create new biases or amplify the existing ones.
Hilke Schellmann, assistant professor at New York University and author of the book "The Algorithm: How AI Can Hijack Your Career and Steal Your Future," argues that the greatest danger these tools pose is not job displacement, but rather preventing qualified candidates from securing positions in the first place.
A study based on interviews with 22 different professionals in talent acquisition and human-resources management identified two prevailing biases: "stereotype bias" and "similar-to-me bias."
These two biases may have leaked into current AI models; now embedded in the decision-making process, they can create a vicious cycle that is difficult to break.
The problem is compounded when AI systems are asked to draw inferences about certain variables.
For example, if experience is treated as a positive factor, candidates with more years in a specific field or role may be preferred, even though the quality of that experience may be poorer than that of candidates with fewer years in the field but a richer background.
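This failure mode can be made concrete with a toy scorer. All names, numbers, and weights below are illustrative inventions, not taken from the study:

```python
# Toy illustration: a scorer that treats raw years of experience as the
# positive signal will always prefer the longer career, even when a
# shorter one is substantially richer.
def naive_score(candidate):
    # the quality of the experience is ignored entirely
    return candidate["years_experience"]

def quality_aware_score(candidate):
    # hypothetical weighting that also values depth/breadth of experience
    return candidate["years_experience"] * candidate["experience_quality"]

long_shallow = {"years_experience": 10, "experience_quality": 0.4}
short_rich = {"years_experience": 4, "experience_quality": 1.5}

print(naive_score(long_shallow) > naive_score(short_rich))                   # True
print(quality_aware_score(short_rich) > quality_aware_score(long_shallow))  # True
```

The two scorers rank the same pair of candidates in opposite orders, which is exactly the kind of inversion that goes unnoticed when a model's inferred proxy (years) stands in for the thing actually wanted (quality).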
Four steps to reduce AI bias in hiring
Addressing AI bias will be critical to ensuring the technology is implemented fairly and effectively. Achieving this, however, is no easy task.
Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute, University of Oxford, stresses that unbiased AI is not only a moral imperative but also economically beneficial.
Wachter has developed a tool called the Conditional Demographic Disparity test, which aims to audit AI models and identify existing biases. A handful of large companies, including Amazon (AMZN) and IBM, already use the test.
"There is a very clear opportunity to apply artificial intelligence in a way that makes decisions fairer and more equitable, and that also increases the company's bottom line," she says.
Wachter and other researchers offer the following recommendations to help companies ensure their algorithms are free of bias, or at least that any prevailing biases are kept to a minimum.
- HR training: Organizations need to implement structured training programs for human-resources professionals focused on building AI literacy. This training should cover the fundamentals of artificial intelligence, bias identification, and mitigation strategies.
- Close collaboration between HR professionals and AI developers: Companies should create integrated teams that include both HR and AI specialists to bridge communication gaps and align their efforts.
- Using more specific datasets: Developing culturally relevant datasets is vital to reducing bias in AI systems. This requires a careful process of assembling diverse, representative data that can help create fairer hiring practices.
- Developing ethical standards for AI hiring: There is an urgent need for comprehensive guidelines and ethical standards governing the use of AI in hiring. These should promote transparency and accountability in AI-driven decision-making.
Survey finds candidates see AI as more biased than humans
A 2023 survey by the American Staffing Association found that 43% of respondents believe AI can be more biased than humans. This perception highlights growing unease among potential workers about the AI tools used to assess their skills and suitability for roles.
Moreover, the United States Equal Employment Opportunity Commission has acknowledged the problem, including the impact of AI on hiring as a key issue in its four-year Strategic Enforcement Plan.
Regulatory attention is needed to ensure companies are aware of, and ready to make, the changes required to guarantee fair employment opportunities for all candidates. Companies are responsible for ensuring that AI hiring tools promote, rather than obstruct, diversity and inclusion in the workplace.
As organizations continue to integrate AI into their hiring processes, they should implement solutions that enhance current practices rather than deepen existing problems such as ethnic or cultural bias. Addressing these challenges will be critical to broader adoption of these tools and to increased productivity in talent-acquisition departments.