Use of Artificial Intelligence in Recruitment Tools and Singapore's Workplace Fairness Act
This publication is issued by K&L Gates Straits Law LLC, a Singapore law firm with full Singapore law and representation capacity, and to which any Singapore law queries should be addressed. K&L Gates Straits Law is the Singapore office of K&L Gates, a fully integrated global law firm with lawyers located on four continents.
Artificial intelligence (AI) and machine learning have developed rapidly over the past few years, and their use has increased exponentially. Businesses are now more ready to tap these technologies with a view to augmenting existing processes and automating mechanical tasks.
In Singapore, the Economic Development Board has supported this shift, in line with the goal of making Singapore a heavily AI-powered economy. However, the use of AI in functions that traditionally required human input raises various practical concerns, including the potential for discrimination. In Singapore, these concerns are particularly pertinent in the employment law context following the passing of the Workplace Fairness Act 2025 (WFA), which codifies the requirement that employers make employment decisions fairly and without engaging in discriminatory behaviour. To comply with the WFA, Singapore employers must take steps to mitigate the risks posed by AI before deploying it in their employment decision processes.
This article provides an overview of the discrimination concerns posed by AI recruitment tools and highlights why Singapore employers looking to deploy AI in their recruitment or employment decision-making processes should only do so with the requirements under the WFA in mind. The article also discusses some safeguards that Singapore employers can consider implementing to strengthen their compliance with the WFA.
Problems with AI
Under Singapore law, there is no statutory definition of “artificial intelligence”. However, AI can generally be understood as any combination of machine capabilities aimed at approximating some aspect of autonomous human cognition.
One common difficulty with most current AI tools is the lack of transparency as to what happens between an input and an output. Because the internal workings of such a system are usually not fully traceable, the system is something of a “black box”, and it is difficult to explain with any measure of specificity how a given output was derived.
Modern AI systems are typically trained on large data sets, and an AI system’s ultimate output is generally produced based on weightings and predictions that the system develops for itself. While developers can implement rules and set a lawful objective for an AI system, flaws observed in existing systems have included the system adopting a potentially “unlawful” approach to generating outputs. Often, even the developer cannot fully pinpoint how the system will actually generate an output for a particular input.
Connected to this “explainability” problem is the risk of discrimination by AI. AI developers typically try to implement rules to prevent a system from behaving in a discriminatory way, but the difficulty of tracing and explaining the exact weightings behind an output makes inadvertent discrimination harder to prevent. Computers and machine learning are not discriminatory in themselves, but they are trained on data sets that may contain hidden biases. When the algorithms of an AI system engage in machine learning, they develop their own processing patterns, which may inadvertently reinforce biases contained in historical data sets.

For example, consider an AI system built to filter resumes for a software engineer role that is trained on the resumes of existing software engineers, and assume that the pool of existing software engineers happens to consist largely of males. The system may infer from the data that successful candidates are those whose resumes contain key phrases that happen to be more common among male applicants (e.g., boys’ swim team or boys’ school) and apply this as a criterion for identifying “suitable” candidates. Female candidates whose resumes lack those phrases may then be filtered out, which would clearly be discriminatory, as illustrated in the sketch below. Care is needed when deploying AI tools to ensure this kind of inadvertent discrimination does not occur.
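To make the mechanism concrete, the following is a minimal, hypothetical sketch in Python using synthetic data and scikit-learn. It does not represent any real recruitment tool; the feature names, data, and model are invented purely to show how a screener trained on a historically skewed applicant pool can learn to reward a phrase that acts as a proxy for gender.

```python
# Hypothetical sketch: a resume screener trained on biased historical
# hiring data learns a gendered proxy feature. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Historical applicant pool happens to be ~80% male.
is_male = rng.random(n) < 0.8
years_experience = rng.normal(5.0, 2.0, n)

# Proxy phrase (e.g., "boys' swim team") appears mostly on male resumes.
mentions_boys_team = (is_male & (rng.random(n) < 0.6)).astype(float)

# Past hiring was itself skewed: at equal experience, male applicants
# were more likely to be hired, so the historical outcome encodes bias.
score = years_experience + 1.5 * is_male + rng.normal(0.0, 1.0, n)
hired = (score > 6.0).astype(int)

# The screener never sees gender directly, only the two resume features.
X = np.column_stack([years_experience, mentions_boys_team])
model = LogisticRegression().fit(X, hired)

print("learned weights:",
      dict(zip(["experience", "boys_team_phrase"], model.coef_[0].round(2))))

# Two candidates with identical experience; only the proxy phrase differs.
with_phrase, without_phrase = [5.0, 1.0], [5.0, 0.0]
p_with, p_without = model.predict_proba([with_phrase, without_phrase])[:, 1]
print(f"shortlist probability with phrase:    {p_with:.2f}")
print(f"shortlist probability without phrase: {p_without:.2f}")
# The phrase carries positive weight even though it is job-irrelevant,
# so otherwise-identical female candidates are scored lower.
```

Nothing in this code mentions gender; the model “discovers” the proxy phrase only because the historical outcomes it was trained on were themselves skewed, which is precisely how inadvertent discrimination arises.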
WFA
Businesses looking to introduce AI into their processes often start by implementing AI in their human resources functions. While there have been no publicly reported cases of such use in Singapore, the deployment of AI in recruitment functions such as resume screening is becoming more common globally. In other jurisdictions, this has involved training AI models to review and filter job applicants’ resumes by observing patterns in the resumes of historically successful applicants and identifying current applicants who are likely to be successful based on those patterns, similar to the example above.
The WFA, which has been passed by Parliament but is only expected to come into force in 2026 or 2027, provides that it would generally be discriminatory for an employer to make an employment decision that adversely affects an individual on the ground of a protected characteristic. “Employment decisions” are statutorily defined to cover all decisions made by an employer during the hiring, employment, and dismissal or termination stages of an employee’s employment. The “protected characteristics” under the WFA are the following:
- Age
- Nationality
- Sex
- Marital status
- Pregnancy
- Caregiving responsibilities
- Race
- Religion
- Language ability
- Disability
- Mental health condition
Where an apparent case of discrimination is made out under the WFA, it would fall to the employer to show that there was no discrimination or that one of the statutory exceptions applies. This means that the obligation to eliminate bias in employment decisions, and to prove the absence of bias, has now been codified into law. Previously, there was no such legal obligation; the requirement that employers make employment decisions fairly was set out only in the Tripartite Guidelines on Fair Employment Practices, which do not have the force of law and instead operate on a “comply-or-explain” basis.
Under the WFA, employers using AI in their employment processes would need to ensure that their AI systems produce outputs that are traceable and that can be reviewed and sufficiently checked by the employer before any employment decision is taken. Too much dependence on an AI model in employment decisions could create issues for employers, given the potential for unexplainable reasoning and discriminatory outcomes outlined above. As the WFA makes it the employer’s responsibility to ensure the fairness of an employment decision regardless of the tools or methods used to arrive at it, the use of AI in employment processes must be approached with caution. At this stage, the ultimate decision should still involve human oversight or review, with the AI system serving merely as a guide for the human decision makers to consider; one possible way to structure such a review step is sketched below.
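By way of illustration only, the following Python sketch shows one possible way to structure a human-in-the-loop review step. The WFA does not prescribe any particular technical design; the record structure, field names, and validation here are assumptions made for the example. The point is that the AI output is advisory, the final decision is made and reasoned by a named human reviewer, and both are retained so the employer can later explain the decision.

```python
# Hypothetical sketch of a human-in-the-loop screening record: the AI
# score is advisory context only; the decision, the reviewer, and the
# reviewer's job-related reasons are logged for later explanation.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ScreeningRecord:
    candidate_id: str
    ai_score: float                     # advisory model output, never final
    ai_flags: list[str]                 # factors the model surfaced for review
    reviewer: str | None = None
    final_decision: str | None = None   # set only by a human
    human_reasons: str | None = None
    decided_at: str | None = None

def record_human_decision(rec: ScreeningRecord, reviewer: str,
                          decision: str, reasons: str) -> ScreeningRecord:
    # The human decision is authoritative and must carry its own reasons;
    # the AI score is kept purely so the decision trail remains auditable.
    if not reasons.strip():
        raise ValueError("A reviewer must record job-related reasons.")
    rec.reviewer = reviewer
    rec.final_decision = decision
    rec.human_reasons = reasons
    rec.decided_at = datetime.now(timezone.utc).isoformat()
    return rec

# Usage: the model only shortlists; a named reviewer decides and logs why.
rec = ScreeningRecord(candidate_id="C-1042", ai_score=0.73,
                      ai_flags=["meets stated experience threshold"])
record_human_decision(rec, reviewer="hr_manager_01",
                      decision="advance_to_interview",
                      reasons="Five years of relevant experience; "
                              "meets the published job criteria.")
print(rec)
```

A design along these lines keeps the AI system in a purely assistive role while producing the kind of contemporaneous, human-authored record an employer may need in order to demonstrate that an employment decision was made fairly.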
What We Can Do to Assist
We can assist with reviewing proposed AI tools that employers may be looking to test or deploy in their Singapore recruitment processes, and advise on the safeguards and best practices employers should implement to ensure compliance with the WFA ahead of its commencement.
This publication/newsletter is for informational purposes and does not contain or convey legal advice. The information herein should not be used or relied upon in regard to any particular facts or circumstances without first consulting a lawyer. Any views expressed herein are those of the author(s) and not necessarily those of the law firm's clients.