This article was originally published in Law360 on September 6, 2024, and is republished here with permission.
The integration of artificial intelligence into the workplace has sparked a flurry of legal and regulatory discussions in recent months.
Judges are instituting bans and other regulations on the use of AI in courts. States are passing laws designed to curb or control the use of AI in employment-related policies and decision making. And employers are grappling with how existing employment laws apply to a rapidly evolving and diverse offering of AI tools.
While AI introduces new technological dimensions to the employment landscape, the core legal issues raised by the use of AI are not new. Rather, AI is the metaphorical remake of a classic, where familiar employment law concerns are simply repackaged and recontextualized within a shiny new technological framework.
Employers, particularly those that not long ago may have had to Google what AI even was, should take comfort that AI, and the burgeoning laws and regulations that surround it, most often reflect familiar and long-standing legal issues, albeit with a modern twist.
Discrimination and Fair Employment Practices
Whether AI is involved or not, the heart of many employment law concerns is the issue of discrimination. Traditional employment laws, such as Title VII of the Civil Rights Act, prohibit employers from discriminating in the hiring and selection process based on race, color, religion, sex and national origin.
With the advent and rapid implementation of AI in companies across the country, these principles remain relevant, but with added complexity now that employers are integrating AI tools and data into the recruitment and hiring decision-making process. But what has really changed?
As with the risk of human error in decision making generally, AI systems are made by humans, and the AI tools used in hiring and recruitment can perpetuate or even exacerbate human biases if they are not carefully designed, monitored and validated.
For instance, if an AI system is trained on historical data that reflects past hiring biases, it can replicate and reinforce those biases in its decision-making processes, even while facially appearing to rest purely on objective metrics and data. Without validation, this can produce discriminatory outcomes that unfairly disadvantage certain protected groups, leading to potential violations of antidiscrimination laws.
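For readers who want to see the mechanism, the sketch below is a purely illustrative Python simulation, with made-up data, a hypothetical proxy feature and an assumed scikit-learn installation; it shows how a model trained on biased historical hiring labels can reproduce a disparity even when the protected characteristic is never given to it as an input.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: protected group membership is never fed to the model.
group = rng.integers(0, 2, size=n)              # 0 = group A, 1 = group B

# A facially neutral feature that happens to correlate with group
# membership (think zip code or alma mater, numerically encoded).
proxy = group + rng.normal(0.0, 0.5, size=n)

# A genuinely job-related qualification score, identically
# distributed across both groups.
skill = rng.normal(0.0, 1.0, size=n)

# Historical hiring decisions: past managers weighted skill but also
# improperly disfavored group B.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, size=n)) > 0

# Train on the biased history using only the "neutral" features.
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

for g, label in [(0, "group A"), (1, "group B")]:
    print(f"{label}: model selection rate = {preds[group == g].mean():.1%}")
```

The disparity survives because the model learns to lean on the proxy as a stand-in for group membership, which is precisely why validation, not facial neutrality, is the touchstone.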
AI merely reinforces the eternal need for employers to trust but verify.
The U.S. Equal Employment Opportunity Commission and other enforcement agencies have recently warned that AI selection tools may have implicit or actual bias built into their systems, but this should not be news to employers.[1] In 1978, the EEOC, U.S. Department of Labor and U.S. Department of Justice jointly issued a comprehensive set of guidelines called the Uniform Guidelines on Employee Selection Procedures, or UGESP, designed to ensure that selection devices and procedures are not used in a discriminatory manner.[2]
The reach of the UGESP is extremely broad. At the time, the guidelines focused on the paper-and-pencil tests used in making employment-related decisions, because many of those tests had a long history of disparate impact on protected groups without much scientific proof that they actually predicted successful performance on the job. AI doesn't change this dimension of employment law; it merely extends it to a new frontier, as the UGESP remains in effect today.
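The UGESP's best-known benchmark, the four-fifths rule, shows how mechanical this analysis can be: a selection rate for any group that is less than 80% of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The minimal Python sketch below, using entirely hypothetical applicant and selection counts, illustrates the arithmetic an employer might run against any selection device, AI-driven or not.

```python
def four_fifths_check(applicants: dict[str, int], selected: dict[str, int]) -> None:
    """Compare each group's selection rate to the highest group's rate,
    flagging ratios below 0.8 per the UGESP four-fifths rule of thumb."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    for g, rate in rates.items():
        ratio = rate / top
        flag = "potential adverse impact" if ratio < 0.8 else "ok"
        print(f"{g}: selection rate {rate:.1%}, ratio {ratio:.2f} -> {flag}")

# Entirely hypothetical counts, for illustration only.
four_fifths_check(
    applicants={"group A": 100, "group B": 100},
    selected={"group A": 60, "group B": 30},
)
```

The same arithmetic that applied to a 1978 paper-and-pencil test applies, unchanged, to the output of a 2024 resume-screening algorithm.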
Disparate impact concerns underpin AI-related regulations across the country, which often seek transparency with respect to the data and inputs used to create or guide the AI in making decisions. But instead of looking at the wide-ranging regulations and seeing a paradigm shift in the way employee recruitment works, employers can take heart that what’s going on is really just a new way of looking at an old problem.
Essentially, most of the new AI regulations ask questions that previously needed to be asked of hiring managers: Where is the source data coming from? Who is doing the interpretation? Is the decision, and underlying data, valid and objective?
Answers to these questions do not depend on or change based on whether AI was involved at some point in the decision-making process. In short, employers don't need to do anything conceptually different; they just need to learn the new tools, and how the same old problems can arise with those tools, so that those problems are addressed beforehand.
Privacy and Data Protection
Privacy concerns are another area where traditional employment laws intersect with AI technology.
Historically, employment laws have mandated the protection of employees' personal information. With the proliferation of AI, which often relies on large datasets to function effectively, the volume and sensitivity of the data collected have increased significantly. But that increase changes nothing about the underlying legal risks and concerns that prevailing employer practices already address.
Just as data breaches can occur in the general storage and maintenance of personal information, AI systems can be breached too. While the technology has changed, the fundamental issue remains the same: ensuring that employees' personal information is protected and used in a manner consistent with established privacy laws. Employers just have to keep on trucking, adding AI-related apps and tools to the list of data sources they already monitor for compliance.
The same goes for protecting an employer’s confidential and proprietary information.
Employers have historically protected their own sensitive data by requiring employees to enter into confidentiality agreements or by promulgating employment policies to the same effect, and by notifying employees of the employer's active or random monitoring of employee activity on electronic equipment. Employee use of AI technologies for work-related purposes, particularly web-based AI tools that are not internally captive to the employer, merely increases the risk that sensitive confidential and proprietary information is leaked to the public.
Reviewing, updating and training on existing policies and agreements, to warn employees of these risks and the consequences of careless use of AI at work, is key to reducing the odds of an unfortunate and costly leak. Such training should already be ongoing outside the AI paradigm.
Employment Classification and Job Security
Employment classification, which determines whether workers are classified as employees or independent contractors, has also been a long-standing issue in employment law. This classification affects workers’ rights to benefits, job security and protections under labor laws. The rise of AI and automation introduces new dimensions to this topic.
Specifically, AI and automation can lead to shifts in job roles and functions, raising questions about how workers should be classified. For example, if an AI system performs tasks traditionally done by employees, does this change the nature of the employee's employment, or the primary duty the employee performs, under established classification tests? There are also concerns about job displacement and the need for new types of worker protections as AI systems become more prevalent.
But “Death of Jobs in America Based on Advent of New Technology” is not a new headline in 2024 and has not been a new addition to the employment landscape at any point in the last century. AI may work differently from certain past technological innovations, but the fundamental challenges it poses do not. AI encompasses a wide-ranging set of tools employers can use to aid their operations, but those tools still require human operation, monitoring and validation.
Employers should think carefully about how best to integrate AI into their existing operations, keeping in mind all the same worker classification issues that they had to worry about well before the term “AI” reached their ears.
Workplace Health and Safety
Workplace health and safety regulations have traditionally focused on protecting employees from physical harm. As AI systems become more integrated into the workplace, new safety considerations emerge. For instance, the deployment of robots and automated machinery requires rigorous safety standards to prevent accidents and injuries.
Furthermore, the use of AI in monitoring and managing workplace conditions, such as ergonomics or environmental factors, raises questions about how these technologies affect workers' well-being. Ensuring that AI systems are designed and implemented with safety in mind is crucial for maintaining a safe work environment, but it does not fundamentally alter the legal landscape or the risks embedded in this common source of workplace litigation.
Takeaways
In sum, while the rise of AI introduces new challenges and considerations into employment law, many of these issues are simply existing legal concerns repackaged within a technological framework. Discrimination, privacy, employment classification and workplace safety have always been central to employment law, and AI does not fundamentally alter these issues but rather highlights their continued relevance with a new technological spin.
Employers should carefully scrutinize any use of AI in selection, promotion or other employment decisions, a scrutiny that simply continues the long-standing obligation to validate any test as it applies to the employer's workplace and the specific jobs for which the test is used. The consequences of using a selection device that has an adverse impact without an appropriate validation study can be severe, as was the case long before AI entered the mainstream.
As AI technology continues to evolve, it is essential for legal frameworks to adapt and address the specific nuances introduced by AI, but the underlying principles of fairness, privacy and protection that have guided employment law for decades remain as pertinent as ever. The challenge for regulators and employers will be to ensure that these principles are upheld in the face of new technological advancements, ensuring that the benefits of AI can be realized while maintaining robust protections for workers.
Employers, you can breathe a sigh of relief. Technology is changing, but the legal compliance regimes you have established do not have to be fundamentally rebuilt from the ground up. What’s old is just new again, and everybody loves a classic.
[1] https://www.eeoc.gov/newsroom/eeoc-releases-new-resource-artificial-intelligence-and-title-vii.
[2] https://www.ecfr.gov/current/title-41/subtitle-B/chapter-60/part-60-3.