Artificial intelligence (AI) has transformed how work is performed across many sectors of the world’s economy. AI is particularly well suited to routine tasks that machine learning technology can perform more efficiently. As AI technology improves and becomes more mainstream, however, its use is growing in areas that have traditionally involved subjective human interpretation of information.
While AI offers numerous benefits, it also poses unintended risks, particularly the risk of employment discrimination.
One of the most pressing concerns is disparate impact discrimination, where seemingly neutral AI systems disproportionately affect certain protected groups. However, with the proper implementation and safeguards, AI may be an effective adjunct to human-based decisions in the employment context.
Disparate Impact Discrimination Theory
Both the California Fair Employment & Housing Act (“FEHA”) and the federal Title VII of the Civil Rights Act of 1964 (“Title VII”) recognize the disparate impact theory of establishing illegal employment discrimination. Disparate impact discrimination occurs when a policy or practice that appears neutral on its face has a disproportionately adverse effect on members of a protected group, such as groups defined by race, gender, or age. Unlike disparate treatment, which involves intentional discrimination, disparate impact focuses on the outcomes of a policy or practice, regardless of intent.
AI and Disparate Impact
AI systems, particularly those based on machine learning, rely on large datasets to make predictions and decisions. These systems can inadvertently perpetuate or amplify biases that exist in the data. For example, if an AI system is trained on historical hiring data that reflects racial bias, it may continue to favor certain racial groups over others. In that way, although the AI analysis of prospective employee data may appear neutral and even-handed on its face, its application may violate California and federal law because it has an unintended disparate impact on individuals in legally protected classifications.
Case studies conducted to date have found, in particular, that companies using AI-driven tools to screen job applicants may in some cases inadvertently discriminate against certain groups. For instance, a hiring algorithm might favor candidates from certain universities, which could disproportionately exclude minority applicants who are less likely to attend those institutions.
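To ground the concept, the EEOC’s Uniform Guidelines on Employee Selection Procedures offer a common starting point for measuring adverse impact: the “four-fifths rule” (29 C.F.R. § 1607.4(D)), under which a selection rate for any group that is less than 80% of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The following is a minimal sketch of that calculation; the applicant counts are entirely hypothetical.

```python
# Hypothetical illustration of the EEOC "four-fifths rule": compare each
# group's selection rate to the highest group's rate. A ratio below 0.80
# is commonly treated as preliminary evidence of adverse impact.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical screening results from an AI resume-screening tool.
groups = {
    "Group A": selection_rate(selected=60, applicants=100),  # 0.60
    "Group B": selection_rate(selected=30, applicants=100),  # 0.30
}

highest = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / highest
    flag = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.80 else "ok"
    print(f"{name}: rate={rate:.2f}, ratio={impact_ratio:.2f} -> {flag}")
```

In this hypothetical, Group B’s impact ratio is 0.50, well below the 0.80 threshold, so the tool’s outcomes would warrant closer scrutiny even though the algorithm never directly considers any protected characteristic.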
Legal Framework
The legal framework for addressing AI-related discrimination includes both disparate treatment and disparate impact doctrines. Disparate impact is particularly relevant for AI, as it allows for legal recourse even when there is no intent to discriminate.
California’s FEHA and federal Title VII both prohibit employment discrimination based on race, color, religion, sex, or national origin, among other characteristics. Each statute covers both disparate treatment and disparate impact discrimination. Employers using AI for hiring or other employment decisions must ensure that these tools do not result in discriminatory outcomes.
Challenges in Addressing AI Discrimination
Addressing workplace discrimination arising from the use of AI-based tools presents certain inherent challenges:
1. Lack of Transparency: AI systems, especially those using deep learning, are often described as "black boxes" because their decision-making processes are not easily interpretable. This lack of transparency makes it difficult to identify and address discriminatory practices;
2. Bias in Training Data: AI systems are only as good as the data they are trained on. If the training data contains biases, the AI system is likely to replicate those biases, as the sketch following this list illustrates. Ensuring diverse and representative training data is a significant challenge; and
3. Regulatory Gaps: Current regulations may not be sufficient to address the unique challenges posed by AI. There is a need for updated legal frameworks that specifically address AI-related discrimination.
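The second challenge can be illustrated concretely. Below is a minimal sketch using scikit-learn and entirely synthetic data; the features and numbers are hypothetical. A model trained on historical hiring labels that were skewed toward one group learns to prefer that group even when candidates are otherwise identical.

```python
# Minimal sketch: a model trained on biased historical labels reproduces
# the bias. All data is synthetic and the features are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# One legitimate feature (e.g., a skills score) plus a proxy feature that
# correlates with group membership (e.g., "attended University X").
skills = rng.normal(size=n)
proxy = rng.integers(0, 2, size=n)

# Historical hiring labels: biased reviewers favored the proxy group,
# independent of skills.
noise = rng.normal(scale=0.5, size=n)
hired = ((skills + 1.5 * proxy + noise) > 1.0).astype(int)

X = np.column_stack([skills, proxy])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skills but different proxy values: the
# learned model still prefers the historically favored group.
print(model.predict_proba([[0.0, 1], [0.0, 0]])[:, 1])
```

The two candidates differ only in the proxy feature, yet the model scores them differently, because the historical labels it learned from were themselves biased.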
Potential Solutions
Despite the inherent challenges to using AI-based tools in an employment context, steps may be taken to mitigate the risk of unintentional disparate impact discrimination:
1. Algorithmic Audits: Regular audits of AI systems can help identify and mitigate biases. These audits should be conducted by independent third parties to ensure objectivity;
2. Transparency and Explainability: Developing AI systems that are transparent and explainable can help stakeholders understand how decisions are made and identify potential biases. Techniques such as explainable AI (“XAI”) are being developed to address this issue (see the sketch following this list);
3. Inclusive Design: Involving diverse teams in the design and development of AI systems can help ensure that these systems are fair and unbiased. This includes considering the perspectives of underrepresented groups; and
4. Regulatory Oversight: Governments and regulatory bodies should establish guidelines and standards for the ethical use of AI. This includes ensuring that AI systems comply with existing anti-discrimination laws and developing new regulations as needed.
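As a concrete illustration of the transparency point in item 2, one simple (if limited) technique is to inspect which features drive a linear screening model’s decisions. The sketch below again uses synthetic data and hypothetical feature names; production systems would typically rely on dedicated XAI tooling such as SHAP or LIME rather than raw coefficients.

```python
# Sketch of a simple transparency check on a linear screening model:
# inspect which features drive its decisions. Feature names are
# hypothetical; real audits would use dedicated XAI tooling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
features = ["skills_score", "years_experience", "attended_university_x"]

X = np.column_stack([
    rng.normal(size=n),          # skills_score
    rng.normal(size=n),          # years_experience
    rng.integers(0, 2, size=n),  # attended_university_x (potential proxy)
])
noise = rng.normal(scale=0.5, size=n)
y = ((X[:, 0] + 2.0 * X[:, 2] + noise) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# Print features in order of influence; a large weight on a proxy
# feature is a red flag worth investigating.
for name, coef in sorted(zip(features, model.coef_[0]),
                         key=lambda t: abs(t[1]), reverse=True):
    print(f"{name:>24}: {coef:+.2f}")
```

A large weight on a feature that correlates with protected-class membership, such as attendance at a particular university, is precisely the kind of red flag that the audit process described in item 1 should investigate further.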
Artificial intelligence has the potential to transform society for the better, but it also poses significant risks of discrimination, particularly disparate impact discrimination. Addressing these risks requires a multifaceted approach, including algorithmic audits, transparency, inclusive design, and robust regulatory oversight. By taking these steps, employers and regulators can harness the benefits of AI while ensuring that it is used ethically and fairly.