California Civil Rights Council Proposes Amendments To Combat Automated Discrimination


19 June 2024

In a proactive move to address emerging issues of fairness and equality in the workplace, the California Civil Rights Council is spearheading changes to the Fair Employment and Housing Act (FEHA). These proposed changes are a direct response to increasing worries over bias in automated decision systems, which are now frequently employed in employment practices. This step reflects a rising global trend to closely examine how advanced technologies, including artificial intelligence, can affect traditional hiring and workplace norms.

At the core of the matter, these amendments seek to define and regulate the use of “automated decision systems” within the scope of employment. These systems encompass a range of computational methods, such as machine learning, algorithms, statistical models, and other artificial intelligence techniques. They are utilized for a variety of hiring-related tasks, from screening resumes to predictive assessments of potential employees’ abilities and even determining the cultural fit of candidates.

One could argue that these tools, which now include sophisticated generative AI for text and video, have revolutionized the way businesses approach recruitment and employee management. However, amid these rapid advancements, there is an underlying fear that automated processes could unintentionally reinforce existing prejudices or establish new discriminatory barriers.

The California Council is looking beyond just the most obvious uses of these technologies, setting forth examples that cover a wide array of automated activities. The conduct under scrutiny ranges from advertisements targeted at specific demographics to the prioritization of job seekers based on their schedule availability. This comprehensive outlook underscores the Council’s dedication to ensuring that equity keeps pace with technological innovation.

All organizations employing five or more individuals fall under the umbrella of these amendments, as do their agents and employment agencies—which now also include service providers utilizing automated decision systems for hiring and employee management purposes. The definition of “agent” casts a wide net, including external parties that manage employment-related tasks, implicitly acknowledging the intertwined nature of third-party services and AI in today’s working environment.

One pertinent topic highlighted in the proposed amendments is the lawful use of selection criteria by employers. Under the modifications, employers must validate that any automated decision-making systems they use do not produce an adverse impact or disparate treatment based on characteristics protected by FEHA. To defend their use, employers must demonstrate the necessity of these tools for business operations and the absence of less discriminatory alternatives. Proactively taking steps to counteract bias, such as running anti-bias tests on these systems, would be considered favorably in an employer’s defense.
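By way of illustration only, an anti-bias test of the kind the amendments would credit often begins with a simple adverse impact check, such as comparing selection rates across demographic groups against the familiar four-fifths rule. The sketch below is a hypothetical Python example; the group labels, numbers, and the 0.8 threshold are assumptions for illustration, not anything prescribed by the Council or the proposed regulations.

```python
# Hypothetical adverse impact check (four-fifths rule) for an automated
# screening tool's outcomes. Group labels, counts, and the 0.8 threshold
# are illustrative assumptions, not requirements of the proposed rules.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group that the system selected."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group's rate."""
    rates = {group: selection_rate(sel, total) for group, (sel, total) in outcomes.items()}
    benchmark = max(rates.values())
    return {group: (rate / benchmark if benchmark else 0.0) for group, rate in rates.items()}

if __name__ == "__main__":
    # (selected, total applicants) per group -- made-up numbers for the example.
    outcomes = {"group_a": (45, 100), "group_b": (28, 100)}
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "potential adverse impact" if ratio < 0.8 else "within threshold"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A real evaluation would go well beyond a single ratio, but even a check this simple hints at the kind of documentation an employer might retain to support a defense.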

Adding depth to these amendments, the proposals clarify the role of these systems with respect to an applicant’s criminal history. A tilt towards fair chance hiring is evident: employers that use automated systems may only consider such history after extending a conditional job offer and are required to share any findings with the applicant, maintaining transparency.

With discrimination based on sex, pregnancy, childbirth, marital status, religious creed, disability, and age under scrutiny, these changes also address the complex issue of medical and psychological evaluations. The use of AI-generated images or interactive scenarios to evaluate personality traits or abilities may unintentionally violate existing regulations if the assessments are not strictly job-related or fail to accommodate disabilities.

Moreover, the Council is advancing the notion of accountability, urging transparency in record-keeping practices related to these automated decision systems. The proposals specify that records must be maintained for at least four years after any such system is used, setting a tangible standard for covered entities.

In conclusion, the unfolding story of amendments to FEHA serves as a blueprint for how legislation can evolve alongside rapid advancements in AI-driven employment tools. It foreshadows a regulatory framework designed to uphold civil rights, even as machine learning and AI continue to redefine the way we work. For readers working at the forefront of AI tools, these developments are a reminder of the importance of ethical vigilance amid the relentless march of technology.