EMPIRE STATE OF MIND: The use of AI tools in hiring decisions is still in its infancy, but policymakers and businesses are starting to build up support structures to prevent abuses and civil rights violations. California, New Jersey and Washington, D.C., are among the jurisdictions that have introduced bills related to AI employment issues in recent years.

New York City got out to an early lead, passing a local law in 2018 that created an automated decision systems task force to study the issue and present recommendations, which it did in late 2019. City lawmakers in 2021 then followed up with legislation barring employers from using automated employment decision tools unless the tools have passed annual audits to screen for potential bias and job applicants are made aware that the software is being used.

Those requirements took effect earlier this year, though the Department of Consumer and Worker Protection only began enforcing them on July 5. To date the agency has received one complaint, which is under investigation, spokesperson Michael Lanza told POLITICO.

“As soon as New York starts to send out those enforcement letters … you will see companies get in line on this really quickly,” said John Rood, CEO of Proceptual, an AI-compliance firm with several dozen clients that operate in the city.

Failure to comply with these rules could expose employers to fines of up to $1,500 per day, though groups like the New York Civil Liberties Union have flagged a number of shortcomings in the law’s design and in what it does and doesn’t cover. For instance, there’s no direct requirement that an employer stop using a given tool if it fails a bias audit, according to a DCWP guidance document released in late June. But existing municipal, state and federal anti-discrimination laws could still come into play and deter a company from continuing to use that technology.

To that end, the Equal Employment Opportunity Commission last week reached a settlement in its first lawsuit involving AI hiring tools, against a company accused of illegally using software to screen out older applicants, Reuters reports.

A primary concern with algorithms and AI-based technology is the opaque nature of how they work and how they reach the results they do. The NYC law’s bias audit requirement is an attempt to address that, though the industry trying to service those needs is in its own early days. There’s not yet an equivalent to the Big Four accounting firms that major corporations rely on to verify their books, an admittedly flawed system that is nonetheless a linchpin of financial trust.

“There’s not any sort of regulation around who can be an auditor,” Rood said. “My guess is that changes … but that doesn’t exist today.”

GOOD MORNING. It’s Monday, Aug. 14. Welcome back to Morning Shift, your go-to tipsheet on labor and employment-related immigration. It’s been 153 days since the Senate received Julie Su’s nomination. Send feedback, tips, and exclusives to NNiedzwiadek@politico.com and OOlander@politico.com. Follow us on X at @nickniedz and @oliviaolanderr.