NYC bans AI hiring tools

New York City has a new law on the books, one of the boldest moves of its kind in the country, aimed at curbing the hiring bias that can occur when companies use artificial intelligence tools to screen job candidates.

City employers are prohibited from using automated employment decision tools to screen job applicants unless the technology has undergone a bias audit within the year before its use.

The New York City Council approved the measure on November 10. It became law without the signature of Mayor Bill de Blasio, whose 30-day window to sign or veto it expired Friday. The mayor said he supported the law, which takes effect January 2, 2023.

Companies must also notify employees or candidates when such a tool has been used in a hiring decision. Employers that fail to comply face fines of $500 for a first violation and up to $1,500 for each subsequent violation.

The use of artificial intelligence for recruiting, resume screening, automated video interviews, and other hiring tasks has been on the radar of federal regulators and lawmakers for years, as workers began filing AI-related discrimination complaints with the U.S. Equal Employment Opportunity Commission. The EEOC recently said it will examine artificial intelligence tools and how they contribute to bias.

Illinois had previously adopted a measure similar to New York City's to crack down on the use of such technologies in employment decisions. Maryland has also passed a measure barring employers from using facial recognition technology without applicants' consent.

The attorney general of Washington, DC, on Thursday announced a bill that would address "algorithmic discrimination" and require companies to undergo annual audits of their technology.

Automated employment decision tools are technologies that use machine learning, statistical modeling, data analytics, or artificial intelligence to score or rank applicants and replace some of the discretionary decisions employers would otherwise make, according to New York lawmakers.

A bias audit is an impartial evaluation by an independent auditor that tests the tool for disparate impact: whether a facially neutral practice could discriminate against groups protected on the basis of race, age, religion, gender, or national origin.
