New York City law to regulate use of AI employment tools

Melani Bonilla, Multimedia Editor

Artificial intelligence has spread into the employment sector, and it is increasingly common for employers to use AI-powered tools to aid their hiring decisions.

This type of artificial intelligence is classified as Automated Employment Decision Tools (AEDTs).

AEDTs are driven by algorithms that match keywords in a job description against the keywords in an applicant’s resume. These tools are not limited to keywords; some can also factor in a candidate’s emotional responses.
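
As a rough illustration of that keyword-matching step, consider the minimal Python sketch below. The job posting, resume text, tokenization and scoring are all illustrative assumptions; commercial AEDT vendors do not publish their actual algorithms.

```python
# Hypothetical sketch of the keyword-matching step an AEDT might perform.
# The tokenization and scoring here are illustrative assumptions, not any
# vendor's actual algorithm.
import re

def extract_keywords(text: str) -> set[str]:
    """Lowercase the text and split it into unique word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def keyword_match_score(job_description: str, resume: str) -> float:
    """Return the fraction of job-description keywords found in the resume."""
    job_terms = extract_keywords(job_description)
    resume_terms = extract_keywords(resume)
    if not job_terms:
        return 0.0
    return len(job_terms & resume_terms) / len(job_terms)

# Made-up example inputs
job = "Seeking data analyst skilled in Python, SQL, and statistics."
resume = "Experienced analyst: Python, SQL, dashboards, A/B statistics."
print(f"match score: {keyword_match_score(job, resume):.2f}")
```

Real tools layer far more on top of this, but the core idea of aligning resume terms with job-description terms is the same.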

These tools were introduced as an efficient and objective way to speed up an employer’s hiring process. The initial assumption was that machines would not adopt biases based on gender, race and other factors.

This was not the case, as Amazon’s recruitment engine showed in 2018. Amazon created the tool to make sorting through applications easier, but it ended up favoring male candidates.

“In effect, Amazon’s system taught itself that male candidates were preferable,” according to Reuters. “It penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’”

With the rise of artificial intelligence programs, New York City introduced a law to regulate AEDT usage and prevent cases like this from occurring.

The regulation, Local Law 144, will go into effect in the coming months.

Local Law 144 will “require employers to conduct bias audits on automated employment decision tools, including those that utilize artificial intelligence and similar technologies.”

Laurie A. Cumbo, Alicka Ampry-Samuel and Helen K. Rosenthal are among the lawmakers who sponsored the law, which includes sections detailing specifics on Bias Audit, Data Requirements and Definitions.
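
One metric at the heart of such bias audits is the impact ratio: a group’s selection rate divided by the selection rate of the most-selected group. The Python sketch below, using made-up screening outcomes, shows the arithmetic; it is an illustration of the concept, not the law’s official audit procedure.

```python
# Minimal sketch of one bias-audit metric: the impact ratio
# (a group's selection rate divided by the highest group's selection rate).
# The candidate data below is made up for illustration.
from collections import defaultdict

def impact_ratios(candidates: list[tuple[str, bool]]) -> dict[str, float]:
    """candidates: (group label, was_selected) pairs. Returns impact ratio per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in candidates:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    if top_rate == 0:
        return {g: 0.0 for g in rates}
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, advanced to interview?)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
for group, ratio in impact_ratios(outcomes).items():
    print(f"group {group}: impact ratio {ratio:.2f}")
```

In this made-up example, group B’s impact ratio of 0.50 falls well below the 0.80 threshold of the EEOC’s long-standing four-fifths rule of thumb, which auditors often treat as a signal of potential adverse impact.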

Despite the thorough detail included in the law, some employees and residents believe it contains loopholes that would still allow for an unfair hiring process.

New York University’s Hilke Schellmann, an assistant professor of journalism, and Mona Sloane, a senior research scientist at the NYU Center for Responsible AI, have commented on these loopholes.

“The problem: many companies don’t want to reveal what technology they are using and vendors don’t want to reveal what’s in the black box, despite evidence that some automated decision making systems make biased and/or arbitrary decisions,” the researchers wrote in a research project.

If companies don’t reveal their technology, employees and lawmakers won’t know whether any biases are present. An AI system can slowly filter out minority candidates without the company itself being aware.

As artificial intelligence’s presence in society grows, it is essential for lawmakers to hold companies accountable for conducting these bias audits.