
Affect recognition technology in crosshairs of Illinois lawmaker

The Illinois lawmaker who authored the first AI hiring bill has another in the works. This time, he's focused on the use of AI to identify emotions, or affect recognition.

Illinois may take up legislation next year on the use of emotion or affect recognition technologies in video-based hiring. The state is already a trailblazer on the issue; it is the first to pass an AI law that sets some rules around the use of AI in video interviews. That law takes effect Jan. 1.

The battle lines are now being drawn over AI's use in employment.

One research group, the AI Now Institute at New York University, is asking lawmakers to go even further. It wants a ban on emotion detection or affect recognition technologies in hiring and other major decisions that affect people's lives.

AI Now believes emotion or affect recognition tech is unproven, unverified and suspect. These technologies analyze video and speech in an attempt to gauge a job candidate's personality and emotions.

The Illinois AI law, the Artificial Intelligence Video Interview Act, requires employers to notify applicants in writing when AI is used to analyze their emotions and behavior in video interviews. Employers must also explain how the technology works and how it evaluates applicants.

Illinois State Rep. Jaime Andrade Jr. (D-Chicago), who authored the AI law, is specifically concerned about the use of affect recognition. He is also the chairman of the committee on Cybersecurity, Data Analytics and IT.

Need for limits on affect recognition

"Until there is concrete evidence that proves affect recognition actually works, it should not be the sole reason or factor that a person receives a live interview or is categorized as a productive or non-productive employee," Andrade said in an email.

Andrade said he is drafting legislation that may set limits on how affect recognition can be used.

The AI Now report raises often-cited concerns about bias in AI systems. It said lawmakers "should specifically prohibit use of affect recognition in high-stakes decision-making processes."

The institute, in its 2019 annual report, argues that studies show "there just isn't enough transparency to assess whether and how these [AI] models actually work."

U.S. state lawmakers may look to Europe on AI law, rights and privacy issues. The European Union's General Data Protection Regulation (GDPR) gives its citizens three rights, said Brandon Purcell, an analyst at Forrester Research.

Europe may be AI law role model

The first right under GDPR is the right to know that an automated decision-maker, or algorithm, is making a decision that affects you, Purcell said. The second is the right to know what that decision was, and the third is "the right to some sort of explanation" about how the decision was made, he said.

Purcell sees the Illinois law as guaranteeing the same sorts of rights as those set by GDPR.

Regarding the AI Now recommendation on affect recognition, Purcell said a ban would be shortsighted.

"Instead of banning the technologies, we should be working with the providers of these technologies to eradicate any sort of bias or discrimination within them," Purcell said. The technology has potential "to apply more uniform decision making processes to hiring, firing -- all sorts of human resources-related decisions," he said.

But Purcell believes that there will be a battle over the use of emotion or affect recognition technology, similar to what is now happening with facial recognition technologies.
