Legislators in Congress, along with state and local governments, are creating AI laws. This year, California adopted a law regulating chatbots, and Illinois recently approved an AI law affecting video interviewing. Congress is also considering a law requiring firms to fix algorithmic bias. HR will feel the impacts from some of these new laws.
Despite this regulatory uncertainty, businesses are adopting AI technology at a brisk pace, according to a new study by Oracle and Future Workplace LLC. About 53% of all U.S. businesses are using some type of AI technology in the workplace. Globally, the average is about 50%, up from 32% last year.
Those figures come from a new survey of 8,400 employees, business managers and HR managers in 10 countries. About 3,000 of the respondents are in the U.S.
It's possible that AI laws could foster adoption of AI by business, said Dan Schawbel, research director at Future Workplace, a New York City-based consultancy. The laws could create better transparency around AI and help provide an ethical framework for its use, he said.
"This is new ground, and it's going to sort itself out," Schawbel said. "But I think if there's a level of transparency within companies about how the data is being used, then people will naturally use it more."
Businesses may have a lot of work sorting out AI laws that can arrive through Congress, state and even local governments.
For instance, the California law -- "Bolstering Online Transparency" or the B.O.T. bill -- applies to chatbots that "incentivize a purchase or sale of goods or services in a commercial transaction" or "influence a vote in an election."
The bill, which took effect July 1, doesn't appear to affect HR's use of bots in recruiting, according to K.C. Halm, an AI law attorney at Davis Wright Tremaine LLP in Washington, D.C.
HR bots typically only answer questions about the hiring process or facilitate an exchange of information. "It seems unlikely that they would be covered by this law," Halm said. The exception might be a recruiting chatbot that also tries to steer a candidate toward the firm's new product line, he said.
Should HR disclose chatbots anyway?
Another issue for HR is whether it should, as a matter of course, disclose the use of chatbot technology to candidates.
The use of chatbots is a "great opportunity" for candidate digital marketing, said Elizabeth Mye, global vice president of HR at Intermedia, a business communications provider based in Sunnyvale, Calif. "It's being done for online shoppers, so why not for candidates?"
But Mye believes that candidates should know when they are interacting with a bot and not a human. "Transparency in informing potential candidates of bot technology use is essential in establishing trust," she said.
"Candidates require an experience where they feel that their future employer will support their best interest and success," Mye said. "Starting out by hiding or veiling that the candidate is speaking to a computer will not give a candidate assurance that the company is up front, and could set the tone that the only thing that matters is whether they have the right technical skills for the job." That, she argued, could sour the candidate on the company.
But the development of AI laws in states and municipalities is something that businesses will have to pay attention to, Halm said.
San Francisco, for instance, is prohibiting government agencies from using facial recognition technology. New York City has created an Automated Decision Systems Task Force to find a process for reviewing the use of automated decision making systems, which may lead to new AI laws. The task force is due to release a report by year end.
One of the first HR-specific AI laws was recently adopted in Illinois. Starting Jan. 1, businesses that use AI-enabled video to screen job candidates will be required to tell candidates that AI is being used, how it works and what characteristics it evaluates.
Federal effort on AI bias
On the federal level, U.S. Senators Cory Booker (D-N.J.) and Ron Wyden (D-Ore.) this year introduced the Algorithmic Accountability Act, which seeks to empower the Federal Trade Commission to write regulations around the use of automated decision-making systems. In the House, Rep. Yvette Clarke (D-N.Y.) is also a sponsor.
Specifically, the bill requires firms to assess their use of AI systems "for impacts on accuracy, fairness, bias, discrimination, privacy and security." The bill, introduced in April, has not made any legislative progress.
"I think we're collectively all exploring how AI is going to play out in the workplace," said Emily He, senior vice president of human capital management in the cloud business group at Oracle. There are guidelines that need to be put in place around security and privacy as well as making sure these systems aren't perpetuating some of the biases made by humans, He said.
But AI technology "is also starting to mimic, in small ways, the way humans think," He said. "And that for me is huge, because this technology has the possibility of making the workplace more human."