More Than One in Three Firms Burned by AI Bias

Bias in AI systems can result in significant losses to companies, according to a new survey by an enterprise AI company.

More than one in three companies (36 percent) revealed they had suffered losses due to AI bias in one or several algorithms, noted the DataRobot survey of over 350 U.S. and U.K. technologists, including CIOs, IT directors, IT managers, data scientists and development leads who use or plan to use AI.

Of the companies damaged by AI bias, more than half lost revenue (62 percent) or customers (61 percent), while more than two-fifths lost employees (43 percent) and over a third incurred legal fees from litigation (35 percent), according to the research, which was conducted in collaboration with the World Economic Forum and global academic leaders.

Biased AI can affect revenues in a number of ways, said Kay Firth-Butterfield, head of AI and machine learning and a member of the executive committee of the World Economic Forum, an international non-governmental and lobbying organization based in Cologny, Switzerland.

“If you pick the wrong person through a biased HR algorithm, that could affect revenues,” she told TechNewsWorld.

“If you’re lending money and you have a biased algorithm, you won’t be able to grow your business because you’ll always be lending to a small subset of people you’ve always been lending money to,” she added.

Unintentional Yet Still Harmful

Participants in the survey also revealed that algorithms used by their organizations inadvertently contributed to bias against people by gender (34 percent), age (32 percent), race (29 percent), sexual orientation (19 percent) and religion (18 percent).

“AI-based discrimination — even if it’s unintentional — can have dire regulatory, reputational, and revenue impacts,” Forrester cautioned in a recent report on AI fairness.

“While most organizations embrace fairness in AI as a principle, putting the processes in place to practice it consistently is challenging,” it continued. “There are multiple criteria for evaluating the fairness of AI systems, and determining the right approach depends on the use case and its societal context.”
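Forrester's point about competing criteria can be made concrete: two widely used fairness checks can disagree about the same model. The sketch below runs on made-up toy data, not anything from the survey or the Forrester report, and computes a demographic parity gap alongside an equal opportunity gap.

```python
# A minimal sketch of two common fairness criteria, on made-up toy data.
# The group labels, predictions, and outcomes are illustrative only.
import numpy as np

group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # 0 = group A, 1 = group B
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])   # model's positive/negative calls
y_true = np.array([1, 0, 0, 0, 1, 1, 0, 0])   # actual outcomes

def selection_rate(pred, mask):
    """Fraction of a group that receives a positive prediction."""
    return pred[mask].mean()

def true_positive_rate(pred, true, mask):
    """Fraction of a group's actual positives that the model catches."""
    return pred[mask & (true == 1)].mean()

# Demographic parity: do both groups get positive predictions at equal rates?
dp_gap = abs(selection_rate(y_pred, group == 0) - selection_rate(y_pred, group == 1))

# Equal opportunity: are actual positives caught at equal rates in both groups?
eo_gap = abs(true_positive_rate(y_pred, y_true, group == 0)
             - true_positive_rate(y_pred, y_true, group == 1))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.25
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.50
```

On this toy data the model looks twice as unfair by one criterion as by the other, which is why, as Forrester notes, the right check depends on the use case and its societal context.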

Matthew Feeney, director of the project on emerging technologies at the Cato Institute, a Washington, D.C. think tank, explained that AI bias is complicated, but that much of the bias people attribute to AI systems is a product of the data used to train them.

“One of the most prominent uses of AI in the news these days is facial recognition,” he told TechNewsWorld. “There has been widespread documentation of racial bias in facial recognition.

“The systems are much less reliable when seeking to identify black people,” he explained. “That happens when a system is trained with photos that don’t represent enough people from a particular racial group or photos of that group aren’t of good quality.”

“It’s not caused necessarily by any nefarious intent on the part of engineers and designers, but is a product of the data used to train the system,” he said.
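One way teams surface this kind of data-driven bias is to break a model's accuracy out by demographic group rather than reporting a single average. The audit below is a hypothetical sketch on synthetic data, not any real facial recognition system; the group names and error rates are invented to show how an overall score can hide a large gap.

```python
# A minimal sketch of a per-group accuracy audit on synthetic data.
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Report overall accuracy plus accuracy within each group."""
    correct = (y_true == y_pred)
    report = {"overall": correct.mean()}
    for g in np.unique(groups):
        report[str(g)] = correct[groups == g].mean()
    return report

# Synthetic evaluation set: the model errs on 5% of the well-represented
# group but on 30% of the underrepresented one (rates are invented).
rng = np.random.default_rng(0)
groups = np.array(["majority"] * 900 + ["minority"] * 100)
y_true = rng.integers(0, 2, size=1000)
flip = np.where(groups == "majority",
                rng.random(1000) < 0.05,
                rng.random(1000) < 0.30)
y_pred = np.where(flip, 1 - y_true, y_true)

print(accuracy_by_group(y_true, y_pred, groups))
# e.g. {'overall': ~0.92, 'majority': ~0.95, 'minority': ~0.70}
```

The overall figure looks respectable; only the per-group breakdown reveals the kind of disparity Feeney describes.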

“People who create algorithms bring their own biases to the creation of those algorithms,” Firth-Butterfield added. “If an algorithm is being created by a 30-year-old man who is white, the biases that he brings are likely to be different from those of a 30-year-old woman who is African American.”

Bias Versus Discrimination

Daniel Castro, vice president of the Information Technology & Innovation Foundation, a research and public policy organization in Washington, D.C., maintained that people play fast and loose with the term AI bias.

“I would define AI bias as a consistent error in accuracy for an algorithm, that is, a difference between an estimate and its true value,” he told TechNewsWorld.
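Castro's definition maps directly onto the statistical notion of bias: the average gap between an algorithm's estimates and the true values. A minimal sketch, with invented numbers:

```python
# Bias in Castro's sense: a consistent error between estimates and truth.
# All numbers are made up for illustration.
import numpy as np

true_values = np.array([200.0, 250.0, 300.0, 350.0, 400.0])  # ground truth
estimates   = np.array([220.0, 268.0, 325.0, 371.0, 428.0])  # model output

errors = estimates - true_values
print(f"bias (mean error):     {errors.mean():+.1f}")  # +22.4, a consistent overestimate
print(f"spread (std of error): {errors.std():.1f}")    # 3.6, noise around that bias
```

A noisy but unbiased model would show errors centered on zero; here the estimates run consistently high, which is the “consistent error in accuracy” Castro has in mind.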

“Most companies have strong market incentives to eliminate bias in AI systems because they want their algorithms to be accurate,” he said.

“For example,” he continued, “if the algorithm is incorrectly recommending the optimal product to a shopper, then the company is leaving money on the table for a competitor.”

“There are also reputational reasons that companies want to eliminate AI bias, as their products or services may be seen as subpar,” he added.

He noted, however, that market forces are sometimes ineffective at eliminating bias.

“For example, if a government agency uses an algorithm to estimate property values for tax purposes, there may not be a good market mechanism to correct bias,” he explained. “In these cases, government should provide alternative oversight, such as through transparency measures.”

“But sometimes people refer to AI bias when they really just mean discrimination,” he added. “If a landlord discriminates against certain tenants, we should enforce existing anti-discrimination laws, whether the landlord uses an algorithm or a human to discriminate against others.”

Regulation in the Wings

The DataRobot survey also quizzed participants about AI regulation. More than eight in 10 of the technologists (81 percent) said government regulation could be helpful in two areas: defining and preventing bias.

However, nearly half of those surveyed (45 percent) admitted they were worried regulation could increase their cost of doing business.

In addition, nearly a third of the respondents (32 percent) expressed concern that without regulation, certain groups of people could be hurt.

“You’re seeing a lot of calls for that sort of thing, but AI is just too broad when it comes to regulation,” Feeney said. “You’re talking about facial recognition, driverless cars, military applications and many others.”

There will be a great deal of discussion about regulating AI in 2022, global professional services firm Deloitte has predicted, although it doesn’t believe full enforcement of regulations will take place until 2023.

Some jurisdictions may even try to ban whole subfields of AI — such as facial recognition in public spaces, social scoring, and subliminal techniques — entirely, it noted.

“AI has tremendous promise, but we’re likely to see more scrutiny in 2022 as regulators look to better understand the privacy and data security implications of emerging AI applications, and implement strategies to protect consumers,” Deloitte’s U.S. Technology Sector Leader Paul Silverglate said in a news release.

“Tech companies find themselves at a convergence point where they can no longer leave ethical issues like this to fate,” he warned. “What’s needed is a holistic approach to address ethical responsibility. Companies that take this approach, especially in newer areas like AI, can expect greater acceptance, more trust and increased revenue.”
