Experts Call for More Diversity to Combat Bias in Artificial Intelligence

A 2022 study found a robot trained by AI was more likely to associate Black men with being criminals, or women with being homemakers

AI is informed by the data it’s built upon, and at times that data can be biased and flawed. Experts call for increasing diversity in the field to combat bias in AI. Credit: gorodenkoff/iStockphoto/Getty Images

(CNN) — Calvin Lawrence has dedicated his career to artificial intelligence. But even after decades of experience in computer engineering, he said one thing remains incredibly rare.

“I’ve worked on many AI projects over the last 25 years, not more than two [of my colleagues] looked like me,” Lawrence, who is Black, said.

Artificial intelligence holds the promise of rapidly reshaping our society, but with that promise, Lawrence said, comes the challenge of confronting and dismantling biases that can be encoded into emerging technology.

Lawrence is the author of “Hidden in White Sight,” a book that examines how AI contributes to systemic racism.

AI is informed by the data it’s built upon, and at times that data can be racist, sexist and flawed.

In August, a Black mom in Detroit sued the city after she says she was falsely arrested while eight months pregnant because officers linked her to a crime through facial recognition technology. Detroit’s police chief later blamed “poor investigative work.”

A 2022 study found a robot trained by AI was more likely to associate Black men with being criminals, or women with being homemakers. The team of researchers concluded the continued use of such technology risked “amplifying malignant stereotypes” that fuel racism and misogyny.

In New York City, the local health department recently expanded a coalition challenging clinical algorithms that adjust for race because, officials say, the outcomes are often harmful to people of color. According to a statement from the New York City Department of Health and Mental Hygiene, these algorithms have been shown to overestimate the health of people of color, which can delay treatment.


In a statement shared with CNN, a spokesperson for OpenAI, the company behind ChatGPT and other artificial intelligence models, said bias is a significant issue across the industry and OpenAI is dedicated to “researching and reducing bias, and other risks, in our models.”

“We are continuously iterating on our models to reduce bias and mitigate harmful outputs,” the company said in a statement, adding that for every new model released, OpenAI publishes research on how they are working to achieve those goals.

The best way to ensure AI reflects the experiences of people of color, Lawrence said, is to make sure they’re employed and engaged in every step of the process.

“You certainly don’t have a lot of Black folks or data scientists participating in the process of deploying and designing AI solutions,” he said. “The only way you can get them to have seats at the table, you have to educate them.”

Increasing diversity

Studies have found that the lack of diversity and representation in technology fields begins well before college. Students of color generally have less access to foundational computer science courses in high school, a 2023 report by the Code.org Advocacy Coalition found.

While 89% of Asian students and 82% of White students had access to these courses, only 78% of Black and Hispanic students and 67% of Native American students had the same opportunity.

“These opportunities are not evenly distributed, and that is a problem,” said Andres Lombana-Bermudez, a faculty associate at the Harvard University Berkman Klein Center for Internet and Society.

That disparity in access can also lead to fewer people of color studying computer science and artificial intelligence at the collegiate level, Lombana-Bermudez said.

More than two-thirds of all U.S. doctorates in computer science, computer engineering or information awarded in 2022 went to students who are not permanent U.S. residents and for whom no ethnicity data is available, according to the Computing Research Association’s 2022 Taulbee Survey.

Nearly 19% of degrees went to White doctoral candidates and 10.1% were awarded to Asian candidates, as compared with only 1.7% for Hispanic graduates and 1.6% for Black graduates.

Lawrence said he believes diversifying the field of artificial intelligence could make the technology safer and more ethical.

Lawrence said he started the nonprofit AI 4 Black Kids, which works to educate Black children about artificial intelligence and machine learning from a young age, with the hope of one day increasing representation in the field.

“AI is trained on so few historical points of view … the goal for me is, having more Black people involved in that process,” he said.

The nonprofit offers mentorship programs to kids aged 5 to 19, as well as scholarships and college counseling, Lawrence said.

Combating bias in AI requires not only increasing racial diversity, but diversity of thought as well, Lombana-Bermudez said. He encourages employing sociologists, lawyers, political scientists and other experts from the humanities and social sciences to help contribute to the conversation around AI and ethics.

Lombana-Bermudez said his hope is that future generations may alleviate some of the problems of bias and inaccessibility because they’re growing up alongside the technology.

“I am hopeful that this will change and in the future, we will have better technologies,” he said. “But it’s a struggle, and it’s not easy. It is complex.”