This article discusses how emerging technology domains (such as artificial intelligence) can adversely affect diversity and equality for ethnic minorities and their communities.
Introduction and Context
Can technology be racist and discriminatory? The short answer is YES. And technologists can be racist too! Racism is a global pandemic: it is ubiquitous, has historical roots, and is used as a political tool affecting millions of innocent people who fall into minority categories.
Unfortunately, emerging technology domains and stacks — particularly artificial intelligence (AI), big data analytics, deep learning (DL), neural networks, natural language processing (NLP), and machine learning (ML) fields — are also part of this undesirable situation.
There are two primary types of racism. The first is racism performed by individuals and communities. The second is systemic racism, which adversely impacts minorities and is more challenging to deal with.
Systemic racism, also known as institutional racism, is embedded in the laws and regulations of organizations, states, and countries. Systemic racism can impact critical societal rights such as employment, education, and healthcare in the form of discrimination.
You may wonder what racism has to do with technology. In fact, it has a lot. So my purpose is to provide valuable insights into the role of technology in racism by sharing an overview of my research in the field.
Face recognition by artificial intelligence tools illustrates the problem: inequity in face recognition algorithms is well documented. For example, according to an article by Alex Najibi published at Harvard University:
“Face recognition algorithms boast high classification accuracy (over 90%), but these outcomes are not universal. A growing body of research exposes divergent error rates across demographic groups, with the poorest accuracy consistently found in subjects who are female, Black, and 18–30 years old”.
An outstanding study at the Massachusetts Institute of Technology (MIT) validates these concerns and proposes taking action. “The Gender Shades Project pilots an intersectional approach to inclusive product testing for AI.
"Gender Shades is a preliminary excavation of inadvertent negligence that will cripple the age of automation and further exacerbate inequality if left to fester. The deeper we dig, the more remnants of bias we will find in our technology. We cannot afford to look away this time, because the stakes are simply too high. We risk losing the gains made with the civil rights movement and women’s movement under the false assumption of machine neutrality. Automated systems are not inherently neutral. They reflect the priorities, preferences, and prejudices — the coded gaze — of those who have the power to mold artificial intelligence”.
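The intersectional approach the Gender Shades quote describes can be made concrete with a small audit sketch. This is a minimal illustration in Python, not code from the project: the records are made-up data standing in for a face analysis system's per-image results, and the point is simply that accuracy must be broken down by subgroup (gender × skin type), not reported as a single overall number.

```python
# A minimal sketch of an intersectional accuracy audit in the spirit of
# Gender Shades. The records below are illustrative, made-up data, not
# results from any real face recognition system.
from collections import defaultdict

# Each record: (gender, skin_type, prediction_correct)
records = [
    ("male", "lighter", True), ("male", "lighter", True),
    ("male", "darker", True), ("male", "darker", False),
    ("female", "lighter", True), ("female", "lighter", False),
    ("female", "darker", False), ("female", "darker", False),
]

def intersectional_accuracy(records):
    """Accuracy per (gender, skin_type) subgroup, not just overall."""
    totals, correct = defaultdict(int), defaultdict(int)
    for gender, skin, ok in records:
        key = (gender, skin)
        totals[key] += 1
        correct[key] += ok
    return {key: correct[key] / totals[key] for key in totals}

for group, acc in sorted(intersectional_accuracy(records).items()):
    print(group, f"accuracy={acc:.2f}")
```

In this toy data the overall accuracy looks middling, but the subgroup breakdown shows the worst performance for darker-skinned women, which is exactly the pattern the Gender Shades study reported for commercial systems.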
After this brief background, I examine how racism and gender inequality occur in the technology field, drawing on credible and up-to-date sources and outlining the implications and impact.
A Brief Review of the Literature for Racism & Inequality in Technology
Harvard University Press published a 275-page anthology titled Racism in America. In the opening of this outstanding collection, contributed to by several researchers, Annette Gordon-Reed (an American historian and law professor at Harvard University) makes an eye-opening statement:
"Although George Floyd’s death was the spark, there was also an instantaneous recognition that the circumstances giving rise to what happened to him that day were systemic, the product of many years of thoughts, choices, and actions. People also understood that history has mattered greatly to this process, particularly the tortured history of race and White supremacy (not just a matter of White and Black, but of White people and people of color, generally) that has been in place for centuries. Current policies, shaped by that history, must be subjected to scrutiny and critiqued. Plans for the future, based upon new understandings about how to achieve a more racially just society, must also be formulated."
This remarkable anthology combines history, economics, political science, cultural commentary, and biography to address many issues in America. However, these points also relate to other countries. The contributors persuasively show that:
“the worldwide system of slavery, the ambition for empire that disrupted the lives of indigenous people on several continents, and the powerful legacies from those events have fueled BLM (Black Lives Matter) moment and the current desire for a reckoning”.
You can read this comprehensive anthology free at this link.
Within the technology context, systemic racism has been researched by psychologists, philosophers, political scientists, and technologists. For example, in 2005, Derek Hook published a paper titled “Affecting whiteness: racism as technology of affect” in the International Journal of Critical Psychology. You can read the paper free at this link.
Outstanding critical research in material culture was published in a book titled “Technology and the Logic of American Racism: A Cultural History of the Body as Evidence” by Sarah E. Chinn in 2000. Chinn examined several social case studies. She touched on the American Red Cross’ decision to segregate the blood of black and white donors during World War II and discussed its ramifications for American culture. She covered fingerprinting, blood tests, and DNA tests, and cited the trial of O.J. Simpson as an example of the racist nature of criminology.
Many academicians and thought leaders reviewed this book. For example, Priscilla Wald from Duke University said, “Technology and The Logic of American Racism is important not only for its analysis of racism in the US but also for its exploration of the relationships among the languages of science, law, literature and popular journalism. Chinn’s work shows that students of the humanities have a significant contribution to make to the study of the impact of historical and contemporary scientific developments on the shape of US culture”.
Choice magazine commented on the importance of this book: “Chinn’s study goes far beyond these examples, providing some of the clearest thinking available on the relationship between bodies and culture. The argument is never reductive. With impressive grace, the author manages both to reveal how bodies have been made to testify and to be conscious of ‘the gingerliness, respect, strength, edginess, and tenderness with which we should approach our bodies and the bodies of others, whether in words, concepts, or touch.’ Highly recommended for all academic collections.”
Another interesting study, titled “Perceptions About the Role of Race in the Job Acquisition Process: At the Nexus of Attributional Ambiguity and Aversive Racism in Technology and Engineering Education”, was published in the Journal of Technology Education in 2015. It was authored by Yolanda Flores Niemann, a Professor of Psychology, and researcher Nydia C. Sánchez of the Department of Counseling and Higher Education at the University of North Texas. The study explored the role of race in the negative job acquisition outcomes of African American graduates of a federally funded multi-institution doctoral training program. You can read the paper free at this link.
The International Neuroethics Society (INS) holds annual meetings covering the impact and implications of technology for racism. A keynote speaker at the 2020 annual meeting “delivered a riveting explanation of how racism is deeply embedded in many technologies, from widely used apps to complex algorithms, that are presumed to be neutral or even beneficial but often heighten discrimination against Black people and other marginalized groups”.
A few highlights from the INS meeting give us interesting perspectives. First, sociologist Dr Ruha Benjamin described problems of racism embedded in our processes of building and using technologies. For example, Dr Benjamin cited a horrific example that came to light in a newspaper report in 2015. The North Miami police department used mug shots of Black male criminals for target practice, a previously hidden instance of anti-Black sentiments that still distort policing.
Two academic studies cited by Benjamin showed how difficult it is to root out ingrained prejudices. Researchers at the Yale School of Education asked a group of preschool teachers to watch video clips of children in a classroom and look for signs of challenging behavior, the kind that might get kids tossed out of school or the classroom. Eye-tracking technology showed that the teachers spent more time looking at Black boys than at white children.
In 2014, Stanford University researchers found that “when white people were shown statistics about the vastly disproportionate number of Black people in prison, they did not become supportive of criminal justice reform to relieve injustices against Black people but instead became more supportive of punitive policies, such as California’s Three Strikes Law and New York City’s stop-and-frisk policy, that was partly if not mainly responsible for the disproportionate incarceration rates”. You can read the paper free at this link.
In a paper titled “Advancing Racial Literacy in Tech”, Dr Jessie Daniels, Mutale Nkonde, and Dr Darakhshan Mir articulate why ethics, diversity in hiring, and implicit bias training are not enough to establish racial literacy in technology workplaces. They highlight that “racial literacy is a new method for addressing the racially disparate impacts of technology. It is a skill that can be developed, a capacity that can be expanded”. You can download the paper free at this link.
Miriam Tager (Professor in the Education Department at Westfield State University) published a research book called “Technology Segregation: Disrupting Racist Frameworks in Early Childhood Education”. This study challenges the racist framework and reveals disruptions and strategies to counter deficit discourse based on white supremacy.
Tager’s study covers two qualitative studies in the Northeast. It reveals that school segregation and technology segregation are the same. Utilizing critical race theory as the theoretical framework, this research finds that young Black children are denied technological access, directly affecting their learning trajectories. The book defines the problem of technology segregation in terms of policy, racial hierarchies, funding, residential segregation, and the digital divide.
An article on Vice, titled “‘Significant Racial Bias’ Found in National Healthcare Algorithm Affecting Millions of People”, highlighted the issue. The article covers a series of studies arguing that by using cost as a proxy for health, risk algorithms ignore racial inequalities in healthcare access.
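The cost-as-proxy mechanism the Vice article describes can be illustrated with a tiny sketch. All numbers here are illustrative assumptions of my own, not data from the studies: two patients have identical health needs, but one faces access barriers and therefore has generated lower historical costs, so ranking by past cost pushes that patient down the priority list.

```python
# A minimal sketch of how "cost as a proxy for health" can encode bias.
# All numbers are illustrative assumptions: patients A and B have
# identical true health needs, but B faces access barriers and so has
# generated lower historical healthcare spending.
patients = [
    {"id": "A", "true_need": 8, "past_cost": 9000, "access_barriers": False},
    {"id": "B", "true_need": 8, "past_cost": 4000, "access_barriers": True},
    {"id": "C", "true_need": 3, "past_cost": 5000, "access_barriers": False},
]

def rank(patients, key):
    """Rank patients for extra care programs, highest score first."""
    return [p["id"] for p in sorted(patients, key=lambda p: p[key], reverse=True)]

# Ranking by the cost proxy places the less-sick patient C above the
# equally sick but underserved patient B; ranking by true need does not.
print(rank(patients, "past_cost"))
print(rank(patients, "true_need"))
```

The design point is that the proxy label, not the learning algorithm, carries the bias: any model trained to predict past cost would inherit the same distortion.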
A research study titled “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice” was conducted by Rashida Richardson (Northeastern University School of Law), Jason Schultz (New York University School of Law) and Kate Crawford (AI Now Institute; Microsoft Research).
In this definitive research study, they analyzed 13 jurisdictions in the US that have used or developed predictive policing tools while under government commission investigations or federal court-monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, they examined the link between unlawful and biased police practices and the data available to train or implement these systems.
Dr Steven Hyman (a distinguished service professor at Harvard, director of the Stanley Center for Psychiatric Research at the Broad Institute, and board chairman of the Dana Foundation) noted that “the topic of algorithmic bias is starting to emerge as an extraordinary challenge in healthcare and medicine”.
How Can Technology Be Racist?
During my ethnographic research in studying diversity and equality in technology workplaces, I came across much evidence of racism in the use of technology tools and racist behavior by technology professionals. I shared my findings in several academic papers in the 1990s. I also shared one of my research reports on cultural diversity in IT workplaces in an article on this platform.
My studies demonstrated that racism in technology fields existed. While the situation was subtle in many companies, it was also possible to see overt cases in some organizations. In addition, I witnessed very harsh criticisms against members of some ethnic groups. The most affected groups were Indians, Africans, and Asians, particularly contractors coming from China, Hong Kong, Taiwan, Thailand, Indonesia, and Vietnam.
In addition to the technology itself, I also witnessed racist technologists in the workplace. I documented them in comprehensive ethnographic case study reports.
Let me share a striking example. Some Asian colleagues were called condescending names and even subjected to profanity. I will never forget when one of my colleagues made an error and a Caucasian supervisor said:
“You bloody Chinese people have no idea configuring a damn computer system. Why don’t you go to your darn country, create fake products and get rich!”.
The incident was reported to the organization’s governance and compliance committee. The organization was firmly focused on diversity, as almost 90% of employees had an ethnic background. The supervisor faced disciplinary action. However, the incident left an awful impression on these employees and shattered trust in the team.
There were many such examples during my observations. I also heard about similar incidents from colleagues in other organizations, technical community members, and friends on social media. There are numerous mentions of technology being racist in the literature. They cover specific, systemic, structural, and institutional elements of racism in the technology landscape. I briefly covered some of them in the literature section above at a high level.
Typical racism situations related to technology fall into two categories: the actual use of technologies to discriminate against people, and the unconscious bias of individual technology professionals at technology companies.
From linguists’ perspectives, the use of certain technical words reflects racist tendencies. For example, the terms “master” and “slave” in cluster technologies reflect such an inclination. Machine learning also has the potential to create racist outcomes: the large datasets used for training machine learning algorithms are often derived from biased data with hidden racist elements.
Machine learning algorithms cannot detect these unintended biases in datasets; they simply reproduce the data, bias included. Diversity and equality officers, in turn, often cannot detect such anomalies and noncompliance elements.
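One way organizations can surface this kind of reproduced bias is a disparity audit of a model's decisions. The sketch below is a hypothetical illustration, not any company's actual tooling: the outcome tuples are made-up data standing in for a model trained on historical decisions that under-approved qualified applicants from one group, and the audit computes the false rejection rate per group.

```python
# A minimal sketch of a disparity audit that can surface hidden dataset
# bias. The tuples are made-up illustrative data: a hypothetical model
# trained on historical decisions that under-approved qualified
# applicants from group "b".
from collections import defaultdict

# Each tuple: (group, qualified_in_reality, model_approved)
outcomes = [
    ("a", True, True), ("a", True, True), ("a", False, False), ("a", True, True),
    ("b", True, False), ("b", True, True), ("b", False, False), ("b", True, False),
]

def false_rejection_rate(outcomes):
    """Share of genuinely qualified applicants rejected, per group."""
    qualified, rejected = defaultdict(int), defaultdict(int)
    for group, is_qualified, approved in outcomes:
        if is_qualified:
            qualified[group] += 1
            rejected[group] += not approved
    return {g: rejected[g] / qualified[g] for g in qualified}

# Group "b" is rejected far more often despite equal qualification.
print(false_rejection_rate(outcomes))
```

A model can score well on overall accuracy while failing an audit like this, which is why per-group metrics matter more than a single aggregate number.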
As Dr Benjamin mentioned (in discussing how racism is coded into technology), “machine learning programs use predictive police technologies to pinpoint where street crime is apt to occur, usually in lower-income Black neighborhoods, which are more heavily policed to start with.”
Recently, Stanford University held a seminar, shared on YouTube, titled “What is Anti-Racist Technology?”. The presentation provides valuable examples from several angles, so I share it here for your review.
People believe that resource allocation by algorithms is objective and fair. The assumption is that machines have no intent and do not make decisions based on emotion. That much is true: machines have no feelings or human-like biases. However, software programs can still reflect the biases of the data and of the people who build the algorithms.
The Computer History Museum recently posted an interesting article curating several viewpoints, titled “Decoding Racism in Technology”. Here is an interesting excerpt from its opening:
The audio transcription for CHM’s virtual event “Is AI Racist?” delivered “joy blend weenie” for Joy Buolamwini, an accomplished computer scientist at the MIT Media Lab who is Black and female. Sure, her name may be hard for Americans to pronounce, but Google delivers over 17 pages of search results about her, so you might think machine learning systems had plenty of data to teach them.
The transcription algorithm also transformed Black female scholar Dr Vilna Bashi Treitler into “Dr Vilna bossy trailers” and interpreted White professor Miriam Sweeney as “Miriam, sweetie.” Though admittedly a minimal sample size, in keeping with research on bias in AI, the system had no trouble with White and Asian male names, even “Satya.”
In a YouTube video recently posted by the Computer History Museum, Charlton McIlwain, vice provost for Faculty Engagement and Development at New York University, defines racism as embedded in social systems rather than in individual behavior.
Deborah Raji (a computer scientist and activist) explains the problems with biased facial recognition technology. She says:
“People often focus on the interpersonal aspects of racism, characterizing AI like a human (just like the title of this event), but rarely are racists purposely building racist systems, although it does happen”.
As the article summarizes her view:
“Engineers often just aren’t thinking about racial equity during the development process for a product or system. An engineer herself, she’s seen firsthand how attempts to emulate and automate human thinking through AI can reduce accountability and put marginalized people at risk, unable to appeal decisions made by an algorithm. Racism is actually coded into the product."
Here is the video by Deborah Raji on YouTube.
Lili Cheng, Vice President at Microsoft AI and Research, explains where artificial intelligence can encode bias. She describes the situation raised by Ms Raji clearly.
You can watch Lili’s presentation on YouTube.
Women of color are affected by biases in artificial intelligence systems. Information Studies Assistant Professor Safiya Noble at the University of California, Los Angeles, considers how biased institutions and systems perpetuate bias in technology and how that might be changed.
For example, Safiya described how she discovered that even before Google, search engines that purported to deliver knowledge and facts commodified women and girls and presented them in hyper-sexualized ways. In addition, women of color were mainly represented pornographically.
Safiya commented that “Unfortunately, these kinds of misrepresentations have continued, but no one is focusing on racism in banal technologies like search or on larger systems issues, like how digital advertising platforms have taken over delivering knowledge from libraries and universities. Instead, they are looking at bias in social media”.
Here are Safiya’s eye-opening statements on YouTube.
An article in The Atlantic points out that “acts of technological racism might not always be so blatant, but they are largely unavoidable. Black defendants are more likely to be unfairly sentenced or labelled as future re-offenders, not just by judges but also by a sentencing algorithm advertised in part as a remedy to human biases. Predictive models methodically deny ailing Black and Hispanic patients’ access to treatments regularly distributed to less sick white patients. Examples like these abound.”
Algorithmic bias happens in machine learning and artificial intelligence. For example, Avriel Epps-Darling (a doctoral student at Harvard University) mentioned that:
“These sorts of systematic, inequality-perpetuating errors in predictive technologies are commonly known as “algorithmic bias.” They are, in short, the technological manifestations of America’s anti-Black zeitgeist. They are also the focus of my doctoral research exploring the influence of machine learning and AI on identity development. Sustained, frequent exposure to biases in automated technologies undoubtedly shapes how we see ourselves and our understanding of how the world values us. And they don’t affect people of all ages equally.”
Another academic and influential artificial-intelligence computer scientist Dr Timnit Gebru was at the center of a race row that engulfed Google’s AI research workforce and raised passions beyond, as reported by the BBC. Dr Gebru said Google fired her after taking issue with an academic paper she had co-authored. You can watch Dr Gebru’s statements in this video.
We see the impact of racism on social media too. For example, as reported by The Atlantic, Twitter users uncovered a disturbing example of bias on the platform: An image-detection algorithm designed to optimize photo previews was cropping out Black faces in favor of white ones. Twitter apologized for this botched algorithm, but the bug remains.
According to The Guardian, Facebook, Twitter, YouTube, Google, and Amazon issued statements in response to Black Lives Matter this year, but did they follow through? Here is the article, titled “Tech platforms vowed to address racial equity: how have they fared?”.
Reddit is another social media platform affected by racial discrimination. According to this article published in the Technology Review:
“Reddit users — including those uploading and upvoting — are known to include white supremacists. For years, the platform was rife with racist language and permitted links to content expressing racist ideology. And although there are practical options available to curb this behavior on the platform, the first serious attempts to take action, by then-CEO Ellen Pao in 2015, were poorly received by the community and led to intense harassment and backlash.”
Some countries are taking action and making progress. I particularly review progress on the elimination of systemic racism in Europe and Australia. Sarah Chander writes:
“The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology.” She points out that “the increased use of both place-based and person-based ‘predictive policing’ technologies to forecast where, and by whom, a narrow type of crimes are likely to be committed, repeatedly score racialized communities with a higher likelihood of presumed future criminality.”
Sarah Chander reports that:
“Most of these systems are enabled by vast databases containing detailed information about certain populations. Various matrixes, including the Gangs Matrix, ProKid-12 SI and the UK’s National Data Analytics Solutions, designed for monitoring and data collection on future crime and ‘gangs’ in effect target Black, Brown and Roma men and boys, highlighting discriminatory patterns on the base of race and class. At the EU level, the development of mass-scale, interoperable repositories of biometric data such as facial recognition and fingerprints to facilitate immigration control has only increased the vast privacy infringements against undocumented people and racialized migrants.”
In the context of systemic racism in Australia, Industry Professor Lindon Coombes at the Jumbunna Institute for Indigenous Education and Research at the University of Technology Sydney (UTS) points out that:
“You can have very good people with very good intent trying to do the right thing, but if those structures and systems are not in place and are not understood, those good people and those good intentions can come to nothing and actually do harm”.
Antoinette Lattouf (a multi-award-winning senior journalist and director of Media Diversity Australia) said:
“This idea that if you’re hardworking, no matter what minority you are or what group you hail from, you’ll be able to have equal participation and equal rights … we know when we look at our institutions, that that’s not in fact true. When we look at who brokers power and who has a voice, whether it’s politics, business or media, it’s still overwhelmingly white and overwhelmingly male. So when our systems and the dictators of power don’t look like the rest of Australia, for me that system shows that we’re not an equal country where everybody has a fair go.”
You can watch a discussion, posted to YouTube by UTS, of the significant attention around the lack of institutional responses to racial discrimination and the systemic patterns of racial discrimination in Australia.
The National Institute of Standards and Technology (NIST) in the US published a research paper titled “NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software: Demographics study on face recognition algorithms could help improve future tools”.
Patrick Grother, a NIST computer scientist and the report’s primary author, commented:
“While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied. While we do not explore what might cause these differentials, this data will be valuable to policymakers, developers and end-users in thinking about the limitations and appropriate use of these algorithms.”
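The "demographic differentials" Grother describes are typically reported as per-group error rates in one-to-one verification, such as the false match rate (FMR). The sketch below shows how such a differential might be computed; the comparison scores, group labels, and the 0.5 threshold are illustrative assumptions of mine, not NIST data or NIST code.

```python
# A minimal sketch of a per-group false match rate (FMR) calculation,
# the kind of demographic differential reported for face verification
# algorithms. Scores, groups, and the 0.5 threshold are illustrative
# assumptions, not NIST data.
from collections import defaultdict

# Each tuple: (group, same_person, similarity_score)
comparisons = [
    ("group1", False, 0.2), ("group1", False, 0.3), ("group1", True, 0.9),
    ("group2", False, 0.6), ("group2", False, 0.4), ("group2", True, 0.8),
]

def false_match_rate(comparisons, threshold=0.5):
    """Fraction of impostor pairs wrongly accepted as matches, per group."""
    impostors, false_matches = defaultdict(int), defaultdict(int)
    for group, same_person, score in comparisons:
        if not same_person:
            impostors[group] += 1
            false_matches[group] += score >= threshold
    return {g: false_matches[g] / impostors[g] for g in impostors}

# Identical threshold, unequal outcomes: group2's impostor pairs are
# accepted more often, which is the shape of a demographic differential.
print(false_match_rate(comparisons))
```

A false match in verification can mean, for example, letting the wrong person unlock a device, which is why unequal FMRs across groups translate directly into unequal security and harm.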
Individual and systemic racism are evident and even prevalent in technology, as pointed out in the literature, the press, social media, and my own professional studies of the topic.
In most cases, the situation is hidden. However, many overt incidents surfaced and affected people publicly. I introduced the phenomena of racism and gender inequality in technology leveraging various credible publications and provided notable examples from diligent and rigorous professionals.
Technology per se cannot be racist; machines are neutral. However, software programs can reflect the biases of programmers and institutions. Many established technology companies are aware of these risks and take preventive actions to mitigate them.
However, public awareness and input are essential to improve the current conditions and prevent future occurrences by mitigating imminent risks collaboratively.
Thank you for reading my perspectives. I’d be delighted to obtain your feedback.