Bias in Artificial Intelligence

Feb 24, 2020

Artificial Intelligence (AI) has crept its way into many facets of business and technology. From customer service chatbots to machine learning algorithms, AI has become a highly influential tool on which companies and agencies base major decisions. But what happens when AI forms biases? While AI is machine-based, it is created and implemented by humans, and humans naturally carry their own biases: gender assumptions, racial and social preconceptions, or simply a tendency to gravitate toward the familiar. AI creators need to be highly aware of what they are feeding their algorithms, and perhaps the AI world needs more transparency and accountability as well.

A well-known example of this is Tay, Microsoft's Twitter-based social AI chatbot, launched in March 2016. Tay was meant to showcase AI's power to grow and learn from the people around it: she would converse with people on Twitter and develop a personality shaped by those conversations. Unfortunately, users discovered that Tay could not choose to ignore the negative tweets that came in, so she absorbed the many racist and derogatory tweets she received and incorporated them into her persona. Microsoft had to disable Tay's account less than 16 hours after its launch.

A more alarming and consequential example of racial bias occurred in COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an AI program used by courts in several states, including Wisconsin, to predict how likely convicted defendants were to re-offend after their release from prison. An investigation by ProPublica found that the risk assessment system was heavily biased against black defendants: among people who did not go on to re-offend, 45% of black defendants had been flagged as high risk, compared with only 23% of their white counterparts. Because judges consult these scores, this racial bias has contributed to black defendants being handed longer sentences.
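To make that statistic concrete, here is a minimal sketch of how such a disparity is measured (plain Python over made-up records, not ProPublica's actual data): the false positive rate is the share of people who did not re-offend but were still flagged as high risk, computed separately for each group.

```python
# Minimal sketch of a false-positive-rate audit over made-up records.
# Each record is (group, flagged_high_risk, actually_reoffended).
records = [
    ("black", True, False), ("black", True, False), ("black", False, False),
    ("black", True, True),
    ("white", True, False), ("white", False, False), ("white", False, False),
    ("white", True, True),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were still flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(records, group), 2))
# black 0.67, white 0.33 -- the gap between the rates, not either number
# alone, is the kind of disparity ProPublica reported.
```

The system's overall accuracy can look reasonable while a gap like this persists, which is why the rates have to be computed per group to surface the bias at all.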

In addition to alarming racial and social biases, there is the natural human pull toward the familiar, as with gender assumptions. For example, many people associate CEOs with men, since men make up the majority of that demographic. This "familiar" association showed up in Apple's iMessage, which would automatically suggest the male businessman emoji when "CEO" was typed in a text message. Similarly, Google Translate fumbled between Turkish and English: Turkish's third-person pronoun "o" is gender-neutral, so the translator had to guess a gender, and it guessed along stereotypical lines, rendering "o bir doktor" as "he is a doctor" and "o bir hemşire" as "she is a nurse." This bias toward the familiar also comes into play in hiring, as more and more companies use AI programs to support the hiring process. Tech and other STEM-based industries have earned a reputation for a lack of workplace diversity, with workforces that are overwhelmingly white, Asian, and male. Many of these companies claim to want to diversify, but a company that is not careful will feed historically restrictive hiring data into its AI and simply hire more of the same employees. As in any situation, it is vital to be aware of the data being given to the AI, since that is all the AI can learn from.
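To illustrate that feedback loop, the sketch below trains a logistic regression on fabricated historical hiring decisions that favored one group; the data, features, and scenario are all invented for illustration. Even without an explicit gender or race column, the model picks up the bias through a correlated proxy feature.

```python
# Sketch: a model trained on biased historical hires reproduces the bias.
# All data here is fabricated; "College A" stands in for any proxy feature
# that correlates with a favored demographic group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
experience = rng.normal(5, 2, n)      # years of experience
college_a = rng.integers(0, 2, n)     # 1 if candidate attended "College A"
# Historical label: past recruiters hired almost exclusively College A
# candidates -- the bias the model is about to learn.
hired = (college_a == 1) & (experience > 3)

X = np.column_stack([experience, college_a])
model = LogisticRegression().fit(X, hired)

# Two equally experienced candidates, differing only in the proxy feature:
print(model.predict_proba([[6.0, 1], [6.0, 0]])[:, 1])
```

The two candidates differ only in the proxy feature yet receive very different scores, which is exactly how "more of the same" hiring happens without anyone deciding to discriminate.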

No one is completely devoid of bias, so when a biased human creates, selects, collates, or annotates the training data fed to a machine learning algorithm, the potential for bias is high. Bias in AI is less an issue of ill will than of a lack of awareness. How, then, can we eliminate bias from AI? It is unlikely that bias will ever be cut out completely, but steps can be taken to avoid it wherever possible. As Cathy O'Neil, a former Barnard professor and analyst at D.E. Shaw, puts it, "Big corporate America is too willing to hand over the wheel to the algorithms without fully assessing the risks or implementing any oversight monitoring." Organizations ought to make a conscious effort to prevent bias in their training data: educating employees and stakeholders about the causes and effects of bias in their algorithms, deliberately monitoring the training data being used, and building a culture of transparency both within the organization and outwardly toward its clients, users, and affected parties. AI creators must be intentional and open-minded to avoid typical human biases, and progress toward eliminating bias cannot begin until the companies that build AI programs are willing to be forthright about their training data and their algorithms, at least within their own organizations. Until then, clients and users should stay cognizant of the potential biases in AI and exercise some caution and care when engaging with it.
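One concrete form that monitoring can take is a routine audit of a training set's composition before any model is trained. The check below is a hypothetical sketch (the dataset, field name, and threshold are all invented), flagging any group whose share of the data falls below a chosen floor.

```python
# Sketch: flag under-represented groups in a training set before training.
# The dataset, field name, and 20% threshold are invented for illustration.
from collections import Counter

def underrepresented_groups(rows, field, min_share=0.2):
    """Return each group whose share of `rows` falls below `min_share`."""
    counts = Counter(row[field] for row in rows)
    total = sum(counts.values())
    return {group: count / total
            for group, count in counts.items() if count / total < min_share}

training_rows = [
    {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "male"}, {"gender": "female"},
]
print(underrepresented_groups(training_rows, "gender"))
# {'female': 0.1666...} -- 1 in 6 rows; worth addressing before training.
```

A check like this will not catch proxy correlations on its own, but it makes glaring imbalances visible before they are baked into a model.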


