
Alien, Zombie, Asteroid and now "AI Bias"​

In 2016, a team of scientists from Microsoft Research and Boston University studied how machine learning runs the risk of amplifying biases present in data, especially gender biases ("Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings", https://arxiv.org/pdf/1607.06520.pdf). The team revealed that word embeddings trained even on Google News articles exhibit female/male gender stereotypes to a disturbing extent.
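To make this concrete, here is a minimal sketch of the kind of analogy probe the paper describes. It assumes gensim is installed and that a local copy of the pre-trained Google News word2vec vectors is available; the file name below is illustrative.

```python
# Sketch: probing a pre-trained word embedding for gender-stereotyped analogies,
# in the spirit of the "Man is to Computer Programmer as Woman is to Homemaker?" paper.
# Assumes gensim and a local copy of the GoogleNews word2vec vectors (not included here).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Classic analogy query: man : computer_programmer :: woman : ?
# The embedding answers with whatever word is closest to
# vec(computer_programmer) - vec(man) + vec(woman).
result = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=5,
)
for word, score in result:
    print(f"{word:20s} {score:.3f}")

# A simple bias probe: compare how close a profession sits to "he" vs. "she".
for job in ["doctor", "nurse", "engineer", "homemaker"]:
    gap = vectors.similarity(job, "he") - vectors.similarity(job, "she")
    print(f"{job:12s} he-vs-she similarity gap: {gap:+.3f}")
```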


Until 2009, Amazon was de-ranking LGBT books by mistakenly classifying them as "adult literature" (https://www.theguardian.com/culture/2009/apr/13/amazon-gay-writers). Amazon stated: "We recently discovered a glitch to our Amazon sales rank feature that is in the process of being fixed. We’re working to correct the problem as quickly as possible."


Amazon may have fixed the problem in that one algorithm, but in 2016, Bloomberg analysts revealed that Amazon's Prime same-day delivery service areas excluded ZIP codes along racial lines to varying degrees (https://www.bloomberg.com/graphics/2016-amazon-same-day/).


Even today, in 2018, we still see gender bias in general-purpose, machine learning-powered tools such as Google Translate. The underlying algorithm associates doctors with men and nurses with women when translating from gender-neutral languages into gendered ones.


[UPDATED, October 2019: Google Translate resolved this issue with a very smart solution; the translation now shows one result for each gender.]







I'm confident that most of these products were not built with the intention to include biases, but ignorance is not the same as innocence. Humans have a long history of violence and discrimination, and the default tendency of a machine-learned system built on human data is to inherit these biases, with disastrous effects in today's data-hungry Artificial Intelligence (AI) systems.


The problem is much too broad to be solved just by changing the algorithms; it is related to every part of a business, from engineers to product managers and executives. In this article, I'll surface some of the causes of the "bias" problem and provide a few suggestions to prevent it. I'll be using the term "bias" to mean disadvantageous treatment or consideration of any person or group; that is, being treated worse than others for some arbitrary reason.


Problems

From the civic announcements at the Agora in ancient Greece to the news notifications on your mobile app, information has historically been "served" to you. This means that unless you did your own scientific research and experimentation, the information you receive always has some bias. And, unlike the observational error you would otherwise encounter, bias in served information has much more complicated causes. To start, I'll divide bias in served information into two categories: intentional and unintentional.


An example of unintentional bias is the gender bias seen in the relationships between words taken from Google News. An example of intentional bias is the incredible amount of cryptocurrency news or one-sided political news I see nowadays.


While information came in manageable amounts, the only source of intentional and unintentional bias was human. But once we crossed the line where we had more information than we could consume, we gave rise to technologies that added their own error and significantly amplified the source bias. Today, these technologies sit in our search engines, news feeds, social feeds, translators, and many other tools.


One of these technologies is the recommendation engine. Recommendation engines solve a massive problem of the digital age: information overload. Even though they can introduce a filter bubble, they are still the best technology available today. For example, at Yahoo, the recommendation engine we developed could select a handful of highly relevant news articles out of a million, within single-digit milliseconds, for each of a billion users. Without a recommendation engine, you would need to read through a million news articles every day to find the ones relevant to your liking. The same approach powers your LinkedIn, Twitter, and Facebook feeds, the search engines you use for general search, and even your hotel and flight searches.
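To illustrate the basic mechanics (this is a toy sketch, not Yahoo's system or any production engine), a content-based recommender can be as simple as embedding articles with TF-IDF and scoring them against a profile built from what the user read before:

```python
# Toy content-based recommender: rank articles by similarity to a user's reading history.
# Purely illustrative; production systems add collaborative signals, freshness, diversity, etc.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Central bank raises interest rates amid inflation fears",
    "New smartphone released with improved camera",
    "Local team wins championship after dramatic final",
    "Stock markets rally as tech earnings beat expectations",
]
user_history = ["Tech stocks slide after earnings miss", "Markets react to rate decision"]

vectorizer = TfidfVectorizer()
article_vecs = vectorizer.fit_transform(articles)            # embed candidate articles
profile_vec = vectorizer.transform([" ".join(user_history)]) # embed the user's history

scores = cosine_similarity(profile_vec, article_vecs).ravel()
for score, title in sorted(zip(scores, articles), reverse=True):
    print(f"{score:.2f}  {title}")
```

Notice that such a system keeps recommending more of what the user already reads, which is exactly how a filter bubble forms.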


Based on my observation, all the information I receive on the internet through standard tools comes from one search or recommendation machine learning algorithm or another. And, underneath the surface of these machine learning algorithms, I see four different areas where we need to monitor and control bias:


  • Data source

  • Data processing

  • Model

  • Inference

Since machine learning algorithms model data, which can be anything from digitized information to readings of the environment, identifying the bias in the source information is crucial for downstream systems to treat humans fairly.


Drilling a little more into the concept of data sources, I see at least three types of machine learning data sources when it comes to serving information to humans: content, context, and activity. Content is data produced with the purpose of being presented directly to a human. Context is the state of the environment relative to the content. And activity data is generated as a result of the interaction between a user and content in a context.
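One way to picture the three sources is as a hypothetical schema (the field names here are illustrative, not a standard): content is the article itself, context is where and when it is shown, and activity is the resulting interaction.

```python
# Hypothetical schema illustrating the three data sources discussed above.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Content:
    """Data produced to be presented directly to a human, e.g. a news article."""
    article_id: str
    title: str
    body: str
    author: str

@dataclass
class Context:
    """State of the environment relative to the content at serving time."""
    timestamp: datetime
    device: str       # e.g. "mobile", "desktop"
    locale: str       # e.g. "en-US"
    placement: str    # e.g. "homepage_feed"

@dataclass
class Activity:
    """Generated by the interaction between a user and content in a context."""
    user_id: str
    article_id: str
    context: Context
    action: str       # e.g. "impression", "click", "share"
```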


Every one of us has temporal objectives and a subjective, intersubjective worldview, which we carry into everything we create: into our articles, photos, movies, songs, music, paintings, software, and more. On the intentional side, we are consciously aware of the worldviews we choose, but there are also worldviews we are oblivious to, the unintentional ones, which originate from our paradigms and our social circles. These biases can arise from cognitive error, conflicts of interest, context/environment, or prejudice. For example, an analysis of user comments on popular daily news articles revealed that the average user comment has a negative tone regardless of the news topic. Yet most of those people would disagree that they are negative.


Given the problems in human history, these biases are not surprising, but things get dangerous when we use this unintentionally biased content to create machine learning models; the models inherit the biases, with some degree of error, and operate on them. Unfortunately, the biases carried into machine learning algorithms are not visible to the human eye unless we deliberately expose them.


Black-box AI is the name we give to machine learning models we don't care to understand. I say "we don't care" because in most cases, as with deep learning, it can become very labor-intensive to explain every factor. Understanding the bias alone is a project of its own.


Systems like recommendation engines mainly try to predict a user's behavior based on historical and collaborative internet activity. This approach causes algorithms to create an information isolation known as a filter bubble (https://en.wikipedia.org/wiki/Filter_bubble). The nature of this isolation depends on how user activities are represented in the system and can be anything from social, cultural, economic, and ideological to behavioral. If not given attention, filter bubbles can be used, intentionally or unintentionally, to push public opinion towards a particular bias. For example, in 2013, Yahoo researchers found that web browsing behavior on Yahoo Finance can anticipate stock trading volumes (https://research.yahoo.com/publications/6609/stock-trade-volume-prediction-yahoo-finance-user-browsing-behavior). This means a bias in the ranking of financial news could affect user activity and hence affect stock trading volumes.


Solutions

Debiasing

It is every data scientist's, product manager's, and engineer's responsibility to have a robust strategy to detect, expose, and remove biases in AI products and services. While there are hundreds of possible biases, I think the following most critical ones are a good start for every content-based machine learning system:


  • Racism

  • Sexism

  • Cynicism

  • Framing

  • Bullying

  • Favoritism

  • Lobbying

  • Classism

  • Polarity

One of the ways to detect these biases is to model the bias using a class of NLP (Natural Language Processing) techniques called sentiment analysis (https://en.wikipedia.org/wiki/Sentiment_analysis). Today, sentiment analysis is possible using human-provided training data (e.g., sentiment labels) as well as unsupervised learning techniques like the Unsupervised Sentiment Neuron (https://blog.openai.com/unsupervised-sentiment-neuron/). In recent years, RNN (Recurrent Neural Network) algorithms have also become very popular for solving NLP problems.
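As a minimal illustration of the idea (using NLTK's off-the-shelf VADER analyzer rather than a purpose-built model, and made-up example headlines), sentiment scores can flag content whose tone is systematically negative towards a topic or group so a human can review it:

```python
# Sketch: scoring content sentiment to flag systematically negative coverage.
# Uses NLTK's lexicon-based VADER analyzer; a real debiasing pipeline would use
# purpose-built classifiers for each bias category listed above.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

headlines = [
    "Immigrants flood the city, residents worried",
    "New community center opens its doors to all residents",
    "Study finds female engineers outperform expectations",
]

for text in headlines:
    scores = sia.polarity_scores(text)   # neg / neu / pos plus a compound score in [-1, 1]
    flag = "REVIEW" if scores["compound"] < -0.3 else "ok"
    print(f"{flag:6s} compound={scores['compound']:+.2f}  {text}")
```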


Preventing Filter Bubbles

One approach to avoiding filter bubbles is building exploration/exploitation tradeoff strategies. The exploration/exploitation tradeoff allows the system to balance serving information "from outside" the filter bubble against serving "more about" what is inside it. Some techniques address the problem with multi-armed bandit solutions (https://en.wikipedia.org/wiki/Multi-armed_bandit).
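A minimal epsilon-greedy sketch (one of the simplest multi-armed bandit strategies, shown here with made-up topics and click rates) captures the idea: with probability epsilon the system explores content outside the user's usual bubble; otherwise it exploits what has worked so far.

```python
# Epsilon-greedy bandit sketch: balance exploiting known-good topics with
# exploring topics outside the user's filter bubble. Illustrative only.
import random

class EpsilonGreedyRecommender:
    def __init__(self, topics, epsilon=0.1):
        self.epsilon = epsilon
        self.clicks = {t: 0 for t in topics}       # observed reward per topic
        self.impressions = {t: 0 for t in topics}  # how often each topic was shown

    def select_topic(self):
        # Explore: occasionally serve a random topic, including unfamiliar ones.
        if random.random() < self.epsilon:
            return random.choice(list(self.clicks))
        # Exploit: otherwise serve the topic with the best observed click rate.
        return max(self.clicks, key=lambda t: self.clicks[t] / max(self.impressions[t], 1))

    def record_feedback(self, topic, clicked):
        self.impressions[topic] += 1
        self.clicks[topic] += int(clicked)

# Usage: simulate a user who clicks finance far more often than other topics.
rec = EpsilonGreedyRecommender(["finance", "sports", "science", "arts"], epsilon=0.15)
true_rates = {"finance": 0.6, "sports": 0.2, "science": 0.1, "arts": 0.05}
for _ in range(1000):
    topic = rec.select_topic()
    rec.record_feedback(topic, clicked=random.random() < true_rates[topic])
print(rec.impressions)  # finance dominates, but the other topics are still explored
```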


Glass Box AI

Today, we see more and more researchers and companies moving into this area and creating technologies to explain machine learning models. One of these technologies is LIME (https://github.com/marcotcr/lime). LIME is based on the paper at https://arxiv.org/pdf/1602.04938.pdf, and currently it can explain any black-box classifier with two or more classes.
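Here is a short usage sketch of LIME's text explainer. The tiny training set and class names are made up purely to show the shape of the API; the key point is that LIME only needs the model's probability function, not its internals.

```python
# Sketch: explaining a black-box text classifier with LIME.
# The toy training data and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

texts = [
    "the nurse cared for the patient",
    "the doctor diagnosed the illness",
    "she stayed home with the children",
    "he wrote the software for the bank",
]
labels = [0, 1, 0, 1]  # toy labels, e.g. 0 = "caregiving", 1 = "professional"

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["caregiving", "professional"])
explanation = explainer.explain_instance(
    "she worked as an engineer at the hospital",
    classifier.predict_proba,   # LIME treats the classifier as a black box
    num_features=5,
)
print(explanation.as_list())    # (word, weight) pairs driving the prediction
```

If words like "she" or "he" carry heavy weights for a prediction that should be gender-neutral, that is exactly the kind of hidden bias this approach can expose.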


Another step towards transparency is DARPA's Explainable AI (XAI) program (https://www.darpa.mil/program/explainable-artificial-intelligence), which aims to produce "glass box" models that are explainable to a "human in the loop" (read more about XAI at https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence). Also, leading researchers like Kate Crawford (http://www.katecrawford.net) are studying the social implications of AI and bringing more awareness to the industry.


On the commercial side, companies like Optimizing Mind (http://optimizingmind.com/) are developing technologies to reveal how deep learning models interpret each component of their input.


——


As we introduce more AI technologies into our processes, it is everybody's responsibility to understand the bias issues and take the necessary precautions.


In this article, I presented just a few aspects of the dangers of artificial intelligence solutions. If you want to learn more about all the aspects of making the right decisions, please check out my Stanford Continuing Studies course, "Product Management in the Artificial Intelligence Era", at https://continuingstudies.stanford.edu/courses/professional-and-personal-development/product-management-in-the-artificial-intelligence-era/20191_WSP-359. Although nothing can replace the classroom experience, I'm also planning an online version for those who are not in the area.


Thank you for reading!
