AI mistakes? How could this happen?

Aug 10, 2018

AI perceives its environment and takes actions that maximize the probability of achieving its goals. Maximizing a probability does not guarantee success (the correct answer), though it is a common misconception that it does: AI-based systems will produce some wrong answers. AI, much like natural intelligence, is fallible, but not for the reasons many claim. There are plenty of real-world examples of AI mistakes from the world's leading companies in AI deployment, and when the stakes are highly visible, for example in oncology treatment recommendations, the broader community will demand explanations for why and how the mistakes happened. The reasons for failure are often quite simple. Let's explore a few common root causes:

  • The wrong data are used in estimation. Simulated data are often used for AI experiments, but they are not suitable for estimating the underlying equations/models. Simulated cases distort the underlying distributions in the data, and a model estimated on them can misbehave on real data (see the first sketch after this list). Responsible Party = Human.
  • Extreme values are not considered. Researchers often "clean" the data used in estimation, for example by removing outliers. Cleaning data is not itself the problem; failing to consider what the model will predict when it encounters an outlier in production is (see the second sketch below). Responsible Party = Human.
  • The data generating process (DGP) is in flux. Business processes change, new data are collected, some data are no longer collected, laws change, and administrative policies change; all of these can have serious implications for the DGP and, in turn, for AI. For example, suppose income was previously collected as a continuous variable and the equations were estimated with those data, but income is now collected as a categorical variable segmented in $25,000 increments. Did anyone re-estimate the equation(s) that use income? (The third sketch below shows how silently this fails.) Responsible Party = Human.
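
To make the first root cause concrete, here is a minimal sketch with hypothetical data and numbers (NumPy and scikit-learn assumed): a regression estimated on tidy simulated income data is applied to real data whose skew and high-end behavior the simulation never produced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Simulated training data: symmetric, well-behaved income.
x_sim = rng.normal(70_000, 5_000, size=1_000)
y_sim = 0.2 * x_sim + rng.normal(scale=1_000, size=1_000)
model = LinearRegression().fit(x_sim.reshape(-1, 1), y_sim)

# Real data: right-skewed income, and spending saturates at the
# high end -- a pattern the simulation never generated.
x_real = rng.lognormal(mean=11.1, sigma=0.6, size=1_000)
y_real = np.minimum(0.2 * x_real, 25_000) + rng.normal(scale=1_000, size=1_000)

mse_sim = np.mean((model.predict(x_sim.reshape(-1, 1)) - y_sim) ** 2)
mse_real = np.mean((model.predict(x_real.reshape(-1, 1)) - y_real) ** 2)
print(f"MSE on simulated data: {mse_sim:,.0f}")   # small
print(f"MSE on real data:      {mse_real:,.0f}")  # orders of magnitude larger
```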
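The second root cause is about what happens at scoring time, not at cleaning time. The sketch below (again with hypothetical data and thresholds) trims outliers before estimation, then refuses to extrapolate silently when an extreme value arrives in production. A hard error routed to review is one reasonable policy; a capped prediction is another.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
income = rng.uniform(20_000, 150_000, size=500)
spend = 0.25 * income + rng.normal(scale=3_000, size=500)

# "Cleaning": estimate only on incomes inside the 1st-99th percentile.
lo, hi = np.percentile(income, [1, 99])
mask = (income >= lo) & (income <= hi)
model = LinearRegression().fit(income[mask].reshape(-1, 1), spend[mask])

def score(x):
    """Refuse to extrapolate silently: flag inputs the model never saw."""
    if not lo <= x <= hi:
        raise ValueError(f"income {x:,.0f} is outside the estimation range "
                         f"[{lo:,.0f}, {hi:,.0f}]; route to review")
    return float(model.predict([[x]])[0])

print(score(80_000))       # inside the range: a sensible prediction
try:
    score(2_000_000)       # the outlier nobody planned for
except ValueError as err:
    print("refused:", err)
```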
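The third root cause is the quietest. In the sketch below, a coefficient estimated when income arrived in dollars starts receiving bin codes after the collection change; nothing errors, the predictions are simply nonsense. The coefficient, values, and the `check_income` guard are all hypothetical.

```python
beta_income = 0.25     # coefficient estimated when income arrived in dollars

old_feed = 80_000      # continuous income, as the model expects
new_feed = 3           # same person after the change: bin 3 = $75,000-$100,000

print(beta_income * old_feed)  # 20000.0 -- the contribution the model expects
print(beta_income * new_feed)  # 0.75    -- silent nonsense, no error raised

def check_income(value):
    """Cheap schema guard at ingestion: dollar-denominated income should
    not look like a bin code. (Threshold is illustrative.)"""
    if value < 100:
        raise ValueError("income looks like a bin code, not dollars; "
                         "map bins back to dollars or re-estimate the model")
    return value

check_income(old_feed)      # passes
try:
    check_income(new_feed)  # caught before the bad value reaches scoring
except ValueError as err:
    print("caught:", err)
```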

You have probably noticed a common characteristic of these examples... Human. People are at the heart of AI's successes and failures. The analytic software used to estimate the underlying models for AI (e.g., R, Python, SAS, Oracle) will produce results. The results may be complete nonsense, but it is the responsibility of the data analyst and the stakeholders to understand why.

Do you have an example of AI mistakes that can't be traced back to a human?
