This is Part 2 of a two-part blog post exploring prediction accuracy in Artificial Intelligence and Machine Learning...
These posts focus in particular on how the drive to create accurate models that make good predictions can have a negative impact on the ease with which it is possible to explain the rationale behind those predictions. In turn, we will explore what this can mean for the ethical use of Artificial Intelligence in decision making.
We started by looking at how a Decision Tree can predict the quality of red wine, and found that although it was quick and easy to create a model that gave us highly explainable results, the degree of accuracy that a simple Decision Tree could achieve was limited. In this post we’re going to look at two other models, starting with Logistic Regression.
Model 2: Logistic Regression
If the phrase ‘Logistic Regression’ has just brought you out in a cold sweat, then don’t panic!
Basically, for the purposes of this blog post, think of logistic regression as being a bit like making a cake. When you make a cake, you take various ingredients in different quantities, mix them together and put them all in an oven. With logistic regression, we took the things we knew about the bottle of wine (alcohol level, pH, etc.) as the ingredients, worked out how much of each of them we should put into our mixture, then passed it through a mathematical formula (our equivalent of the oven) and out popped a cake – or in our case, a prediction of whether a bottle of wine is good or not.
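If you fancy seeing what that looks like in code, here is a minimal sketch using Python and scikit-learn. The file name, the ‘good wine’ cut-off of a quality score of 7 or more, and the train/test split are illustrative assumptions for the sketch, not the exact setup behind the results we describe below.

```python
# A minimal sketch of the "cake recipe" idea using scikit-learn's LogisticRegression.
# Assumptions (not taken from the original post): the UCI winequality-red.csv file,
# semicolon-separated, and "good" defined as a quality score of 7 or more.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

wine = pd.read_csv("winequality-red.csv", sep=";")
X = wine.drop(columns="quality")          # the "ingredients": alcohol, pH, sulphates, ...
y = (wine["quality"] >= 7).astype(int)    # 1 = good wine, 0 = not (illustrative threshold)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scaling, then logistic regression: the model learns how much of each ingredient to
# weigh into the mixture before passing it through the logistic ("oven") function.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("Accuracy:", model.score(X_test, y_test))
```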
Many Artificial Intelligence models use a similar approach, and the increased sophistication meant that we could achieve much better results. We were able to produce a model that gave us an 89% prediction accuracy, and importantly, unlike our simple Decision Tree, this was making predictions for every bottle of wine in the list, not just the small subset of ones it thought were likely to be good.
Explainability
Unfortunately, this improvement in accuracy comes at the expense of explainability. It is much more difficult to explain how the model is making its predictions than it was for the Decision Tree.
Explainability in Artificial Intelligence has become a bit of a hot topic recently, and because of that, there are now tools on the market that help to provide insight into these types of models. We ran one of those tools on our model, and it produced a chart showing how much each wine attribute contributed to the predictions.
Which is interesting, as it’s basically telling us how much of each ingredient was put into the cake mixture to get the prediction.
But (and this isn’t a criticism of the tool, as it’s a difficult challenge to address, and this is just one graph of the many it provides) the problem is that it’s a bit like knowing the cake ingredients without knowing the recipe. It helps to a certain extent, and you can certainly see what the most important ingredients were, but it would be difficult to sum up the above in a simple sentence, other than being able to say something like “the most important factors in predicting the quality of red wine seem to be the levels of alcohol and sulphates”.
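If you want the most basic version of that ‘ingredients list’ for yourself, you can read the weights straight off the logistic regression model from the earlier sketch. To be clear, this isn’t the tool we used (dedicated explainability libraries such as SHAP or LIME go much further); it’s just the simplest equivalent view, and it carries on from the code above.

```python
# One simple way to see "how much of each ingredient went into the mixture":
# inspect the coefficients of the logistic regression fitted in the earlier sketch.
import pandas as pd

log_reg = model.named_steps["logisticregression"]
weights = pd.Series(log_reg.coef_[0], index=X.columns).sort_values(key=abs, ascending=False)
print(weights)
# Because the features were standardised, larger absolute weights indicate
# ingredients with more influence on the prediction (e.g. alcohol and sulphates).
```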
So what?
Well, by trading off how easy it is to explain the results of the model, we’ve been able to achieve a significant improvement in prediction accuracy. However, it was a much more time-consuming process to get this result, and it required specialist data science knowledge and skills to create the model.
By using the tools that are starting to come onto the market, we can still offer some explanation of the most important factors in deciding the quality of the wine. Realistically though, the level of information we have extracted here is unlikely to be sufficient to explain the reasoning behind a decision in a more sensitive situation, such as being refused a loan, beyond simply saying something like “I’m afraid that you don’t meet our lending criteria”.
Logistic Regression Summary
- The accuracy of the predictions from this model is much higher, and we can now make predictions for every wine on the list, not just a subset
- The gain in accuracy has come at the expense of how easy it is to explain the model
- Producing a model like this takes significantly more time than a simpler model and requires specialist skills and knowledge
Model 3: Neural Network
A Neural Network is a Machine Learning model that is designed to mimic the way a human brain works. If you’ve ever heard the term ‘deep learning’, it is normally used in the context of a Neural Network, because they can offer much deeper levels of sophistication than the other models we have looked at.
Neural Networks are used in many Artificial Intelligence and Machine Learning situations in the modern world, and they are capable of doing some amazing things in terms of image and speech recognition, diagnosing cancers, predicting the weather – oh, and helping to determine whether a particular bottle of red wine might be any good or not!
We built a Neural Network that was capable of predicting the quality of the red wines in our sample with an accuracy of just over 90%, which is pretty impressive, although it’s worth noting that this is only a fractional improvement over our previous model. So much so, in fact, that without a lot more testing we wouldn’t be confident in saying whether it is actually better at making predictions than the Logistic Regression model in our second example. What we can say with some confidence, though, is that both models appear to be pretty good at making predictions for our red wine dataset.
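As before, here is a minimal sketch of what a Neural Network for this problem might look like, built on the same data split as the earlier examples. The choice of scikit-learn’s MLPClassifier, the two hidden layers and their sizes are illustrative guesses for the sketch, not the actual architecture behind our 90% result.

```python
# A minimal sketch of a Neural Network on the same data, using scikit-learn's
# MLPClassifier. Layer sizes and other settings are illustrative, not the
# architecture used for the results quoted in the post.
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

nn = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=42),
)
nn.fit(X_train, y_train)  # X_train/y_train from the logistic regression sketch
print("Accuracy:", nn.score(X_test, y_test))
# Unlike the single set of logistic regression weights, the many learned connection
# weights inside the hidden layers have no simple, human-readable interpretation.
```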
Explainability
The bad news is that all this comes at a cost, which is that Neural Networks are one of the worst models in terms of their explainability. Very much like the human brains they are trying to emulate, although you can look at their inner workings, making sense of those inner workings and how they interact with one another to make a prediction is almost impossible – so much so that a Neural Network is effectively a ‘black box’.
So what?
Neural Networks are extremely powerful and versatile models that can be used to solve a large range of Artificial Intelligence problems. But this power comes at the expense of explainability, which, for a Neural Network, is pretty much zero.
This can be problematic in many situations where it is important to be able to explain the reasons behind a particular recommendation, or why a decision has been made – especially when this has affected an individual consumer or customer. Therefore, care needs to be taken to think through the potential ethical challenges that using a model like this can create, and when it might be more appropriate to use something simpler, but with a higher degree of explainability.
Neural Network Summary
- A very powerful and flexible Artificial Intelligence model that can be deployed in a range of different situations
- Creating a Neural Network takes time and specialist skills
- It is pretty much impossible to explain how a particular Neural Network makes its predictions, and it’s effectively a ‘black box’ in terms of explainability
Summary and Conclusions
The purpose of this two-part blog post was to highlight and bring to life some of the considerations that are important when applying Artificial Intelligence to real-world problems. This is particularly relevant when considering the ethical implications of not being able to explain the reasons behind a certain decision that an Artificial Intelligence model has made.
Whilst there is not a direct relationship between the complexity and/or accuracy of a model and how explainable its predictions are, it does tend to be true that more complex models (which are in turn more difficult to explain) make better and more accurate predictions. When considering deploying Artificial Intelligence, it is therefore important to consider the trade-offs and the ethical impact of using a model that provides more accurate results, but in a less transparent way.
Consider ethics from the outset
The ethical impact of Artificial Intelligence is now gaining much more focus and attention, and this has resulted in tools coming onto the market that can provide further insight into why decisions have been made. By using these tools effectively, coupled with a design that addresses ethical considerations from the outset (such as restricting the information fed to the Artificial Intelligence to information that isn’t considered sensitive), highly effective and accurate Artificial Intelligence models can be created that deliver significant value and avoid unexpected ethical dilemmas.