So, where were we?
In Part 1 of this series, we started discussing whether the use of Artificial Intelligence (AI) in decisions that affect our everyday lives could lead to individuals being disadvantaged. We talked about the different ways AI is being used in decision making and highlighted a couple of examples. In case you missed it, you can catch up here.
Continuing on from Part 1, we will look at some more examples, including more controversial ones, before discussing how someone could be disadvantaged and whether it is something we should be concerned about.
Where AI is being used in decision making – continued…
Financial Services
As in the insurance sector, AI is now heavily utilised in the Financial Services sector to identify risk levels that then feed into decision making.
The primary area of focus is the assessment of applications for credit, loans or mortgages, and whether they should be approved or rejected. AI assesses each application against historic data to determine the associated risk, and that risk assessment then supports the decision. This goes far beyond the traditional use of credit scores and is therefore a more subtle way to identify and categorise individuals.
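To give a feel for what this looks like in practice, here is a minimal sketch of the idea. Everything in it (the features, the data and the model choice) is hypothetical rather than any real lender's system: a model is trained on historic applications and their outcomes, then produces a risk estimate for a new application that can feed into the approval decision.

```python
# Minimal sketch: scoring a new credit application against historic data.
# All features, data and thresholds here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historic applications: [income (£k), existing debt (£k), years at address]
historic_applications = np.array([
    [45, 5, 10],
    [22, 18, 1],
    [60, 2, 7],
    [30, 25, 2],
    [52, 8, 12],
    [19, 15, 1],
])
# Outcomes: 1 = the applicant later defaulted, 0 = they repaid
defaulted = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(historic_applications, defaulted)

# Score a new application: the model returns a probability of default,
# which the lender can compare against its risk appetite.
new_application = np.array([[35, 12, 3]])
risk = model.predict_proba(new_application)[0, 1]
print(f"Estimated default risk: {risk:.0%}")
```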
Legal Services

AI is being utilised in several ways in the legal sector, such as for document processing and due diligence exercises. It has not yet made inroads into decision making to the same extent as the Insurance and Financial Services sectors, though. One area that is particularly interesting is the use of AI by firms to analyse and predict the outcomes of legal cases and motions. The objective is to allow firms to forecast results accurately and adapt their strategy accordingly, effectively enabling AI-informed decisions about what to do at each stage. The logical next step could be that firms start to use a similar approach to determine whether or not to take on certain cases.
Government Services
The use of AI in government services and decision making is probably at its highest profile ever, and not in a positive way. Recent media stories about the use, and subsequent rejection, of algorithms to predict student grades have led many local authorities to halt the use of AI for decision making. Prior to this, it was claimed that nearly half of councils in England, Wales and Scotland had used AI to help make decisions about benefits, social housing and other services.
Despite this, it is likely that AI will continue to be used in one form or another to supplement human activity in the processing of applications for benefits and housing. The assessment of these applications relies on processing data against complex eligibility criteria or rules. Consequently, it is well suited for AI processing as long as the right controls are in place, but more on that later.
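Because these assessments reduce to applying rules to data, they lend themselves to automation in a transparent way. The sketch below shows the kind of rule processing involved; the criteria and thresholds are entirely made up, not real benefit rules. Note that it records a reason for every failed check, which matters for the controls discussed later.

```python
# Minimal sketch of rules-based eligibility checking. The criteria and
# thresholds are hypothetical, not real benefit rules.
from dataclasses import dataclass

@dataclass
class Application:
    age: int
    weekly_income: float
    savings: float
    is_resident: bool

def check_eligibility(app: Application) -> tuple[bool, list[str]]:
    """Apply each rule in turn, recording a reason for any failure."""
    reasons = []
    if not app.is_resident:
        reasons.append("Applicant must be a resident")
    if app.age < 18:
        reasons.append("Applicant must be 18 or over")
    if app.weekly_income > 350:
        reasons.append("Weekly income exceeds the threshold")
    if app.savings > 16000:
        reasons.append("Savings exceed the threshold")
    return (len(reasons) == 0, reasons)

eligible, reasons = check_eligibility(
    Application(age=34, weekly_income=410.0, savings=2000.0, is_resident=True)
)
print("Eligible" if eligible else f"Ineligible: {reasons}")
```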
In other areas, the Department for Transport has used AI to identify which garages should be targeted for MOT inspections. Based largely on potential risk, these decisions allow better use of resources and help ensure that more cars on the road comply with roadworthiness and environmental requirements.
Policing
In 2017, Durham Constabulary started working with academics to develop a tool to predict whether a suspect of a crime would be at low, moderate or high risk of committing a further crime over the following two years. The purpose was to identify offenders who should be referred to a rehabilitation programme rather than taken through the UK's court system.
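To illustrate the shape of such a tool, here is a minimal sketch of a three-band risk classifier. Durham's tool is reported to be built on a random forest, so one is used here, but the features, data and outputs below are entirely hypothetical.

```python
# Minimal sketch of a three-band risk classifier. Features and data
# are hypothetical, not those used by any real police force.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historic records: [age, prior offences, years since last offence]
history = np.array([
    [19, 6, 0], [45, 0, 20], [30, 2, 5],
    [22, 4, 1], [50, 1, 15], [28, 3, 2],
])
# Observed outcome band over the following two years
bands = np.array(["high", "low", "moderate", "high", "low", "moderate"])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(history, bands)

# Predict a band for a new case, to inform (not replace) the referral decision
print(model.predict(np.array([[25, 3, 1]]))[0])
```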
There have been a number of other initiatives where police forces have used AI technology. For example, South Wales Police used facial recognition to identify persons of interest, and Kent Police used a system trained on historic crime data to predict where crimes might occur. Both initiatives have since been stopped, for various reasons.
Is there anything to worry about?
These are just some examples from different sectors to illustrate how AI is being used to inform or make decisions that could impact us individually.
At one end of the spectrum is decision making in marketing. Although we are being affected by AI decision making here, the impact is minimal: at worst, we receive annoying contact offering us services we are not interested in. Intriguingly, the "better" the AI works, the more accurate its predictions of what we are interested in, and the smaller that impact becomes.
At the other end of the spectrum, AI can be used to make decisions that more fundamentally impact our lives: whether we are eligible for a service, likely to commit a crime, or even in need of treatment for an illness we were unaware of. If these decisions are wrong, the impact on an individual can be catastrophic, leading to them being treated unfairly and creating ethical challenges.
So in reality, yes, there is something to be worried about. Any decision that could impact our lives should be something we are aware of and understand.
This is not limited to automated decision making that utilises AI, though. Decisions of this nature made by humans are similarly at risk of being wrong and treating someone unfairly. Human decision makers are prone to cognitive bias, focusing on irrelevant, preconceived or simply wrong factors, and are especially susceptible when overworked, distracted, stressed or poorly trained. Cognitive bias is also a significant risk in all the examples described above.
What is being done to protect people?
In sectors and areas where humans make decisions that could impact lives, key controls are put in place to protect people and to ensure the organisation making the decision does not face legal action. Primarily, these are oversight, the right to appeal, and transparency.
Many sectors have their own regulators who have already put measures in place to mitigate the risk of bad decisions, and these measures are being extended to decisions informed by AI. The majority require transparency: an explanation of why a decision was made in a certain way. An explanation for refusing someone a loan or housing benefit that boils down to "the computer says no" is simply not sufficient from an ethical or regulatory point of view!
In addition, cross-sector measures are being put in place to protect us. The General Data Protection Regulation (GDPR) provides protection for individuals in terms of data use, regardless of sector or technology.
Article 22 of the GDPR states that an individual has "the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her". This does not apply if the decision is necessary for entering into or performing a contract, the individual affected has given explicit consent, or a legal authorisation exists.
The GDPR gives individuals the right to receive an explanation of how a decision was reached and have it reviewed by a human.
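As a simple illustration of what such an explanation could look like, here is a minimal sketch assuming a basic linear scoring model; the features, weights and threshold are all hypothetical. The point is that the factors driving a refusal can be ranked and reported in plain language, rather than left as "the computer says no".

```python
# Minimal sketch: surfacing the main factors behind an automated refusal.
# Features, weights and the threshold are hypothetical.
import numpy as np

feature_names = ["income (£k)", "existing debt (£k)", "missed payments"]
weights = np.array([0.08, -0.15, -0.60])   # assumed learned by some model
applicant = np.array([28.0, 22.0, 3.0])

# Each feature's contribution to the overall score
contributions = weights * applicant
score = contributions.sum()

if score < 0:  # hypothetical approval threshold
    # Rank the factors that pushed the score down the most
    order = np.argsort(contributions)
    print("Application refused. Main factors:")
    for i in order[:2]:
        print(f"  - {feature_names[i]} (impact {contributions[i]:+.2f})")
else:
    print("Application approved.")
```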
What Does the Team at Rising Tide Think?
Although there is potential for AI-based decision making to have a negative impact on an individual, organisations and lawmakers are aware of the risks and are trying to mitigate them. Furthermore, the risk is not necessarily any greater than when decisions are made by humans.
It is critical to be aware of the risks and to have controls in place, but this should not limit progress. AI-based decision making has the potential to change organisations, and the services their customers receive, for the better. For example, AI decision making done well does not suffer from cognitive bias: it will always make a decision based on the information presented rather than on any preconceived notions.
Innovation has always pushed the boundaries of how we do things, challenging our practices and laws. The emergence of AI and its integration into business practices will do the same, but that is not a reason to halt progress. Rather, it is a reason to ensure it is done right and that there are mechanisms in place to protect us all.