AI and Ethics


 

Introduction

 

Artificial Intelligence (also called "AI"; if needed, read our 5-minute post: What is Artificial Intelligence?) is a much-fantasized area of research. Its capabilities have often been exaggerated, so much so that people and companies have compared them to those of the human brain. While we are indeed far from this stage of "general AI", it is undeniable that recent key achievements in the field of Machine Learning have made real-world business applications possible.

With this enthusiasm and flurry of experiments all over the world, a crucial question needs answering: what are the main ethical* risks posed by AI? This paper will attempt to cover the opportunities, the main pitfalls machine learning systems can fall into, and the solutions that we recommend and implement today.

* By ethical we mean that they comply with moral principles and values shared by members of a society. A set of such values and principles was agreed upon in the Universal Declaration of Human Rights adopted by the United Nations in Paris in 1948, but it is challenged by numerous states and entities today. We state our own ethics at the end of this post.

I. Opportunities

 

AI carries incredible opportunities for actors big and small around the world. Here are a few sectors for which AI can make a huge difference.

AI can help radiologists detect cancers more accurately

I.1 In the field of agriculture

AI can be used in the field of agriculture to identify with great precision where farmers should water their crops and where they must intervene to save them from an environmental threat. This is achieved through deep learning models trained on satellite images as well as images taken by cameras on farm tractors.

I.2 In the field of transportation

Car accidents are estimated to cause 1.25 million deaths in the world each year. They are also hugely costly, causing hundreds of billions of dollars in damage each year in the U.S. What's more, the National Highway Traffic Safety Administration (NHTSA) estimates that 94% of auto accidents are caused by human error.

Reducing human input has the potential to result in fewer deaths and injuries and reduced economic damage if automated driving systems are up to the task of taking over. Autonomous vehicle technology relies heavily on AI technology.

I.3 In the field of health

AI has proven incredibly valuable in assisting doctors in making the right decisions for their patients. Recently, an AI system was developed that is better at spotting breast cancer in mammograms than expert radiologists.

The AI outperformed the specialists by detecting cancers that the radiologists missed in the images, while ignoring features they falsely flagged as possible tumours.

I.4 In the field of homeland security

Police departments and intelligence services can benefit from AI by gaining precious time in identifying known criminal faces or vehicle licence plates in CCTV footage and photos published online. AI can also improve weapon detection at airports by automatically flagging suspicious luggage in scans. This can prove greatly helpful in preventing an attack on the population.

I.5 In the field of financial services

The banking sector has undergone a large-scale transformation since the 2008 banking crisis, with a strong emphasis put on compliance. AI can help compliance officers with the many challenges they face: automatically extracting and synthesizing huge amounts of information to verify that transactions and new accounts are compliant, and processing internal recordings to ensure no breach of protocol has been made, all with the ultimate aim of preventing a new risk for the bank and the system as a whole.

 

II. Risks

 

As is often the case with new and powerful technologies, Artificial Intelligence carries risks on different levels. After defining the notion of bias we will describe these risks.

II.1 Bias

Bias is commonly defined as an inclination or prejudice for or against some person, some group or something.

  • It may be conscious: for example, choosing a banana to eat by its yellow color. We may have a conscious bias towards yellow bananas because we have prior experience or scientific knowledge of bananas being ripe when they are yellow
  • Or it may be unconscious: associating negative feelings with a community even though you are openly inclusive (as the famous Harvard implicit association test on race has demonstrated)

“Two quite opposite qualities equally bias our minds – habits and novelty.” – Jean de la Bruyère (Les Caractères ou les Moeurs de ce siècle, 1688)

Bias is a very widespread reality. As Jean de la Bruyère notes, bias can be forged by habit, repeating the same choices and being unwilling to change, whereas something new can trigger a positive or a negative bias just by being new. The study of cognitive biases is an extremely active field that still has much to reveal about inherent human biases.

Statistics and the field of machine learning have their own mathematical definition of bias. Bias and variance are the two main sources of error between a machine learning model's predictions and reality:

  • Bias error is due to oversimplified models that fail to capture the complexity of reality
  • Variance error is due to overly complex models that fit the training data perfectly but fail to generalize
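The bias and variance errors above can be illustrated with a short, hypothetical experiment: fitting polynomials of increasing degree to noisy samples of a known cubic function (all data and degrees below are illustrative, not from a real study):

```python
import numpy as np

# Noisy samples of a known cubic signal (illustrative data)
rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 30)
x_test = np.linspace(-1, 1, 100)
f = lambda x: x**3 - x                      # true underlying signal
y_train = f(x_train) + rng.normal(0, 0.1, x_train.size)
y_test = f(x_test)                          # noise-free ground truth

def errors(degree):
    """Train a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

underfit = errors(1)    # high bias: a line is too simple for a cubic shape
good = errors(3)        # matches the true complexity of the signal
overfit = errors(20)    # high variance: chases the noise in the training set
```

A degree-1 model shows the bias error (it misses the curve entirely), while a degree-20 model shows the variance error (it drives training error towards zero yet generalizes worse than the simpler degree-3 fit).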

 

II.2 Bias in > Bias out

Education, employment, health, access to credit : even today these important components of our society are prone to biased behaviour and decisions.

In the law enforcement domain multiple studies have shown existing biases :

  • A 2011 study showed that judges were statistically more lenient just after a meal break than at other moments of the day.
  • The Open Justice initiative in California has shown the LAPD's negative bias towards black people: 28% of persons stopped by the police are black when they make up only 9% of the local population

Judges are statistically more lenient after lunch

If past data is treated as ground truth, then AI models trained on that data will reproduce existing social biases. This has already been verified on multiple occasions.

Key Issue #1:

Whenever an AI is developed with human-related data, biases are bound to be present in the training data.

Our recommendation:

Acknowledging existing biases is the first step towards setting a zero-bias target in the choice or development of an AI system. For the AI system to be fair, a bias analysis needs to be carried out on past data so that gender or ethnicity biases can be balanced prior to AI training. Because our societies' biases are undergoing long-term change (for instance, closing the gender pay gap), it is also crucial for AI systems to have a continuous bias mitigation feature.
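As a minimal sketch of such a bias analysis, the snippet below compares past acceptance rates per group in a tiny, entirely fictitious hiring dataset, then derives sample weights (the classic "reweighing" pre-processing idea) so that group and outcome look independent before a model is trained. Field names and figures are illustrative:

```python
# Fictitious past hiring decisions (illustrative only)
records = [
    {"gender": "F", "hired": 1}, {"gender": "F", "hired": 0},
    {"gender": "F", "hired": 0}, {"gender": "F", "hired": 0},
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 1},
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 0},
]

def acceptance_rate(rows, group):
    """Share of positive outcomes within one demographic group."""
    rows = [r for r in rows if r["gender"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

def reweigh(rows):
    """Weight each record by expected / observed joint frequency,
    as if group membership and outcome were independent."""
    n = len(rows)
    weights = []
    for r in rows:
        p_group = sum(x["gender"] == r["gender"] for x in rows) / n
        p_out = sum(x["hired"] == r["hired"] for x in rows) / n
        p_joint = sum(x["gender"] == r["gender"] and x["hired"] == r["hired"]
                      for x in rows) / n
        weights.append(p_group * p_out / p_joint)
    return weights

# Descriptive statistics reveal the bias: 0.75 vs 0.25 acceptance here
gap = acceptance_rate(records, "M") - acceptance_rate(records, "F")
weights = reweigh(records)  # training with these weights balances the groups
```

Under these weights the weighted acceptance rate is the same for both groups, so a model trained on the reweighed data no longer inherits the historical gap.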

     

II.3 Intrusive AI

Today's AI systems are based on calculation, a sum of binary operations made by our computers: they lack emotions and are unable to be sincerely empathetic or to understand our emotions. They can only simulate and reproduce trends they have learned (with more or less randomness).

These limitations make systems claiming to recognize emotions, mental health, personality or other interior states inherently flawed. Calls to ban this technology have recently emerged.

However, automatically identifying what content triggers a given reaction (e.g. a hate emoji on Facebook) in a given segment of the population (e.g. suburban single mothers in Ohio), and using this information to create similar content to trigger these emotions again, is already a troubling capability.

     

US citizens affected by the Cambridge Analytica scandal (source: Business Insider)

The Cambridge Analytica scandal revealed how targeted advertisements attempted (and arguably succeeded) to influence voters on a massive scale. In the case of the 2016 American presidential campaign, the company aggregated over 80 million Facebook profiles and processed a huge amount of personal information to detect patterns among voters and design tailored messages to provoke desired emotions and reactions.

Key Issue #2:

The accumulation and aggregation of highly personal data (political views, family status, sexual orientation, …)

Our recommendation:

Regulators should strengthen data protection and regulation. Businesses should be wary of the data they process and use the minimum amount of data necessary to carry out their processes.

On the 25th of May 2018 the European Union finally gave a regulatory framework to protect citizens' data (the GDPR), and recently the state of California also adopted a Consumer Privacy Act (the CCPA).

These frameworks are essential to encourage best practices in businesses and help prevent a new Cambridge Analytica from happening.

II.4 AI for good or bad?

The third ethical issue in regards to AI is linked to the very application in which the AI is supposed to make an improvement.

Elon Musk, along with other prominent figures in AI, has called for a ban on autonomous killer robots. The initiative is much needed; however, it is important to note that autonomous killer robots are unfortunately only one step further in the development of killing machines, which already exist and are used today.

Killer robots illustrate the augmentation with AI of an unethical product or means (killing a fellow human).

The killer robot in "Metalhead" (Black Mirror, Season 4 Episode 5)

The same can be said of mass surveillance or the genetic selection and modification of living animal cells. If Artificial Intelligence can assist in performing these actions, the resulting models are unethical by design, as the task itself, when done by a human, is already unethical.

To be perfectly clear: this argument does not mean it is OK to develop killer robots, quite the contrary. It shows that AI models can never be ethical if the underlying goal is itself unethical.

Key Issue #3:

Augmenting existing unethical means (e.g. genetic selection) or products (e.g. lethal weapons) with AI.

Our recommendation:

The development of AI at scale is the perfect occasion to reconsider a part of our activities, especially those that are questionable with regard to our set of values and moral principles. Investment in and development of AI should go into activities that create sustainable value for the human community and should not bring harm or destruction in any way.

     

III. Our engagement

As we have seen in the first section, AI can prove hugely valuable to society and to workers, lawmakers, bankers, teachers and many more professionals. However, it is important to remain vigilant to ensure that AI is ethical by design.

As an AI company we commit to carrying out these key precautionary measures when introducing AI into existing processes for our clients:

  • Perform a bias analysis: we compute descriptive statistics on demographics to see how past decisions have varied according to variables such as gender or ethnicity. If the study proves an existing bias, we then unbias the data from which the AI will learn past trends. We also recommend our clients to be proactive and launch an internal communication campaign to underline past biases and promote better human decisions
  • We keep only the strict amount of data relevant to the given process: we carefully review with the business which data are mandatory. If we need to process unstructured data such as images, videos or texts, we anonymize it as much as possible. This way we guarantee our processing is GDPR compliant, and we also prevent our models from adopting unwanted biases should a supposedly neutral variable (for example gender) in fact take on a positive or negative weight
  • We advise our clients of the risks of going fully automatic: machine learning (which fuels the vast majority of AI systems) is based on probabilities. You can have a confidence score on one prediction, but it will only be a statistic based on past observations. One can think of it as a weather forecast: it can be extremely accurate, and is extremely useful, but remains a forecast all the same, with a (small) probability of failing. When making decisions as critical as whether to lend money or to give access to property, education or health, human validation is mandatory
  • Before working on a project and installing our solution, we ask ourselves and the client: will the system do good? Is there any chance a highly negative outcome could occur? We carefully weigh the pros and cons and make sure there are safeguards that will easily identify problems and prevent the system from getting out of control
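The human-validation safeguard above can be sketched in a few lines: predictions whose confidence falls below a threshold are routed to a human reviewer instead of being applied automatically. The threshold value and the example predictions are purely illustrative:

```python
# Illustrative threshold; in practice it is tuned per use case and risk level
CONFIDENCE_THRESHOLD = 0.9

def route_decision(prediction, confidence):
    """Decide who acts on a model prediction: the system or a human reviewer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", prediction)        # confident enough to automate
    return ("human_review", prediction)    # mandatory human validation

# A critical decision such as granting a loan is never fully automated:
decisions = [
    route_decision("approve", 0.97),  # routed to automatic processing
    route_decision("deny", 0.62),     # routed to a human reviewer
]
```

For genuinely critical decisions (credit, housing, education, health) the threshold can simply be set above 1.0, forcing every case through human review.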

       

Final words

We at Datakeen are focused on producing machine learning technology that is free of existing biases and is not detrimental to any group of people or community.

We work hand-in-hand with our clients to ensure their input data is unbiased through preliminary analysis and that their use of AI does not breach any ethical boundaries. This is part of our DNA and makes us proud as an AI company.

I hope you enjoyed the read and invite you to read our future publications.

       

About the author

       

Gaël Bonnardot, Co-founder and CTO at Datakeen

A passionate practitioner of Machine Learning and its deployment in real-life business cases, Gaël directs AI product development at Datakeen and is committed to making AI as ethical as it is efficient.

       

Going further

Here are a few interesting reads related to the issues at stake: