Is a “neutral” artificial intelligence possible?



02/05/2018

Does the development of artificial intelligence (AI) necessarily mean greater control over, and infringement of, personal data, fundamental freedoms and privacy? Or is it possible to move towards artificial intelligences that do not attack these dimensions, or that even protect them?

“Technology is neither good nor bad; nor is it neutral”. With this now-classic statement, the technology historian Melvin Kranzberg tells us that it is only what we humans do with technology that determines its moral impact. The very same AI can produce different results when introduced into different situations.

Many current practices of AI companies have gained notorious press coverage, such as the improper use of personal data (the sharing of 87 million Facebook users’ data with Cambridge Analytica) and the censorship of search results (by internet search companies such as Google). But companies such as Facebook and Google let you use their products for free for ulterior reasons, not just to serve your interests. In particular, many internet AI companies want to deliver better-targeted advertisements: the advertising dollars they earn pay for their programmers, data centers and other aspects of their core business. This means the previously mentioned transgressions, and other infringements of personal data, freedoms and privacy, seem simply inevitable. Advertisers have many options, and the organizations competing for their dollars will keep pushing the ethical boundaries of privacy and related concerns.

However, as Kranzberg notes, the same technology can produce different results in different contexts. Consider two very different contexts: (i) personal AI assistants and (ii) AI prisoner risk assessments.

AI assistants such as Amazon’s Alexa, Google’s Allo and Apple’s Siri are so far among the most pervasive applications of AI, with the ability to affect billions of people on a daily basis. Today the focus of these tools is simple: working out what music to play, adding to our shopping lists or sending text messages. However, the long-term vision for these assistants is to hold conversations with a user and to be delegated tasks, which entails making decisions on our behalf. This may seem far-fetched, but naysayers need only look at today’s self-driving cars and compare the progress made over just the last five years. Such assistants are an inevitable application of AI, as they effectively allow us to extend ourselves. But just as advertisers make demands to get better results, individuals will demand that the AI be neutral. Clearly we all want an AI assistant that makes decisions for our benefit, not for the advertisers’. Hence, just as advertisement-based products will inevitably sit on the border of infringing our privacy, these personal assistants must be neutral if they are to exist. And the demand for them is so great that they will exist.

Now consider risk assessments for those charged with crimes. These assessments attempt to determine a person’s propensity to reoffend, or even their propensity for violent crime, and are used in many ways, including in setting bonds and in sentencing. The US Justice Department’s National Institute of Corrections encourages the use of this information, and it is currently used in Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin. Such AI technologies face immense societal demands to be neutral and will have to remain under constant scrutiny.

What are the technical challenges to overcome?

AI companies are addressing these challenges in three different ways: (i) explainable AI, (ii) fair machine learning and (iii) differential privacy.

Explainable AI (XAI) offers the possibility of moving AI from a black box to something a human can understand. This involves the AI not just making decisions or predictions but also augmenting each decision or prediction with answers to auxiliary questions such as: “Explain why you made this prediction, and why not this other one”, “Explain when and why you are most/least confident” and “Explain when you are likely to make mistakes”.
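To make this concrete, here is a minimal sketch of one basic explanation technique: attributing a linear model’s loan decision to per-feature contributions. This is not any particular XAI system, and the data, feature names and numbers are invented for illustration.

```python
# A minimal sketch, not a production XAI system: explaining a linear
# model's loan decision by attributing it to per-feature contributions.
# All data, feature names and numbers here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt", "years_employed"]
X = np.array([[60, 10, 5],
              [20, 40, 1],
              [45,  5, 10],
              [15, 30, 0]], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = loan repaid, 0 = defaulted

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([30.0, 25.0, 2.0])
prediction = model.predict([applicant])[0]

# Each feature's contribution to the decision score is coefficient * value,
# giving a simple answer to "why did you make this prediction?".
contributions = model.coef_[0] * applicant
print("prediction:", "repay" if prediction else "default")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")
```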

Many AI systems also involve machine learning: teaching a machine to solve a task such as predicting whether a loan will be successfully paid back, or predicting who would be a good hire for a job. Machine learning works by providing the computer with many positive and negative examples and letting it learn a predictive model from this data. But if a machine is taught from examples that reflect the stereotypes found in human culture, then its model will be just as biased as the data. The need for unbiased machine-learnt models has produced the emerging field of fair machine learning.
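As an illustration, the sketch below checks one widely used fairness criterion, demographic parity: whether a model’s positive-prediction rate differs across groups. The hiring data is synthetic, and this criterion is only one of several studied in fair machine learning.

```python
# A minimal sketch of one fairness check: demographic parity, the gap in
# positive-prediction rates between groups. The "model" outputs and the
# protected attribute here are synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)  # a protected attribute, 0 or 1

# Simulate a hiring model that learned from biased historical data:
# it recommends group 1 at a much higher rate than group 0.
hired = rng.random(1000) < np.where(group == 1, 0.6, 0.3)

rate_0 = hired[group == 0].mean()
rate_1 = hired[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_1 - rate_0):.2f}")
```

A large gap like this one is the signal that fair machine learning methods then try to remove, either by reweighting the training data or by constraining the model itself.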

Finally, one of the greatest challenges for AI systems arises in domains with highly sensitive information, such as medical records. Differential privacy aims to answer aggregate queries and perform computations without making it possible to identify the individuals whose data went into them. Extensions that give the user even more privacy, such as local differential privacy, have been embraced by Google to collect web-browsing information (using its RAPPOR system), by Apple to collect typing history and by Microsoft to collect telemetry over time.
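To illustrate the local variant, here is a minimal sketch of randomized response, the classic mechanism that local-differential-privacy systems such as RAPPOR build on; the survey setting and parameters are invented.

```python
# A minimal sketch of randomized response, the classic local differential
# privacy mechanism (RAPPOR-style systems build on this idea). Each user
# flips their true yes/no answer with some probability, so no individual
# report can be trusted, yet the aggregate rate is still estimable.
import random

P_KEEP = 0.75  # probability of reporting the truth (the privacy parameter)

def randomize(truth: bool) -> bool:
    """Report the true answer with probability P_KEEP, else its opposite."""
    return truth if random.random() < P_KEEP else not truth

def estimate_true_rate(reports) -> float:
    """Invert the noise: observed = P_KEEP*true + (1 - P_KEEP)*(1 - true)."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - P_KEEP)) / (2 * P_KEEP - 1)

random.seed(0)
truths = [random.random() < 0.3 for _ in range(100_000)]  # 30% truly "yes"
reports = [randomize(t) for t in truths]
print(f"raw report rate:     {sum(reports) / len(reports):.3f}")
print(f"estimated true rate: {estimate_true_rate(reports):.3f}")
```

The collector recovers the population statistic, while each individual retains plausible deniability about their own answer.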

If AI machines come to think more and more as humans do, should we reconsider their legal status? Will they remain simply machines?

There has been considerable debate on the rights of AI machines. In 2017, the European Parliament passed a resolution with recommendations to the European Commission on civil law rules on robotics, which recommends that sophisticated robots be given “electronic person” legal status. The resolution states: “so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions”. The argument for such rights is the following: if AI robots are going to replace humans at complex tasks such as surgery, they should be held responsible, as a human would be.

However, the matter is not so straightforward. The major objection to granting robots legal status is that, if a robot is given status based on the natural-person model, it must also be given other rights, such as dignity or privacy, which directly confront and compete with human rights. A group of several hundred AI, robotics and ethics experts have just signed a letter opposing this potential move.
