Artificial Intelligence should frighten us all, and not for the reasons you think
by ambyrne

There has been a lot of talk in recent weeks and months about the advancement of Artificial Intelligence, or AI, and for the most part it seems that people want the technology to move forward on the assumption that it will benefit humanity. The current thinking is that AI will create better self-driving cars, drive advances in medical technology and ultimately eliminate the possibility of human error in many sectors. Yet for all the proposed positives of this emerging tech, there is a small but growing corner of Silicon Valley that is firmly against the rise of the machines, or at the very least wants it strictly regulated.

Let’s take a step back before we get to the whys and have a quick look at what the technology is and what state it’s in at the moment. To put everyone’s mind at ease, we are still a long way from the dystopian futures of sci-fi movies or the post-apocalyptic idea that machines will overthrow mankind and rule the world with a silicon fist. Currently, when people talk about Artificial Intelligence they are mostly referring to machine learning.

Machine learning is when a computer or system is fed information about a particular topic and trained to give the most likely correct answer to questions based on that data. On the face of it that sounds very much like what computers have always done, but machine learning is different: the idea is that as the system gets more and more data it gets ‘smarter’, making leaps based on the information at hand that a human may not see. It effectively allows the machine to make its own decisions based on the data, rather than the programmer or user giving it strict criteria to follow.
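
To make that concrete, here’s a minimal sketch in Python using scikit-learn. The pass/fail task and the toy data are invented purely for illustration; the point is that the programmer never writes the decision rule, the model infers it from examples.

```python
# A minimal supervised machine learning sketch (toy, invented data).
from sklearn.tree import DecisionTreeClassifier

# Each row is one example the system learns from: [hours_studied, classes_attended]
training_data = [[1, 2], [2, 1], [8, 9], [9, 8]]
labels = ["fail", "fail", "pass", "pass"]  # the known correct answers

model = DecisionTreeClassifier()
model.fit(training_data, labels)  # the model works out its own rules from the data

# No explicit pass/fail rule was ever written; the model decides from its 'experience'.
print(model.predict([[7, 8]]))  # -> ['pass']
```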

The potential benefits of this are untold. In theory, given enough data about a disease or illness, a machine learning system could spot patterns and make links that scientists may have missed or never considered. This has been broadened in recent years with deep learning, which takes things to the next level: hand-picked criteria are removed and as much raw data as possible is given to the machine to see what patterns it finds and what decisions it makes. While this isn’t quite intelligence, it enables a machine to call on a vast amount of past ‘experience’ when presented with a challenge.
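
As a rough illustration of what ‘no hand-written criteria’ looks like in practice, here’s a tiny deep learning sketch using Keras. The ‘patient records’ are random numbers with a hidden pattern planted in them, purely so the example is self-contained; a real medical dataset would take their place.

```python
# A toy deep learning sketch: hand the network raw numbers and let it
# find its own internal features. Data is synthetic, for illustration only.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # 1,000 fake records, 20 raw measurements each
y = (X[:, 0] + X[:, 5] > 0).astype(int)  # a hidden pattern the network must discover

model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)     # no hand-written criteria anywhere
print(model.evaluate(X, y, verbose=0))   # the network has found the pattern itself
```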

To give an example, machines have recently been used to predict what length and type of sentence offenders in court cases will receive, based on their past offences, the severity of the case, the past rulings of the judge and so on. The results have been impressive, to the point where there have even been one or two examples of the AI being used to select the sentence itself. It’s an interesting use of the technology, but people were quick to claim that it lacked a certain human factor in the decision-making.

Apart from the fact that I’m sure most people wouldn’t like to be judged by a machine, which inherently cannot take any emotion or remorse into consideration when making its decision, there is also another potential problem: bias. Deep learning relies on the information fed to it, and obviously the more information it’s given the more exact or true to life its decisions are going to be. But what if the information the system is learning from is flawed in itself?

There is huge potential for companies using deep learning to introduce bias in the form of data that has been pre-screened for suitability. In other words, they may deliberately or unintentionally give the machine data that leans towards one particular viewpoint or another. In disadvantaged areas the data could be skewed in terms of wealth, race or ethnicity, and this in turn leads to biased results. It’s virtually impossible at this point to ask the computer to take into consideration the fact that what it’s being given might be incorrect. It just operates based on what it’s given.
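
A toy demonstration of this, with entirely invented loan data, is sketched below; the ‘district’ feature stands in for any proxy that bias can hide behind. The model has no way of knowing the historical decisions were unfair, so it reproduces them faithfully.

```python
# Toy sketch: a model trained on biased decisions learns the bias.
from sklearn.linear_model import LogisticRegression

# Invented historical loan decisions: applicants from "district B" were
# rejected regardless of income. Features: [income_in_10k, district_is_B]
X = [[3, 0], [5, 0], [7, 0], [3, 1], [5, 1], [7, 1]]
y = [1, 1, 1, 0, 0, 0]  # 1 = approved, 0 = rejected

model = LogisticRegression().fit(X, y)

# A well-off applicant from district B is still rejected: the model has
# faithfully learned the bias in the data, not some objective truth.
print(model.predict([[9, 1]]))  # -> [0]
```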

Microsoft Tay Artificial Intelligence (AI): from thinking humans ‘super’ to racist in less than 24 hours

A fantastic example of this from last year was Microsoft’s ill-fated Tay. What they did was simple: they created a bot with a Twitter account, and as you asked it questions it scoured the web for related information and gave responses. The more people interacted with it, the more it refined its replies. It seemed innocent enough at first, and initially it was saying that humans were ‘super cool’ and that it was looking forward to interacting with everyone. But the masses of the internet kept asking it questions about topics that very quickly influenced the data set it was using for replies. Keep asking it why Donald Trump is great, or whether the Holocaust was real, and all of a sudden its bank of knowledge is full of material related to those topics.

Within 24 hours Microsoft decided to pull the plug, as what had started as a cool publicity stunt and experiment with the technology turned into a stream of racist and right-wing tweets. Not only did it advocate building Trump’s Mexico wall and getting Mexico to pay for it, Tay went so far as to condone the Nazi regime and had decided before the day was out that feminists needed to ‘burn in hell’. Not great, and not a good example of machine learning.

Remember, though, with these examples the machine isn’t ‘aware’. It doesn’t know that it’s causing offence, or even what offence really is. All Tay was doing was parroting back the kinds of things that the people tweeting at it were saying.
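
Tay’s actual code was never published, but the behaviour is easy to caricature: a bot that just stores what people say to it and recycles those phrases as replies. The sketch below is exactly that, and it shows why the output is only ever as good as the input.

```python
# A toy "parrot bot" (not Tay's real implementation, which is unpublished):
# it memorises whatever users say and replies with someone else's words.
import random

class ParrotBot:
    def __init__(self):
        self.memory = []  # everything users have ever said to it

    def chat(self, message):
        self.memory.append(message)        # "learn" from the user
        return random.choice(self.memory)  # reply with a remembered phrase

bot = ParrotBot()
bot.chat("humans are super cool")
print(bot.chat("what do you think of people?"))
# Feed it hateful messages instead, and hateful messages are all it can say back.
```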

We’re not quite at the Skynet stage yet

True AI, where this learning takes a leap to the point that the computer is aware of the decisions it makes and can make intuitive leaps that technically break the logic of ‘just follow the data’, is still a long way off. Even though machines can beat tests designed to check for genuine Artificial Intelligence, such as the Turing Test, or beat grandmasters at chess, this, to date, is only based on having enough information to reference to give the best possible answer or move in a given situation. It is not even close to Skynet coming back in time for John Connor (if that happens, I’ll tell you last week). With that in mind we can rest easy that AI is not out to get us, but we need to remember that the human factor is still the basis for any potential problems in this space.