AI Machine Learning Has Teething Problems


When a baby’s teeth first poke through the gums, the baby tends to experience teething problems: drooling, irritability, and an urge to bite on everything within reach.  In a similar manner, the recent advances in Artificial Intelligence (AI), and especially Machine Learning, are experiencing some serious teething problems that need to be acknowledged and resolved.

First, let’s start with the positive side of the emergence of those new teeth in AI Machine Learning.  As widely reported in the media, Google’s AlphaGo recently beat a ranked human Go player.  The three-time European Go champion, Fan Hui, had thought that he would win, and so did most of the rest of the AI world.  AI has done a pretty good job of beating humans at games like checkers and chess, and even beat humans on the popularly televised game show Jeopardy!  But the game of Go differs from checkers and chess in ways that leave traditional AI poorly equipped to win.

AI Conventional Logic

In particular, the strategies we use in checkers and chess involve relatively “logical” moves that can be predetermined.  In chess, for example, there is a slew of opening moves that can make a big difference in whether you have strong or weak odds of ultimately winning the match.  Knowing, or really memorizing, those many opening moves is one means of becoming a winner at chess.  Likewise, once a chess game is underway, you can look ahead to moves that might come along (each level of look-ahead is called a ply), trying to anticipate your opponent’s moves and the counter moves that will lead to a successful crushing of your opponent.

Notice that playing checkers and chess involves logical reasoning about one move or another, and can be partially played by rote memorization of moves and counter moves.  The larger your base of memorized moves, and the greater your mental capacity to anticipate moves and counter moves, the better you will play.  That all bodes well for modern-day computers, because they can be programmed to draw upon a large database of moves and to computationally calculate moves and counter moves.
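To make that two-part recipe concrete, here is a minimal sketch in Python; the opening book entries and the helper functions (evaluate, legal_moves, apply_move) are hypothetical stand-ins for a real game engine:

    # A minimal sketch of "memorize openings, then search ahead," the recipe
    # that works well for chess and checkers. The book entries and helper
    # functions are hypothetical placeholders, not a real engine.
    OPENING_BOOK = {
        "start": "e2e4",               # memorized reply for a known position
        "start:e2e4 e7e5": "g1f3",
    }

    def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
        """Score a position by looking ahead `depth` plies."""
        if depth == 0 or not legal_moves(position):
            return evaluate(position)
        scores = [minimax(apply_move(position, m), depth - 1, not maximizing,
                          evaluate, legal_moves, apply_move)
                  for m in legal_moves(position)]
        return max(scores) if maximizing else min(scores)

    def choose_move(position, book_key, depth, evaluate, legal_moves, apply_move):
        # 1) Rote memory: play the book move if the position is known.
        if book_key in OPENING_BOOK:
            return OPENING_BOOK[book_key]
        # 2) Otherwise compute: pick the move with the best look-ahead score.
        return max(legal_moves(position),
                   key=lambda m: minimax(apply_move(position, m), depth - 1,
                                         False, evaluate, legal_moves, apply_move))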

Indeed, one could say that the nanosecond and picosecond speeds of today’s computers, along with multi-processing and the use of Big Data, have pushed computers far enough along that things are tough for a human player.  A human player has a brain, but there seem to be crucial limits to how much it can memorize and compute, and a game like chess or checkers is ultimately decided by those limits.

In contrast, the game of Go is not as amenable to large stores of moves and computationally figured counter moves.  The nature of Go is such that the number of possible moves is huge, so there is no practical means today to identify all possible combinations, nor to look ahead across the vast array of moves and counter moves.
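Some back-of-the-envelope arithmetic shows the gap.  Taking the commonly cited average branching factors, roughly 35 legal moves per chess position versus roughly 250 per Go position (figures used here purely for illustration), the look-ahead tree explodes far faster for Go:

    # Rough arithmetic on why brute-force look-ahead breaks down for Go.
    CHESS_BRANCHING = 35   # commonly cited average legal moves in chess
    GO_BRANCHING = 250     # commonly cited average legal moves in Go

    for plies in (2, 4, 6, 8):
        print(f"{plies} plies ahead: chess ~{CHESS_BRANCHING ** plies:.1e} "
              f"positions, Go ~{GO_BRANCHING ** plies:.1e} positions")
    # 8 plies ahead: chess ~2.3e+12 positions, Go ~1.5e+19 positions

Even eight plies of look-ahead in Go is already millions of times larger than in chess, and real games run far deeper than that.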

AI Machine Learning to the Rescue

For a game like Go, the approach to use is AI Machine Learning [3].  Typically employing Artificial Neural Networks (ANN), AI developers opt to let the machine itself learn how to play.  One consequence of this approach is that we humans might not even be able to explain why a particular move was made by the machine.

As Fan Hui said during the Go matches that he lost against AlphaGo: “It’s not a human move. I’ve never seen a human play this move. So beautiful.”

Like a small child learning a new task, the computer playing Go has been exposed to the game of Go over and over, even playing games against itself.  By observing and “learning” from the thousands of games played, it is assumed and hoped that the computer will master the game.  This is a fundamental tenet of AI Machine Learning, and it offers both its best hope and its most disconcerting drawbacks, which will be discussed next.
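As a toy illustration of learning by self-play, here is a sketch of tabular value learning on tic-tac-toe; AlphaGo’s actual method (deep neural networks combined with Monte Carlo tree search) is vastly more sophisticated, so treat this only as the self-play idea in miniature:

    import random
    from collections import defaultdict

    WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
            (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

    def winner(board):
        for a, b, c in WINS:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    Q = defaultdict(float)   # (board, move) -> learned value for the mover
    ALPHA, EPSILON = 0.3, 0.2

    def choose(board, moves):
        if random.random() < EPSILON:                    # explore sometimes
            return random.choice(moves)
        return max(moves, key=lambda m: Q[(board, m)])   # else exploit

    def self_play_game():
        board, player, history = " " * 9, "X", []
        while True:
            moves = [i for i, c in enumerate(board) if c == " "]
            if not moves:
                reward = 0.0                             # draw
                break
            move = choose(board, moves)
            history.append((board, move, player))
            board = board[:move] + player + board[move + 1:]
            if winner(board):
                reward = 1.0                             # mover just won
                break
            player = "O" if player == "X" else "X"
        last_mover = history[-1][2]
        for state, move, mover in history:               # spread credit and blame
            signed = reward if mover == last_mover else -reward
            Q[(state, move)] += ALPHA * (signed - Q[(state, move)])

    for _ in range(20000):    # thousands of games against itself
        self_play_game()

After those twenty thousand self-played games, the Q table encodes move preferences that no one programmed explicitly, which is exactly why explaining any single choice becomes hard.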

AI Machine Learning Teething

If you ask the creators of AlphaGo how the program was able to beat the Go master, their answer is perhaps troubling.  In a review of the AlphaGo program in Nature magazine, the reporter concluded: “Alas, I’m afraid its understanding is simply unknowable – and not just because the computer has no voice to express its evaluations…The computer doesn’t explicitly parse out these concepts – they simply emerge from its statistical comparisons of types of winning board positions at Go. In effect, AlphaGo has a kind of digital intuition.”

It might be exciting to consider that computers will embody a kind of “digital intuition,” and that somehow it is akin to “human intuition,” but are we to feel comfortable blindly trusting this digital intuition?

In March 2016, Microsoft unveiled an AI chatbot called Tay.  Intended to be playful, the Twitter bot was supposed to provide conversational understanding, and the more that actual humans chatted with it, the “smarter” it would get.  Shortly after being exposed to the real world, Tay began to emit messages saying that Hitler did no wrong and referring to feminism as a cancer.  In response to the outcry, Microsoft stated: “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”

On the one hand, we can applaud the developers of Tay for wanting to further extend the capabilities of AI Machine Learning.  In a manner similar to AlphaGo, the more that Tay could “experience” the environment, the better it presumably would get.  Of course, the key there is how the AI Machine Learning experiences the environment and what kinds of limits or filters it has to ensure that it is learning the “right” things and avoiding the “wrong” things.
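A minimal sketch of the “filter before learning” idea follows; the blocklist is a crude hypothetical stand-in, since a production system would use trained classifiers and human review rather than a keyword list:

    # Hypothetical sketch: screen training examples before learning from them.
    BLOCKED_TERMS = {"hitler", "cancer"}   # crude illustration only

    def passes_filter(message: str) -> bool:
        """Reject candidate training text containing obviously toxic terms."""
        text = message.lower()
        return not any(term in text for term in BLOCKED_TERMS)

    training_corpus = []

    def learn_from(message: str) -> None:
        # Tay's failure mode: it effectively skipped a check like this
        # and learned from everything thrown its way.
        if passes_filter(message):
            training_corpus.append(message)

    learn_from("what a lovely day")       # accepted
    learn_from("feminism is a cancer")    # rejected before it can be learned
    print(len(training_corpus))           # -> 1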

Imagine a child that puts its hand onto a hot stove.  The feedback makes it readily apparent to the child that putting a hand on the hot stove is not a good idea, and hopefully the child learns to avoid doing so in the future.  Tay did not seem to have been built to gauge which aspects of the environment are the hot stove, and so it simply accepted everything thrown its way.  And, humans being as they are, they relished shaking up Tay and getting the AI Machine Learning to land itself in trouble.

This kind of experimentation, as with Tay and even AlphaGo, is fine insofar as it does not especially impact the real world of humans.  But imagine a self-driving car predicated upon the same kind of AI Machine Learning.  Without the proper and needed guidance, and if we are not going to be able to figure out what is going on in that digital noggin, then we had better make sure that the AI Machine Learning gets feedback that helps it self-regulate.
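One simple way to picture such self-regulation is a preference table that gets pushed down whenever an action draws negative feedback, the digital equivalent of the hot stove.  The action names and reward values here are, of course, hypothetical:

    import math
    import random

    # Hypothetical action preferences for a learning agent.
    preferences = {"brake_smoothly": 0.0, "run_red_light": 0.0}
    LEARNING_RATE = 0.5

    def choose_action():
        """Pick an action with probability proportional to exp(preference)."""
        actions = list(preferences)
        weights = [math.exp(preferences[a]) for a in actions]
        return random.choices(actions, weights=weights)[0]

    def give_feedback(action, reward):
        # Negative reward (the hot stove) lowers the preference,
        # so the agent picks that action less often in the future.
        preferences[action] += LEARNING_RATE * reward

    give_feedback("run_red_light", -1.0)
    give_feedback("brake_smoothly", +1.0)
    print(choose_action())   # now noticeably more likely to be "brake_smoothly"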

These teething problems need to be well acknowledged by AI developers as they seek to make progress in Machine Learning.  Those wishing to exploit AI Machine Learning need to be cognizant of the inherent limitations of this black-box approach and careful about how they decide to unleash such systems upon humans.  I am not a modern-day Luddite trying to hold back technology; in fact, I take the opposite view that we need to reach further and extend AI Machine Learning.  Let’s just make sure that, as the Hippocratic Oath calls upon us to do, we first and foremost do no harm.
