Clarion conference producer Conor Mulheir contemplates the origins and applications of artificial intelligence and, in part one of this two-part feature, asks where it all began.
When American mathematician Claude Shannon published his paper Programming a Computer for Playing Chess in 1950, it would have been difficult to imagine the sophistication of today’s artificial intelligence (AI) and its seeming omnipresence in our business and personal lives.
He demonstrated extraordinary foresight with regard to the possibilities this technology would bring to the world.
Introducing the potential of a chess playing computer, he said at the time: “Although perhaps of no practical importance, the question is of theoretical interest, and it is hoped that a satisfactory solution of this problem will act as a wedge in attacking other problems of a similar nature and of greater significance”.
He went on to outline several potential applications for the technology:
1. Machines for designing filters, equalizers, etc.
2. Machines for designing relay and switching circuits.
3. Machines which will handle routing of telephone calls based on the individual circumstances rather than by fixed patterns.
4. Machines for performing symbolic (non-numerical) mathematical operations.
5. Machines capable of translating from one language to another.
6. Machines for making strategic decisions in simplified military operations.
7. Machines capable of orchestrating a melody.
8. Machines capable of logical deduction.
Looking back at his technological forecast from a modern perspective, it’s clear Shannon was a true visionary, with many of his predictions having long since been brought to fruition.
The chess test
In 1997, IBM’s Deep Blue became the first computer system to defeat a reigning world champion in a chess match, beating Garry Kasparov before being retired by its development team.
The complexity of chess and its seemingly infinite number of possible moves made this a watershed moment for AI, and one that was hard-won: almost half a century had elapsed between Shannon’s original conception of a chess-playing computer and Deep Blue’s eventual triumph.
Of course, the computers of Shannon’s day were incapable of dealing with the quantities of data we’re accustomed to now, and learning to intelligently play chess required systems to deal with an almost endless number of variables.
Shannon calculated a conservative lower bound on the game-tree complexity of chess (that is, the number of distinct games that could conceivably be played) of 10^120, or 1 followed by 120 zeroes.
For comparison, there are generally estimated to be around 10^80 atoms in the observable universe. Clearly then, building a computer able to consider even a fraction of these variables was a monumental challenge, and the forty-seven years which elapsed between the publication of Shannon’s paper and Kasparov’s defeat should be considered relatively rapid progress.
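Shannon reached that figure with a simple estimate: roughly 10^3 candidate move pairs per full move, over a typical game of about 40 moves, giving (10^3)^40 = 10^120. A quick sketch in Python (whose integers are arbitrary-precision, so the arithmetic is exact) confirms the numbers quoted above:

```python
# Shannon's back-of-the-envelope estimate: about 10^3 possible
# move pairs per full move, over a typical 40-move game.
shannon_number = (10 ** 3) ** 40

print(shannon_number == 10 ** 120)   # True
print(len(str(shannon_number)) - 1)  # 120 zeroes after the leading 1

# Compare with the commonly cited count of atoms in the
# observable universe, around 10^80.
atoms_in_universe = 10 ** 80
print(shannon_number // atoms_in_universe == 10 ** 40)  # True
```

Even if every atom in the universe examined one complete game, there would still be 10^40 games left per atom.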
In fact, Deep Blue’s developers, among others, were laying the foundations for an ultra-accelerated technological boom this century, during which the capabilities of AI would be compounded and increase exponentially.
Building on the chess success
Chess is an extremely sophisticated game, but in recent years computer programmers have overcome even greater challenges.
The apparent simplicity of the ancient Chinese board game Go, which doesn’t have a range of pieces moving in different ways as chess does, may lead some to believe it would be an easier game to programme a computer to play.
However, Go is estimated to have a game-tree complexity of around 10^360, meaning there are some 10^240 times as many possible games as in chess.
Clearly then, Go was a worthy challenge for Google’s DeepMind developers, whose AlphaGo programme in 2015 became the first to beat a human professional player without handicaps on a full-sized 19×19 board.
In under 20 years, developers had gone from a computer mastering chess with its game-tree complexity of 10^120, to mastering Go with its complexity of 10^360.
The number of variables that AI is finding its way around is increasing exponentially, and shows no sign of slowing down.
The DeepMind team has suggested that AlphaGo is a step towards creating algorithms that can intelligently tackle some of today’s greatest scientific challenges, from designing new medicines to accurately modelling the effects of climate change.
Bots n’ big blinds
Last year, an AI-based computer programme named Libratus was able to beat some of the world’s top pro poker players, prompting one to claim he felt the machine “could see my cards”.
This achievement involved further layers of complexity when compared to chess and Go, due to the inherent characteristics of poker.
Firstly, chess and Go are both skill-based games with minimal elements of chance, whereas poker is a much subtler mix of the two.
In chess and Go, both players also have access to complete information about the game – what pieces their opponent has, what moves they have already played, what moves are available to each player, and so on.
In Texas Hold’em, the information offered to both parties is incomplete: each player can see what’s on the flop, turn and river, but the lack of information about an opponent’s hole cards is what makes the game worth playing (thanks, Captain Obvious).
Furthermore, when considering the use of bluffing and elements of randomness in both betting strategy and cards dealt, we can begin to imagine some of the problems developers have faced in creating a poker-playing bot that can keep up with human play, let alone that of a top-rated professional.
Libratus’ success was made possible through machine learning, the process by which the bot is able to ‘practise’, playing against itself and refining its strategies.
Clearly, the millions of simulated hours the machine dedicated to this were enough to outsmart even the world’s most formidable players.
Machine learning allows AI systems to constantly adapt to new information, change their strategies and effectively analyse action that has already taken place.
In fact, it may have been Libratus’ continual learning between games, and its analysis of information from the tournament’s first few days, that allowed it to refine its strategies and emerge victorious in the end.
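Libratus’ algorithms were reportedly built around counterfactual regret minimisation, but the self-play loop it relies on can be sketched on a toy game. Below is a minimal, illustrative regret-matching learner for rock-paper-scissors (all names here are our own, not Libratus’ code): each player repeatedly shifts probability towards the actions it regrets not having played, and the time-averaged strategy drifts towards the unexploitable equilibrium, which for rock-paper-scissors means playing each option a third of the time.

```python
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[a][b]: utility to a player choosing a against an opponent choosing b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0.0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no regrets yet: play uniformly

def train(iterations=100_000):
    # Small asymmetric starting regrets so the self-play dynamics are visible.
    regrets = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    strategy_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        for p in range(2):
            opponent = strategies[1 - p]
            # Expected utility of each pure action against the opponent's mix.
            utils = [sum(PAYOFF[a][b] * opponent[b] for b in range(ACTIONS))
                     for a in range(ACTIONS)]
            expected = sum(strategies[p][a] * utils[a] for a in range(ACTIONS))
            for a in range(ACTIONS):
                regrets[p][a] += utils[a] - expected  # regret of not playing a
                strategy_sum[p][a] += strategies[p][a]
    # The time-averaged strategy is what converges towards equilibrium.
    return [[s / iterations for s in strategy_sum[p]] for p in range(2)]

average = train()
print([round(x, 3) for x in average[0]])  # close to [0.333, 0.333, 0.333]
```

Poker bots apply the same regret-driven idea across vastly larger game trees with hidden information, which is why the millions of simulated hours of practice mentioned above matter so much.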
Read part two at igamingbusiness.com tomorrow.