About a Self-Learning Computer Game Player

This essay is about a self-learning system I created for Blue Planet Software, owner of the Tetris brand. It assumes familiarity with the Tetris game, but extensive information about the game can be found on Wikipedia.


One day, Henk Rogers (CEO of Blue Planet Software) came to me with what sounded like a fairly simple request: make it easier to play the Tetris game on smartphones. He suggested something along the lines of selecting a play with a single tap, rather than directing the piece to its spot by shifting and rotating it.


Traditionally, Tetris was played using a keyboard or a game-pad, and it required both keyboard dexterity and quick strategic thinking about where to put the ever-faster falling pieces. But smartphones with their touch-screens don’t have tactile keyboards, and every solution that had been devised so far proved so cumbersome to play that, for the first time in its 25-year existence, Tetris was waning in popularity.


The first thing I did was write a program to calculate all the possible places a Tetris piece could go in the playing field, and display them below the game so they could be picked with a single click. Not entirely trivial, but also not too hard to accomplish. This way I found out that there are generally between 9 and 33 possible places to put a Tetris piece, depending on its symmetry.
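The enumeration can be sketched roughly like this; the board layout and piece encoding here are my own assumptions for illustration, not the production code:

```python
# A minimal sketch of placement enumeration: hard-drop every distinct
# rotation of a piece in every column. The S-piece encoding below is a
# hypothetical example; rotations that coincide due to symmetry are
# listed only once.

S_PIECE = [
    [(0, 1), (0, 2), (1, 0), (1, 1)],  # horizontal
    [(0, 0), (1, 0), (1, 1), (2, 1)],  # vertical
]

def fits(field, cells, top, left):
    """True if every cell of the piece lands on an empty in-bounds point."""
    return all(
        0 <= top + r < len(field) and 0 <= left + c < len(field[0])
        and not field[top + r][left + c]
        for r, c in cells
    )

def drop_row(field, cells, left):
    """Row where a straight drop comes to rest in this column, or None."""
    best = None
    for top in range(len(field)):
        if fits(field, cells, top, left):
            best = top
        else:
            break  # drop path blocked; deeper rows are unreachable
    return best

def placements(field, piece):
    """All (rotation, column, resting row) triples for this piece."""
    result = []
    for rot, cells in enumerate(piece):
        width = max(c for _, c in cells) + 1
        for left in range(len(field[0]) - width + 1):
            row = drop_row(field, cells, left)
            if row is not None:
                result.append((rot, left, row))
    return result

empty = [[0] * 10 for _ in range(20)]
print(len(placements(empty, S_PIECE)))  # 8 + 9 = 17 placements on an empty field
```

On an empty 10-wide field the two distinct S rotations give 8 + 9 = 17 placements, in line with the 9-to-33 range mentioned above.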


Giving a player a choice of 33 possibilities on a smartphone is obviously not practical. So this would need to be cut down to a much smaller number, say 3 or 4. But which 3? For the system to be accepted by Tetris die-hards, at the very least the best possible move would have to be part of the selection. But how would the program know what counted as ‘best’? At this point I could probably have dug in and created an Artificial Intelligence (AI) algorithm that used a combination of heuristics and combinatorial search to try to compute the best move. But there are many variations on the Tetris game, all with different scoring rules and slightly different game mechanics, so for each of these variations the AI would need to be tweaked to account for those differences.


Instead of making a custom AI for each variation I decided to do something completely different. I developed an AI that didn’t know any of the rules of the game. Instead it used a simple pattern matcher that looked for good and bad patterns. It would scan the play-field for all occurrences of the patterns, adding to the score for the good patterns and subtracting from it for the bad ones. All the program had at its disposal at the start were two patterns: one good pattern consisting of a single empty point, and one bad pattern consisting of a single occupied point. This reflects the notion that the general object of Tetris is to keep the play-field free of blocks for as long as possible: empty points are good, occupied points are bad. These two patterns by themselves are, of course, next to useless. When this algorithm played the game, blocks were placed at essentially random spots and the game was over in seconds with no score, apart from the occasional points scored by pure chance.
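The scoring scheme can be sketched as follows; the pattern encoding (small grids of occupied/empty/don’t-care cells, each with a weight) is my own assumption about how such a matcher might look, not the shipped implementation:

```python
# A minimal sketch of the pattern-based scorer: a pattern is a small grid
# of cells (1 = occupied, 0 = empty, None = don't care) plus a weight;
# the score of a play-field is the weighted count of every position where
# each pattern matches.

def matches(field, pattern, top, left):
    for r, row in enumerate(pattern):
        for c, want in enumerate(row):
            if want is None:
                continue  # don't-care cell
            if field[top + r][left + c] != want:
                return False
    return True

def score(field, weighted_patterns):
    total = 0
    for pattern, weight in weighted_patterns:
        ph, pw = len(pattern), len(pattern[0])
        for top in range(len(field) - ph + 1):
            for left in range(len(field[0]) - pw + 1):
                if matches(field, pattern, top, left):
                    total += weight
    return total

# The two seed patterns: a single empty point is good (+1),
# a single occupied point is bad (-1).
seeds = [([[0]], +1), ([[1]], -1)]

field = [[0, 0, 1],
         [1, 1, 1]]
print(score(field, seeds))  # 2 empty - 4 occupied = -2
```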


Now you might wonder what good an AI like that is. Well, the good thing about it is that it lends itself very well to genetic programming. What I did next was make 8 copies of this program and give each of them the two simple starting patterns. Then I let each of them play a game of Tetris. The program with the lowest score was deleted, and the top two scoring programs would ‘breed’: their offspring replaced the deleted one. Breeding consisted of taking half of the patterns from one program and half from the other, and then applying some small random additions and mutations to the patterns. I let them play, round after round, each time removing the poorest one and replacing it with a combination of the best two.
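The breeding loop itself can be sketched like this; the mutation scheme and the `play_game` stand-in are made-up details, and a real run would plug in an actual Tetris game:

```python
# A rough sketch of the breeding loop. A pattern here is (grid, weight),
# matching the scorer described above; `play_game(patterns)` stands in
# for playing a full game of Tetris with those patterns and returning
# the score achieved.
import random

def mutate(pattern):
    """Randomly nudge a pattern's weight a little (hypothetical scheme)."""
    grid, weight = pattern
    return (grid, weight + random.uniform(-0.1, 0.1))

def random_pattern():
    return ([[random.choice([0, 1])]], random.uniform(-1.0, 1.0))

def breed(a, b):
    """Half the patterns from each parent, then small random mutations."""
    child = random.sample(a, len(a) // 2) + random.sample(b, (len(b) + 1) // 2)
    child = [mutate(p) for p in child]
    if random.random() < 0.1:  # occasionally grow an extra pattern
        child.append(random_pattern())
    return child

def evolve(population, play_game, rounds):
    """Each round: delete the worst player, replace it with a child of
    the best two."""
    for _ in range(rounds):
        ranked = sorted(population, key=play_game)
        worst, best, second = ranked[0], ranked[-1], ranked[-2]
        population.remove(worst)
        population.append(breed(best, second))
    return population

# Seed: 8 copies, each with the two starting patterns (+1 empty, -1 occupied).
seeds = [[([[0]], +1.0), ([[1]], -1.0)] for _ in range(8)]
```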


At first, all you’d see were random games that finished in a fraction of a second before a new round started. After a few minutes, however, you’d start to see some order appear in the game-play. After maybe 15-30 minutes, it would actually start to look like somebody playing Tetris. Let it run for a full day or more, and all 8 programs would play like top-class Tetris players.


All this without explicitly programming a single game-rule into the AI. If a different set of rules needed to be used in the game, all that was needed was to start again with the two simple initial patterns and let the programs breed for a day or so to get an expert player.


Given this AI, it became relatively easy to use it to assist in playing Tetris. The game shows an outline of the top 3 non-overlapping plays, and all the player needs to do is touch the one he or she wants to play. If you touch a spot that isn’t outlined, the AI picks the play that best matches where you touched the play-field. An important question was: doesn’t this make the game too easy? That turned out not to be the case. Perhaps surprisingly, extensive tests showed that the same player would score in the same range using this “AI-assisted input method” (patent pending) as when they used a keyboard to play the game. This indicates that the game is actually much more about strategy than about dexterity.


Apart from accomplishing what it was originally set out to do, this genetic algorithm turned out to have an added benefit. Because it can breed top-level players regardless of the rules, it also makes a great testing tool. Before, any proposed change to the scoring or the game-mechanics risked making the game unbalanced: there was always a chance that some very simple but unforeseen strategy would always result in the maximum score. So with every proposed change, countless hours of QA staff playing the game were required to check for this possibility. Now this can be accomplished by simply letting the AI run for a day, unsupervised.


Electronic Arts, the largest Tetris licensee, adopted a slightly modified version of this method in their Tetris for iOS app and called it OneTouch mode. Reception of OneTouch mode by customers has generally been extremely positive, and it allows smartphone owners to once again enjoy playing one of the greatest video games of all time.

Why Natural Language Understanding is hard


Natural Language Understanding has been a hot topic for some years. With products like Apple’s Siri and Amazon’s Echo, more and more people interact with technology through voice commands. To a certain extent these technologies have been quite successful, in that they allow a wide range of tasks to be accomplished by speaking to a device. But they also have their shortcomings, and examples of comical misinterpretations abound on the Internet.


So how do technologies like Siri actually work? Since they are not open technologies, I can only speculate. But past projects that I have worked on allow me to make an educated guess. At some point I worked at Avatar Reality on the Blue Mars project, building a framework for autonomous agents. That’s a fancy way of saying avatars that are controlled by a computer program rather than a human player. Programming autonomous agents can be roughly split into two parts. One part is about interacting with the virtual world: path-finding, tracking other players, and reacting to events in the game. The other part is interacting with the other players. And while there were ‘physical’ interactions built into the game, like shaking hands, giving a hug or dancing, the main way characters interact with each other is through language, which is obviously where natural language understanding comes in. A player says something and the autonomous agent generates a response, either by performing some sort of action or by saying something relevant to what was said. In the case of a textual (or verbal) response, the sentence is mapped to a set of patterns or templates using something similar to regular expressions. When a match is found, a corresponding reply is emitted. Ideally this response contains some of the words or the subject of the conversation to make it relevant.
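To make the mechanism concrete, here is a toy illustration of template-based response generation with regular expressions; this is my guess at the general technique, not actual Blue Mars or Siri code, and the templates are invented:

```python
# Match the input against regular-expression patterns and fill a canned
# reply with the captured words, so the response echoes part of what was
# said. Falls back to a generic reply when nothing matches.
import random
import re

TEMPLATES = [
    (r"my name is (\w+)", ["Nice to meet you, {0}!"]),
    (r"i like (\w+)", ["Why do you like {0}?", "Tell me more about {0}."]),
    (r"\b(hello|hi)\b", ["Hello there!"]),
]

def respond(utterance):
    for pattern, replies in TEMPLATES:
        m = re.search(pattern, utterance, re.IGNORECASE)
        if m:
            # pick one canned reply and splice in the captured words
            return random.choice(replies).format(*m.groups())
    return "Interesting. Go on."  # fallback when nothing matches

print(respond("My name is Anna"))  # Nice to meet you, Anna!
```

The repetitiveness and subject-hopping complained about below fall straight out of this design: the reply depends only on the last utterance, with no memory of the conversation.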


Generally, the initial reaction of human players to these Non-Player Characters (NPCs) is amusement. But people quickly tire of interacting with them. The most common complaints are that the conversation is repetitive, the answers are irrelevant to what was being said, or the NPC abruptly changes the subject all the time. It’s not for nothing that nobody has come even close to passing the Turing test. A lot of tricks have been attempted to improve the responses. Most of these focus on capturing essential elements of the conversation: what is the object of the previous sentence, what is its subject, what is the general subject of the conversation, and so on. The thinking is that the context of the conversation is important, which it obviously is. But is it actually possible to extract enough information from a short conversation to provide this context? I’m going to argue here that it is in fact not possible, and that this is a big reason why natural language understanding is so hard.

Consider the following sentence: “Bob told John he didn’t pass the test.” So who didn’t pass the test? It’s actually ambiguous. But conversations are full of sentences like these, and usually we deal with them anyway. So how would an AI disambiguate a sentence like this? Of course it could look further back in the conversation and see if there’s any hint of who took a test recently: this is what examining the context of the conversation, referred to earlier, amounts to. But let’s change the sentence in a few subtle ways and see what happens. “The teacher told John he didn’t pass the test.” Grammatically there’s not much difference between this sentence and the earlier one; they are actually both equally ambiguous. And yet any human would assume that in the latter sentence, it was John who didn’t pass the test. Next: “Bob told dad he didn’t pass the test.” This time, we’d assume it was Bob who didn’t pass the test. So suddenly the roles of the two people in the sentence flipped, even though grammatically nothing changed. From these examples it’s very clear that context alone is unlikely to disambiguate these sentences. What is embedded in these sentences is intricate knowledge about the world, the people in it, the roles these people play and the relationships between them. Everybody knows that teachers have students who take tests. Kids or young people are more likely to be students than older people, so a dad is less likely to be a student than his children. And so on, and so on.


But these things are not set in stone. If a response comes along like “Don’t worry dad, you’ll pass next time!”, what happens? A couple of interesting things, actually. First of all, a quick mental readjustment happens to make both sentences make sense: the roles of the people quickly change based on the further input, in both sentences. Secondly, it might have caused you to chuckle. That is because the first sentence raised a certain expectation with the audience, and that expectation was then proven false. That results in surprise for the listener. It’s a surprise, but still not totally illogical or impossible. And that, I believe, is the essence of humour: a joke raises a strong expectation with the audience and then violates that expectation in a way that still resonates.


So I’d like to finish by claiming that a good sign of passing the Turing test would be an AI that can take or make a joke.

The Early Beginning of Computer-Go Programs

The Editorial Board of the ICGA Journal invited me to write my story about the old days of computer Go. It was published in ICGA Journal Vol. 36 No. 2. What types of techniques did we use at the time? How did my program manage to become a three-time winner of the Computer-Go World Championship? This contribution is largely based on my memory, so the chronology of some events may not be entirely accurate. The first World Computer-Go Championship was organized by Ing Chang-ki in Taipei, Taiwan, in 1985. I started to participate in 1987. My first victory was in 1989. After three titles (1989-1991) I decided to end my Go-programming competition career.

1. My first computer As a teenager I was a rather poor student who hated school and skipped class whenever I thought I could get away with it. As a result I repeated some years, as I really had no idea what I wanted to do with my life. I was quite a fanatical Go player, but I had started at too late an age and didn’t have particularly extraordinary talent, so I could not even begin to consider becoming a professional player. One day, however, my high-school teacher in advanced math arranged with the school to let him use its administrative computer to give us a beginner’s course in computer programming. For an unreligious person like me, this was as close to a religious experience as I have come. I was captivated. I immediately began reading any book on computer programming I could find around me; there weren’t that many at the time. To me, computers opened up a whole new world. One trick served me well: I used my mother’s university library card to order back issues of Byte magazine and Dr. Dobb’s Journal. In one of these I saw a description of a simple Chess program. This gave me the idea that maybe I could try to make a computer program that could play Go.

So I bought a Commodore 64 with a tape-deck as storage, which was all I could afford, and I started to write a Go playing program. As we say, not hindered by any knowledge of actual facts. At first I programmed in BASIC, but since that was way too slow, I soon resorted to writing most of the routines in assembler, with just the UI components and some of the higher logic in BASIC. This program essentially consisted of a simple algorithm that computed (1) the influence of the stones across the board, and (2) a set of hard-coded patterns and heuristics. As a guinea pig I had found a school-mate who had just learned how to play Go and who was willing to test out my latest concoctions. Although the program’s success was modest at best, I found to my surprise that people who had just learned the game actually enjoyed playing against the program. They enjoyed trying to figure out how to beat whatever new trick I had added to it.

2. My first ideas on Go programs With computer programming as a new-found passion I finally found the motivation to finish high-school, at the age of 20, and I enrolled in the computer-science program at the University of Amsterdam. Just prior to that I went to the European Go Congress of 1985, held on the island of Terschelling in the Netherlands. There I met Allan Scarff, author of MicroGo, probably the first commercial Go-playing program. It played only on a 9×9 board. MicroGo was based on cellular automata, something I didn’t understand the first thing about. Despite the fact that its approach was completely different from mine, I recognized that the program’s weaknesses were very similar to those of my program. I boldly told Allan Scarff that I thought my program was not far off MicroGo’s level. Although Allan was clearly very skeptical of my claim, he was extremely generous in spending time with me exchanging thoughts and ideas on computer Go. He obviously had very big ideas about how computers ought to be programmed to play Go, and he provided a great deal of inspiration to me. But as another English Go player put it, with inimitable British humor, “Allan always has one brilliant idea too many to make his program really good”.

When I moved to Amsterdam to enter university I ended up sharing an apartment with another Go player, Peter Dijkema. Since I found the university courses a piece of cake, I spent as much time as I could working on my Go program. I believe this effort taught me more about computer programming than the actual university courses did. My enthusiasm infected Peter to the point that he decided to buy an Atari ST, and we struck a deal that I could use the computer to write my Go program if I taught him computer programming in exchange. That was a rather favorable deal for me, as it soon turned out that Peter wasn’t all that interested in learning how to program a computer. However, he was happy to support my computer-Go efforts by letting me use his computer. The Atari was a much more serious computer than the Commodore and it allowed me to rewrite my program in C. This did mean starting from scratch. That turned out to be a good thing, as it meant I automatically avoided repeating all the beginner’s mistakes: I rebuilt the program on a new platform with all the knowledge and ideas I had acquired up to that point. I added a ladder-reading module and a general pattern matcher to replace most of the hard-coded patterns my program had contained so far. Around that time quite a few members of the Amsterdam Go Club bought an Atari ST to try their hand at programming. Together with a few already experienced programmers we formed a group of around a dozen members who convened occasionally to discuss ideas on computer Go. We were also honored one day with a visit by David Erbach, who had just decided to start a computer-Go magazine. With e-mail only available through the university and no world-wide web, these ways of exchanging ideas, primitive by today’s standards, were the only ones available to us.

3. Meeting Bruce Wilcox In the summer of 1986 the European Go Congress was held in Budapest, Hungary. On arrival I found out that one of the attendees was Bruce Wilcox, arguably the grandfather of computer Go and to that day possibly the only person known to have developed a full Go-playing program that played on a standard 19×19 board. He was there to give some lectures on the subject, and a match was arranged between Bruce’s program Nemesis and Allan Scarff’s MicroGo. On one of the very first days of the congress I stepped into an elevator to take me down to the playing hall, and Bruce Wilcox was right there. Barely able to hide my excitement, I introduced myself during the ride down and told him I was working on a Go program and was in the middle of rewriting it in C. “Oh really?” was his only comment. Although I attended Bruce’s lecture and the computer-Go match on which he gave live commentary, I don’t remember being given any other chance to talk to him. I did, however, spend hours in front of the computer in the playing hall that was set up with his program, either playing myself or observing other people play against it. I came away with one clear conviction: I could do better than that.

The one thing I realized by then was that all the programs I knew up to that point had no clue about the relative strength and weakness of groups. So I decided to focus on using the influence function I had, extracting information about eyes and territory to try to determine which groups were dead or alive, weak or strong. The ladder module helped pin-point which stones were tactically safe or not. Groups that were deemed dead would then be removed from the influence function, and I would reiterate the process, with the dead stones now contributing to the opponent’s eye space rather than taking eye space away. This had an added advantage: provided it guessed the status of the groups right, the procedure resulted in a fairly accurate assessment of the score. I spent the whole year developing that part of the program. My aim was to have it ready just in time for the next big event, the European Go Congress in Grenoble, France.
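In outline, the iteration works like this; this is a toy reconstruction under my own naming, with the influence function and the status judgement reduced to stand-in callables, not the original C code:

```python
# Judge which groups look dead under the current influence, recompute
# influence with those groups removed, and repeat until the verdicts
# stop changing (a fixed point).

def evaluate(groups, influence_of, is_dead):
    """`influence_of(dead)` and `is_dead(group, influence)` stand in for
    the game-specific influence function and group-status judgement."""
    dead = frozenset()
    while True:
        influence = influence_of(dead)
        verdict = frozenset(g for g in groups if is_dead(g, influence))
        if verdict == dead:  # status assessment has stabilized
            return dead, influence
        dead = verdict

# Toy example: group 'a' is always dead; 'b' only looks dead while the
# influence of 'a' still counts against it.
dead, _ = evaluate(
    groups={'a', 'b'},
    influence_of=lambda dead: len(dead),
    is_dead=lambda g, infl: g == 'a' or (g == 'b' and infl == 0),
)
print(sorted(dead))  # ['a']
```

The toy run shows the point of reiterating: once ‘a’ is judged dead and removed from the influence calculation, ‘b’ turns out to be alive after all.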

4. A new challenge In the meantime (1985) a billionaire from Taiwan, Mr. Ing Chang-ki, had entered the Go world by proposing slightly new rules for the game of Go. In order to promote these rules he generously funded the organization of the Computer-Go World Championship. In the beginning the tournament was called the Ing Cup; later it became the WCGC, and counting restarted with the 1st Ing Cup of 1985 (now called the 1st WCGC). Moreover, Ing Chang-ki launched the following challenge: a computer program that could beat a professional player would be awarded a million dollars. In 1986, a program made by Robert Rehm and Rob Spreij from Amsterdam had entered the tournament, which was dominated by local participants from Taiwan since it was held in Taipei. It was out of the question that a poor student like me would be able to participate, due to the high travel expenses. Luckily, in order to increase the international appeal and stature of the tournament, Mr. Ing decided to award tickets to the top finishers of qualifying tournaments, to be held in Taiwan, Japan, the US and Europe. The European one would also be open to Americans and was going to be held during the European Go Congress in Grenoble in 1987.

So a bunch of Go programmers showed up in Grenoble to participate. There were some I already knew, such as Bruce Wilcox, Allan Scarff and Robert Rehm. But there were also some unknown programs, such as those by Michael Reiss and Janusz Kraszek (and a few more). It started off with a 9×9 tournament, and it’s hard to describe how excited I was at the opportunity to exchange ideas with all these other programmers and to show off my new C program, which I was convinced was going to do well. Yet I must have introduced a terrible mistake into my program just before I left for Grenoble. That mistake caused my program to put its own stones into atari at inopportune moments in a couple of games. I wished I could have sunk into the ground. With my cheeks flushed with shame I kept operating my program while a considerable crowd gathered around my computer, cheering for my program to put yet another of its own groups into atari. Let’s say the tournament was not quite the success I had hoped for. The next day I worked feverishly to pin-point the problem before the 19×19 competition started. I found the cause only barely in time and could only hope the fix would work and not cause other unexpected problems, as I had no time to test it thoroughly.

It must have worked, as my program proceeded to win its games in the 19×19 competition convincingly, including the one against Bruce Wilcox’s program. In the final round against Kraszek’s program, called Star of Poland, things did not go well, however. Although my program was clearly winning, Kraszek’s program just continued playing. Even when all the points had been divided, it kept adding stones in its opponent’s territory, something that under Ing’s rules didn’t cost any points. Because my program always finished its games well within the allotted time, I had not built in any time management. With each silly move of its opponent it would take the same amount of time to compute a reply, which finally resulted in losing the game on time. Star of Poland became the first European Computer-Go Champion. As a consolation prize I secured an invitation to the world championship in Taipei later that year.

5. The Third World Computer-Go Championship (Taipei, 1987) While the result in Grenoble may have been disappointing, the way my program had played in the 19×19 competition had been very encouraging. Moreover, I had a couple of months to iron out the worst bugs. So I traveled to Taipei with high hopes for the upcoming championship. This event was it: here all the best Go programmers in the world got together to compete, brought to Taipei through the generous funding of Mr. Ing and the sponsorship of an up-and-coming Taiwanese computer manufacturer called Acer. And it worked. The million-dollar prize, while obviously out of reach for decades to come, attracted some of the sharpest minds in the world to devote time to computer Go. This resulted in a level of competition I was obviously not quite prepared for. While my program beat a Japanese and a Taiwanese program quite convincingly, it lost quite badly to a Taiwanese program called Friday, eventually the winner that year, and to David Fotland’s program called G2. Ending up somewhere in the middle of the pack, I was put firmly back on Earth with both feet. After the tournament I was called over by Mr. Yang, secretary of the Ing Foundation and a professional Go teacher. He told me he had played extensively with all the programs at the tournament and that he thought that, of all the participants, my program had the most promise. He strongly encouraged me to continue working on it.

I think it was somewhere around this time that Jos Vermaseren came to me. Jos was part of the group mentioned earlier that would occasionally get together to exchange ideas on computer Go. He was also one of the people I regularly provided with a copy of my program in order to obtain feedback. He asked me how much time was taken by the ladder-reading module. My program provided some visual feedback on what it was doing, and while Jos was impressed with the pattern matcher and the group evaluator, he guessed my program was spending an inordinate amount of time on tactical reading. I told him he could well be right about that. Indeed, I estimated at the time that the program spent some 50%-60% of its CPU time on tactics. “So how about it if your tactical module were 100 times faster?” he asked. “That would give your program a considerable boost in the competition.” While I admitted that the tactics module was not quite the most efficient part of the program and could be considerably sped up, I did not think it possible to speed it up by nearly that much. That was just not realistic. “OK, you know what, I’ll race you,” he challenged me. We agreed on a test setup and each of us went his way to write a fast ladder-reading module. When we met again a little later it became immediately obvious that this was something he had already been thinking about and working on for a while. In the meantime I had taken some time to make my ladder-reader a bit faster than it was. However, Jos’ ladder-reader was much faster than mine, probably by a factor of five or so. Not quite a factor of 100, but it was obvious to me that some serious gain was to be made here. So we exchanged fascinating ideas about what each of us had done to make it faster, and we decided on a re-match at some later time. We went back and forth like this a few times, with Jos usually in the lead. I was clearly challenged and gaining fast on him, occasionally taking the lead.
But always Jos would come up with another clever idea to pull ahead. After a while we hit a wall of diminishing returns. Yet by then my tactics module had improved by about a factor of 25 in speed, a result I had not deemed possible before we started. Then I had a couple of flashes of insight, resulting in a final speed-up of another factor of five to ten. At this point Jos graciously conceded defeat. I think in his mind it was mission accomplished: my ladder-reader was now not 100 times faster than before, but 200 times faster. A godsend for a program that had just lost an important match due to time trouble.

6. Further development This ‘race’ marked an important point in the development of my program. Not just because it could now read tactics much faster, but also because it taught me a certain frame of mind and a methodical approach to optimizing, an approach that later enabled me to implement faster pattern matching and faster influence calculation. A more direct result was that my program was now able to evaluate more positions within the same time. Still, the iterative process of determining the strength of groups was quite CPU-hungry. One way to reduce the number of positions to evaluate was to associate with each pattern not just a single move-candidate but a short sequence of moves. One of the weaknesses of the static evaluation function was that it would not give very accurate scores in unstable situations. Consider a move that puts a big chain into atari. How to score that? Considering the stones dead would be wrong, since they may very well be able to escape. Considering them completely alive when they are able to escape is not the right answer either, because what do you do then in situations where more than one chain is in atari? In my opinion it was best, whenever a move caused the situation to become unstable, to consider the opponent’s answer that stabilized the situation again, and only then score the board through static evaluation. This is what the move-sequences associated with patterns attempted to accomplish: to leave the board in a stable situation without having to evaluate the intermediate positions.

Armed with these improvements my program easily won all its games at the next European championship in Hamburg, Germany in 1988, despite the fact that Michael Reiss brought a monster machine to the tournament: a massively parallel computing device containing 50 transputers. So again I qualified to participate in the World Computer-Go Championship in Taipei. While the previous year’s experience had made me careful not to expect too much, my program had improved so enormously that it was hard not to be optimistic. And indeed my program won its first games, including a game against Cosmos, David Fotland’s new program. But in the penultimate round my program had to play Codan, made by a Japanese programmer, and lost. Codan went on to beat the previous year’s winner while I won my last game, taking third place. Because of the language barrier I never had as clear an idea of how the Japanese and Taiwanese programs actually worked. With all the other participants I always used these events as a rare opportunity to exchange ideas; it was the only time in the year one could meet and discuss these topics with others. And while the competitive spirit was fierce at times during these events, I also always felt a strong sense of brotherhood between fellow Go programmers, and the willingness to share was great. However, the language barrier mentioned above was as real an obstacle for us Western programmers as it was for the Asian participants. On the surface their programs didn’t seem to play as well as mine, and yet they were more successful year in, year out. My third place was in fact the best result so far at the WCGC by any Western program. My own assessment was that the Asian programs played with more purpose. While their play may generally not have been of a very high level, once they decided they wanted to make a group alive or kill a group, they would keep trying against all odds.
While this would not be very successful against human players, it was quite a successful approach against computer programs. If a group didn’t manage to make two eyes after the third easy-to-answer threat, those programs would still continue to try, their reasoning being that there was always a chance the other program would ignore the fourth or fifth threat, distracted by a move it wanted to play somewhere else on the board. To counter this, I decided to focus on what I called goal-oriented play. The static evaluation of my program was clearly superior to that of other programs, so I decided to use it to monitor the difference between two moves. Which groups became stronger, which weaker? Which territories were becoming smaller, which bigger? These differences were used to create goals. The goal was always to reverse a change: if a group got weaker, a goal would be created to make that group stronger, and so on. In the meantime my optimization efforts had allowed my program to evaluate many more positions. Thus I did away with the move sequences associated with patterns and instead allowed a small local mini-max search when evaluating a move. This search was driven by the goals, towards a stable position it could evaluate with reasonable accuracy. I used pass-moves to determine whether to continue reading or not: if a goal didn’t improve when playing two moves in a row, the search would be terminated (i.e., the situation was stable). Lastly, I instructed the program not to be greedy. If the program could successfully negate the goal(s) of the opponent’s last move, it would be satisfied with that, rather than trying to win even more points somewhere else on the board. These changes meant an enormous advance for my program. Until then, even against much weaker programs, computer games had been spectacular see-saw events, with stone-dead groups being miraculously revived, and groups clearly alive still ending up dead during the dame-filling stage of the game.
For my program that was now in the past. It had become a relentless killer of its opponents' groups and a solid defender of its own.
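The goal mechanism and the pass-move stability test described above can be sketched roughly as follows. This is a minimal, hypothetical Python reconstruction, not the original program's code (which the essay does not show); it assumes the static evaluation can be represented as a map from features (group strengths, territory sizes) to scores, and all names are illustrative.

```python
def derive_goals(eval_before, eval_after):
    """Compare static evaluations before and after the opponent's move.
    Every feature that got worse becomes a goal: restore its old value."""
    return {feature: old
            for feature, old in eval_before.items()
            if eval_after.get(feature, old) < old}

def goal_progress(goals, evaluation):
    """How far the current evaluation has come towards the goal targets
    (capped at the target, so overshooting doesn't count extra)."""
    return sum(min(evaluation.get(f, 0), target)
               for f, target in goals.items())

def is_stable(goals, eval_now, eval_after_two_in_a_row):
    """Pass-move test: if playing two moves in a row (the opponent
    passes) does not improve any goal, the local situation is stable
    and the search can be terminated."""
    return goal_progress(goals, eval_after_two_in_a_row) <= goal_progress(goals, eval_now)
```

For example, if a black group's strength drops from 10 to 6 after a white move, `derive_goals` produces the goal of restoring it to 10; the local search then pursues that goal until `is_stable` reports that even two consecutive moves no longer help.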

7. Achieving three World Champion titles

The third place automatically entitled me to participate in the 5th WCGC, so there was less pressure on me to take part in qualifying events. Still, I played the 3rd ECGC and won the European Championship (Nis, then Yugoslavia, 1989) rather easily with an unbeaten record. I also participated in the newly organized Computer Olympiad, which had now been designated as the qualifying event for the 5th WCGC. Moreover, the Computer Olympiad also attracted programs from the US, such as those by Anders Kierulf and Ken Chen. There it became apparent that my new developments had introduced a few very bad bugs. While my program did beat Kierulf's Swiss Explorer, it lost to Star of Poland and to Ken Chen's Go Explorer, so I ended up in second place behind Kierulf. In a sense I was fortunate to find out about these problems well before the WCGC, leaving me plenty of time to stabilize the program. All in all, that fall I had more confidence in my program than ever before, and I departed for Taipei with high expectations. And rightly so, as my program won all its games quite convincingly. Although the final game against Bruce Wilcox's Nemesis had a close ending, I never doubted the outcome, and so I won my first title as World Computer-Go Champion.

By this time, however, I had rather neglected my studies in favor of working on my Go-playing software all these years. In order to graduate within the required time I was forced to focus on passing courses instead of programming. I did use the prize money I had won at the World Championship to buy a new computer: the Atari ST was a rather slow hobby computer compared to the high-powered PCs and Macs used in the competition in those days, so I bought a Mac SE/30 that was nearly five times faster than the Atari. Even though I didn't spend much time on computer Go that year, the added power pretty much ensured I would have good chances again in 1990.

That year the 6th WCGC was held outside of Taiwan for the first time, in Beijing. That was quite remarkable, because at the time China was still quite closed, and relations between China and Taiwan were outright hostile. That a Taiwanese organization was able to organize such an event in China was unique; maybe it was a first sign of China opening up and trying to ease tensions with Taiwan. The tournament started promisingly, with both my closest rivals, Ken Chen and Kraszek, dropping games in early rounds. But then my program had to play Ken Chen's, and something weird happened: all of a sudden my program started to play random moves. To my surprise, on reaching the middle game it stopped playing random moves and started to play normally again, but by then the game was already unsalvageable. To this day I still don't know what caused this. Because all the other programs had already lost a game too, and my program won its subsequent games, I still secured first place.

In 1991 I still had to focus on my studies. I managed to improve my program a little, but only a little. I felt it was reaching the end of the line in terms of how strong it could get with the technology it was built on. This became the conclusion of my graduation thesis: there is a limit to the expert-system type of approach that all Go programs were using, and to make significant advances from there a paradigm shift would be required. My speculation at the time was that machine learning would be the answer. I presented my thesis during the 7th WCGC in Singapore (1991). My program won very easily in Singapore, making it three championships in a row. It also became the first program to pass the first of Ing's challenges by beating a young professional player with a large handicap. A few weeks later my program also won a tournament, without dropping a single game, organized by Japan's Fifth Generation Project. They had made a Go program part of that project and wanted to see how it would do against the world's top programs. In brief, the Japanese program did not do very well: not only did it have no chance against my program, the other programs also beat it fairly easily.

8. A turn in my life

Being in Japan for that event, I decided to stay. The next two years I spent converting my program into a commercial version that would run on a Nintendo game console. With the money I had made this way, and with the backing of a software publisher, I started a company. I hired a few programmers with the objective (1) to take a crack at the paradigm shift that according to my thesis was required, and thus (2) to bring computer Go to the next level. However, the software publisher ran into financial trouble and decided to pull the plug, well before we had a chance to make any progress. For my part, I decided to end my Go programming there and then. I had dedicated a full decade to it; enough was enough.

My prediction that a paradigm shift was required turned out to be entirely correct. Where I turned out to be mistaken, however, was in what the shift would be: it wasn't machine learning but MCTS. I also would never have predicted that it would take nearly fifteen years to happen. But since the invention of MCTS, advances have come quickly, with programs now reaching strong amateur level. The time when a computer program will beat a low-ranked professional player doesn't seem far off. Whether increasing numbers of CPUs and incremental improvements on current techniques will be sufficient to make the jump to beating a human World Champion is hard for me to say; it may still take a machine-learning component to take that last large step. Time will tell, but certainly the future of computer Go has never looked more promising.