HOW DO THE ROBOTS WORK?

The Basics

Almost any game-playing robot is really just three relatively small modules added to the usual game machinery; a minimal sketch of how the three fit together follows this list.
  • A Move Generator: Given a board position, the move generator constructs a list of all possible moves. The number of possible moves is the single largest factor determining how difficult it is for a computer to play a game. In easy games such as Chess, the number is small. In difficult games such as Go, the number can be very large. Tantrix is in the middle.


  • An Evaluator: Given a board position, the evaluator estimates how good the position is for the player who is to move. If the evaluator gives good estimates, then the Robot will probably play well. The quality of the evaluator is the key factor in any game playing program.


  • A Lookahead Manager: which, starting from a given position, calls the move generator, calls the evaluator for each possible move, and then decides either to accept that estimate as final or to call itself recursively to examine responses, or responses to responses. If the evaluator were perfect, the lookahead manager would be unnecessary (because the evaluator would already have given the highest rating to the best move).
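
To make the division of labour concrete, here is a minimal Python sketch of how the three modules plug together. Everything in it is a stand-in: generate_moves, evaluate and apply_move are assumed callbacks rather than real Tantrix code, and the demo at the bottom is a toy subtraction game.

    from math import inf

    def best_move(position, depth, generate_moves, evaluate, apply_move):
        """Lookahead manager: try every move; either accept the evaluator's
        estimate as final, or recurse to examine responses to responses."""
        moves = generate_moves(position)            # the move generator
        if not moves:
            return None, evaluate(position)         # nothing to play
        best, best_score = None, -inf
        for move in moves:
            child = apply_move(position, move)
            if depth <= 1:
                # The evaluator scores a position for the player to move,
                # which in `child` is the opponent -- so negate.
                score = -evaluate(child)
            else:
                _, reply_score = best_move(child, depth - 1,
                                           generate_moves, evaluate, apply_move)
                score = -reply_score                # their gain is our loss
            if score > best_score:
                best, best_score = move, score
        return best, best_score

    # Toy demo (a subtraction game, not Tantrix): remove 1-3 counters per turn.
    if __name__ == "__main__":
        gen = lambda p: [n for n in (1, 2, 3) if n <= p]
        print(best_move(10, 3, gen, lambda p: -p, lambda p, m: p - m))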


Robot's Move Generator

There's really not much to say - it's a brute force process which tries all six tiles in all possible positions, and remembers which ones are legal moves. In a typical game situation, there are about 50-75 moves. In the endgame, there can be many more, possibly as many as 200.
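
As a rough illustration (the helpers here are assumptions, not the real board code), the whole generator is essentially three nested loops:

    def generate_moves(rack, open_spaces, is_legal):
        """Brute force: try each rack tile, on every open space, in each
        of the six hexagonal orientations, and keep the legal placements."""
        return [(tile, space, rotation)
                for tile in rack                 # up to six tiles in hand
                for space in open_spaces         # empty spaces beside the board
                for rotation in range(6)         # six ways to rotate a hex tile
                if is_legal(tile, space, rotation)]

    # Demo with a stand-in legality rule (every other rotation "fits"):
    print(len(generate_moves(["a", "b"], [(0, 0), (1, 0)],
                             lambda t, s, r: r % 2 == 0)))   # prints 12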


Robot's Evaluator

The basic evaluation is, for each line of the Robot's colour, to predict what it will eventually be worth, and so, for any position, to predict what the final score of the game will be. This trick of predicting the final score, rather than using the current score, is why the Robot is so adept at handling loops.

So, how does the evaluator estimate the final score? (A simplified sketch follows the list below.)

  • First, obviously, each line is considered separately. The estimate for the position's final score is the maximum of the estimates for the individual lines.

  • For each line, consider what other, nearby lines are certainly connected, or probably connected. Groups of lines that will eventually be connected are treated as one long line, modified by the probability that the connections will actually be made. The Robot estimates the probability mostly by counting the remaining tiles that fit the connection space. This is an important point: the Robot always counts tiles. You can bet that if the Robot plays a forced connection, it knows if there is still a tile to make the connection.

  • Every group of lines either forms a loop or has two open ends. By default the open ends are predicted to be extended by a fair share of the tiles left containing the desired colour. The Robot doesn't care exactly how or where those tiles will be played, it just guesses that they will be played somehow. This strategy is why the Robot prefers to extend a line rather than try for a small loop, even though the small loop would instantly double the current score. This fair share estimate is greatly reduced if the side is controlled, or likely to become controlled.

  • Goodbot adds a few considerations to the basic evaluation, each of which contributes a small amount to the overall improvement over earlier robots.


    • Be aware of the presence of lookalikes for forced spaces, which affect the likelihood that a connection will be made.
    • Be aware when the last few tiles will certainly end up in your own rack, so the probability of controlling them is 100%.
    • Add a small penalty for small loops in the opening.
    • Delay playing unfavourable moves as long as possible.
    • Treat forcing multiple tiles as beneficial before the endgame, but bad after it.
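
Putting the points above together, a much-simplified sketch of the line estimate might look like the following. The constants and helper names are illustrative assumptions; the real evaluator counts actual tiles and connection spaces.

    def estimate_line(length, is_loop, connection_prob, fitting_tiles_left,
                      ends_controlled=(False, False)):
        """Predict what one line (or connected group of lines) will
        finally be worth, per the rules of thumb described above."""
        if is_loop:
            return 2 * length                  # Tantrix scores loops at double
        # A group of probably-connected lines counts as one long line,
        # discounted by the chance the connections are actually made.
        expected = length * connection_prob
        # Each open end gets a "fair share" of the remaining tiles of this
        # colour -- unless that side is controlled, or likely to become so.
        fair_share = fitting_tiles_left / 4    # the divisor is illustrative
        for controlled in ends_controlled:
            if not controlled:
                expected += fair_share
        return expected

    def estimate_position(lines):
        """The predicted final score is the best single line's estimate."""
        return max(estimate_line(**line) for line in lines)

    # Why the Robot extends rather than closing a small loop: the open line
    # below is predicted to outscore an immediate 4-tile loop (worth 8).
    print(estimate_position([
        dict(length=6, is_loop=False, connection_prob=0.8, fitting_tiles_left=8),
        dict(length=4, is_loop=True,  connection_prob=1.0, fitting_tiles_left=0),
    ]))   # 6*0.8 + 2 + 2 = 8.8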

Robot's Traditional Lookahead Manager

Tantrix is unusual because players can make several moves per turn. The Robot considers all forced move sequences in any order, and all the forced moves the opponent will have to make before his free move. Consequently, it has a pretty good idea of how to play in ways that will force other tiles to be played in favourable ways. It does NOT consider what the opponent's free move might be in response, and it does NOT consider what tiles might be drawn to replace the tiles it has played. Also, an absolute limit of 5 (or so) tiles is imposed to keep the number of moves examined down to a reasonable number. The lookahead manager does not attempt to use alpha-beta or any other standard tree-pruning methods, because they would be of very little use given the limited depth of the lookahead.
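
A hedged sketch of that forced-move search, with assumed helpers (forced_moves returns the forced placements available from a position; the demo is a toy, not Tantrix):

    TILE_LIMIT = 5   # absolute cap on forced tiles examined per line of play

    def score_after_forcing(position, forced_moves, evaluate, apply_move,
                            tiles_used=0):
        """Follow chains of forced moves (trying them in every order) up
        to the tile limit, then fall back on the evaluator."""
        chain = forced_moves(position)
        if not chain or tiles_used >= TILE_LIMIT:
            return evaluate(position)
        return max(score_after_forcing(apply_move(position, m), forced_moves,
                                       evaluate, apply_move, tiles_used + 1)
                   for m in chain)

    # Toy demo: each "forced move" adds a counter; evaluation is the total.
    print(score_after_forcing(0, lambda p: [1] if p < 3 else [],
                              lambda p: p, lambda p, m: p + m))   # prints 3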


Goodbot's Lookahead

The lookahead manager used by Goodbot is slightly more sophisticated. It removes the low limit of 5 forced tiles, and adds alpha-beta pruning to get the speed back up (alpha-beta doesn't change the result). It also adds the option to look a little deeper during the endgame, and manages its time to avoid runaway searches when a cascade of forced tiles occurs.
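
For reference, this is the textbook negamax form of alpha-beta (a generic sketch, not Goodbot's actual code). It returns exactly the same answer as a plain search, but skips branches the opponent would never allow:

    from math import inf

    def alphabeta(position, depth, alpha, beta,
                  generate_moves, evaluate, apply_move):
        """Negamax alpha-beta: same result as full lookahead, less work."""
        moves = generate_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)
        best = -inf
        for move in moves:
            score = -alphabeta(apply_move(position, move), depth - 1,
                               -beta, -alpha,
                               generate_moves, evaluate, apply_move)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:     # the opponent would avoid this line anyway
                break             # so prune the remaining sibling moves
        return best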


Monte Carlo Lookahead

A radically different approach to building a robot lookahead is becoming popular and is having some success in computer Go, which is much harder for robots than Chess. Now it is proving to be quite good at Tantrix as well. Monte is a robot without a brain. It has no evaluator in the traditional sense; it relies on playing games all the way to the end and finding out the actual score. It does this very fast, and many times, to get an average prediction of which moves work best. The art in this "Monte Carlo" approach is deciding which moves to spend time investigating. Monte was programmed by Pete Bruns, based on the code from Oliver but using his own evaluator. This paper describes his work.
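
In outline, the idea is this (a sketch under assumptions; playout stands in for a fast random game played to the finish, not Monte's real code):

    import random

    def monte_carlo_choice(position, legal_moves, apply_move, playout,
                           playouts_per_move=100):
        """Score each move by the average result of random games played
        all the way to the end, then pick the best average."""
        def average(move):
            child = apply_move(position, move)
            results = [playout(child) for _ in range(playouts_per_move)]
            return sum(results) / len(results)
        return max(legal_moves(position), key=average)

    # Toy demo: "positions" are numbers, playouts add noise to the truth.
    print(monte_carlo_choice(0, lambda p: [1, 2, 3],
                             lambda p, m: p + m,
                             lambda p: p + random.random()))   # prints 3

Real Monte Carlo players don't split their playouts evenly the way this sketch does; the "art" mentioned above is concentrating playouts on the moves that look most promising, as UCT-style searches do.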





A word from the Author

There are quite a few tunable parameters associated with all these considerations, but actually tuning them is very difficult, because the random factor means a "run" of 200 games is required to validate a new parameter value or a change to the algorithm that applies the parameter. The current state of GoodBot is probably the end of the line from me - I don't know how to make it any smarter, and making it better without being fundamentally smarter would be extremely time consuming.

Both Robot and Goodbot were written by Dave Dyer (ddyer).


2005 Robot improvements

At the end of January 2005, the first version of GoodBot was released into the lobby. This version of GoodBot was a harder-working, gloves-off version of the original Robot, but without any fundamental changes. Advances in computer speeds since the original Robot was created made this development possible. Because of its more "human" playing ability, GoodBot had a floating ranking in the beginning.

In May 2005, a new robot named Oliver was introduced into the lobby. The first official robot battle - Goodbot vs. Oliver - was held on the 11th of that month. Oliver, programmed by Pieter Bolle (Belgium) and Alex Biryukov (Israel), performed slightly better than expected, defeating the reigning champion Goodbot with a winning percentage of 64.5% to 30.5%.

Oliver's success inspired and challenged further development of Goodbot, resulting in substantial improvements. A rematch between Goodbot and Oliver ended with Goodbot regaining his title of champion bot with a win of 54.7% to 45.3% (TPs).

2011 Robot improvements

In October 2011, FullMonte, the Monte Carlo robot, was unleashed. Instead of trying to look ahead, FullMonte uses raw speed to finish as many games as possible from a given position and then figures out which move resulted in the most wins.

FullMonte challenged Goodbot and won easily - 63% to Goodbot's 32.2% (4.8% of the games ended in draws). The challenge encompassed 200 games and took just under 36 hours.

Another robot, Monte, was released at the same time as FullMonte. Monte can be considered FullMonte's slightly dimmer brother; oddly enough, this is because he is quicker. The only difference between the two is that FullMonte is set to take about 30 seconds per move on average, while Monte takes only around 15 seconds. Both bots are written so that moves may take a bit longer at the start of a match (when there are more potential moves to consider) and a bit less time at the end.
More about Monte (1.1 MB)

2016 Robot improvements

In 2016 the most advanced algorithm yet was announced and added to the Robot line-up. Called Darwin and designed by Simon Buhr from Germany, it had no trouble outclassing all comers, including beating FullMonte in a head-to-head competition. Darwin won 70.8% and lost 24.8% of the games, with 4.5% drawn.

According to Simon: "I would not say that Darwin is near perfect. He is very good at the mid game, but we all see some of his failures, especially during the end game or at the beginning (remember game 4 against Niklas). So there is of course room for something better. Dave mentioned the next revolution is maybe up to neural networks and true machine learning, like the algorithm behind AlphaGo. But I agree that currently Darwin is well suited to produce some realistic game outputs."


Going Forward

The codebase is available as a standard "robot kit". There is still a huge amount of unexplored space. The challenge is all yours!



© Copyright 2020, Colour of Strategy Ltd. Pohara, New Zealand. All rights reserved.
Last update: November, 2020