
Is Bridge a Solved Game?


Bridge isn’t a solved game. Unlike Chess or Go, it involves a huge amount of hidden information: different players hold different information, and the sheer variety of bidding styles, conventions, and systems makes it impossible to create an optimal bidding system (and it is likely one does not exist).

Read on to find out more…

A Quick Intro to Artificial Intelligence and Games

AI researchers love games because they offer a genuine and tough challenge that can help them create more complex and easily scalable algorithms. These, in turn, can be developed to solve other real-life problems – at least that’s the idea. The main benefit is probably to their marketing department; both IBM and Google received significant press for their achievements in Chess and Go respectively!

It’s now been almost three years since Google’s AlphaGo beat Lee Sedol, one of the world’s strongest Go players, and more than two decades since Deep Blue became the first computer to beat a reigning World Chess Champion (Garry Kasparov) under standard time controls.

When a computer plays Chess or Go, it relies heavily (but not entirely) on brute force to find the best move. It evaluates every move, every move that might happen after that, and every one after that… and so on. It follows these decision trees to the end to determine which move presents the greatest chance of victory.
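The tree-walking idea can be sketched in a few lines of Python. This toy example “solves” a simplified game of Nim rather than Chess (a full chess engine is far beyond a blog snippet), but the recursion – try every move, assume the opponent replies perfectly, follow the tree to the end – is the same principle.

```python
# A toy illustration of brute-force game-tree search: "solving" a tiny
# game of Nim. Players alternate removing 1-3 stones; whoever takes the
# last stone wins. The recursion tries every move and assumes the
# opponent always answers with their best reply.

def can_force_win(stones):
    """True if the player about to move can force a win."""
    if stones == 0:
        # The previous player took the last stone, so the player
        # to move has already lost.
        return False
    # A position is winning if any legal move leaves the opponent
    # in a losing position.
    return any(not can_force_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

print(can_force_win(5))  # True: take 1 stone, leaving the opponent 4
print(can_force_win(4))  # False: every move leaves the opponent a win
```

Chess engines do exactly this, only over vastly larger trees – which is why brute force alone isn’t enough, as the next section explains.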

This partially explains why computers conquered Chess more than 20 years ago but have only recently mastered Go – Go has far more options for moves, so it requires far more computing power to assess. (Strictly speaking, neither game has been mathematically ‘solved’ – computers have simply become strong enough to beat the best humans.)

Is It Just About Brute Force?

No. Deep Blue’s brute force calculations were only effective because they were built on a deep underlying knowledge base of how Chess worked. Deep Blue was a lot more sophisticated than just assuming that a Rook is worth five Pawns, or other similar basic comparisons – its algorithms used more than 8,000 factors to evaluate the strength of any given position.
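The idea of an evaluation function is easy to sketch. The features and weights below are invented for illustration – Deep Blue’s real factors were far more numerous and subtle – but the principle of scoring a position as a weighted sum of hand-crafted features is the same.

```python
# A sketch of a hand-crafted position evaluation: score a position as a
# weighted sum of features. The features and weights are illustrative
# only, not Deep Blue's actual ones.

FEATURE_WEIGHTS = {
    "material": 1.0,     # material advantage, measured in pawns
    "mobility": 0.1,     # extra legal moves available to us
    "king_safety": 0.5,  # net defenders around our own king
}

def evaluate(features):
    """Higher scores mean a better position for us."""
    return sum(FEATURE_WEIGHTS[name] * value
               for name, value in features.items())

# Up a rook's worth of material, but with a slightly exposed king:
print(evaluate({"material": 5, "mobility": 3, "king_safety": -2}))  # 4.3
```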

But this didn’t work for Go, so AlphaGo went a step further. AlphaGo used an artificial neural network to help it figure out what move an expert Go player would make in any situation and then played itself lots of times to help it make small improvements. This enabled it to continually improve until it was capable of the kind of intuition-based thinking expert Go players use.

Of course, it’s a little more complicated than that! I recommend this article at Quanta Magazine as your next step if you’d like to understand a little more about how AlphaGo works.

For us, it’s back to bridge…

Why Is Bridge Harder For a Computer To Solve Than Chess or Go?

One of the reasons Chess and Go are solvable is because, although very complex, they allow players access to complete information. There is nothing hidden – every piece is in its place, and from there a person or computer can work out what might happen if a piece is moved. Bridge isn’t like that.

Both the bidding and the play present problems for computers; we’ll start with the play.

Solving Bridge Card Play With a Single Dummy

No doubt you’ve already come across so-called ‘double dummy’ solutions to bridge hands. A double-dummy solution is the number of tricks that declarer will win assuming that a) every player can see every card, and b) every player makes the best play possible. These solutions are often only theoretical, since they rely on acting on information that a player would never have in a real-life situation. Computers can solve these problems with ease – but that doesn’t help in a real game of bridge.
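The double-dummy calculation itself is just exhaustive search, because nothing is hidden. Here’s a hedged toy version for a single-suit ending between two hands (real solvers handle four hands, following suit, and trumps, but the shape of the recursion is similar): the higher card wins each trick, and the winner leads to the next.

```python
# A toy double-dummy solver for a single-suit ending between declarer
# and one defender: all cards are visible and both sides play perfectly.
# Cards are distinct integers (2 = deuce ... 14 = ace).

from functools import lru_cache

@lru_cache(maxsize=None)
def declarer_tricks(declarer, defender, declarer_leads):
    """Tricks declarer takes from here with best play by both sides."""
    if not declarer:
        return 0
    if declarer_leads:
        # Declarer picks the lead that maximises tricks, assuming the
        # defender answers each lead with their best card.
        return max(min(play(d, f, declarer, defender) for f in defender)
                   for d in declarer)
    # Defender leads, trying to minimise declarer's tricks.
    return min(max(play(d, f, declarer, defender) for d in declarer)
               for f in defender)

def play(d, f, declarer, defender):
    """Resolve one trick (declarer card d vs defender card f)."""
    rest_d = tuple(c for c in declarer if c != d)
    rest_f = tuple(c for c in defender if c != f)
    declarer_wins = d > f
    return declarer_wins + declarer_tricks(rest_d, rest_f, declarer_wins)

# Ace-king over queen-low: two certain tricks.
print(declarer_tricks((14, 13), (12, 2), True))  # 2
# Ace-queen over king-low: the king always scores, so just one trick.
print(declarer_tricks((14, 12), (13, 2), True))  # 1
```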

At the start of a real hand of bridge, declarer can see only 26 cards (or 27, if you include the lead from LHO), which means a lot of information is hidden. A player has to assess all the information available to them – the bidding, and the cards that have been played so far – to decide on the likely best course of action.

The main problem here is that the best decision doesn’t always create the best result. A clear example of this is when the bidding indicates LHO has nearly all the points so you decide to play for a finesse… and then RHO drops the singleton King and takes the trick. You made the right choice and were punished, while a very new player might just have played their Ace off the top and been rewarded for it.
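The arithmetic behind “right choice, wrong result” is simple expected value. In this hypothetical, suppose the bidding makes it 80% likely that LHO holds the King; taking the finesse then wins more tricks on average, even though it loses one time in five. All figures below are invented for illustration:

```python
# A toy expected-value comparison for the finesse decision above.
# The probabilities and trick counts are invented for illustration.

def expected_tricks(p_lho_has_king):
    # The finesse scores an extra trick when LHO holds the king
    # and gives one up when RHO turns out to hold it.
    finesse = p_lho_has_king * 2 + (1 - p_lho_has_king) * 1
    # Cashing the ace from the top takes one certain trick.
    ace_from_top = 1.0
    return finesse, ace_from_top

finesse, ace = expected_tricks(0.8)   # bidding marks LHO with the points
print(finesse)        # 1.8 tricks on average
print(finesse > ace)  # True: the finesse is right, even when it loses
```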

We’ve all had bridge sessions where we’ve seen a pair we know is very skilled end up surprisingly far down the rankings – and vice versa. That’s because in the short-term the quality of our choices does not always correlate with the results.

Understanding Bridge Opponents

Another problem is that the play of the cards is influenced by a person’s own style and knowledge. A defender might lead a card that doesn’t seem like a great lead. This might be for a number of reasons:

  • They might have a great lead, but not realize it
  • They might have no other leads
  • Their logic and reasoning behind the lead may be at fault

Accounting for a player’s mistakes is extremely hard if you don’t have a lot of experience playing with that person. What percentage of the time do they make a playing mistake? How often is their bidding too aggressive, or too passive? How would this information influence the AI’s decision making?
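One hedged way to model this is to treat each observed action as a blend of ‘expert’ play and random noise, weighted by the player’s estimated error rate. The numbers below are invented simply to show the shape of the idea:

```python
# A sketch of folding an opponent's error rate into inference. Suppose
# an expert defender would make a particular lead 90% of the time when
# holding a given card combination. The less reliable the player, the
# less that lead actually tells us. All figures are illustrative.

def lead_likelihood(expert_probability, error_rate, n_plausible_leads=4):
    """Blend expert play with uniformly random mistakes."""
    random_play = 1 / n_plausible_leads
    return (1 - error_rate) * expert_probability + error_rate * random_play

print(lead_likelihood(0.9, 0.0))  # 0.9: a perfect defender's lead is telling
print(lead_likelihood(0.9, 0.5))  # 0.575: an erratic defender's says less
```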

Could an Artificial Intelligence Create The Perfect Bidding System?

A third significant problem comes with the bidding. It is presumably possible (although not useful) to formulate a perfect bidding system – but only if opponents don’t interfere. For obvious reasons this will not happen at the table, and the range of different bidding systems and styles would make this very hard.

This does raise another interesting question – is there even such a thing as a perfect bidding system? Certainly one bidding system can perform better than another when directly compared, but is there one that performs better than every other system? I don’t think so – but plenty of others might disagree!

Even if System A beats System B, and System B beats System C, it’s entirely possible System C will beat System A; I think it’s probably considerably more complicated than a direct hierarchy with one definite winner.

So, Will We Ever See an AI Beat Bridge?

I find it hard to see how an AI could ‘solve’ bridge, but one could certainly get good enough to beat expert human players. Unfortunately, with bridge considerably less well known to the general public than Chess and Go, there’s much less demand for – and publicity to be gained from – creating an AI that can beat it, so we might not see this for a long time.

What do you think?

