Scrimisms: Presently suffering a dearth of witticisms


A.I. | 28 Oct 2007

I’ve just put my Master’s thesis, “Structural Representation of the Game of Go”, online. Download it if you dare.

A.I. and Games and Musings | 15 Aug 2007

I’ve been working on my slides for my thesis defense talk, and it reminded me of this particular observation.

The game of Go presents incredible freedom of choice to its players. At any juncture, a player may place his stone on any unoccupied vertex of a 19 x 19 grid. That means there are 361 possible first moves, and for each one there are 360 possible replies, and 359 possible replies to each of these, and so on, until the game finally ends, typically about 300 moves later.

So how many unique games of Go are there? A simple (if rough) way to describe this number is to write 361!

The exclamation point, a rather fitting piece of math notation, is called a factorial, and what it means is: take every number between 1 and 361 and multiply them all together. Try it on your calculator: it’ll probably explode. The result is roughly 10^768, which is another way to express the same number, but really doesn’t give much more intuition as to its actual size than 361! did. We are way beyond the realm of what humans are capable of wrapping our little heads around.
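If you’d rather spare your calculator, a few lines of Python (my own sketch, not part of the original post) can recover the magnitude by working with logarithms instead of the full product:

```python
import math

# 361! has hundreds of digits, so compute its logarithm instead:
# math.lgamma(n + 1) gives ln(n!) without overflow.
log10_fact = math.lgamma(362) / math.log(10)  # log10(361!)
print(round(log10_fact))  # prints 768, i.e. 361! is about 10^768
```

The same trick works for any factorial too large to compute directly.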

Consider this: if you decided to take every possible game of Go and play them out side by side on separate boards, you couldn’t do it. You’d run out of matter fairly early in the process. There are, after all, only about 10^80 atoms in the observable universe, and if you used them all to build Go boards, you’d still come up woefully short.

A.I. and Books | 04 Aug 2007

I’ve just finished reading “Science and the Modern World” by Alfred North Whitehead (he was, among other things, Bertrand Russell’s sidekick on the Principia Mathematica). I’m starting to get a nice little Whitehead collection:

Science and the Modern World is a remarkable book on the history and philosophy of science. It is an adaptation of a series of lectures given by Whitehead in 1925, but it feels as though it could have been published yesterday: I was frequently amazed at how clearly Whitehead expressed ideas that have yet to crystallize for thinkers 80 years on.

For me, the high point of the book is the end of his chapter called “The Century of Genius”, in which he, in a scant seven pages, lays out exactly how “modern philosophy has been ruined”.

The key to this ruin, he says, is that we treat objects/matter as having only “simple location”—existence at certain points in space, at particular moments in time. He says that this simple (and still widely held) view of matter is responsible for all manner of bugbears, from Hume’s problem of induction to the triumph of a materialistic view of the world that many people instinctively find aesthetically unsatisfying:

“These sensations [cf Locke's secondary qualities: colour, sound, etc. as opposed to primary qualities: mass, shape, etc.] are projected by the mind so as to clothe appropriate bodies in external nature. Thus the bodies are perceived as with qualities which in reality do not belong to them, qualities which in fact are purely the offspring of the mind. Thus nature gets credit for what should in truth be reserved for ourselves: the rose for its scent: the nightingale for his song: and the sun for his radiance. The poets are entirely mistaken. They should address their lyrics to themselves, and should turn them into odes of self-congratulation on the excellency of the human mind. Nature is a dull affair, soundless, scentless, colourless; merely the hurrying of material, endlessly, meaninglessly.

“However you disguise it, this is the practical outcome of the characteristic scientific philosophy which closed the seventeenth century.” (p. 54).

His solution to these problems is to change the focus from the reality of “timeless” objects to a reality of processes unfolding in time. “The reality is a process,” he says, “It is nonsense to ask if the colour red is real. The colour red is ingredient in the process of realisation.”

I won’t replay the whole argument here (Go and read the book if you’re interested!), but it has numerous contemporary consequences. To take two:

First, aesthetically, the objects of reality take on a much more organic flavour: all of the world is imbued with the same vital energy normally reserved for characterizing living things in their evolution over time (and why should life get special status? We are all made of the same “stuff” as everything else, after all…).

Second, it has huge implications for Artificial Intelligence (my area: my supervisor recommended the book to me) and related information processing endeavors: our current approaches to modeling information about the world treat objects as static, atemporal things with particular fixed properties. If reality is actually made of temporal processes… well, you can see where our state-of-the-art is in danger of falling far short.

How has this been ignored for 80 years?

Fantastic book. I highly recommend it for anyone interested in the structure of scientific thought and its various implications.

A.I. and Games and Musings | 20 Jul 2007

Chess genius Bobby Fischer once tried to popularize his own version of the game. It replaced the standard starting arrangement of pieces with a randomized back row, making the players’ knowledge of the standard opening plays irrelevant. Fischer was reacting against the trend toward increasing memorization of lines among the chess elite; his version of the game would force players to rely instead on their innate talent.

I think he felt that if one plays moves according to the “book”, one isn’t really playing a game so much as participating in a mechanical process that might as well be automated. Of course, playing chess has increasingly been automated—culminating in the famous Kasparov vs. Deep Blue series in which the super computer defeated the super human. Computers typically don’t play chess openings well, and so Deep Blue employed a “book” of many, many game openings and chose its moves from that. Deep Blue, in other words, was playing “from memory”, exactly what Fischer didn’t like human players doing.

I was talking about computer game playing with a chess-playing friend and he remarked that against machines, one plays “anti-computer moves”—that is, unconventional plays that will force the computer to abandon its “book” early and switch to heavy calculations instead. This is what Kasparov tried to do in ’96: force the computer off its script as early as possible.

It’s probably a good thing that Fischer played chess and not checkers. For a number of years, a checkers program (Chinook) built by Jonathan Schaeffer’s group at the University of Alberta has been better than the best humans. That program, while essentially unbeatable, was not actually perfect. It is now, though.

I read today that checkers has been “solved”. Schaeffer and his group have crunched the numbers, played out every possible avenue, and have proved that it is always possible to force a draw. You can only win at checkers if your opponent makes a mistake. What’s more, they’ve saved this information in a giant database, which you can “play” against (but never can you win).

It turns out that playing perfect checkers doesn’t have much to do with checkers: instead, it’s a problem of searching a huge database.

The question I find myself pondering: is checkers any fun anymore? It’s certainly not much fun to play against Schaeffer’s program, but what about against another human?

There are something like 10^20 possible checkers positions. As the human brain only has around 10^11 neurons, it’s a fairly safe bet that no human will ever memorize their way to perfect play. Still, does knowing that, at every juncture, a perfect move has already been found and recorded in a database ruin the game? The checkers player can no longer aspire to invent a perfect game; he can only rediscover what has already been written.

I wonder how long until someone solves Chess…

A.I. and Games and Musings | 18 Dec 2006

I stumbled on a rather unusual approach to Computer Go. (Go being a popular Asian strategy game; getting computers to play it is the subject of my thesis.) Computer Go is an interesting research problem for AI because the “standard” min/max search techniques that have worked so well in Chess and other games don’t work.

There are two important reasons for this. The first: the number of moves available to a player on a given turn is much larger in Go than in other games (roughly 10 times the number available in chess, for example), so considering all available moves, plus all possible replies to each one, plus all possible rejoinders to each possible reply, etc., becomes unwieldy very quickly. (This is exponential growth, a phenomenon found in this approach to all games, but for most games the large numbers involved are still tractable for a number-crunching computer.) The second: unlike in chess and similar games, the question of which of two positions is better cannot be answered easily in Go. Taken together, this means that in Go there are many more moves to consider, and greater difficulty in “considering” them. The end result is that Computer Go programs play badly, and often slowly as well.
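To get a feel for how quickly that blows up, here is a quick back-of-envelope sketch (mine, not from the post, using commonly cited ballpark branching factors of roughly 35 legal moves per turn in chess and roughly 250 in Go):

```python
# Size of a full-width game tree after a few plies, using rough
# average branching factors (assumed figures, for illustration only).
CHESS_BRANCHING = 35
GO_BRANCHING = 250

for plies in (2, 4, 6, 8):
    print(f"{plies} plies: chess ~{CHESS_BRANCHING ** plies:.1e}, "
          f"Go ~{GO_BRANCHING ** plies:.1e}")
```

By 8 plies the Go tree is already millions of times larger than the chess tree, and the gap keeps widening with depth.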

Various people are working on a technique they call “Monte Carlo Go”. (For an outline of “Gobble”, the first program to use this technique, go here). The basic idea is this: to test each candidate move, make that move and then play out the rest of the game making random moves. Do this several thousand times, making note of the final score each time. Choose the move that scores the best.

The advantages are twofold. One, there is no search through an exponentially growing game tree, since a fixed number of random games is played at each juncture (though this can still be slow if the number of random games is high enough). Two, move evaluation is easy to perform, since the only positions to evaluate are final positions, where all that needs to be done is to compute the score and see who won.
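The core loop is easy to sketch. Here is a toy version of the idea (my own code, and demonstrated on tic-tac-toe rather than Go, since full Go rules would swamp a blog post): for every legal move, play many random games to the end and keep the move with the best average result.

```python
import random

# Flat Monte Carlo move selection in the spirit of Gobble, shown on
# tic-tac-toe. The board is a list of 9 cells: 'X', 'O', or None.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, to_move):
    """Play uniformly random moves to the end; return 'X', 'O', or None."""
    board = board[:]
    while True:
        w = winner(board)
        if w is not None:
            return w
        empties = [i for i, cell in enumerate(board) if cell is None]
        if not empties:
            return None  # draw
        board[random.choice(empties)] = to_move
        to_move = 'O' if to_move == 'X' else 'X'

def monte_carlo_move(board, player, playouts=200):
    """Try each legal move, score it by random playouts, keep the best."""
    opponent = 'O' if player == 'X' else 'X'
    best_move, best_score = None, -1.0
    for move in [i for i, cell in enumerate(board) if cell is None]:
        trial = board[:]
        trial[move] = player
        total = 0.0
        for _ in range(playouts):
            w = random_playout(trial, opponent)
            total += 1.0 if w == player else 0.5 if w is None else 0.0
        if total / playouts > best_score:
            best_move, best_score = move, total / playouts
    return best_move

# X has two in a row; completing it wins every playout outright.
board = ['X', 'X', None, 'O', 'O', None, None, None, None]
print(monte_carlo_move(board, 'X'))  # prints 2 (the winning cell)
```

Swap in a Go board, Go rules, and territory scoring and you have (in miniature) the Gobble scheme the post describes: no game tree, just playouts and averages.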

How do such programs play? Quite badly, even by computer standards. However, the original Gobble didn’t “know” anything but the rules of the game. There have been some attempts to introduce Go knowledge to improve playing ability: for example, an early program by this fellow doesn’t choose moves that fill a player’s own eyes (a usually suicidal move). That program plays slightly better, but seems to play a rather odd game of making small amounts of tightly-defended territory in the middle of the board while leaving the edges to its opponents, thus losing badly.

Putting aside playing ability, this approach to Go, while fairly typical of AI techniques, is completely unlike the way a real Go player operates. Even if a real player had the superhuman ability to play out 10,000 random games per second, doing so would not help them nearly as much as playing normally. Not only is this unfeasible for a human player, it is quite unnatural. When was the last time you solved a problem by trying a bunch of random solutions and then picking the “most promising”?

To paraphrase David Parnas, I’m starting to think the name “Artificial Intelligence” is very apt, in that it relates to intelligence in much the same way “Artificial Flavor” relates to flavour: AI goes to great lengths of fakery to create the illusion of the real thing. This is a bit of a shame, since it would be much more productive to learn the real principles of intelligence. And yes, there are such things: if intelligent creatures like ourselves can be created by nothing more magical than the process of natural selection, then the principles of intelligence itself, while highly complex, are not inherently unknowable. To think otherwise is to believe in eyes but to think optics is ineffable.

A.I. and Games and Musings | 04 Dec 2005

I’ve been thinking about game AI lately. I suppose this is natural: I like games, and I like AI.

I recently played through Halo single player and was fairly impressed by the enemies. Enemies run for cover, try to flank you, dodge your grenades, retreat when wounded. Having written a little game code now and again, I can appreciate how hard it is to make AI characters move around the game world in an intelligent way, let alone do the things the Halo enemies do.

I found this neat talk given by some of the Halo AI programmers. One of the things that they stressed was that it is less important to make an unbeatable AI and more important to make an understandable AI: The player should be able to tell why the AI character did what it did.


A.I. and Musings | 14 Sep 2005

I was supposed to be reviewing some things about predicate logic for school and ended up reading about Bertrand Russell. Russell, for any who don’t know, is one of the most important philosophers and logicians of the twentieth century.

I’m reading about Russell and I am thinking about progress. Here we are in the twenty-first century; the kings of the world and the lords of history. We’ve mapped the globe and flown in space. We can communicate instantly with anyone anywhere (I often find myself in an MSN chat with a group of people from such diverse regions as Bulgaria and Singapore, Poland and Australia). We’ve split the atom (why doesn’t anyone laugh at that? It’s a joke! “A-tom” = in-divisible. And we’ve divided it…). We have invented (discovered?) math and logic.

Funny thing about math:

  • We’ve been counting for at least 10,000 years
  • We grasped the concept of natural numbers 4000 or 5000 years ago. What I mean by this is that we’ve learned to say “1,2,3,4…” instead of “1 sheep, 2 sheep, 3 sheep, 4 sheep” – we have just the (abstract) numbers themselves apart from those things we are counting. Took us 5000 years to come up with that…
  • We’ve been using coordinate vectors for about 400 years, thanks to Descartes and his Cartesian system. A vector basically groups several numbers together, for example to give the dimensions of, say, a monolith in terms of length, width and height: (1,4,9).
  • We figured out the basic axiomatic definition of natural numbers around 1900, thanks to Peano and Dedekind.
  • That’s a bit like saying “I’ve been struggling with tying my shoes all my life and I just figured out what shoe laces actually are last Tuesday”.
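That axiomatic definition turns out to be astonishingly small. Here is a toy rendering of the Peano/Dedekind idea in Python (my own illustration, with invented names): the naturals are whatever “zero” plus a repeated “successor” operation generates, and addition falls out by recursion.

```python
# Peano-style naturals: a zero and a successor operation, nothing more.
# Numbers are represented as nested tuples; all names are my own.

def zero():
    return ()

def succ(n):
    return (n,)

def add(m, n):
    # Peano addition: m + 0 = m;  m + succ(k) = succ(m + k)
    return m if n == () else succ(add(m, n[0]))

def to_int(n):
    """Unwrap a Peano numeral back into an ordinary int for display."""
    return 0 if n == () else 1 + to_int(n[0])

two = succ(succ(zero()))
three = succ(two)
print(to_int(add(two, three)))  # prints 5
```

Five thousand years to get from counting sheep to that handful of rules.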

Speaking of “progress”: besides being a logician, Bertrand Russell was a great left-wing thinker and activist. He campaigned for women’s suffrage in Britain between 1904 and 1910. Yeah, that’s right: democracy was invented by the ancient Greeks around 500 BC, and women haven’t even had the vote for 100 years yet (women were only declared “persons” in Canada in 1929).

Speaking of ancient Greeks: we still haven’t lived up to their directive of “know thyself”. We don’t understand the workings of our own minds. I like to think that we really only got serious about the question when we started trying to build intelligence from scratch (but A.I. is my area, so I am biased), and that only got started when Alan Turing published “Computing Machinery and Intelligence” in 1950.

There are a lot of people who think we are living in something known as the “end times”. Personally, I think we’re just getting started.

A.I. and Musings | 12 Sep 2005

Fun fact for the day: our brains store about 1/1,000,000 of incoming visual information.

Fun facts #2, 3, 4:

  • We can make CCDs with better resolution than the human eye
  • We can make processors that run faster than anything in the human brain
  • Computer vision is not in the same league as human vision
  • According to my supervisor: “We’re using the wrong technology”.

He also observed (and I had noticed this as well) that we like to think our brains/minds work like whatever the best available technology happens to be. In Hobbes’ day, they thought cognition worked like a mechanical watch. In the early 20th century, we were sure our brains were like telephone relays. Now computers are the hot thing, and we like to think of the human brain as an organic computer. We’ve even “solved” the mind/body problem: the mind is the software running on the hardware of the brain.

I wonder what we will invent in the next twenty years…

What we need more of is Science! Not shoddy science reporting

A.I. and News | 08 Sep 2005

In addition to my as-yet-unassigned marking duties and general plugging away at my Master’s thesis, I am taking exactly one course this term, and I just got home from the first lecture. I don’t actually need to take a course this term, as I have completed all of the requirements for my degree, but this one is a new course being offered by my supervisor, so I decided to sign up. What is that course, you ask?

“Inductive Informatics”. If you google that, the first thing you will find is my supervisor’s website. This is because he coined the term. When people ask me what I am doing my thesis on, I usually end up saying something like “Artificial Intelligence, only with an unconventional approach. Specifically, I am applying a new kind of math to the 3000-year-old Oriental game Go, which computers are very bad at playing”. It’s that “new kind of math” that is covered by Inductive Informatics. (NOTE: if you would prefer to continue thinking of me as a nice, normal, well-adjusted human being without delusions of mathematics, you should probably stop reading at this point.)

“Get on with it”, you say, “what’s this Inductive Informatics all about then?” The short answer is that there is no short answer, and I haven’t even come up with a long answer that I am entirely happy with. Here’s the general flavor: in Computer Science (a.k.a. “Informatics”) we study, you guessed it, information. However (my research group suggests), we don’t spend nearly enough time thinking about how that information is represented; we just grab the nearest formalism (usually vector spaces) and go. And it turns out that numbers have all sorts of shortcomings.

As to the “Inductive” part: a lot of work in A.I./Machine Learning/Data Mining/Computer Vision/generally-trying-to-make-computers-more-clever deals with the problem of classification. A common example: should this incoming email message be classified as spam, or is it a useful message? A lot of spam filters work by “learning” to sort messages based on a series of examples. Induction is this process of going from a (finite) set of specific examples to a general rule for classifying all (infinitely many) items. It is, in some sense, the ultimate shortcut, going from the finite to the infinite, and I am sure you can see how doing it properly would be very, very useful, while getting it wrong results in that email from your Aunt Mabel being classified as spam. Unfortunately, most machine “learning” we have right now does induction poorly, but that’s a rant for another time.
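To make the induction step concrete, here is a deliberately crude sketch (my own invention, with made-up training messages, nothing like a production filter): learn a rule from a finite set of labelled examples, then apply it to mail the rule has never seen.

```python
from collections import Counter

# Toy induction-as-classification: count which words appeared in spam
# vs. non-spam examples, then label new text by the heavier side.
# All example messages below are invented for illustration.

def train(examples):
    """Tally word occurrences per class from (text, label) pairs."""
    counts = {'spam': Counter(), 'ham': Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label unseen text by which class its words were seen in more."""
    words = text.lower().split()
    spam_score = sum(counts['spam'][w] for w in words)
    ham_score = sum(counts['ham'][w] for w in words)
    return 'spam' if spam_score > ham_score else 'ham'

examples = [
    ("cheap pills buy now", 'spam'),
    ("win money now", 'spam'),
    ("lunch with aunt mabel", 'ham'),
    ("thesis draft attached", 'ham'),
]
model = train(examples)
print(classify(model, "buy cheap pills"))   # prints spam
print(classify(model, "aunt mabel lunch"))  # prints ham
```

The leap from four examples to a rule for all possible messages is the induction, and you can already see how fragile it is: a message from Aunt Mabel about “cheap pills” for her arthritis goes straight to the spam folder.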

“Well,” you say, “that’s a lot of nice talk, but what is it that you actually *do* in Inductive Informatics?” Maybe I will write up some details on my research sometime for any interested parties. Until then: my research group has developed a language called the “Evolving Transformation System” (the aforementioned “new kind of math”) for describing objects based on generative history instead of the usual surface-feature approach. Objects are actually epiphenomenal: it is the view of reality as a process that matters. ETS formalizes the methods of going from individual objects to class descriptions, and allows for objects and classes to be naturally described in the same language, which is rather unprecedented. ETS also has built-in facilities for naturally “chunking” information into manageable bits by allowing the construction of a multi-leveled hierarchy of description. I’m sure that all sounds frightfully abstract (actually, it kind of is…) and I am sorry for laying it on you like this. The upside is that I get to spend way more time talking about Heraclitus (and other philosophers) than your average computer science student.

Anyway, there’s a peek into what I am actually doing with myself these days. We now return to your regularly scheduled programming.