Forever Learning

Forever learning and helping machines do the same.

Archive for the ‘Artificial Intelligence’ Category

Future Felony

leave a comment »

Written by Arthur C. Clarke in 1976, Imperial Earth is set in faraway 2276.

As the beautiful old car cruised in almost perfect silence under the guidance of its automatic controls, Duncan tried to see something of the terrain through which he was passing. The spaceport was fifty kilometers from the city—no one had yet invented a noiseless rocket—and the four-lane highway bore a surprising amount of traffic. Duncan could count at least twenty vehicles of various types, and even though they were all moving in the same direction, the spectacle was somewhat alarming.

“I hope all those other cars are on automatic,” he said anxiously.

Washington looked a little shocked. “Of course,” he said. “It’s been a criminal offence for—oh, at least a hundred years—to drive manually on a public highway. Though we still have occasional psychopaths who kill themselves and other people.”

The future sounds fascinating, but I want my Google Driverless Car now.

Written by Lukas Vermeer

September 17, 2012 at 16:51

Understanding

with one comment

Derek Jones posits that “success does not require understanding”.

In my line of work I am constantly trying to understand what is going on (the purpose of this understanding is to control and make things better) and consider anybody who uses machine learning as being clueless, dim witted or just plain lazy; the problem with machine learning is that it gives answers without explanations (ok decision trees do provide some insights).

Problem solving versus solving problems.

As one who specializes in using machine learning, I obviously resent being called “clueless, dim witted or just plain lazy”. However, I feel a larger point should be made here. Success does most definitely require understanding, but not necessarily of how one particular instance of a solution came about.

To be successful in any machine learning effort, one needs an intricate understanding of what the problem is and of how techniques can be applied to find solutions. This is a more general form of understanding, one which puts more emphasis on the process of finding workable models than on applying those models to individual instances of a problem. Comprehension of problem solving over understanding of a particular solution.

Driving a black box.

Consider the following example. To me, the engine of my car is a black box; I have very little idea how it works. My mechanic does know how engines work in general, but he cannot know the exact internal state of the engine in my car as I am cruising down the highway at 100 miles per hour. None of this “lack of understanding” prevents me from getting from A to B. I turn the wheel, I push the pedal and off we go.

In essence, my mechanic and I have different levels of understanding of my car. But importantly, at different levels of precision, the thing becomes a black box to each of us; in the sense that there is a point where our otherwise perfectly practical models break down and are no longer able to reflect reality. In the end, it’s black boxes all the way down.

Chasing shadows.

Models are merely tools to help us navigate a vastly complex world. Much like machine learning models, a scientific model might work in many cases without being correct. Newton’s law of universal gravitation works in a great many cases, yet we know for a fact that that particular model is wrong; and I sincerely hope many others are just as incorrect.

There will always be limits to our understanding. The fact that we have a model that can help us predict does not necessarily mean we have correctly understood the nature of the universe. All models are wrong, but some are useful.

Reality is simply much too complicated to be captured in a manageable set of rules, but even incomplete (or incorrect) models can provide insight and help us better navigate this world. Machine learning is successful, precisely because it can help us find such models.

[ Peter Norvig has written an excellent piece on this subject in relation to language models. ]

Written by Lukas Vermeer

August 15, 2012 at 11:01

Connecting Four in the Cloud

with 4 comments

It almost seems like everyone has their head in the cloud these days. And it’s not all just hot air and water vapor. Infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) are truly revolutionizing the corporate computing industry.

That is why, for the past few months, my good friend Matt Feigal and I have been collaborating with budding startup Cloudular to bring you the next logical evolutionary step in cloud computing. Inspired by the skyward ascent of hardware, middleware and software, we are proud to bring you vaporized wetware; or “artificial intelligence as a service” (AIaaS).

I’m mostly kidding, of course, but here is something I cooked up over the weekend: a web service that plays connect four (based on an earlier post) and is looking for worthy sparring partners.

If you think you can code a better connect four algorithm (and you probably can, especially since I’ve deliberately lobotomized this particular version of my implementation), head on over to github and build your own to compete against mine. All the code and an interface description are available there. The service itself is available on Google App Engine.

I’ve got (part of) my head in the cloud, what about you?

Written by Lukas Vermeer

July 5, 2012 at 13:31

The Middle Way

with one comment

James Taylor is spot-on.

Too many analytic professionals think that only the data speaks and that business rules are, as someone once said to me, “for people too stupid to analyze their data”. Similarly too many IT professionals think that everything can be reduced to business rules or to code using explicit analysis. The reality for most decisions is somewhere in between.

In order to truly achieve business transcendence one must follow the Middle Way.

Written by Lukas Vermeer

May 2, 2012 at 14:59

Predicting Pi

leave a comment »

A few weeks ago, I showed a colleague* my little visual demo of my favorite algorithm for estimating the number Pi. His immediate response was so obvious I am almost ashamed it did not occur to me before he mentioned it.

“RTD could do that!”

Turns out, he is of course right. Although Oracle Real-Time Decisions was certainly not designed for the task, it does a pretty good job of predicting Pi, given the right inputs and some mathemagic.

To reiterate, the idea behind the original algorithm is that we throw a bunch (well, actually quite a lot) of darts at a square board (uniformly distributed, of course). We then count the number of darts that land in the largest circle we can fit in this square. The ratio between the darts in the circle and the total number of darts thrown, multiplied by four, happens to approximate Pi for reasons explained in an earlier post.

The key thing to understand when using RTD to implement this algorithm is that the ratio described above also represents the odds that a single dart will land in the circle. Put more simply, the number of darts that land in the circle divided by the total number of darts thrown equals the chance of a dart landing in the circle. If we can predict the odds of hitting the circle, we can predict Pi; and RTD is pretty good at making predictions.
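
To make that arithmetic concrete, here is a minimal sketch (in plain JavaScript, not taken from any of my implementations) of the dart-throwing estimate itself: the fraction of darts that land inside the circle approximates the probability of a hit, and four times that probability approximates Pi.

    // Throw n random darts at the square [-1, 1] x [-1, 1] and count how many
    // land inside the largest circle that fits in it (the unit circle).
    function estimatePi(n) {
      let inCircle = 0;
      for (let i = 0; i < n; i++) {
        const x = Math.random() * 2 - 1; // uniform over [-1, 1)
        const y = Math.random() * 2 - 1;
        if (x * x + y * y <= 1) inCircle++; // dart landed inside the circle
      }
      // The hit ratio approximates the probability of landing in the circle;
      // multiplying by four approximates Pi.
      return 4 * inCircle / n;
    }

    console.log(estimatePi(1000000)); // something close to 3.14159...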

The Setup.

I’ll run through the RTD setup briefly. If you’re interested in a more detailed explanation of how I did this, feel free to drop me a line.

RTD can only predict the likelihoods related to choices, so we’ll need a couple of those. For this experiment, we’ll pretend we have a choice of landing the dart inside or outside the circle. We’ll then diligently record whichever of those two happened to be the case each time we throw a dart so that RTD will learn to predict how likely either outcome is.

[ You can click on the screenshots below for a closer look. ]

Two choices modeled in RTD representing a dart landing inside or outside the circle.

The "In Circle" choice

As you can see, I’ve already included the likelihood as a choice attribute to be returned to the calling application. This attribute will be populated by a model aptly named Pi.

RTD model configuration screen for the "Pi" model.

The "Pi" model

The Pi model could hardly be simpler. It will predict the likelihood of mutually exclusive choices (hence the checkbox at the bottom of the configuration) from the choice group Choices.

Note that Pi is configured as a simple so-called Choice Model, not the more common Choice Event Model. The latter, more complex model is more frequently used in practice, because it allows RTD to predict multi-step likelihoods (a customer clicked an ad, then put the product in his or her shopping basket, then proceeded to checkout, then completed the payment) for positive as well as negative events (e.g. the customer actively declines the offer).

We’ll use a client application similar to the one used in the earlier Pi experiments to simulate the throwing of darts. This application will also show the predicted value of Pi. For this client to tell RTD about darts thrown, and for RTD to respond with the predicted value, we will use a single advisor. The RTD server will generate a web service based on this configuration.

RTD advisor configuration screen with one boolean input parameter.

The "Get Prediction" advisor request

Each request to this advisor will tell RTD about a single dart thrown. The Dart Was In Circle boolean input parameter seems pretty self-explanatory. To process this input and feed the model we will need some simple logic.

Advisor logic to process the boolean input

Advisor logic

This is all that is needed to allow RTD to build a model that can predict the likelihoods for the darts landing inside or outside the circle.

[ Of course we also need a performance goal and decisions to allow the advisor request to return the two choices created, but we’re not really interested in that here. As long as the request returns the In Circle choice and its Likelihood attribute we’re happy. I’ll leave this part of the configuration as an exercise for the reader. ]
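
To tie the pieces together, a client loop along these lines would drive the advisor. This is only an illustrative sketch: callGetPrediction, and the shape of its response, are hypothetical stand-ins for whatever client stub wraps the web service RTD generates.

    // Hypothetical client: throw darts, report each one to the "Get Prediction"
    // advisor via its "Dart Was In Circle" input, and read back the Likelihood
    // attribute of the "In Circle" choice to predict Pi.
    async function throwDarts(n) {
      let prediction = NaN;
      for (let i = 0; i < n; i++) {
        const x = Math.random() * 2 - 1;
        const y = Math.random() * 2 - 1;
        const inCircle = x * x + y * y <= 1;
        const response = await callGetPrediction({ dartWasInCircle: inCircle });
        // The predicted likelihood of "In Circle", multiplied by four, predicts Pi.
        prediction = 4 * response.inCircle.likelihood;
      }
      return prediction;
    }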

The Result.

It takes RTD a while (about a thousand darts or so) to come up with a prediction at all. But when it does, it is pretty much spot on.

The prediction RTD makes seems to be less sensitive to fluctuations resulting from the randomized input. At times, RTD’s guess was even closer to the actual number Pi than the value calculated by the mathematical function I’d used in previous experiments (and of course, I couldn’t resist taking a screenshot when it was).

A screenshot of the RTD logs and the client application

Results

RTD is not only good for 986% ROI, but also for predicting Pi.

And this cake is no lie.

[* I forget who exactly suggested this. Either Alan, Simon or Tim. ]

Written by Lukas Vermeer

October 30, 2011 at 18:34

Connect Four and Minimax

with 3 comments

Computers are incredibly fast calculators. That makes them good at maths, but it does not make them smart. People who program computers have to invent clever ways to exploit that arithmetical ability to achieve intelligent behavior.

Using the force.

Often the simplest way to allow a computer to make the ‘right’ decision is to calculate all possible outcomes and select a path that leads to the best end result. When planning a route from A to B, we could consider every possible sequence of turns at every intersection between A and B. This so-called brute-force approach does not require a lot of thinking; it requires a lot of calculating, and computers are pretty good at the latter, less so at the former.

Sometimes it is possible to trim certain paths of exploration because we are certain they will lead to sub-optimal results. For example, when a sequence of turns leads us back to an intersection we have visited before along this route. But even then, this method quickly results in long computation times as the number of possible outcomes grows exponentially with the number of possible decisions the computer has to consider.
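
As a rough illustration of the idea (not code from any of my projects), a brute-force route search over a toy road network might look like this. Note how paths that revisit an intersection are pruned, and how the number of candidate routes still explodes as the network grows.

    // Enumerate every route from start to goal in a toy road network,
    // pruning any path that returns to an intersection it already visited.
    function findRoutes(network, start, goal, visited = new Set(), path = []) {
      visited.add(start);
      path.push(start);
      let routes = [];
      if (start === goal) {
        routes.push([...path]); // found a complete route
      } else {
        for (const next of network[start] || []) {
          if (visited.has(next)) continue; // prune: been here before on this route
          routes = routes.concat(findRoutes(network, next, goal, visited, path));
        }
      }
      path.pop();
      visited.delete(start);
      return routes;
    }

    // Four intersections, one-way roads.
    const network = { A: ["B", "C"], B: ["C", "D"], C: ["D"], D: [] };
    console.log(findRoutes(network, "A", "D")); // [["A","B","C","D"], ["A","B","D"], ["A","C","D"]]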

This is one of the reasons why we need a super-computer to beat a chess grandmaster. There are a lot of possible moves to consider in a game of chess.

Winning.

Another problem when programming computers to play a two-player zero-sum game like chess is that half of the decisions in the game are made by the opponent. The computer can decide which moves to make, but it has no control over what the opposing player does in response.

Even worse, assuming they both want to win, the two players have completely diametric goals. When plotting a potential path through solution space, we have to consider that our opponent probably wants to move towards a different solution altogether. The other player is actively trying to stop us from getting to where we’re going.

Mini and Maxi.

One easy way to deal with the uncertainty of the opponent’s decisions when looking ahead is to assume the other player will always make the move that leads to the worst possible result for the computer. The opponent is not trying to win; he is trying to stop the computer from winning. When plotting a path (a succession of possible moves), we alternate between making the best (maximum) move when it’s our turn and the worst (minimum) move when it is the opponent’s turn.

A game of connect four

Connect Four

This approach is called minimax. It is not particularly clever, but it does the trick and is relatively easy to implement.
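
As a rough sketch of the idea (the actual Connect Four implementation lives in the Github repository linked in the update below), minimax looks something like this. The game object and its helpers getMoves, applyMove, undoMove, isOver and evaluate are hypothetical placeholders for the game-specific parts.

    // Minimax: score a position by assuming we always pick the move with the
    // maximum score and the opponent always answers with the minimum one.
    function minimax(game, depth, maximizing) {
      if (depth === 0 || game.isOver()) {
        return game.evaluate(); // heuristic value from the computer's point of view
      }
      let best = maximizing ? -Infinity : Infinity;
      for (const move of game.getMoves()) {
        game.applyMove(move);
        const score = minimax(game, depth - 1, !maximizing);
        game.undoMove(move);
        best = maximizing ? Math.max(best, score) : Math.min(best, score);
      }
      return best;
    }

    // The computer's turn: pick the move whose minimax score is highest.
    function bestMove(game, depth) {
      let best = null;
      let bestScore = -Infinity;
      for (const move of game.getMoves()) {
        game.applyMove(move);
        const score = minimax(game, depth - 1, false);
        game.undoMove(move);
        if (score > bestScore) {
          bestScore = score;
          best = move;
        }
      }
      return best;
    }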

Connect Four.

Last year, a Microsoft SharePoint consultant I’d met at a customer site asked me to explain how to implement a computer opponent for a Connect Four game he was creating for Microsoft Windows Phone Mobile XP 7 Vista Home Edition (or whatever they call it nowadays). Instead of explaining how, I decided to build a simple example myself in JavaScript using minimax. I don’t think he ever finished his version of the game.

You can play my version here. Somewhat ironically, it is excruciatingly slow when using Microsoft Internet Explorer; so please consider using any other browser.


Update (June 29th 2012): The code for this example is available on Github.

Update (July 5th 2012): This is now available as a web service. Read this post.

Written by Lukas Vermeer

September 24, 2011 at 15:10

Mission Statement

with 3 comments

After reading Performance Leadership by Frank Buytendijk, it occurred to me that a good mission statement might not only be beneficial for organizations. The same idea could also be applied to people.

As a professional motto, a personal mission statement could give coworkers and employers an idea of what drives you to perform and perhaps provide some guidance to your career. It could give others an idea of the kind of person you are, without immediately baring your entire soul in public.

A personal mission statement could be an elevator pitch for your personality.

Frank’s book has a few pointers for creating a good mission statement.

Of course an effective mission statement is not about clever writing, but rather the implementation. However, if a mission statement doesn’t follow a few basic guidelines, it won’t work. First, a mission statement should be to the point. In many cases this means the statement will be short, but this is not necessarily the case. Furthermore, mission statements need to have external focus; mission statements describe a company’s basic function in society. Next, mission statements must be both specific and broad at the same time. It is vital to be specific on how a company adds value, but in the broader terms of what the products and services achieve for the customers. Lastly, a mission statement needs to be inspiring and truthful; it needs to invite stakeholders to buy into the value the company offers.

In short, a mission statement should be to the point, have external focus; it should be specific and broad, inspiring and truthful. Sounds simple enough, but proves difficult in practice.

After some thought and much introspection, I came up with the following mission statement for myself.

Forever learning and helping machines do the same.

What do you think? Do you have a mission statement?

Written by Lukas Vermeer

May 7, 2011 at 17:15
