Yesterday, a couple of us here at Brainworth went along to the University of Technology Sydney to listen to a talk by Brian Schwab, AI Lead on an unannounced Blizzard project and author of the upcoming book 'The Psychology of Game AI'.
Picture this: you're a guard working for an evil megalomaniac, bored at your post, when you hear a noise. You investigate, but after five seconds have nothing to show for it. Consider the stakes: a false negative means the base is infiltrated, word gets back to your boss about just who let the intruder slip through, and you're executed for incompetence. You'd imagine you'd be keen to explore for more than a mere five seconds, especially when the false positive carries the devastating cost of... having wasted a bit of time on a more thorough search.
Extrapolate this to a guard who finds their co-worker's dead body, so they know for certain there's an intruder, and having them simply return to their patrol afterwards makes them seem unfeasibly robotic and moronic.
Hence the usefulness, Schwab asserts, of a simple plot device: a phone rings and the guard runs off on some emergency or other, only to be replaced by a fellow guard who is as yet unaware of your presence.
Throughout the talk, Schwab went on to discuss in depth the many shortcuts the human brain takes when making decisions, taking the audience through cognitive biases and assumptions made by the brain which are seldom, if ever, represented in AI.
Even the term 'AI' rubs Schwab the wrong way; he posits 'Artificial Humanity' as a superior alternative.
Of particular note: most game AI will examine all available options and logically select the most appropriate one, based on its efficiency and the likelihood that it will produce the outcome best matching the AI's objectives (including, perhaps, going easy on the player due to difficulty settings). Most humans, by contrast, will have discarded many of the available options without properly considering each of them, especially in the heat of the moment.
Biases as simple as moving towards a more colourful or brightly lit area due to its intrinsic appeal would require separately programming those 'faults' into otherwise flawless AI logic.
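To make the idea concrete, here is a minimal sketch of what "programming a fault in" might look like: a purely logical agent picks the option with the highest utility, while a human-like agent adds a bias term, here an intrinsic pull towards brightly lit areas. The option names, scores, and weight are invented for illustration and aren't from Schwab's talk.

```python
# Hypothetical sketch: inject a human-like bias into otherwise
# "flawless" utility-based option selection.

def choose(options, bias_weight=0.0):
    """Return the option name with the highest combined score.

    With bias_weight=0 this is the cold, logical AI; a positive
    weight lets intrinsic appeal (brightness) sway the choice.
    """
    return max(
        options,
        key=lambda name: options[name]["utility"]
        + bias_weight * options[name]["brightness"],
    )

options = {
    "dark_shortcut": {"utility": 0.9, "brightness": 0.1},
    "lit_main_road": {"utility": 0.7, "brightness": 0.9},
}

print(choose(options))                   # logical AI: dark_shortcut
print(choose(options, bias_weight=0.5))  # biased agent: lit_main_road
```

The point is that the "fault" is an extra term bolted onto the scoring function, not something that falls out of the logic on its own.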
With AI being the first subject Brainworth intends to tackle, it will be interesting to see the variations people come up with as we invite each of our players to create an AI to drive a game of Snake.
Players will begin by teaching the snake to check the direction of the nearest apple and move towards it, and culminate in a complex, multi-faceted snake able to avoid blocking itself in amongst a winding and lengthy tail. Along the way we'll likely see variants even we don't expect, and we look earnestly forward to seeing our participants' creations.
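The first step of that progression can be sketched in a few lines: the snake ranks its possible moves by how much closer each brings it to the apple, then takes the best one that doesn't collide with its own body. The grid coordinates, move set, and function name are our own illustrative assumptions, not Brainworth's actual API.

```python
# Minimal sketch of the beginner Snake AI: head towards the nearest
# apple, but refuse any move that would hit the snake's own body.

def step_towards_apple(head, apple, body):
    """Return the (dx, dy) move that best closes the gap to the
    apple without colliding with the body, or None if boxed in."""
    hx, hy = head
    ax, ay = apple
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    # Prefer moves that reduce Manhattan distance to the apple.
    moves.sort(key=lambda m: abs(ax - (hx + m[0])) + abs(ay - (hy + m[1])))
    for dx, dy in moves:
        if (hx + dx, hy + dy) not in body:
            return (dx, dy)
    return None  # trapped by its own tail

# The snake at (2, 2) heads right towards an apple at (5, 2)...
print(step_towards_apple((2, 2), (5, 2), [(1, 2), (2, 2)]))  # (1, 0)
# ...but sidesteps when its body blocks the direct route.
print(step_towards_apple((2, 2), (5, 2), [(3, 2), (2, 2)]))  # (-1, 0)
```

The later, multi-faceted snake would replace the one-step collision check with something that looks ahead far enough to avoid sealing itself inside its own tail.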
When we go one step further and look at having two players' snakes compete against one another to see which has the superior intelligence, then we'll really have something worth paying attention to.