Games offer a predictable, controlled environment for developing and testing artificial intelligence algorithms. Tic-tac-toe is usually the first adversarial game people learn as children, which makes it ideal for a class teaching the basics of writing game-playing algorithms. More advanced algorithms tackle timeless games like chess and Go.

While those games are extremely challenging, they fail to represent many of the tasks that are interesting to pursue in artificial intelligence research. For some of these research areas, researchers turn to video games.

I've seen research results presented for playing various classic Atari 2600 arcade games. One example was Google DeepMind's algorithm playing Breakout in a super-efficient and very non-human way: it tunneled through one side of the brick wall, then racked up points by bouncing the ball behind the bricks.

What I hadn't realized until today was that there's a whole infrastructure built up for this type of research. Anybody who wishes to dip their toes in this field (or dive in head first) doesn't have to recreate everything from scratch.

This infrastructure for putting AI at the controls of an Atari 2600 is available via the Arcade Learning Environment (ALE), which is built on top of an Atari emulator and exposes all the inputs and outputs in a program-friendly (instead of human-friendly) manner. I learned of this while reading about Maluuba's announcement of their Hybrid Reward Architecture, which they applied to an agent that learned to earn the maximum possible score in Ms. Pac-Man.
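
For a sense of what "program-friendly" means, here's a minimal sketch of a random-action agent using ALE's Python bindings (the ROM filename is a placeholder, and exact package and method names have varied across versions):

```python
import random

from ale_py import ALEInterface  # Python bindings for the Arcade Learning Environment

ale = ALEInterface()
ale.loadROM("ms_pacman.bin")  # placeholder path to a game ROM file

# The legal joystick/button inputs for this game, as a list of plain integers.
actions = ale.getLegalActionSet()

total_reward = 0
while not ale.game_over():
    # Feed an input and advance the emulator one frame;
    # the return value is the resulting change in score.
    total_reward += ale.act(random.choice(actions))

# The screen is just a pixel array -- a natural input for a learning algorithm.
screen = ale.getScreenRGB()
print("Episode score:", total_reward)
```

Swap the random choice for a learned policy and you have the basic loop most of this research runs on.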

And if getting ALE from GitHub is still too much setup work, people can go to places like OpenAI Gym, which packages ALE games (and much more) into ready-made algorithm training environments. All it takes is a working knowledge of Python to access everything that is available.
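
As a rough illustration of how little code that takes, here's the classic Gym loop for Ms. Pac-Man (a sketch against Gym's original API; environment names and step return values have shifted in later versions):

```python
import gym

# Gym wraps ALE games as named environments.
env = gym.make("MsPacman-v0")
observation = env.reset()

done = False
total_reward = 0
while not done:
    # A real agent would choose an action based on the observation;
    # here we just sample a random legal action.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    total_reward += reward

print("Episode score:", total_reward)
env.close()
```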

I'm impressed by how the barriers to entry have been removed for anybody interested in getting into this field of AI research. The only hard parts left are... well, the actual hard parts of algorithm design.