Artificial intelligence has always been an important part of Blizzard’s games. All of the monsters, space marines, demons, and murlocs we’ve slain have required AI to give us something to play against. Today, Blizzard and some of the clever engineers at Google’s DeepMind discussed AI that might change the future of StarCraft.
The panel opened with a discussion of how DeepMind builds AI. Their mission is to “solve intelligence, and use it to solve everything else.” Intelligence, by their definition, is “the ability to learn to perform well over a wide range of environments.”
They aimed for a General-Purpose Learning Machine: one that learns automatically from raw inputs and can operate across a wide range of tasks. Additionally, their intelligence needs to learn progressively, improving on everything it does. They call this their Reinforcement Learning Framework, in which an AI has a certain goal (accruing the most points), is placed in an environment (such as a game), and pursues that goal by managing game aspects the way a player would.
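That loop of goal, environment, and score-driven learning is the core of reinforcement learning. As a rough illustration (this is a generic textbook sketch with a toy environment, not DeepMind’s actual code), a minimal tabular Q-learning agent learns purely from the reward signal:

```python
import random

class ChainEnv:
    """Toy environment: a 5-cell corridor; reward only at the right end."""
    actions = ["left", "right"]

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        if action == "left":
            self.pos = max(0, self.pos - 1)
        else:
            self.pos += 1
        done = self.pos == 4
        return self.pos, (1.0 if done else 0.0), done

def q_learning(env, episodes=200, alpha=0.1, gamma=0.99, epsilon=0.2):
    """Tabular Q-learning: learn action values from score feedback alone."""
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            values = [q.get((state, a), 0.0) for a in env.actions]
            # Explore on an epsilon roll, or when nothing is known yet.
            if random.random() < epsilon or len(set(values)) == 1:
                action = random.choice(env.actions)
            else:
                action = env.actions[values.index(max(values))]
            next_state, reward, done = env.step(action)
            best_next = max(q.get((next_state, a), 0.0) for a in env.actions)
            old = q.get((state, action), 0.0)
            # Move the estimate toward reward plus discounted future value.
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = next_state
    return q
```

After a few hundred episodes the agent values “right” over “left,” having learned that only from the score, never from any scripted rules.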
The AI’s neural network interfaces with the video game environments by looking at the same pixels humans see, not by accessing backend code or numbers. The Atari 2600 was a perfect test bed for the AI, with 50+ classic games from the 1980s. The AI received each game’s pixels as inputs, the goal being simply to maximize the score, learning everything from scratch. The kicker is that a single AI system played all the games; it wasn’t scripted for each one.
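Feeding raw pixels to a network usually involves some preprocessing. In the spirit of the Atari work (this is a generic sketch, not DeepMind’s exact pipeline), a common recipe is to grayscale each frame, downsample it, and stack the last few frames so that motion is visible in a single observation:

```python
from collections import deque
import numpy as np

def preprocess(frame_rgb, out_size=84):
    """Grayscale an (H, W, 3) frame and crudely downsample it."""
    gray = frame_rgb.mean(axis=2)            # average the color channels
    h, w = gray.shape
    rows = np.linspace(0, h - 1, out_size).astype(int)
    cols = np.linspace(0, w - 1, out_size).astype(int)
    return gray[np.ix_(rows, cols)] / 255.0  # scale intensities to [0, 1]

class FrameStack:
    """Hold the last k frames so one observation captures motion."""
    def __init__(self, k=4, size=84):
        self.frames = deque([np.zeros((size, size))] * k, maxlen=k)

    def push(self, frame_rgb):
        self.frames.append(preprocess(frame_rgb))
        return np.stack(self.frames)         # shape (k, size, size)
```

The stacked tensor is what the network sees: no game state, no backend numbers, just recent screen contents.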
StarCraft is a different beast altogether compared to the Atari games. DeepMind doesn’t consider things it can’t see, unlike a human player, who remembers units that have walked off screen. Fog of war has many implications here, and the variety of units and races with unique playstyles means there’s a lot for the AI to account for.
Kevin Calderone, software engineer for StarCraft II, comments that the depth of the game is the perfect test for AI development. Previously, StarCraft used entirely scripted AI, not intelligent, learning AI like DeepMind. DeepMind teaches itself from raw images and then develops its own strategy, which hasn’t been done before. This leads to some interesting applications, such as:
- AI learning from match replays
- AI learning from its own matches against itself
- Humans playing with AI to learn, or watching AI matches to improve
- Using the AI to simulate matches for balancing, coaching players, and debugging
They will release the official StarCraft II API as a package to help test, develop, and enhance AI. It includes image-based and scripted AI, documentation, example code, and AI vs. AI play. It’s set to release in the first quarter of 2017, and a blog post will soon go live with more details on the official site.
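The panel didn’t show what the API will actually look like, so every name below is purely hypothetical, but an AI vs. AI harness built on top of such a package might be shaped something like this:

```python
# Hypothetical sketch only: these classes and methods are illustrative,
# not the actual StarCraft II API, whose details weren't shown at the panel.
class EchoAgent:
    """Trivial agent that always issues the same order."""
    def __init__(self, order):
        self.order = order

    def act(self, observation):
        return self.order

def run_match(agent_a, agent_b, steps=10):
    """Step both agents against a shared observation, logging each turn."""
    log = []
    for step in range(steps):
        observation = {"frame": step}  # a real API would supply image data
        log.append((step, agent_a.act(observation), agent_b.act(observation)))
    return log
```

The appeal of a harness like this is exactly what the panel listed: the same loop that pits two AIs against each other can replay matches for balancing, coaching, or debugging.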
- AI for Atari games got really good over time.
- An in-house player was used as a benchmark for human-level gameplay.
- Over the last 2 years, the AI became 10x more proficient than humans at most of the Atari games.
- Pac-Man was a challenge for the AI since so many elements were off screen and unavailable in the pixels it analyzes, and similar issues occurred with Montezuma’s Revenge.
- The AI (called AlphaGo) mastered Go over the years and challenged top human players of the game.
- Introducing the AI to StarCraft is a challenge for many reasons.
- The team wants to make sure the AI’s actions per minute (APM) are on par with a human’s.
- The AI will be restricted to the same controls and methods a player is, such as selecting units.
- Fair AI makes the game fun because it can be beaten, not because it can dominate every match.
- Allowing the AI to run super fast over time has revealed bugs in the game engine, which the team has been able to fix.
- The AI will struggle to learn StarCraft because it doesn’t have the same resources humans do–it will take many thousands of concurrent games to test.
- The AI has many implications outside of gaming, such as medicine.
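The APM cap mentioned above can be enforced with a simple rate limiter. This is a generic sketch of the idea (the panel didn’t show how the restriction is actually implemented): actions arriving faster than the budget allows are simply rejected.

```python
class APMLimiter:
    """Reject actions that would exceed an actions-per-minute budget."""
    def __init__(self, max_apm=300):
        self.min_interval = 60.0 / max_apm  # seconds between allowed actions
        self.last_time = float("-inf")

    def allow(self, now):
        """Return True if an action at time `now` (seconds) fits the budget."""
        if now - self.last_time >= self.min_interval:
            self.last_time = now
            return True
        return False
```

With `max_apm=60`, the limiter permits at most one action per second, forcing the AI to win through strategy rather than superhuman clicking speed.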