Since it was Halloween yesterday, let's start with a tale of horror. Not so long ago there was an experienced team, working with a known platform, and a known engine. They had just scored a popular girl-friendly license valued at roughly $160 million. Their game had the potential to hit as big as Pokemon or Nintendogs.
The designer ignored all this. You see, he had always wanted to make a Zelda clone...one with the critical element that has always been missing from all Zelda games: Hardcore jumping puzzles. The designer thought, "Nintendo is smart, but how could they have missed such an obvious improvement?" Sure, the license was targeted at tween girls, but tweens like big swords, don't they? This was the game he personally had always wanted to play.
The team was contractually obligated to go along with the design. It had been greenlit. There were milestones tied to the completion of each voluminous chapter of the tome-like design script. If the team missed the milestones, the penalties were extreme. So they crunched in happy little silos of artists, level designers, and programmers, all in accordance with a strict production schedule. It was the only possible way to get all the specified content done in time for the looming holiday ship date.
Finally, as the end drew near, they sent it off to testing. Early reports came back that the jumps in the first few levels were rather clumsy. The designer relied on his gut and sent forth an email containing a new set of parameters that were intended to polish the jump mechanics.
Eventually, a month later, someone got around to playing the last few levels. Uh oh. They relied heavily on laboriously constructed jumping puzzles tuned to the old jump mechanics. The last few levels of the game were massively broken.
It was too late to fix all the early levels so they entered into a death march to rework the last few levels. Lacking time or resources, they busted their budget hiring extra crew to take up the extra workload. The game was still delayed by several months.
Surprise, surprise, the end result wasn’t a very good game. It received miserable scores, but even worse, the core audience who would have bought it on the license alone was completely alienated by the design decisions. They wanted to befriend and grow cute little animals. They didn't want to die repeatedly while being attacked by spiky monsters while scrambling through military obstacle courses.
When the licensee pulled out of the sequel, the team collapsed. The human cost was immense. Homes were lost, families relocated, and many were so burnt out on game development that they left the industry permanently, their passion crushed.
There were a lot of problems in this tale, but the primary one was a blatant failure of the game design process at almost every level. The game designer really didn’t know what he was doing. He thought he was writing a movie script. He thought he was making a game for himself. He had no idea that the game systems he was dabbling in were deeply interconnected.
Most game design that occurs isn’t much better off. It is a combination of craft (such as cloning Zelda) and intuition (such as when he hoped that tweaking the jumping mechanics would fix all his problems). There is no science here. No predictability.
We have the ability to do so much better. We can create a science of game design.
If we want to modernize game design and move beyond the land of craft and intuition, we need to face and conquer four great challenges. These challenges will define game design for at least the next twenty years and perhaps longer.
- Modeling our players: What do our players really want?
- Generating new game systems: How do we create new mechanics that serve those player needs?
- Metrics: How and what do we test to make sure we are doing the right thing?
- Results: How do we get the results quickly and iterate on them so we can evolve the game quickly?
We are starting to see a smattering of theorists and teams with money to burn tackling these problems. They are creating systems that save their butts from expensive design mistakes. This is damned exciting.
- You’ve got the Halo folks tracking heat maps of where players die. Valve has been relying on metrics for years. Nintendo builds early player tests and kleenex testers right into their dev process.
- On the game systems side you’ve got Raph’s game grammar.
- We are starting to rely on real data to model players' moods and reactions, thanks to Chris Bateman and Nicole Lazzaro's work.
All these systems are being developed in parallel. You can measure things, but you don’t know what you are supposed to measure. You can write about game grammar, but it is never anything more than a loosely applied system of egghead analysis.
Maybe, just maybe, we can come up with a unified system that tries to answer multiple challenges simultaneously. The connections are all there. We just need to put them together.
In my Seattle laboratory, I've been working on one attempt. It mixes game grammar, player models and measurement systems into one delightfully unified game design process. I’ve got 10 minutes left to share it with you. Think I can do it?
I started with a player model. Let's assume for a moment that players are naturally inclined to wander about, sucking up tidbits of info in the hope of learning interesting new skills.
From the player model we can construct an atomic feedback loop that describes how they learn all the new skills. This basic atomic loop includes all the fundamental pieces of a game. We are taking the deconstructed analytic elements described in so many books and tying them back together into a functional system.
- You’ve got your game system, that black box in the center of the loops.
- You’ve got your player input
- You've got feedback to the player
- You have the player's cognitive model of the game.
We’ve reduced the scope to a single atom in order to make things manageable.
Press button, character jumps. That’s a skill atom.
Once we have a skill atom we can say interesting things about how the player interacts with it. First, skill atoms have states.
- The player can figure out how to jump. They can exercise that skill by jumping on things. That is mastery. We can record this.
- The player can fail to figure out how to jump. They never touch the button again. That’s early burnout.
- They can get bored with jumping and never jump again. That is late burnout. We can measure this as well.
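These states can be sketched as a tiny state machine. Here's a minimal illustration in Python; the state names and the mastery threshold are my own assumptions, not from the talk:

```python
from enum import Enum, auto

class AtomState(Enum):
    UNTOUCHED = auto()      # the player has never tried the skill
    PRACTICING = auto()     # the player is experimenting with it
    MASTERED = auto()       # used successfully and repeatedly
    EARLY_BURNOUT = auto()  # never figured it out, gave up
    LATE_BURNOUT = auto()   # mastered it, then stopped using it

class SkillAtom:
    """One learnable skill and the player's relationship to it."""
    MASTERY_THRESHOLD = 3   # arbitrary illustrative threshold

    def __init__(self, name):
        self.name = name
        self.state = AtomState.UNTOUCHED
        self.successes = 0

    def record_use(self, succeeded):
        if succeeded:
            self.successes += 1
        if self.successes >= self.MASTERY_THRESHOLD:
            self.state = AtomState.MASTERED
        elif self.state is AtomState.UNTOUCHED:
            self.state = AtomState.PRACTICING

jump = SkillAtom("jump")
for _ in range(3):
    jump.record_use(succeeded=True)
# jump.state is now AtomState.MASTERED
```

The point isn't the specific thresholds; it's that each state transition is an observable, recordable event.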
Skill atoms are chained together. You can visualize them as a directed graph. Later stage atoms depend heavily on early stage atoms.
Want to kill a Koopa? You need to jump on him. Better hope you mastered the jump skill. We can now represent that classic relationship created by Miyamoto ages ago in a visual model. The theory is slowly catching up with the experimentalists.
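That dependency structure can be written down directly as a directed graph. A small sketch, where the atom names are illustrative stand-ins:

```python
# A skill chain as a directed graph: each atom maps to the atoms
# it depends on. Names are illustrative, not from the talk.
SKILL_CHAIN = {
    "press_button": [],
    "jump": ["press_button"],
    "stomp_koopa": ["jump"],        # Miyamoto's classic dependency
    "clear_level": ["stomp_koopa"],
}

def prerequisites(atom, chain):
    """Every atom that must be learned before `atom` can be,
    found by walking the dependency edges transitively."""
    seen = set()
    stack = list(chain[atom])
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(chain[dep])
    return seen

print(sorted(prerequisites("stomp_koopa", SKILL_CHAIN)))
# ['jump', 'press_button']
```

If a player is failing at "stomp_koopa", the graph immediately tells you which upstream atoms to check first.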
You can turn these little chains into big chains that describe the entire game. Here’s a skill chain of Tetris.
Skill chains are remarkably flexible and rather easy to apply to almost any game. You look for the actions the user is performing, the skills they are learning and the positive / negative feedback you’ve built into the game. Any game with explicit rewards can be diagrammed.
There are probably a goodly number of you rolling your eyes at this point. You can create pretty diagrams to analyze anything. Here we've got someone who has created a very lovely and descriptive diagram of a penguin defecating. This is not a helpful diagram.
We ultimately need pragmatic everyday tools, not egghead analytics. The primary reason we create skill chains is to help solve two of our outstanding challenges:
- Get real results quickly
- Choose the right metrics so we aren't wading through huge quantities of data.
Skill chains can be used to create a rapid, iterative test driven game design process.
If we really want rapid feedback, let’s build the feedback system into the game from the very beginning. Skip the giant paper tome phase. Start with a playable system that gives you meaningful reports.
The nice thing about skill atoms is that they are eminently testable. When you write code that is going to be put in front of a player, define your skill atoms. It's the same conceptual idea behind writing unit tests.
- You have a test framework.
- You write the tests when you write game logic.
- You run the test suite when you run the game logic.
- You get a clean simple report when someone plays the game.
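As a rough sketch of what such a report might look like, here's a hypothetical reporting function; the atom names and format are my own invention:

```python
def play_report(session_log, atoms):
    """Produce a clean, unit-test-style report from one play session.
    `session_log` maps atom name -> count of successful uses."""
    lines = []
    for atom in atoms:
        uses = session_log.get(atom, 0)
        status = "OK" if uses > 0 else "NEVER TOUCHED"
        lines.append(f"{atom:<12} {status:>13} ({uses} uses)")
    return "\n".join(lines)

session = {"jump": 14, "stomp_koopa": 2}
print(play_report(session, ["jump", "stomp_koopa", "clear_level"]))
```

Just like a failing unit test, an atom marked "NEVER TOUCHED" points straight at the part of the design that needs attention.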
When you write your game systems, you can instrument each and every atom. It is a relatively inexpensive process.
- You label the rewards
- You label the actions
You know when an atom is touched. You know when it is inactive. All those states (burnout, inactive, and so on) can be recorded.
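A minimal sketch of that instrumentation, assuming a simple event recorder; the API and the inactivity threshold are my own assumptions:

```python
import time

class AtomRecorder:
    """Label each action and reward with its atom, timestamp every
    touch, and flag atoms that have gone inactive (possible burnout)."""

    def __init__(self):
        self.last_touch = {}   # atom name -> (event kind, timestamp)

    def touch(self, atom, kind):
        """`kind` is 'action' or 'reward'; both mark the atom active."""
        self.last_touch[atom] = (kind, time.time())

    def inactive(self, atom, threshold=300.0, now=None):
        """True if the atom was never touched, or hasn't been touched
        in `threshold` seconds. The threshold is arbitrary."""
        if atom not in self.last_touch:
            return True
        now = time.time() if now is None else now
        return now - self.last_touch[atom][1] > threshold

recorder = AtomRecorder()
recorder.touch("jump", "action")
recorder.touch("jump", "reward")
# recorder.inactive("jump") -> False; recorder.inactive("run") -> True
```

Because the labeling happens where the game logic already lives, the cost is a line or two per atom.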
Remember burnout? The next time someone plays the game, we can visualize burnout directly on our skill chain diagram. You see instantly what atoms folks are missing. Here is someone failing to figure out how to complete a single line in Tetris.
You can also look at the data in terms of feedback channels and activity graphs.
Either way you get quick, easy to decipher feedback.
- Instead of having a team that creates customized visualizations tailored to your game, you can use a more generalized system.
- Instead of sorting through dozens or hundreds of badly organized logs, you can see in a glance where problems are occurring.
This requires a change in your development methodology. You want people to play your game as early as possible and as often as possible. Luckily automated testing of skill atoms reduces the cost substantially compared to traditional manual tests.
- Anytime that anyone, anywhere in the world runs your game, you get valuable play balancing information.
- Build up a database of a thousand players and release your daily builds to three people a day for every single day of your dev cycle.
Once you have rapid, daily feedback in place, you can use the resulting reports to evolve your design iteratively. All this analytical game grammar silliness becomes a foundational feedback system.
- We can regression test game designs now.
- We can fix busted skill atoms and see how things improve the next day.
- What happens when we refactor our designs to make them more testable? I have no idea, but it excites me.
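Regression testing a design could be as simple as diffing two days of burnout reports. A hypothetical sketch:

```python
def design_regressions(old_report, new_report):
    """Given two play-session reports mapping atom -> burned_out (bool),
    return the atoms that were fine yesterday but are broken today."""
    return sorted(atom for atom, burned in new_report.items()
                  if burned and not old_report.get(atom, False))

yesterday = {"jump": False, "stomp_koopa": True}
today = {"jump": True, "stomp_koopa": False}   # jump broke, stomp fixed
print(design_regressions(yesterday, today))    # ['jump']
```

Run this on every daily build and a design change that quietly breaks a downstream skill shows up the next morning, not a month later.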
The systems I've described today are just the beginning; a rough sketch of the future, if you will.
- Our player models are primitive.
- Our metrics can advance dramatically in their sophistication. We are just starting to tap into biometrics.
- Our player testing systems are still expensive to run.
- There are amazing new games waiting to be designed and evolved into stunning experiences.
I love this picture: the fifth Solvay Conference, 1927. Einstein, Bohr, Curie. Seventeen of the 29 attendees went on to win the Nobel Prize.
The first conference was in 1911, almost a hundred years ago. Einstein was the youngest present. Who is the youngest person here? These quirky, brilliant people revolutionized our understanding of physics. Without their work, we wouldn’t have semiconductors, computers or video games. They were theorists and experimenters, not so different from what we have in our industry today. A small group of eggheads changed the world.
I look out at this group and I see the same potential. We’ve got the brains. We’ve got the time.
Let’s make this an amazing weekend.
PS: There was one more group photo shown immediately after the Solvay photo. It, however, has been redacted due to national security concerns.