To pass the second phase of the World Bank Institute’s massive open online course “Risk and Opportunity: Managing Risk for Development,” participants were not tested with an essay question or a multiple-choice quiz. Because the MOOC taught general applications of risk management across a wide range of economic and social levels (household, community, policy, government, etc.), open-ended answers were acceptable. Though a traditional test would give instructors a good idea of how well participants had read and understood the material outlined in the World Development Report (the text this course was structured around), it wouldn’t provide an accurate picture of how well participants could apply that material in specific, contextualized settings.
Risk Horizon was built specifically to rectify that problem. Over the span of three and a half months, my team at the Engagement Lab designed, paper-prototyped, developed, and implemented a game that required players to apply the general concepts they learned in week 2 of the MOOC in order to receive full credit.
A simplified real-time strategy game, Risk Horizon tasked players with developing the world of Alora by building and upgrading resource-generating “pods” that helped them reach an end-of-round goal. To succeed, players had to constantly balance expending their limited wealth and time on three different actions (protecting their community, generating knowledge, choosing insurance plans) against persistent community development. These game actions acted as general metaphors for the real-world risk preparation initiatives that policy makers around the world were enacting in their countries, corporations, and communities. The rhetoric revealed through play (and what the World Bank Institute reveals through their MOOC) shows players that to thrive, you must find the right balance between risk management and healthy development.
A screenshot from a Risk Horizon level
At its most basic, Risk Horizon is a game that throws players into a system and asks them to play, to tinker, to fail, and finally to understand how a fictional system mirrors the real world. The game was primarily designed as a systems-thinking assessment that could verify whether participants could not only understand, but also demonstrate, in a more qualitative fashion, the knowledge obtained through lectures, discussions, and readings. Instead of taking a quiz or a test, they had to help Alora flourish by balancing healthy development with risk management. They had to beat Risk Horizon.
Learning Assessment and Risk Horizon
One of our goals for this project was to show that games are not only good for learning but, if designed correctly, can incorporate assessment directly into the experience. Though digital games for learning and entertainment are much more widely accepted than they were ten years ago, many people, whether out of ignorance or disbelief, still feel that games are not a viable way to assess knowledge acquisition. Even those who believe games are powerful learning tools struggle to incorporate assessment within them, since assessments are generally applied outside of the learning experience rather than within it. How, then, can we embed assessments within the experience itself, so that teachers can understand and track knowledge acquisition through student gameplay?
Knowledge assessment in Risk Horizon occurred through massive amounts of data collection. Our Technical Coordinator Wade Kimbrough researched and built a data collection backend that allowed us to see both the general and the nuanced playing styles of participants. Our data ranged from broad statistics, such as how many players reached the end of the game, to detailed ones, such as what level of development, knowledge, and protection a player had reached by the end of level 3.
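The actual backend is not public, so as a purely illustrative sketch, the event names, fields, and structure below are my own assumptions. It shows how a single stream of anonymous gameplay events can answer both broad questions (how many sessions beat a level) and nuanced ones (what stats a session held at level completion):

```python
# Hypothetical sketch of a gameplay telemetry store; all names and
# fields are illustrative assumptions, not the real backend's schema.
class TelemetryStore:
    """Collects anonymous gameplay events and answers broad and nuanced queries."""

    def __init__(self):
        self.events = []

    def record(self, session_id, level, event_type, stats):
        # `stats` might hold development/knowledge/protection values at this moment.
        self.events.append({
            "session": session_id,
            "level": level,
            "type": event_type,
            "stats": stats,
        })

    def completions(self, level):
        """Broad statistic: how many sessions beat a given level."""
        return sum(1 for e in self.events
                   if e["level"] == level and e["type"] == "level_complete")

    def end_of_level_stats(self, level):
        """Nuanced view: the stats each session reached on completing a level."""
        return {e["session"]: e["stats"] for e in self.events
                if e["level"] == level and e["type"] == "level_complete"}


store = TelemetryStore()
store.record("anon-1", 3, "level_complete",
             {"development": 7, "knowledge": 4, "protection": 5})
store.record("anon-2", 3, "level_complete",
             {"development": 5, "knowledge": 6, "protection": 6})
print(store.completions(3))  # 2
```

Because sessions are keyed by anonymous IDs rather than MOOC accounts, a store like this supports aggregate analysis but not per-participant grading, which is exactly the gap described below.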
Though the large amount of data was invaluable for understanding the game’s effectiveness at scale, it was not a fully sound way to assess the knowledge acquisition of individual participants because the data was anonymous. It would not have been impossible to link data to individual players, but given our limited development window (after the design was in place, we only had 6 weeks for development!), we simply ran out of time. So, in order for participants to receive credit, we implemented passcodes into each level’s end screen. Whenever players beat a level, they received one of these randomly generated passcodes, which they would input into the MOOC to prove how far they had progressed. With six total levels in the game, the MOOC team decided that completing level 2 would earn participants 5% of their quiz grade, while beating level 4 would earn them the full 10%; a player who made it past level 4 would have successfully demonstrated knowledge acquisition. Since we designed the game’s mechanics around the strategy of balancing the risk management actions with development, this seemed to make sense at the time. But what if someone doesn’t make it to level 4? Does this mean they have not successfully acquired the knowledge outlined in the course?
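The real passcode scheme was never documented publicly, so the alphabet, code length, and verification step below are assumptions; this is just a minimal sketch of the general idea, where the game issues random codes per level and the MOOC checks submissions against them:

```python
# Illustrative sketch only: the real scheme's alphabet, length, and
# validation are unknown; everything here is a stand-in assumption.
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits

def generate_passcode(length=8):
    """Randomly generate a level-completion passcode for an end screen."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def issue_codes(levels=6, batch=100):
    """Pre-issue a pool of valid codes per level for the MOOC to check against."""
    return {level: {generate_passcode() for _ in range(batch)}
            for level in range(1, levels + 1)}

valid = issue_codes()

def verify(level, code):
    # A submitted code proves progression only if it was issued for that level.
    return code in valid.get(level, set())

code = next(iter(valid[4]))   # a code shown after beating level 4
print(verify(4, code))        # True
```

Note that a scheme like this only proves possession of a valid code, not who earned it — codes can be shared — which is part of why the passcode grade was paired with the peer-reviewed reflection essay described below.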
Though the game was positioned as a mandatory requirement, skipping it did not have severe consequences. The World Bank adeptly incorporated multiple assessments and diversified passing criteria into the MOOC. Even someone who chose to skip the game entirely — effectively skipping an entire lesson (like week 1, 3, or 4) — could still complete the course, albeit without a certificate of accomplishment. In addition, reaching the level 2 or level 4 grade achievement was only one part of the game’s assessment: there was also a reflection component in which participants wrote an essay commenting on the correlations between the game and real-world risk management. These essays were then graded by peers, which made the process much more transparent.
Even in the most ideal holistic grading and assessment scenario, negative feedback is highly likely when implementing a new form of assessment. When participants first learned they needed to play Risk Horizon to pass a weekly quiz, a significant number of them voiced their concerns. One of the most prominent complaints we received (aside from technical issues) came from people angry that they had to play a game at all. Frustrated players could not understand why their skill in a game should reflect their understanding of the material. Keep in mind this is not the same complaint as saying the game was not good (boring, broken, unfair). They were upset by the very notion that playing a game could help the instructors of the MOOC assess whether or not participants could apply the learned material.
Here are some anonymous posts from participants on the game’s forum:
“I don’t understand why playing computer games should count as a result. Give me a case studies and ask me questions afterwards but please, I don’t want to spend hours and hours playing computer games.”
“I do not agree that the Game teaches Strategic Thinking for Better Community Development. It is simply a Computer Game. Like any other Computer Game, the more skilled at Computer Games you are, the higher the levels at which you can be successful.”
“Since I am not a “gamer” (and have very little interest or ability in computer gaming) I find this pedagogical method counter productive. I hope in future iterations of this course, if you choose to include the game, it is optional or perhaps offered as an extra credit exercise for those who do like gaming; but do not count it as part of the grading.”
I believe these reactions can be broken down into two camps:
- Those frustrated with the mandatory obligation to express themselves through a medium unfamiliar and inaccessible to them (compounded by the anxiety of having to do well or risk a failing grade).
- Those who do not see the value of games applied for learning.
Even if a participant realizes that playing the game poorly won’t substantially affect their grade, the feeling of failure and inability to succeed can linger. That pent-up frustration typically ends up directed at the medium itself. I wonder whether those who complained about the validity of using games as assessment tools actually believe that games cannot be taken seriously as a medium. Perhaps they were simply frustrated by their inability to play the game due to technical limitations or a steep learning curve.
The tone of the forum posts started to turn around as the week progressed. Participants who had first expressed extreme frustration or even disgust toward the game were beginning to come around to it, even voicing their support for it to new dissenters. Participants wrote:
“After about ten attempts I have completed the game. The dynamics are very interesting and, i believe, resonate well with real life. Those of you who are not keen on the game, it is a great real time scenario, and equates experiential learning…”
“…I really enjoyed the game and I was receptive to the experience; I confronted the stressful situation of risks and challenged decisions under constraints. I realized the importance of being fully aware about the community’s future incidents; keeping simultaneously pace of its developing and/or upgrading; I also experienced the necessity of keeping balance among research, protection and insurance.”
“Overall very interesting game and learning experience, mimicking the reality very well – you feel so real when you succeeded or failed…”
Though there were still those at the end of the week who disagreed with the inclusion of the game, the voices in favor of the experience certainly outweighed those against it.
For our budget and timeline, I believe we created a game that applies the learning content in a fun, effective way. Though the assessment aspects, combined with barriers to entry, most likely contributed to frustration and dissent, there is also a fundamental aspect of all assessment that is counterintuitive to the fun nature of play: the involuntary nature of taking a test. According to one of Huizinga’s four characteristics of play, play must be voluntary. Forcing someone to play creates issues of genuine engagement and interest. If assessments are mandatory, yet play needs to be voluntary, how do we find the middle ground to make great games as assessments? I’m sure there are game designers who have great insight into this question, though it is something I am constantly mulling over myself.
Furthermore, even if we set out to create the greatest learning games of the century, those without digital gaming literacy are going to feel intimidated, even angry, when asked to display their knowledge in a medium unfamiliar to them. Traditional learning environments and the formal education of the past have done much to separate learning from fun, and games and play from knowledge acquisition. This leaves generations of people butting up against the radical new forms of experiential learning that are so prominent in the 21st century. The argument for games as a valuable medium is over, but showing why games and play are educationally valuable and can effectively model real-world problems is still important. Organizations like Games for Change, design studios like Games Learning Society, scholars like Scot Osterweil and Ian Bogost, and all of the courageous, innovative teachers in classrooms across the country are evidence of the ever-growing movement advocating for the medium. Though Risk Horizon was hard for a significant portion of its audience to accept, it is empowering to know that the case for fun, educational games with mechanics-based assessment is constantly growing.