4
Introduction to Gamification

4.1 Introduction

Students often complain that traditional lecture-based teaching can be boring and ineffective (Vasuthanasub 2019). While instructors continuously seek to innovate in how they teach and motivate students, they also admit that existing lessons and instructional strategies lack the power to incentivize and engage (Lee and Hammer 2011). Serious games are one of the most promising learning tools: they can deliver knowledge and strengthen skills such as communication, collaboration, and problem-solving (Dicheva et al. 2015). However, creating and sustaining such a highly engaging classroom atmosphere with serious games is complicated, expensive, and time-consuming, since implementation requires integrating appropriate pedagogical content with suitable technical infrastructure (Kapp 2012). Under these circumstances, another approach many lecturers are now pursuing is gamification.

In recent years, gamification has firmly positioned itself in the commercial sector. Companies have rapidly adopted it to encourage employee performance, improve corporate management, and promote marketing strategies and customer engagement. For instance, customers can earn stars, points, tiers, discount coupons, and other forms of reward for visiting retail shops or shopping through an online store via a mobile application (Lee and Hammer 2011). This adoption is driven by gamification's capability to shape and influence consumer behavior in a desirable direction. Loyalty programs, such as credit card rewards and frequent-flyer mileage, are often cited as clear case studies of successful gamified mass-market products.

Nonetheless, while the term is gaining ground in the business world, its application in academic disciplines is still a relatively emerging trend (Dicheva et al. 2015). Traditional schooling already shares several similarities with game elements. Students must complete and submit assignments to earn points, and these points are later translated into letter grades. Students may also receive rewards for desirable behaviors or punishments for improper actions. Under these grading and rewarding systems, students who perform well earn a correspondingly high grade point average and are promoted to the next level at the end of every academic year. Seen in this light, school is already the ultimate gamified experience. However, something about this environment of game-like elements fails to engage many, though not all, students. The typical classroom atmosphere often leads to misconduct and undesirable outcomes, including absence, cheating, withdrawal, incomplete grades, and dropping out. Those students, at any rate, would not describe classroom-based activities in school as playful experiences. Thus, the existing game-like elements do not satisfactorily generate engagement and encouragement.

Today, gamification is becoming more prevalent in the educational system because of its persuasive ability to inspire students and reinforce the learning process and experience (Vasuthanasub 2019). In scholastic terms, gamification, as defined by Deterding et al. (2011), refers to the use of game design elements, including mechanics, dynamics, and frameworks, in non-gaming environments. According to this definition, the fundamental concept of gamification differs from serious gaming. While serious gaming denotes the use of full games (the complete end product) for non-entertainment purposes, gamification emphasizes the employment of game elements alone (Lee and Hammer 2011). Educational gamification focuses on using or adapting game-like cultural roles, player experiences, and rule systems to influence students' behavior. To maximize the potential of gamification, it is important to understand how the technique can best be deployed in practice. There are three primary domains of development in which gamification can serve as an intervention:

  • Cognitive: games provide a complex set of rules for players to explore and discover through active experimentation. They guide players through unconsciously skillful processes and keep them evolving with potentially challenging missions or complex assignments (Koster 2004). Games also provide multiple routes to fulfill main and sub-objectives and allow players to choose their own goals for succeeding at complex tasks; these features benefit motivation and engagement (Locke and Latham 1990). When applied to teaching, gamification is therefore likely to transform students' perspectives on schooling. It can help students perform with an understanding of the clear purpose and true value of their tasks. In the best-designed games, the reward for accomplishing a mission or solving a problem is an even more complex challenge (Gee 2008). Gamification aims to replicate this aspect in schools as well.
  • Emotional: games usually stimulate powerful emotions, ranging from curiosity to frustration to enjoyment (Lazzaro 2004). They can invoke many positive emotional experiences, such as dignity, integrity, respect, and sympathy (McGonigal 2011). Likewise, they can help players persist through negative emotional encounters, such as aggressiveness, egoism, and impatience, and even convert these into optimistic ones. In contrast, in the existing game-like environment of school, the stakes of failure are high and the feedback cycles are long, so students frequently hesitate to take risks given their few opportunities. No wonder many students experience anxiety instead of anticipation: if they try but fail, the cost is high (Pope 2003). Gamification is a promising strategy that offers resilient opportunities for students to face failure by reframing it as an essential part of schooling. It can also shorten feedback cycles, provide low-stakes ways to assess learners' performance, and create an atmosphere in which effort is rewarded. With these intentions, students come to perceive failures as learning opportunities instead of feeling fearful, hopeless, and overwhelmed.
  • Social: games allow players to take on new identities and roles and direct them to make decisions from their in-game positions and perspectives (Gee 2008; Squire 2006). Players can select less explicitly fictional characters to explore new sides of themselves or latent skills in a safer learning environment. For instance, a shy adolescent may become a governor who leads a dozen mayors (other players) in regional urban planning and city development. Having a solid school-based identity helps students with schooling in the long term (Nasir and Saxe 2003). Even if students feel like "they cannot do school" (Pope 2003), gamified environments allow them to try on unfamiliar identities, roles, and tasks. Gamification also helps students openly identify and confidently present themselves as scholars through gaming sessions. In other words, games can provide social credibility for outstanding performance and public recognition for academic achievement. In addition to recognition, which is usually provided only by educators, gamification can induce students to reward each other with in-game currency. Overall, a well-structured gamified environment encourages students to explore meaningful roles that are fruitful for learning. By ensuring playfulness in new identity development and the appropriateness of the reward system, educators can convince students to think differently about the purpose of schooling and not to underestimate their own potential.

The integration of game-like components with conventional education can be complementary, but it is not always necessary (Lee and Hammer 2011). Indeed, gamification can provide educators with powerful instruments to guide and reward students, drive them to participate in classroom activities, and allow them to perform at their peak in academic pursuits. It can show students that education in the modern era can be an enjoyable experience. At the same time, designing and implementing a gamified environment may absorb instructors' effort, consume extensive resources, or even mislead students into trying and learning only when rewarded (Lee and Hammer 2011). There are therefore significant risks that gamification and schools could damage one another or make each other worse; in the worst case, combining them could be a downfall for both.

4.2 Serious Gaming and Policy Making

Over the last few decades, practitioners and management scholars have increasingly criticized conventional strategy-making methods, arguing that rapidly changing environments require emergent and creative approaches. The serious gaming discipline has become increasingly useful within the mainstream strategy literature concerned with these emerging strategy-making approaches (Geurts et al. 2007). Gaming simulation can be defined as a representation of the key relationships and structural elements of a particular issue or problem environment, in which the behavior of actors and the effects of their decisions are a direct result of the rules guiding the interaction between these actors (Wenzler et al. 2005).

Serious gaming is an activity in which two or more independent decision-makers seek to achieve their objectives within a limited context (Greenblat and Duke 1975). Again, these games are labeled "serious" because their primary objective is educational and/or informative rather than pure entertainment; they allow researchers to model problems with societal aspects, including the management of critical infrastructure systems. The advantage of simulation gaming over traditional computer simulation models is that stakeholders do not have to be represented by mathematical formulations; instead, they are played by the participants themselves (Bekebrede 2010). Representing complex systems with serious gaming models thus saves model builders from having to encode psychological assumptions, since the participants embody the stakeholders.

Simulation games take many forms and aim to provide insights for various goals. What they have in common is that reality is simulated through the interaction of role players using non-formal symbols and, when necessary, formal computerized sub-models. This approach allows participants to create and analyze the future worlds they wish to explore. Recently, large organizations have reported using serious gaming simulations in their organizational change management efforts (Wenzler 2008).

Duke (1974) argues that formal methods for communicating complexity are inadequate for future problems because of their exponentially increasing complexity. He believes that “the citizen, policy researcher or other decision-maker must first comprehend the whole – the entirety, the system, the gestalt – before the particulars can be dealt with” (Duke 1974, p. 10). Gaming simulation techniques can handle “many variables” and are distinguished from other techniques by being relatively uncalibrated and intuitive (Duke 1974, p. xv). Each serious game is situation-specific; consequently, it should only be run within the context for which it was designed, or the results will be poor.

Moreover, there remains a debate over whether simulation (and gaming) is a standalone academic field of study or a helpful tool that other disciplines can use. This ongoing debate stems from the interdisciplinary nature of these games. Simulation (and gaming) is certainly an advanced tool in areas such as education, business, urban studies, and environmental issues; yet, to date, gaming researchers are still working towards a common theory and an established field of academic study (Shiratori et al. 2005).

4.2.1 Gamification: A Brief History

The earliest and most common use of simulation gaming is the so-called war game, dating back to the nineteenth century and involving the exploration, planning, testing, and training of military strategies, tactics, and operations in a simulated, interactive, sociotechnical environment (Mayer 2009). With the emergence of decision sciences such as operations research, systems analysis, and policy analysis, early serious gaming efforts initially received considerable skepticism. However, simulation and gaming methods (or soft systems thinking) became an alternative to formal complexity modeling techniques such as systems analysis, system dynamics, and operations research. Those formal techniques were successfully applied to well-structured problems; yet, for ambiguous, often ill-structured, complex systems, their contribution was limited because adequate theory and empirical data were absent. Serious gaming methods can provide decision-makers with an environment in which the totality of the system and its dynamics are present. With a holistic approach that includes a wide range of perspectives, skills, information, and mental models of the involved parties, the quality of the decision-making environment increases dramatically (CLP 2021; Geurts et al. 2007).

In the late 1940s, the RAND Corporation (Research and Development) created systems and policy analysis methods to improve government decision-making. Although gaming alone was still not considered a scientific approach within the policy analysis toolbox, the decision-making community saw gaming as the “language of complexity,” a beneficial approach to designing computer models. Several European nations, especially the Netherlands, practiced various gaming exercises and styles, such as spatial planning of the country on a national scale, in which participants played the roles of private and public investors, governmental licensers, stakeholders, and citizens. In the late 1990s, many scientists turned to computer-based simulations, given the developments on that platform. They adopted concepts and technology derived from entertainment games and developed games like SimHealth (US healthcare simulation), SPLASH (water resource management), and NitroGenius (a multiplayer, multi-stakeholder game aimed at solving nitrogen problems). By 2000, games were being employed for purposes such as healthcare, policy making, and education, with the adoption of the term “serious games” as an oxymoron (Mayer 2009).

4.2.2 Uses of Serious Gaming

Serious games are developed to serve several different purposes. The most crucial contribution of gaming methods, however, is their ability to enhance communication among various actors. This has led researchers to utilize gaming methods intensively for exposing complexity, particularly where complex systems with social aspects are examined (Duke 1974; Geertje et al. 2005). Policy gaming exercises aim to build understanding of system complexity, improve communication, promote individual and collective learning, create consensus among players, and motivate participants to enhance their creativity or collaboration (Geurts et al. 2007). Policy games are often used to understand the performance of complex infrastructure systems.

Serious gaming methods are also often used as an educational technique to train players ranging from high school students to professional emergency responders (Greenblat and Duke 1975; Shiratori et al. 2003). Additionally, gaming methods are employed in various fields (e.g. war gaming, business gaming, policy gaming, and urban gaming). Policy gaming exercises assist organizations in exploring policy options, supporting decision-making, and enabling strategic change. Such policy exercises have been applied to a variety of problems: from deregulating public utility sectors, to reorganizing the Office of the Secretary of Defense, to restructuring cities with urban planning games, to investigating policy options for global climate change, to restructuring the UK's National Health Service, to crisis management at the national level (Brewer 2007; Crookall and Arai 1995; Geurts et al. 2007; Mayer 2009; Wenzler et al. 2005). Games designed for individual learning can be categorized under three main objectives: training participants for a situation or scenario, changing participants' mental models by increasing awareness, and gaining participants' support. In games aimed at collective learning, three categories of objectives are observed: discovering (i.e. understanding a situation and exchanging ideas), testing (i.e. carrying out experiments to check the value or effectiveness of options), and implementing (i.e. realizing organizational change for training purposes) (Greenblat and Duke 1975; Joldersma and Geurts 1998).

4.2.3 Serious Gaming in Infrastructure Design

Arguably, there is significant complexity associated with governing large, complex, and interdependent critical infrastructure systems (Ancel 2011). The discrepancies associated with such infrastructure transitions are related to a lack of understanding of the societal aspects of these systems. For that reason, several serious gaming exercises have been developed to assist decision-makers, convey system complexity, and train stakeholders. Serious games can represent the multi-level system architecture through rules at the player level, interactions among players, and system-level dynamics. The complexity associated with infrastructures (at both the technical/physical and social-political levels) is integrated within the gaming platform so that stakeholders can experience an abstract representation of the system and make informed decisions (Mayer 2009). Several infrastructure systems have been represented using serious games.

Unlike hard-systems methods, the gaming and simulation approach is quite flexible and easily combined with other quantitative methods, scenarios, and computer models (Mayer 2009). Policy gaming methods can help participants and modelers understand the big picture and identify critical elements of a complex problem. Because of the iterative and experimental nature of these gaming and simulation environments, participants can test different approaches within a safe environment and a condensed timeframe (CLP 2021). INFRASTRATEGO is an example of a serious gaming-based decision-making tool that encapsulates the Dutch electricity market. The developers used the game to examine strategic behavior in a liberalizing electricity market while evaluating the effectiveness of two main types of regulatory regimes. Strategic behavior refers to the use of administrative and regulatory processes, such as stalling, delaying, or appealing interconnection negotiations, engaging in anti-competitive pricing, or other tactics that can be encountered during the liberalization of utility industries. Empirical research indicates that strategic behavior may negatively affect the level playing field and public values. Overall, the game was able to identify undesirable, unintended, and unforeseen effects of strategic behavior. Serious gaming enabled the monitoring and measurement of strategic behavior as it occurred, since participants had no fear of litigation and were able to report its development, something that cannot be observed in real-world situations (Kuit et al. 2005; Wilson et al. 2009).

Similar to INFRASTRATEGO, games like THE UTILITY COMPANY and UTILITIES 21, along with other market, policy, or performance simulation models, are related to the deregulation of utility companies (Wenzler et al. 2005). One example of a fully computer-based simulation game is SimPort, which involves infrastructure planning and land designation for the extension of the Port of Rotterdam. SimPort is used to support an actual decision-making process characterized by a high level of uncertainty, path dependence, and strategic stakeholder behavior, coupled with technical, political, and external factors such as the national and global economies (Geertje et al. 2005; Warmerdam et al. 2006). Furthermore, games like Rescue Team and The King of Fishermen are geared towards teaching and training business ethics, lapses in which were the causes of two major corporate accidents in Japan's nuclear industry (Wenzler et al. 2005).

4.3 Dealing with Uncertainty and Expert Elicitation

4.3.1 Uncertainty

Uncertainty is one of the core elements that must be considered when analyzing and designing critical infrastructure systems. Moreover, sound risk decision strategy formulations require prior identification and quantification of uncertainties (Chytka 2003). Uncertainty is the inability to determine the actual state of a system and is caused by incomplete knowledge or stochastic variability (Chytka 2003). There are two types of uncertainty in engineering, classified as internal and external. Internal uncertainty is caused by (i) limited information in estimating the characteristics of model parameters for a given, fixed model structure; and (ii) limited information regarding the model structure itself. External uncertainties come from variability in model prediction caused by plausible alternatives, also referred to as input parameter uncertainty (Ayyub 2001; Chytka 2003).

The design and implementation of sociotechnical systems is not governed by specifications, regulations, or codes as in the design of traditional engineering systems. Instead, designing for uncertainty requires policy-makers to make decisions in situations where scenarios of competitive forces, shifts in customer preferences, and changing technological environments are largely unpredictable (Cooke and Goossens 2004; Roos et al. 2004). The uncertainty emerges from two sources: knowledge of the system and understanding of the social response. As previously suggested, large, complex, and interdependent critical infrastructure systems are often considered wicked (or ill-structured) problems. Infrastructure planners and designers therefore need to obtain data regarding the future phases of the system transition. While the required data for developing the sociotechnical transition model and governing risks can be provided by expert judgment and elicitation (Ancel 2011), other approaches exist, including gamification.

4.3.2 Expert Elicitation and Aggregation Methods

Expert judgment can be defined as data given by an expert in response to a technical problem. Experts are people who have a background in the subject area and are recognized by their peers or by those conducting the study as qualified to answer questions (Meyer and Booker 1987). Expert judgment is used when information from other sources, such as observation, experimentation, or simulation, is not available. Subject matter expert opinions are often employed to estimate new, rare, complex, or otherwise inadequately understood cases, to support future forecasting efforts, or to integrate and interpret existing qualitative and quantitative data (Meyer and Booker 1987). Multiple elicitation techniques exist, such as group interaction, independent assessment, questionnaires, qualitatively obtained data, calibration of expert judgment data, knowledge acquisition dynamics, and learning process studies (Chytka et al. 2006; Cooke and Goossens 2004; Gustafson et al. 1973; Keeney and von Winterfeldt 1989). Large-scale sociotechnical systems are composed of multiple components involving various stakeholders, technologies, policies, and social factors (Frantzeskaki and Loorbach 2008). The multi-dimensional character of next-generation infrastructure systems requires decision-makers to consider all the complexity and uncertainty associated with such systems (Roos et al. 2004). Decision- and policy-makers often require expert opinions to comprehend and manage the complexity within such systems. Data regarding the various subsystems within the meta-system need to be obtained from a group of experts and combined (or aggregated) in order to assist the decision-making process (Cooke and Goossens 2004). Individual expert assessments are elicited and aggregated using mathematical and behavioral approaches (Chytka 2003; Cooke and Goossens 2004; Singuran 2008). Aggregation algorithms such as the Bayesian method, the logarithmic opinion pool, and the linear opinion pool are used to combine expert opinions regarding systems with known results; behavioral methods and the linear opinion pool have been found more adequate for future events with unknown results (Ancel 2011).

Bayesian approaches are used for subjective information, where knowledge (i.e. probabilities) is a combination of objective (prior) knowledge and subjective knowledge obtained from the experts. Although subjective expert opinion is integrated into the knowledge, the Bayesian method still requires prior knowledge regarding the parameters, which does not exist for future events with unknown results (Ayyub 2001; Bedford and Cooke 2001). The opinion pool methods combine the elicited distributions via linear or logarithmic weighted averages. Opinion pools have been used in fields such as meteorology, banking, and marketing, where the experts' weighting factors can be validated against historical data or the observed outcome of the event.
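To make the two pooling schemes concrete, the minimal Python sketch below combines hypothetical probabilities elicited from three experts using a linear and a logarithmic opinion pool. The probabilities, weights, and function names are illustrative assumptions rather than values or routines taken from the studies cited above.

import numpy as np

def linear_opinion_pool(probabilities, weights):
    """Weighted arithmetic mean of the experts' probabilities."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize the weights
    return float(np.dot(w, np.asarray(probabilities, dtype=float)))

def logarithmic_opinion_pool(probabilities, weights):
    """Weighted geometric mean of the experts' probabilities,
    renormalized over the event and its complement."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    p = np.asarray(probabilities, dtype=float)
    pooled_event = np.prod(p ** w)             # weighted geometric mean of P(event)
    pooled_complement = np.prod((1.0 - p) ** w)
    return float(pooled_event / (pooled_event + pooled_complement))

# Hypothetical elicited probabilities of a disruption event from three experts
expert_probs = [0.10, 0.25, 0.40]
expert_weights = [0.5, 0.3, 0.2]               # e.g. calibration-based weights

print(linear_opinion_pool(expert_probs, expert_weights))        # 0.205
print(logarithmic_opinion_pool(expert_probs, expert_weights))   # roughly 0.18

Note that the logarithmic pool tends to pull the combined estimate towards the more confident (extreme) experts, which is one reason the choice of pooling rule matters for the aggregated result.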

Behavioral approaches seek to reach a consensus among the participants through different forms of interaction, including brainstorming, the Delphi method, the Nominal Group Technique (NGT), and decision conferencing (Ayyub 2001; Cooke and Goossens 2004; French et al. 1992). The Delphi method was heavily used in the 1960s and 1970s for long-range technological forecasting and policy analysis studies. The process involves an initial estimation session, followed by discussion and revision of the initial assessments. Typically, the opinions converge to a high degree of consensus after two or three iterations (Meyer and Booker 1987).

The Delphi method is no longer used extensively, since it does not carry uncertainty indicators and falls short on complex system forecasts involving multiple factors (Cooke and Goossens 2004). The NGT allows experts to interact by presenting and discussing their assessments in front of the group. Following the discussions, each expert silently ranks the presented opinions, and the aggregated ranking represents the consensus among stakeholders. Scenario analysis revolves around two questions: (i) how can a certain hypothetical condition be realized; and (ii) what are the alternatives for preventing, diverting, or facilitating the process? Decision and event trees, along with the respective scenario probabilities, are used to predict a future state (Ayyub 2001). Decision conferencing is used to establish context and explore the issues at hand. It facilitates decision-making and consensus-building on complex issues, such as planning the response following the Chernobyl disaster. Decision conferencing is often based upon multi-criteria decision analysis (MCDA) and helps stimulate discussion and elicit issues. Events are often short, two-day conferences where interested parties and experts gather to formulate and implement policy actions offering the best way forward (French et al. 1992). However, behavioral approaches tend to suffer from differences in expert personality, leading to the dominance of certain individuals or to group polarization.
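The silent-ranking step of the NGT can be illustrated with a short sketch. The aggregation rule used here (summing each option's rank positions across experts, lowest total wins) is a simple assumption for illustration; the sources above do not prescribe a specific scoring scheme, and the policy options are hypothetical.

from collections import defaultdict

def aggregate_ngt_rankings(rankings):
    """Combine silent individual rankings (best option first) into a group
    ordering by summing rank positions; the lowest total is most preferred."""
    totals = defaultdict(int)
    for ranking in rankings:
        for position, option in enumerate(ranking, start=1):
            totals[option] += position
    return sorted(totals.items(), key=lambda item: item[1])

# Hypothetical NGT session: three experts rank four policy options
expert_rankings = [
    ["underground cabling", "demand response", "microgrids", "status quo"],
    ["demand response", "underground cabling", "microgrids", "status quo"],
    ["demand response", "microgrids", "underground cabling", "status quo"],
]
print(aggregate_ngt_rankings(expert_rankings))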

The mathematical approaches covered earlier are often used to determine the technical parameters of systems, including the performance or safety values of newly developed systems. However, uncertainties resulting from the interdependency of stakeholder groups also have to be considered when modeling critical infrastructure systems. Similarly, behavioral approaches have received considerable criticism because their participants tend to over-simplify their assumptions. Because complex systems often exhibit strongly counter-intuitive behavior, researchers cannot simply rely on intuition, judgment, and arguments from experts when eliciting behavioral data regarding complex systems. In fact, Linstone and Turoff (1975) suggest: “everything interacts with everything and the tools of the classical hard sciences are usually inadequate. And certainly most of us cannot deal mentally with such a magnitude of interactions” (p. 579). Moreover, when employing experts to elicit data, researchers have often found that specialists tend to focus on their own subsystem and largely ignore the characteristics of the larger system (Ancel 2011).

4.3.3 Data Generation for Serious Gaming

A literature review revealed a limited number of studies on the use of serious games as a data generation method. A study by Rosendale (1989) employed role-playing as a data generation method for examining the use of language in speech act situations. The study was designed to reveal the basic characteristics of how invitations occur in platonic and romantic situations. The gaming method was the only adequate way to gather data in these situations, because authentic interactions cannot be observed without violating participants' privacy. Although Rosendale states that role-play is a valid and reliable method, its limitations raised questions about its validity and its ability to represent real-world interactions between humans (Rosendale 1989).

Like Rosendale, Demeter (2007) also used role-play as a data collection method for apology speech acts, analyzing how apologies occur in different situations. Participants, chosen from English majors at a university in Romania, were engaged in a role-playing environment and asked to apologize within the scenarios presented. The naturally occurring discussions were collected and compared against another method, discourse completion tests (DCT). The author concluded that in some instances role-playing produced more realistic data, since it allowed participants to speak rather than write their responses, and the responses were more authentic because the scenarios created a natural setting (Demeter 2007). Another qualitative study using role-play to generate data was conducted by Halleck (2007), who used the gaming method to evaluate non-native speakers' oral proficiency through simulated dialogues. The biggest advantage of role-playing is given as its ability to simulate a real conversation environment without violating participants' privacy.

Beyond speech act studies, the only study found that relates to data elicitation is REEFGAME, which simulates marine ecosystems in order to learn about different management strategies, livelihood options, and ecological degradation (Cleland et al. 2012). The data generation capability of the game was limited to the decision-making processes of the stakeholders (fishers), which can be categorized under collective learning about complexity, and it was not elaborated on any further.

4.4 Cycles in Gaming

Given the complexity associated with governing complex and interdependent infrastructure systems, a structured gaming methodology is necessary. Ancel (2011) suggests a three-phase method consisting of pre-gaming, gaming, and post-gaming. Each phase is supported by “add-ons,” including formal expert elicitation methods and ranking tools. With the help of these tools and techniques, data (both quantitative and qualitative) are gathered regarding the problem at hand (Ancel et al. 2010); a minimal code sketch of this three-phase flow follows the list below:

  • During the pre-gaming cycle, it is necessary to collect all the gaming variables relevant to the modeled system. Such variables include scenarios, stakeholders and their interactions, historical data regarding the system, and information on the parameter(s) by which the success of the transition process will be measured. A computer-based simulation mechanism keeps track of the process throughout the gaming exercise. Depending on the application, this simulation can evaluate the risk or reliability of an infrastructure system or keep track of the generation capacity or throughput of a particular utility. Once an adequate numerical simulation mechanism and all the supporting data are in place, the game is developed. Game development is an iterative process in which versions are typically play-tested with several groups and then fine-tuned.
  • The gaming cycle comprises the execution of the gaming exercise with the participation of experts. The game usually starts with the presentation of the scenario to the participants, who are asked to act according to their predetermined roles. Considering the new information presented to them, participants make collective decisions about the investigated parameters. These decisions are taken as input variables for the computer-assisted simulation mechanism, which calculates the initial conditions for the next step. This iterative process enables participants to experience and shape the future phases of the transition process. The presence of participants (preferably experts or real stakeholders), together with their social values, norms, and beliefs, provides realistic input for social interaction and decision-making.
  • The post-gaming cycle involves collecting and analyzing the data that surfaced during the gaming cycle. At this level, the elicited data are arranged and presented back to the participants for further analysis and feedback. Although not performed here, depending on the type of data elicited, it is possible to use various Commercial-Off-The-Shelf (COTS) software packages to organize and analyze the data. The high-level gaming architecture of the expert elicitation methodology within the problem domain context is given in Section 4.4.1.
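The sketch below outlines how the three cycles could fit together in code: a pre-gaming configuration object, a round loop in which facilitated participant decisions feed a computer-assisted simulation mechanism, and a post-gaming log for later analysis. All class and function names, and the idea of passing the elicitation and simulation steps in as callables, are assumptions made for illustration rather than part of the methodology described above.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class GameConfig:
    """Pre-gaming artifacts: scenario, roles, starting data, number of rounds."""
    scenario: str
    roles: List[str]
    initial_state: Dict[str, float]
    rounds: int

@dataclass
class GameLog:
    """Post-gaming artifacts: decisions and simulated states for each round."""
    decisions: List[Dict[str, str]] = field(default_factory=list)
    states: List[Dict[str, float]] = field(default_factory=list)

def run_gaming_cycle(
    config: GameConfig,
    elicit_decisions: Callable[[str, Dict[str, float], List[str]], Dict[str, str]],
    simulate: Callable[[Dict[str, float], Dict[str, str]], Dict[str, float]],
) -> GameLog:
    """Gaming cycle: present the scenario, elicit collective decisions, feed them
    to the simulation mechanism, and use its output as the next round's input."""
    log = GameLog()
    state = dict(config.initial_state)
    for _ in range(config.rounds):
        decisions = elicit_decisions(config.scenario, state, config.roles)
        state = simulate(state, decisions)     # computer-assisted mechanism
        log.decisions.append(decisions)
        log.states.append(dict(state))
    return log                                 # analyzed in the post-gaming cycle

In practice, elicit_decisions would stand in for a facilitated session with real stakeholders, and simulate could wrap a risk or throughput model such as the RRAM discussed in Section 4.4.1.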

4.4.1 Serious Gaming: A Foundation for Understanding Infrastructures

Luna-Reyes et al. (2005) state that social and organizational factors can cause up to 90% of information system project failures, resulting in systems that do not deliver the expected benefits. For that reason, it is crucial to integrate such societal factors into large-scale infrastructure design processes. As previously mentioned, simulation gaming methods have recently shown promise in large-scale sociotechnical system planning efforts. Their ability to integrate the social and technical aspects of infrastructure development makes them a highly appropriate approach for creating a venue that combines computer-assisted simulation with stakeholder interaction. In this way, serious games provide insights into how to address issues arising from the interaction of players, roles, rules, and scenarios. Mayer describes serious gaming derived applications as “a hard core of whatever the computer model incorporated in a soft shell of gaming (usually through some form of role-play)” (Mayer 2009, p. 835).

To support the case study, the Rapid Risk Assessment Model (RRAM) described below is used as the “hard core” computer model that provides the quantity measured throughout the exercise, offering a more detailed demonstration of the gaming methodology. Besides the RRAM, the commercially available decision support software Logical Decisions® for Windows (LDW) v.6.2 can be used to support decision-making. The software assists the gaming process by helping participants evaluate and prioritize the decisions available to them throughout the game (Logical Decisions: http://www.logicaldecisionsshop.com/catalog). LDW's dynamic ranking of alternatives provides real-time support in selecting among them according to their parameters (e.g. cost/benefit values, environmental impact, implementation risks, timelines, etc.); a generic sketch of such ranking is given after this paragraph. The RRAM estimates and quantifies risk values, comprising separately calculated accident probabilities and their respective consequences. The probabilities within the model are estimated via the Probability Number Method (PNM), and the consequences are approximated via numerical manipulations. The RRAM is supported by historical and expert-elicited data, as well as by the gaming itself, to generate the risk values numerically throughout the methodology. The RRAM was selected as the risk simulation mechanism for the case study; however, depending on the problem at hand, this model can be replaced with any adequate software, method, or existing research measuring network capacity, throughput, financial status, etc. The adaptability of the gaming method allows developers to switch and/or combine different approaches, providing a systemic view of the problem.
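LDW's internal algorithms are not described here, so the following sketch only illustrates the general idea of ranking alternatives by a weighted sum of criterion scores, as in basic multi-criteria decision analysis. The alternatives, criteria, scores, and weights are hypothetical.

from typing import Dict, List, Tuple

def weighted_sum_ranking(
    alternatives: Dict[str, Dict[str, float]],
    weights: Dict[str, float],
) -> List[Tuple[str, float]]:
    """Rank alternatives by a weighted sum of criterion scores in [0, 1],
    higher being better; criteria where lower raw values are preferred
    (cost, implementation risk, ...) are assumed to be pre-converted."""
    total_weight = sum(weights.values())
    scores = {
        name: sum(weights[c] * score for c, score in criteria.items()) / total_weight
        for name, criteria in alternatives.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical decision alternatives scored on three criteria
alternatives = {
    "upgrade transmission lines": {"benefit": 0.8, "environment": 0.5, "low_risk": 0.6},
    "distributed generation":     {"benefit": 0.6, "environment": 0.9, "low_risk": 0.7},
    "do nothing":                 {"benefit": 0.1, "environment": 0.8, "low_risk": 0.9},
}
weights = {"benefit": 0.5, "environment": 0.2, "low_risk": 0.3}
print(weighted_sum_ranking(alternatives, weights))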

The RRAM was created through the joint effort of the International Atomic Energy Agency (IAEA), the United Nations Environment Programme (UNEP), the United Nations Industrial Development Organization (UNIDO), and the World Health Organization (WHO) under the United Nations umbrella. The model and the associated method were developed as an affordable, quick-turnaround solution for determining the risks associated with the handling, storage, processing, and transportation of hazardous materials. The risk assessment methodology (including the PNM approach) is supported by an extensive database covering various types of substances (e.g. flammable and toxic materials), safety precaution measures, population densities, and environmental factors (IAEA 1996). However, rather than answering questions such as the maximum number of fatalities or the effect of distance, the PNM-based risk assessment methodology focuses on the prioritization of actions in the field of emergency preparedness.

In this case, risk is defined as the product of the probability of an accident and its respective consequences (Bedford and Cooke 2001). The IAEA study estimates probabilities and consequences separately. The consequences of an accident (e.g. an event caused by the storage or transportation of certain hazardous materials) are calculated via simple numerical manipulations, taking into consideration the characteristics of the substance and correcting factors for the area, population density, accident geometry, etc. The data required to form the components of the equation are obtained through previous modeling efforts and expert opinions. The probabilities, on the other hand, are estimated via the PNM, where the likelihood of a particular accident is expressed as a dimensionless “probability number” N, which is then transformed into an actual probability. The probability number is adjusted according to the various correcting factors, and the relationship between the probability P and N is N = |log10(P)|, i.e. P = 10^-N.
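A minimal numerical sketch of the PNM relationship follows. The base probability number, the correcting-factor adjustment, and the consequence value are invented for illustration and do not come from the IAEA manual.

import math

def probability_from_number(n: float) -> float:
    """Convert a dimensionless probability number N into a probability,
    using N = |log10(P)|, i.e. P = 10**(-N) for P <= 1."""
    return 10.0 ** (-n)

def probability_number(p: float) -> float:
    """Dimensionless probability number for a given accident probability."""
    return abs(math.log10(p))

def risk(probability: float, consequence: float) -> float:
    """Risk as the product of accident probability and consequence
    (e.g. expected fatalities per year)."""
    return probability * consequence

# Hypothetical example: a base probability number adjusted by correcting factors
n_base = 6.0                    # corresponds to P = 1e-6 accidents per year
n_adjusted = n_base - 1.5       # assumed net effect of correcting factors
p = probability_from_number(n_adjusted)   # about 3.2e-5 per year
print(risk(p, consequence=50.0))          # about 1.6e-3 expected fatalities per year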

The risk to the public from these activities is estimated by combining the two quantities calculated above: the human casualties (fatalities) associated with an accident and the probability of such an accident occurring. The consequences are categorized with respect to the number of fatalities, and the probability classes are categorized by the order of magnitude of the number of accidents per year (i.e. a societal risk operational instrument), from which a consequence-frequency (x-y) diagram is created. The main goal is to obtain a list of activities whose risks must be analyzed further before others; the resulting risk matrix representation is one of the primary outcomes of the method.
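The sketch below shows one way such a consequence-frequency categorization could be coded: activities are binned into order-of-magnitude frequency and consequence classes and placed into risk matrix cells. The class boundaries, activity names, and numbers are assumptions for illustration, not the IAEA classification scheme.

import math

def consequence_class(fatalities: float) -> int:
    """Order-of-magnitude consequence class (assumed binning:
    up to 10 fatalities -> class 1, up to 100 -> class 2, and so on)."""
    return max(1, math.ceil(math.log10(max(fatalities, 1.0))))

def frequency_class(accidents_per_year: float) -> int:
    """Order-of-magnitude frequency class (assumed: 1e-4/yr -> 4, 1e-5/yr -> 5, ...)."""
    return round(abs(math.log10(accidents_per_year)))

def risk_matrix(activities):
    """Place each activity into a (frequency class, consequence class) cell so
    that frequent, high-consequence activities can be prioritized for analysis."""
    matrix = {}
    for name, (frequency, fatalities) in activities.items():
        cell = (frequency_class(frequency), consequence_class(fatalities))
        matrix.setdefault(cell, []).append(name)
    return matrix

# Hypothetical activities: (accident frequency per year, expected fatalities)
activities = {
    "LPG storage":        (1e-5, 80.0),
    "chlorine transport": (1e-4, 15.0),
    "fuel pipeline":      (1e-6, 300.0),
}
print(risk_matrix(activities))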

4.4.2 Post Gaming: Analysis of Data

Throughout the gaming effort, the discussions and negotiations between opposing parties are significant findings that can lead to different problem-solving approaches. The results of a game run are analyzed to examine whether the gaming exercise influenced participants' beliefs, intentions, attitudes, and behavior, yielding a better understanding of complexity (Joldersma and Geurts 1998). The serious gaming exercise serves as an individual and collaborative learning platform for the stakeholders, leading to an elevated level of knowledge of the system (Wilson et al. 2009). Individual learning occurs during the decision-making process, where each stakeholder group represents its respective point of view. Reflective conversations between the participants enable feedback and help participants build informed judgments. Realistic interactions among players therefore help the future testing and evaluation of NextGen-related technologies (Ancel 2011; Joldersma and Geurts 1998). Like individual learning, collective or organizational learning provides insight into the system at hand, such as the resilience of critical infrastructure systems.

Besides collective and individual learning, another main contribution of the gaming methodology is the data it generates. Given the nature of predicting future states of complex infrastructure systems, fusing simulation mechanisms with the soft gaming method creates the best possible venue for expert elicitation when the game is played with real stakeholders and subject matter experts. An intuitive mechanism may need to be developed, based on the scenario under consideration, to collect, sort, and visualize the data. However, since the validity of the extracted data cannot be established until the system's future states are attained, the only way to check the generated methodology's internal validity is, again, to use expert opinions.

Expert feedback is a leading contributor in all phases of the methodology. Experts from all stakeholder groups help shape possible scenarios, provide numerical data regarding future technological enablers, and evaluate the developed methodology in different categories. Expert participation in all three phases of the gaming-based elicitation methodology is prominent because it allows game developers to constantly modify the gaming components in light of participant comments and recommendations. Due to the scale of the system, no single expert can provide all the data needed to develop a game based on the given methodology.

It is crucial to seek seamless integration between the components of this methodology in order to create an efficient representation of the reference system. For example, this chapter suggests gaming cycles (pre-gaming, gaming, and post-gaming) coupled with the use of several models (RRAM, PNM), methods (serious gaming, expert elicitation), COTS solutions (LDW, TopRank®), and data sources to understand the problem and perhaps make better decisions. These efforts require seamless integration of the elements discussed. Beyond the methodology components, adequately capturing the system's characteristics (e.g. motivation for change, constraints, system context, as well as societal, technical, and economic aspects) is vital for the validity of the generated data. Because system characteristics vary with context, the modeler's steps change from problem to problem. For this reason, adapting this methodology to other infrastructure system transitions will most likely require modifying the contents of the tools and approaches, yet it remains important to strike a careful balance in the methodology's integration so that both the societal and technical aspects of large infrastructure transition problems are captured.

4.4.3 Validation in Gaming

The early adopters of gaming were quite skeptical of its ability to test strategies or forecast developments confidently. They concluded that the major benefit of a game was to suggest research priorities and identify major problems related to policy and action requirements (Mayer 2009). The main criticism of the field stemmed from gaming's eclectic, diverse, and interdisciplinary nature, along with the lack of defined terms and concepts (Gosen and Washbush 2004). However, the failure to implement sustainable infrastructure models indicated that the multi-dimensional complexity of modern systems required different approaches and design principles (Roos et al. 2004). In response, research studies employing gaming methods increased substantially after the 1970s (Duke 1974).

Although there is a vast literature on the validity of experimental situations (internal and external validity), measurement instruments (content and construct validity), and specific research methods or their results, the concept of validity with respect to simulation games is barely elaborated in the literature (Peters et al. 1998). The validity of gaming has mostly been investigated with regard to its ability to enhance education and training. Researchers have studied the specific gaming attributes that contribute to learning outcomes and evaluated the training effectiveness of gaming methods (Feinstein and Cannon 2002; Gosen and Washbush 2004; Wilson et al. 2009). The simulation approach has received criticism regarding its ability to serve as an educational tool, with the main concerns focused on internal and external validity in cases where the changes in the classroom environment or the generalizability of the learning effects to situations outside the classroom were problematic (Gosen and Washbush 2004). In general, the validity of a simulation game can be given as the correspondence between the model and the system itself (the reference system). However, this definition is not very precise, since the level of correspondence between the model and the referent system is left open: it could mean that the model is a one-to-one representation of the complex system or that only a few components of the system are modeled. Additional criteria are therefore necessary to establish the level of association between the model and the modeled system (Peters et al. 1998). The conclusions reached via a simulation game should be similar to those that would be experienced in the real-world system (Feinstein and Cannon 2002).

The literature review revealed three validation definitions relevant to the contents of this research. Peters et al. (1998) review the concept of validity under four criteria, as suggested by Raser (1969): psychological reality, structural validity, process validity, and predictive validity. Greenblat and Duke (1975) describe the types of validity related to gaming models as common sense (face) validity, empirical validity, and theoretical validity. Chytka (2003) provides a validation triad containing performance, structural, and content validities to validate her methodology.

Starting from the definitions of Greenblat and Duke (1975), complemented by those of Peters et al. (1998), face validity or psychological reality refers to the realism of the gaming environment as experienced by the participants: for a game to be valid, the environment must exhibit characteristics similar to those of the reference system. The empirical validity given by Greenblat designates the closeness of the game structure to the reference system. The definition given by Peters et al. separates empirical validity into two parts: structural validity (i.e. covering the game structure, theory, and assumptions) and process validity (i.e. concerning the information/resource flows, actor interactions, negotiations, etc.). For the simulation to be valid, all the game elements (i.e. actors, information, data, laws, norms, etc.) should be isomorphic; elements and relations do not necessarily have to be identical but should demonstrate congruency with the reference system. Finally, the last feature covered by both definitions relates to theoretical (or predictive) validity: the model's ability to reproduce historical outcomes or predict the future and to conform to existing logical principles.

At times the validation triad is necessary (Ancel 2011; Chytka 2003). The validation triad consists of performance, structural, and content validities, and these components are elaborated within an unstructured interview process to assess the validity of the methodology. Performance validity covers the methodology's efficiency and the usefulness of its uncertainty representation. Structural validity concerns the usability and added value of the method and its applicability beyond the test case. Finally, content validity is concerned with the appropriateness of the aggregation method chosen for the study. Table 4.1 provides a summary of validation parameters.

Table 4.1 Gaming validation parameters.

Proponents                   Validity parameters
Greenblat and Duke (1975)    Common sense (face validity); empirical validity; theoretical validity
Peters et al. (1998)         Psychological reality; structural validity; process validity; predictive validity
Chytka (2003)                Performance validity; structural validity; content validity

4.5 Concluding Remarks

The term “gamification” first appeared online in the context of computer software in 2008 (Walz and Deterding 2015), but it did not gain popularity until around 2010 (GoogleTrends 2021). Even before the term came into wide usage, borrowing elements from video games was common, for example in work on learning disabilities (Adelman et al. 1989) and scientific visualization (Rhyne et al. 2000). From about 2010 onward, many began to use the term when incorporating the social and reward aspects of games into software development (Mangalindan 2012). This approach captured the attention of venture capitalists, who suggested “many aspects of life could become a game of sorts [and that these games] … would be the best investments to make in the game industry” (Sinanian 2010). However, there remains no universally accepted definition of gamification.

The debate over the definition of gamification is left to others; the history and the value are much more interesting. To this end, we suggest that the value of gamification is demonstrable at the cognitive, emotional, and social levels. Cognitively, gaming can guide “players” through unconsciously skillful processes and keep them evolving with potentially challenging missions or difficult assignments. Emotionally, gaming can invoke many positive emotional experiences, such as dignity, integrity, respect, and sympathy. And finally, socially, gaming can serve as a catalyst for “players” to select characters (and situations) that are less explicitly fictional in order to explore new sides of themselves and develop new social skills in a safe-to-learn environment.

However, involvement in gaming requires a significant investment in understanding how gaming works and how it ought to work for practical deployment. At this stage, suffice it to say that the examination of gaming through the lenses of policy-making, design, expert systems, data generation, gaming cycles, and validation provides the necessary background for the applications discussed in the following chapters. Moreover, it is essential to recall that the current research methodology relies heavily on subjective assessments obtained from experts at all levels (the pre-gaming, gaming, and post-gaming phases); consequently, the validation parameters of the method require subject matter expert opinions. And yet, it is clear that the value of gaming, under its various names, is here to stay. For example, Gartner's Top 10 Strategic Technology Trends for 2023 emphasize gaming in IT systems for greater optimization, offering improved data-driven decision-making and maintaining the value and integrity of artificial intelligence (AI) systems in production (Groombridge 2022).

Moreover, the metaverse allows people to replicate or enhance their physical activities, whether by transporting or extending those activities to a virtual world or by transforming the physical one. It is a combinatorial innovation made up of multiple technology themes and capabilities. In any case, it represents an attempt to use sustainable technologies to enable societal resiliency. In fact, Gartner suggests that investing in sustainable technology can create excellent operational resiliency and financial performance while providing new avenues for growth (Groombridge 2022).

4.6 Exercises

1 Discuss how gaming can be used to enhance decision-making in city development.

2 Discuss how gaming can be used to reduce uncertainty in city development.

3 Discuss the nature and role of validation in city development.

4 How can serious gaming serve as a foundation for understanding infrastructure design?

5 Discuss how gaming can affect policy for resilient city development.

References

  1. Adelman, H.S., Lauber, B.A., Nelson, P. et al. (1989). Toward a procedure for minimizing and detecting false positive diagnoses of learning disability. Journal of Learning Disabilities 22(4): 234–244. https://doi.org/10.1177/002221948902200407.
  2. Ancel, E. (2011). A Systemic Approach to Next Generation Infrastructure Data Elicitation and Planning Using Serious Gaming Methods. PhD., Old Dominion University. http://search.proquest.com.proxy.lib.odu.edu/docview/896960555/abstract/C4484A36FA444018PQ/5.
  3. Ancel, E., Gheorghe, A., and Jones, S.M. (13 October 2010). NextGen future safety assessment game. 2010 MODSIM Conference and World Expo, Hampton, VA. MODSIM World. https://ntrs.nasa.gov/citations/20100036351.
  4. Ayyub, B.M. (2001). Elicitation of Expert Opinions for Uncertainty and Risks (1st ed.). CRC Press.
  5. Bedford, T. and Cooke, R. (2001). Probabilistic Risk Analysis: Foundations and Methods. Cambridge University Press.
  6. Bekebrede, G. (2010). Experiencing Complexity: A Gaming Approach for Understanding Infrastructure Systems. Delft University of Technology. http://resolver.tudelft.nl/uuid:dae75f36-4fb6-4a53-8711-8aab42378878.
  7. Brewer, G.D. (2007). Inventing the future: Scenarios, imagination, mastery and control. Sustainability Science 2(2): 159–177. https://doi.org/10.1007/s11625-007-0028-7.
  8. Chytka, T. (2003). Development of an Aggregation Methodology for Risk Analysis in Aerospace Conceptual Vehicle Design. Dissertation. Old Dominion University.
  9. Chytka, T.M., Conway, B.A., and Unal, R. (2006). An expert judgment approach for addressing uncertainty in high technology system design. 2006 Technology Management for the Global Future – PICMET 2006 Conference, 1, 444–449. IEEE. https://doi.org/10.1109/PICMET.2006.296590.
  10. Cleland, D., Dray, A., Perez, P. et al. (2012). Simulating the dynamics of subsistence fishing communities: REEFGAME as a learning and data-gathering computer-assisted role-play game. Simulation & Gaming 43(1): 102–117. https://doi.org/10.1177/1046878110380890.
  11. CLP. (February 22, 2021). The benefits of using a change management simulation game. CLP. https://www.change-leadership.net/the-benefits-of-using-a-change-management-simulation-game.
  12. Cooke, R.M. and Goossens, L.H.J. (2004). Expert judgement elicitation for risk assessments of critical infrastructures. Journal of Risk Research 7(6): 643–656. https://doi.org/10.1080/1366987042000192237.
  13. Crookall, D. and Arai, K. (eds.). (1995). Simulations and Gaming Across Disciplines and Cultures: ISAGA at a Watershed (1st ed.). SAGE Publications, Inc.
  14. Demeter, G. (2007). Symposium article: Role-plays as a data collection method for research on apology speech acts. Simulation & Gaming 38(1): 83–90. https://doi.org/10.1177/1046878106297880.
  15. Deterding, S., Dixon, D., Khaled, R. et al. (2011). From game design elements to gamefulness: Defining “gamification”. Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, 9–15. ACM. https://doi.org/10.1145/2181037.2181040.
  16. Dicheva, D., Dichev, C., Agre, G. et al. (2015). Gamification in education: A systematic mapping study. Journal of Educational Technology & Society 18(3): 75–88. https://www.jstor.org/stable/jeductechsoci.18.3.75.
  17. Duke, R.D. (1974). Gaming: The Future’s Language. Sage Publications. https://trove.nla.gov.au/version/45243363.
  18. Feinstein, A.H. and Cannon, H.M. (2002). Constructs of simulation evaluation. Simulation & Gaming 33(4): 425–440. https://doi.org/10.1177/1046878102238606.
  19. Frantzeskaki, N. and Loorbach, D. (2008). Infrastructures in transition role and response of infrastructures in societal transitions. 2008 First International Conference on Infrastructure Systems and Services: Building Networks for a Brighter Future (INFRA), 1–8. IEEE. https://doi.org/10.1109/INFRA.2008.5439669.
  20. French, S., Kelly, N., and Morrey, M. (1992). Decision conferencing and the international Chernobyl project. Journal of Radiological Protection 12(1): 17. https://doi.org/10.1088/0952-4746/12/1/003.
  21. Gee, J.P. (2008). Learning and games. In: The Ecology of Games: Connecting Youth, Games, and Learning (ed. K. Salen). MIT Press.
  22. Geertje, B., Igor, M., Pieter, V.H.S. et al. (2005). How serious are serious games? Some lessons from infra-games. In: Proceedings of Digital Games Research Association (DiGRA) Conference: Changing Views – Worlds in Play, Vancouver, BC. Digital Games Research Association. http://www.digra.org/wp-content/uploads/digital-library/06278.53186.pdf.
  23. Geurts, J.L.A., Duke, R.D., and Vermeulen, P.A.M. (2007). Policy gaming for strategy and change. Long Range Planning 40(6): 535–558. https://doi.org/10.1016/j.lrp.2007.07.004.
  24. GoogleTrends. (2021). Gamification. Google Trends. https://trends.google.com/trends/explore?date=all&q=gamification.
  25. Gosen, J. and Washbush, J. (2004). A review of scholarship on assessing experiential learning effectiveness. Simulation & Gaming 35(2): 270–293. https://doi.org/10.1177/1046878104263544.
  26. Greenblat, C.S. and Duke, R.D. (1975). Gaming Simulation: Rationale, Design and Applications. Wiley.
  27. Groombridge, D. (2022). Gartner top 10 strategic technology trends for 2023. Gartner. https://www.gartner.com/en/articles/gartner-top-10-strategic-technology-trends-for-2023.
  28. Gustafson, D.H., Shukla, R.K., Delbecq, A. et al. (1973). A comparative study of differences in subjective likelihood estimates made by individuals, interacting groups, Delphi groups, and nominal groups. Organizational Behavior and Human Performance 9(2): 280–291. https://doi.org/10.1016/0030-5073(73)90052-4.
  29. Halleck, G. (2007). Symposium article: Data generation through role-play: Assessing oral proficiency. Simulation & Gaming 38(1): 91–106. https://doi.org/10.1177/1046878106298268.
  30. IAEA. (1996). Manual for the Classification and Prioritization of Risks Due to Major Accidents in Process and Related Industries. International Atomic Energy Agency. http://www-pub.iaea.org/books/IAEABooks/5391/Manual-for-the-Classification-and-Prioritization-of-Risks-due-to-Major-Accidents-in-Process-and-Related-Industries.
  31. Joldersma, C. and Geurts, J.L.A. (1998). Simulation/gaming for policy development and organizational change. Simulation & Gaming 29(4): 391–399. https://doi.org/10.1177/104687819802900402.
  32. Kapp, K.M. (2012). Games, gamification, and the quest for learner engagement. Talent Development 66: 64–68. https://www.td.org/magazines/td-magazine/games-gamification-and-the-quest-for-learner-engagement.
  33. Keeney, R.L. and von Winterfeldt, D. (1989). On the uses of expert judgment on complex technical problems. IEEE Transactions on Engineering Management 36(2): 83–86. https://doi.org/10.1109/17.18821.
  34. Koster, R. (2004). A Theory of Fun for Game Design. Paraglyph Press.
  35. Kuit, M., Mayer, I.S., and de Jong, M. (2005). The INFRASTRATEGO game: An evaluation of strategic behavior and regulatory regimes in a liberalizing electricity market. Simulation & Gaming 36(1): 58–74. https://doi.org/10.1177/1046878104272666.
  36. Lazzaro, N. (2004). Why We Play Games: Four Keys to More Emotion without Story. XEODesign, Inc.
  37. Lee, J. and Hammer, J. (2011). Gamification in education: What, how, why bother? Academic Exchange Quarterly 15(2): 1–5. https://www.semanticscholar.org/paper/Gamification-in-Education%3A-What%2C-How%2C-Why-Bother-Lee-Hammer/dac4c0074b6d0d86977313664a7da98e577a898a.
  38. Linstone, H.A. and Turoff, M. (eds.). (1975). The Delphi Method: Techniques and Applications. Addison Wesley Publishing Company.
  39. Locke, E.A. and Latham, G.P. (1990). A Theory of Goal Setting and Task Performance. Prentice-Hall, Inc.
  40. Luna-Reyes, L.F., Zhang, J., Ramón Gil-García, J. et al. (2005). Information systems development as emergent socio-technical change: A practice approach. European Journal of Information Systems 14(1): 93–105. https://doi.org/10.1057/palgrave.ejis.3000524.
  41. Mangalindan, J. (November 12, 2012). Play to win: The game-based economy. Fortune Tech. https://web.archive.org/web/20121112074424/http://tech.fortune.cnn.com/2010/09/03/the-game-based-economy.
  42. Mayer, I.S. (2009). The gaming of policy and the politics of gaming: A review. Simulation & Gaming 40(6): 825–862. https://doi.org/10.1177/1046878109346456.
  43. McGonigal, J. (2011). Reality is Broken: Why Games Make Us Better and How They Can Change the World (Reprint ed.). Penguin Books.
  44. Meyer, M.A. and Booker, J.M. (1987). Eliciting and Analyzing Expert Judgment: A Practical Guide. Society for Industrial and Applied Mathematics.
  45. Nasir, N.S. and Saxe, G.B. (2003). Ethnic and academic identities: A cultural practice perspective on emerging tensions and their management in the lives of minority students. Educational Researcher 32(5): 14–18. https://journals.sagepub.com/doi/10.3102/0013189X032005014.
  46. Peters, V., Vissers, G., and Heijne, G. (1998). The validity of games. Simulation & Gaming 29(1): 20–30. https://doi.org/10.1177/1046878198291003.
  47. Pope, D.C. (2003). Doing School: How We Are Creating a Generation of Stressed-Out, Materialistic, and Miseducated Students. Yale University Press.
  48. Raser, J.C. (1969). Simulations and Society: An Exploration of Scientific Gaming. Allyn & Bacon.
  49. Rhyne, T.-M., Doenges, P., Hibbard, B. et al. (2000). The impact of computer games on scientific and information visualization (panel session): “If you can’t beat them, join them.” In: Proceedings of the Conference on Visualization ’00, 519–521. IEEE.
  50. Roos, D., de Neufville, R., Moavenzadeh, F. et al. (2004). The design and development of next generation infrastructure systems. 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583), 5, 4662–4666. https://doi.org/10.1109/ICSMC.2004.1401267.
  51. Rosendale, D. (1989). Role-play as a data-generation method. Simulation & Games 20(4): 487–492. https://doi.org/10.1177/104687818902000410.
  52. Shiratori, R., Arai, K., and Kato, F. (2005). Gaming, Simulations and Society: Research Scope and Perspective. Springer. https://doi.org/10.1007/b138103.
  53. Sinanian, M. (April 12, 2010). The ultimate healthcare reform could be fun and games. VentureBeat. https://venturebeat.com/2010/04/12/healthcare-reform-social-games-gamification.
  54. Singuran, G.F. (2008). System Level Risk Analysis of New Merging and Spacing Protocols. Thesis. Delft University of Technology. https://repository.tudelft.nl/islandora/object/uuid%3Adae75f36-4fb6-4a53-8711-8aab42378878.
  55. Squire, K. (2006). From content to context: Videogames as designed experience. Educational Researcher 35(8): 19–29. https://doi.org/10.3102/0013189X035008019.
  56. Vasuthanasub, J. (2019). The Resilient City: A Platform for Informed Decision-making Process. Dissertation. Old Dominion University. https://digitalcommons.odu.edu/emse_etds/151.
  57. Walz, S.P. and Deterding, S. (2015). The Gameful World: Approaches, Issues, Applications. MIT Press.
  58. Warmerdam, J., Knepfle, M., Bidarra, R. et al. (2006). SimPort: A Multiplayer Management Game Framework (eds. Q. Mehdi, F. Mtenzi, B. Duggan et al.), 219–224. University of Wolverhampton School of Computing.
  59. Wenzler, I. (2008). The role of simulation games in transformational change. In: Planspiele für die Organisationsentwicklung (ed. W.C. Kriz), pp. 63–74. WVB.
  60. Wenzler, I., Kleinlugtenbelt, W.J., and Mayer, I. (2005). Deregulation of utility industries and roles of simulation. Simulation and Gaming 36(1): 30–44. https://doi.org/10.1177/1046878104273218.
  61. Wilson, K.A., Bedwell, W.L., Lazzara, E.H. et al. (2009). Relationships between game attributes and learning outcomes: Review and research proposals. Simulation & Gaming 40(2): 217–266. https://doi.org/10.1177/1046878108321866.