(Editor’s Note: The recent trade book Computer Games and Instruction brings together the leading-edge perspectives of over a dozen scientists in the area of videogames and learning, including a very insightful analysis, excerpted below, by Harvard’s Chris Dede. Please pay attention to his thoughts on scalability, and enjoy!)
The research overview provided by Tobias, Fletcher, and Dai (this volume) is very helpful in summarizing studies to date on various dimensions of educational games and simulations. The next challenge for the field is to move beyond isolated research in which each group of investigators uses an idiosyncratic set of definitions, conceptual frameworks, and methods. Instead, to make further progress, we as scholars should adopt common research strategies and models—not only to ensure a higher standard of rigor, but also to enable studies that complement each other in what they explore. As this book documents, we now know enough as a research community to undertake collective scholarship that subdivides the overall task of understanding the strengths and limits of games and simulations for teaching and learning. Further, through a continuously evolving research agenda we can identify for funders and other stakeholders an ongoing assessment of which types of studies are most likely to yield valuable insights, given the current state of knowledge.
Research agendas include both conceptual frameworks for classifying research and prescriptive statements about methodological rigor. (For an example of a research agenda outside of gaming and simulation – in online professional development – see Dede, Ketelhut, Whitehouse, Breit, & McCloskey, 2009.) In addition, research agendas rest on tacit assumptions that often go unstated but are better made explicit, as discussed below. In this chapter, to inform a research agenda for educational games and simulations, I offer thoughts about fundamental assumptions and a conceptual framework that includes prescriptive heuristics about quality. In doing so, my purpose is not to propose what the research agenda should be – that is a complex task best done by a group of people with complementary knowledge and perspectives – but to start a dialogue about what such an agenda might include and how it might best be formulated.
Fundamental Assumptions
My thoughts about a research agenda for educational games and simulations are based on five fundamental assumptions. I take the trouble to articulate these assumptions because the beliefs and values that underlie a research agenda often are the most important decisions made in its formulation. Moving beyond “stealth” assumptions about quality to explicit agreements and understandings is central to developing scholarship that does not incorporate many of the problems that beset typical educational research, such as irrelevance and faulty methods (Shavelson & Towne, 2002). My five assumptions posit that any research agenda should focus on usable knowledge; collective research; what works, when, for whom; more than a straightforward comparison of the innovation to standard practice; and a focus on innovations that can be implemented at scale. By “at scale,” I mean that innovators can adapt the products of research for effective usage across a wide range of contexts, many of which do not have an ideal, full set of conditions for successful implementation.
My first assumption is that any research agenda should focus on “usable knowledge”: insights gleaned from research that can be applied to inform practice and policy. I believe in defining research agendas in such a way that scholars not only build sophisticated theories and applied understandings, but also disseminate this knowledge in a manner that helps stakeholders access, interpret, and apply these insights. It is important to note that the process of creating and sharing usable knowledge is best accomplished by a community of researchers, practitioners, and policymakers, as opposed to scholars developing independent findings for other stakeholders to consume. As the chapters in this volume document, in the case of gaming and simulation for learning these stakeholders include K-12 teachers and administrators, higher education, business and industry, medicine and the health sciences, the military, and a vast unorganized group of people who desire better ways of informal learning. A community of researchers, practitioners, and policymakers may also better accomplish collective theory building than now occurs with fragmented creation and distribution of scholarly findings.
As Stokes describes in his book, Pasteur’s Quadrant (1997), usable knowledge begins with persistent problems in practice and policy, rather than with intellectual curiosity. (This is not to disparage purely basic research, but to indicate its limits in immediately producing usable knowledge.) In my experience, too often educational games and simulations are developed because they are “cool” or “fun”; they are solutions looking for problems (“build it and they will come”). If we are to gain the respect and collaboration of practitioners and policymakers, the majority of our research agenda must focus on how games and simulations can aid in resolving perennial educational problems and issues, giving policymakers and practitioners vital leverage in addressing troubling, widespread issues (Carlson & Wilmot, 2006). Stokes makes a compelling case that usable knowledge is a preeminently valuable form of research investment, and Lagemann (2002) makes the case that this strategy is very important for educational improvement.
My second assumption is that, even though individual studies of creative “outlier” approaches are important, collective research is vital for the further evolution of our field. Fully understanding a complex educational intervention that involves gaming and simulation and is effective across a wide range of contexts may require multiple studies along its various dimensions, each scholarly endeavor led by a group that specializes in the methods best suited to answering research questions along that dimension. Using such a distributed research strategy among collaborating investigators, funders could create portfolios in which various studies cover different portions of this sophisticated scholarly territory, with complementary research outcomes enabling full coverage and collective theory-building. Further, once the efficacy of an intervention is determined via exploratory research, a single large study with a complex treatment is of greater value for research than multiple small studies of individual simple interventions, none of which has the statistical power to determine the nuanced interaction effects described next. [For example, a researcher who wishes to detect a small difference between two independent sample means (e.g., treatment and control) at a significance level of 0.05 requires a sample size of 393 or more students in each group (Cohen, 1992).]
As an example of steps towards collective research on a single intervention in educational gaming and simulation, in their literature review Tobias et al. (this volume) document several studies by different investigators on the Space Fortress videogame. While a common conceptual framework did not necessarily guide these studies, presumably each set of investigators built to some extent on prior research. Beyond a common conceptual framework, developing shared meanings for terms is central to truly collective scholarship. For example, “transfer” is a term that has a variety of meanings. In their research review, Tobias et al. (this volume) group studies of transfer, but which of those studies used older definitions of this term and which used emerging formulations (Mestre, 2002; Schwartz, Sears, & Bransford, 2005)? Wikis can help teams of investigators evolve a common terminology and conceptual framework, as evidenced by the Pittsburgh Science of Learning Center’s wiki on theory development (http://www.learnlab.org/research/wiki/index.php/Main_Page).
My third assumption is that a research agenda should center on what works, when, for whom, going beyond whether or not some educational game or simulation “is effective” in some universal manner (Kozma, 1994; Means, 2006). Learning is a human activity quite diverse in its manifestations from person to person (Dede, 2008). Consider three activities in which all humans engage: sleeping, eating, and bonding. One can arrange these on a continuum from simple to complex, with sleeping towards the simple end of the continuum, eating in the middle, and bonding on the complex side of this scale. People sleep in roughly similar ways, but individuals like to eat different foods and often seek out a range of quite disparate cuisines. Bonding as a human activity is more complex still: People bond to pets, to sports teams, to individuals of the same gender and of the other gender; fostering bonding and understanding its nature are incredibly complicated activities. Educational research strongly suggests that individual learning is as diverse and as complex as bonding, or certainly as eating. Yet theories of learning and philosophies about how to use interactive media for education tend to treat learning like sleeping, as an activity relatively invariant across people, subject areas, and educational objectives. That is, behaviorists, cognitivists, constructivists, and those who espouse “situated learning” all argue that, well implemented, their approach to instruction works for all learners (Dede, 2008).
As a consequence, many educational designers and scholars seek the single best medium for learning, as if such a universal tool could exist. For example, a universal method for developing instruction is the goal of “instructional systems design” (Dick & Carey, 1996). Similar to every other form of educational technology, some see gaming and simulation as universally optimal, a “silver bullet” for education’s woes (Salen, 2008). As Larry Cuban documents in his book, Oversold and Underused (2001), in successive generations pundits have espoused as “magical” media the radio, the television, the computer, the Internet, and now laptops, gaming, blogging, and podcasting (to name just a few). The weakness in this position is the tacit assumption, pervasive in most discussions of educational technology research, that instructional media are “one size fits all” rather than enabling an ecology of pedagogies to empower the many different ways people learn.
No learning medium is a technology like fire, where one only has to stand near it to get a benefit from it. Knowledge does not intrinsically radiate from educational games and simulations, infusing students with learning as fires infuse their onlookers with heat. However, various theoretical perspectives (e.g., cognitive science, social constructivism, instructional systems design) can provide insights on how to configure these interactive media to aid various aspects of learning, such as visual representation, student engagement, and the collection of assessment data. Determining whether and how each instructional medium can best enhance some aspect of a particular pedagogy is as sensible instrumentally as developing a range of tools (e.g., screwdriver, hammer, saw, wrench) that aid a carpenter’s ability to construct artifacts.
Further, numerous studies document that no optimal pedagogy – or instructional medium – is effective across every subject matter (Shulman, 1986; Becher, 1987; Lampert, 2001). As one example of research on subject-specific pedagogy, David Garvin (2003) documents that the Harvard Law School, Business School, and Medical School have each strongly influenced how their particular profession is taught, by espousing and modeling sophisticated “case-method” instruction. Garvin’s findings show that what each of these fields means by case-method pedagogy is quite different and that those dissimilarities are shaped by the particular content and skills professionals in that type of practice must master. Thus, the nature of the content and skills to be learned shapes the type of instruction to use, just as the developmental level of the student influences what teaching methods will work well. No educational approach, including gaming and simulation, is universally effective; the best way to invest in learning technologies is a research agenda that includes the effects of the curriculum, the context, and students’ and teachers’ characteristics in determining which aspects of educational games and simulations work when, for whom, and under what conditions.
My fourth assumption is that, even though summative evaluations are important, the scholarly focus in the research agenda should expand well beyond the “is there a significant difference in outcome between this intervention and standard practice?” studies that comprise many of the publications in the Tobias et al review (this volume). A vast literature exists documenting the “no significant difference” outcomes characteristic of many such studies (Russell, 1999). Beyond flaws in research design and analytic methods, frequent reasons for lack of a significant treatment effect include an intervention too short in duration to expect a substantial impact or a sample so small that, for lack of statistical power, even a large effect size could not be detected. The use of measures inadequate to detect the significant differences that are occurring is another common problem; for example, paper-and-pencil item-based tests are flawed in their measurement of sophisticated thinking skills, such as scientific inquiry (Resnick & Resnick, 1992; Quellmalz & Haertel, 2004; National Research Council, 2006; Clarke & Dede, in press). Further, even when all these problems are overcome, often the population in the study is narrow, the teacher characteristics are optimal, or the context is unrepresentative; each of these generates major threats to generalizability.
In fact, many of these studies are summative evaluations masquerading as research. There is nothing wrong with developing an intervention and conducting a summative evaluation of its overall impact under typical conditions and with representative populations for its potential use. Design heuristics from evaluations of successful innovations are often useful (Connolly, Stansfield, & Hainey, 2009). Further, evaluating the efficacy of a treatment before conducting elaborate research studies of its relative effectiveness across multiple types of contexts is important in making wise allocations of resources. However, evaluation studies are a poor place to stop in research on an innovation and should be only a small part of a research agenda, not the preponderance of work, as they typically do not contribute much to theory and do not provide nuanced understandings of what works, when, for whom, and under what conditions.
My fifth assumption is that a research agenda for educational gaming and simulation should privilege studies of interventions that can be implemented at scale. Scale is not purely a matter of economic common sense, such as not spending large amounts of resources on students in each classroom having access to a game development company to build what they design, or simulations that involve high ratios of instructors to learners. Research has documented that in education, unlike other sectors of society, the scaling of successful instructional programs from a few settings to widespread use across a range of contexts is very difficult even for innovations that are economically and logistically practical (Dede, Honan, & Peters, 2005).
In fact, research findings typically show substantial influence of contextual variables (e.g., the teacher’s content preparation, students’ self-efficacy, prior academic achievement) in shaping the desirability, practicality, and effectiveness of educational interventions (Barab & Luehmann, 2003; Schneider & McDonald, 2007). Therefore, achieving scale in education requires designs that can flexibly adapt to effective use in a wide variety of contexts across a spectrum of learners and teachers. Clarke and Dede (2009) document the application of a five-dimensional framework for scaling up in the implementation of the River City multi-user virtual environment for middle school science.
This is not to argue that research agendas should not include studies of unscalable interventions – such research can aid with design and help evolve theory – but I believe that the bulk of a research agenda, to produce usable knowledge, should focus on innovations that can scale. As the research review by Tobias et al (this volume) documents, educational games and simulations in general offer desirable affordances for implementation at scale.
I offer these assumptions not as “truths,” but as propositions to be debated in the course of formulating a research agenda for educational gaming and simulation. Others may wish to modify assumptions, to add assumptions to this list, or even to argue that a research agenda should not make any assumptions about what constitutes quality. My point is that any attempt to develop a research agenda should make its underlying beliefs and values explicit, because these are central to determining its conceptual framework.
– Chris Dede is the Wirth Professor in Learning Technologies at Harvard Graduate School of Education. The excerpt above is part of his chapter Developing a Research Agenda for Educational Games and Simulations in the book Computer Games and Instruction, published in 2011 by Information Age Publishing.
–> To Learn More and Order Book via publisher (offers discounts): click on Computer Games and Instruction
–> To Learn More and Order Book via Amazon.com: click on Computer Games and Instruction
Book Description: There is intense interest in computer games. A total of 65 percent of all American households play computer games, and sales of such games increased 22.9 percent last year. The average amount of game playing time was found to be 13.2 hours per week. The popularity and market success of games is evident both from the increased earnings from games, over $7 billion in 2005, and from the fact that over 200 academic institutions worldwide now offer game-related programs of study.
In view of the intense interest in computer games, educators and trainers in business, industry, government, and the military would like to use computer games to improve the delivery of instruction. Computer Games and Instruction is intended for these educators and trainers. It reviews the research evidence supporting the use of computer games for instruction, and also reviews the history of games in general, in education, and by the military. In addition, chapters examine gender differences in game use, and the implications of games for use by lower socio-economic students, for students’ reading, and for contemporary theories of instruction. Finally, well-known scholars of games respond to the evidence reviewed.
TABLE OF CONTENTS