Section 1: Problem
In Erhel and Jamet’s article, “Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness,” the authors introduce a variety of concepts related to Digital Game-Based Learning (DGBL). However, these concepts are presented in a way that lacks clear cohesion, resulting in a high cognitive demand on the reader to discern the central DGBL-related issue the study seeks to address. The introduction references a wide body of literature spanning numerous facets of DGBL, suggesting multiple possibilities for exploration but failing to narrow the study’s focus. For example, Section 1.2 examines the conditions under which DGBL affects motivation, summarizing research on intrinsic motivation as well as the motivational frameworks of mastery and performance goals. Notably, these constructs are only weakly linked to DGBL within the context of the article, if at all.
In Section 1.3, the authors appear to intend to establish the benefits of digital learning games, yet they do so only by contrasting them with traditional instructional media, rather than comparing them to other contemporary learning methods or specifically considering the intrinsic value of DGBL. Furthermore, the inclusion of contradictory findings from prior studies, whose conclusions the authors never refute, undermines the foundation of the research and raises concerns about the strength and clarity of the study’s rationale. This lack of synthesis leaves the research standing on uncertain ground, as the interplay of multiple unexamined variables complicates the direction and purpose of the investigation.
Section 1.4 introduces another dimension under the term “value-added,” but again misses an opportunity to integrate previously discussed motivational constructs with the cognitive processes at play in DGBL and the impact of instructional strategies on both knowledge acquisition and learner motivation. Rather than weaving together these concepts into a comprehensive research framework, the authors focus on comparing general and specific instructions and briefly touch on the effects of framing learning activities as either study-based or entertainment-based.
As a result, the review of related literature remains fragmented, and the connections between motivation, instructional design, cognitive engagement, and knowledge transfer within DGBL are left insufficiently expressed. Ultimately, each topic is considered in relative isolation, without the authors providing a clear synthesis of how these elements collectively inform the study’s research objective.
Despite the authors’ lack of persuasive and informed reasoning and the organizational and conceptual challenges present in their introduction and literature review, the central question of how instructional design influences outcomes in DGBL is both significant and worthy of systematic investigation. Understanding whether DGBL can promote deep learning and foster sustained motivation is a topic of considerable educational importance. To advance the field, these issues should be explored individually and in depth, with an explicit focus on their direct relevance to DGBL. Such targeted inquiry could yield valuable insights for establishing best practices in the use and design of educational technology. Furthermore, systematically examining the role of autonomy, student choice, engagement, and self-efficacy within DGBL using well-established motivational theories has the potential to inform effective game design and maximize the transfer of knowledge through digital gaming environments.
Current research demonstrates that it is entirely possible to systematically study and assess the effectiveness of instructions in DGBL by leveraging established theoretical models, conceptual frameworks, and design principles. These approaches enable researchers and practitioners to evaluate instructional quality and its impact on learning outcomes and motivation in DGBL.
For example, All, Nuñez Castellar, and Van Looy (2015) developed a comprehensive framework grounded in social cognitive theory that allows for systematic assessment of DGBL effectiveness. Their model identifies clear outcome categories of learning, motivation, and efficiency, each with measurable subcomponents such as knowledge transfer, enjoyment, increased motivation, time management, and cost-efficiency. This structure would better facilitate the evaluation of the impact of instructional strategies within DGBL opportunities.
Coleman and Money (2020) further address design principles and student-centered approaches, such as fostering autonomy, active learning, and deep understanding, which can be systematically assessed in DGBL. These theories are connected to established game design frameworks like Gee’s principles (2003) and highlight the ways structured instructional elements contribute to learner empowerment, problem-solving, and comprehension.
Additionally, studies like Woo’s (2014), which incorporate motivational models such as ARCS (attention, relevance, confidence, and satisfaction) and multimedia principles such as Mayer’s, provide evidence that instruction quality within DGBL can be measured. The use of ARCS strategies in DGBL has been shown to positively influence learner motivation, cognitive load, and ultimately learning performance, confirming that instructional effectiveness is both a measurable and improvable aspect of DGBL.
Section 2: Theoretical Perspective and Literature Review
Although the paper includes a literature review that references motivational and multimedia learning research, it does not offer a coherent, fully developed conceptual framework linking these ideas. Additionally, there is no classification of the following:
- Types of games or their mechanics (e.g., simulation, role-play, puzzle, strategy)
- Tenets of student-centered learning (Lea et al., 2003), such as autonomy, reflection, and collaboration
- Active learning techniques (Bishop & Verleger, 2013), such as peer interaction, feedback loops, or experiential reflection
By omitting these elements, the authors’ framework fails to connect DGBL to student self-efficacy and deeper learning. This gap makes it unclear how DGBL supports the active and constructivist learning principles that are foundational to this medium, grounded in the work of Vygotsky (1978) and Bruner (1966), which holds that learning occurs through active engagement, social interaction, and meaning-making. This exclusion weakens the authors’ theoretical perspective and its relevance to educational technology and pedagogy.
Erhel and Jamet’s (2013) theoretical foundation centers mainly on cognitive and motivational perspectives, specifically multimedia learning theory (Mayer & Moreno, 2003). They also poorly integrate motivational constructs such as learner autonomy and mastery and performance goal orientations, which are central to understanding learner persistence and engagement. While these frameworks help examine the impact of learning and motivation in a digital environment, their application in this study is narrow.
The authors also fail to establish their research within a strong pedagogical or design-based framework for DGBL and do not draw any connections to Gee’s (2003) 13 principles based on learner empowerment, problem-solving, and understanding, which are foundational to understanding how DGBL engages players and creates meaningful learning experiences and learner agency. Without aligning the study to DGBL-specific principles, the research reads like a media-comparison study that positions DGBL as a content delivery modality rather than research that explores DGBL as an interactive, dynamic learning tool that fosters self-efficacy and problem-solving.
Erhel and Jamet’s (2013) study provides a loose link to existing theory and prior research that is mainly descriptive. While the authors reference several studies on feedback and motivation, they do not demonstrate how they relate to or inform their study. The literature review provides a sequence of summaries rather than analyzing and synthesizing learning and engagement concepts and theories, and it offers no thematic evaluation or visual illustration that connects findings across studies.
Another shortcoming is that the study fails to adequately present the multi-dimensional benefits of DGBL or to connect its motivational value to foundational psychological concepts such as Maslow’s hierarchy of needs (1943), Skinner’s reinforcement theory (1953), or Ryan and Deci’s (2000) self-determination theory. Tying the work to these frameworks could have grounded it in constructs like autonomy, competence, and relatedness and illustrated how digital games with effective instructions influence learning outcomes and intrinsic motivation. Instead, motivation is treated superficially as enjoyment or interest, with little connection to relevant educational and motivational principles and models of learning, academic achievement, and human development.
Similarly, the authors introduce Csikszentmihalyi’s flow theory (1990) as an explanatory concept but fail to define or integrate it meaningfully with other educational frameworks or pedagogical theories. Flow is presented as a desirable psychological state rather than as part of an interrelated system of cognitive and motivational processes. This superficial mention underscores a broader issue of this study, which is the absence of a thematic analysis or deep evaluation of relevant research. The literature review neither organizes findings by theory nor critically analyzes methods or concepts, resulting in a chaotic and confusing theoretical foundation.
This literature review is broad but unfocused and theoretically inconsistent. While it references multiple studies on feedback, motivation, and multimedia learning, it does not integrate these findings into a coherent conceptual structure. The review presents contradictory studies without analyzing or reconciling their differences, leaving readers uncertain about how this study builds upon or contradicts previous findings. It also states both that “no one has so far subjected the (DGBL) games’ instructions to scientific scrutiny” (p. 157) and that “the effects of instruction type on the cognitive processes engaged in text reading have been the subject of extensive research” (p. 158), a contradiction that undermines the authors’ arguments and the value of the study.
Furthermore, the literature review ineffectively presents the benefits and effectiveness of DGBL, offering only a surface-level comparison between DGBL and traditional instructional guidance methods. It does not differentiate between types of digital games or evaluate studies based on their purpose, design, or gaming principles employed. Without this differentiation, the review treats all DGBL as a single category, overlooking meaningful differences between simulation-based learning, role-playing environments, problem-solving games, and serious educational games.
The review also lacks sufficient discussion of instructional design within DGBL, even though instructional design is the very variable this study investigates. There is inadequate coverage of literature addressing the relationship between instructional scaffolding, feedback, and their effects on learner performance and engagement. This omission weakens the rationale for the study’s focus and fails to justify the experimental parameters of the research design.
Erhel and Jamet (2013) do not provide organizational tables or appendices, a summary paragraph of their literature review that identifies overarching themes, or a synthesis of trends that lead to the study’s hypotheses. Overall, it provides a minimally connected list of ideas and references rather than a structured argument. Consequently, the reader must infer how the existing research justifies the design focus on the impact of two specific types of game instruction on intrinsic motivation and learning outcomes. Additionally, without a summary or conclusion section, the transition from the included literature to the authors’ hypothesis is disorganized and conceptually weak. These factors affect the transferability of the study because the review does not clarify how the literature connects to game type, instructional design, or learner outcomes. Therefore, readers cannot determine how the findings might generalize across different learning contexts, gaming environments, or learner demographics.
The research question, which centers on how two different instructional guidance constructs affect learning and motivation, is reasonably clear, but it does not explicitly connect to broader engagement constructs such as autonomy, competence, or relatedness (Ryan & Deci, 2000), nor to DGBL-specific structures. The authors’ question defines what is being tested, but not why the variable matters to the underlying pedagogy and design of games, which population of players it applies to, or what type of learning and educational content it addresses. Additionally, contrasting only two types of instructional frameworks is helpful, but it does not necessarily provide a best-practice model for DGBL development, though it could help refine the options into narrower categories.
Section 3: Research Design and Analysis
Erhel and Jamet (2013) designed two experiments to study how instructions and feedback affect motivation and learning in digital game-based learning (DGBL), utilizing a minimally defined value-added approach to investigate whether learning-oriented instructions improve learning outcomes and increase motivation compared to entertainment-oriented instructions. While the overall goal of testing the role of two types of instructions and feedback is relevant and their design looks solid at first glance, the logic behind it is not fully presented. It also includes contradictory, poorly explained ideas about the effectiveness of DGBL in different learning contexts and does not consider the complexity of DGBL as an interactive learning medium.
One obvious flaw is that the authors claim to study both learning and motivation, but the study’s motivational data rely on enjoyment and interest ratings that do not represent the full range of motivational factors described in self-determination theory (Ryan & Deci, 2000). Another weakness is that the authors contrast educational and entertainment game instructions but do not connect them to established instructional models, such as Mayer and Moreno’s (2003) multimedia learning theory or Gagné’s (1985) nine events of instruction. Furthermore, because the study focuses on only two possible types of instructions, the experiment does not provide a broad comparison of instruction styles that potentially influence learners’ understanding and motivation.
The study also falls short because it fails to address how instructions affect different levels or types of learning, such as deep, rote, or surface learning, which were mentioned in the literature review. It also does not consider differences among learners, like prior gaming experience or learning preferences. This lack of attention to learning and learner diversity makes it difficult to generalize the findings and weakens the study’s conclusions. As a result, this study does not provide strong evidence about why or how instructional design improves learning outcomes or increases motivation.
In Erhel and Jamet’s (2013) study, university students were recruited to play the digital learning game under different instructional conditions. They were not drawn from a randomized pool or stratified population. Therefore, according to UConn’s (n.d.) definitions, this would be a convenience sample with random assignment after recruitment to experimental conditions.
Despite the adequacy of the sampling methods used by the authors, a few minor sampling and participant issues present themselves in this study. First, the authors state that men and women were equally represented among participants; however, the numbers provided in the Methods section for each group do not add up, suggesting an error. For example, there were nine male participants in both the study group and the control group, which equals 18, yet the article cites a total of 22 male participants. Additionally, the mean age of participants in both groups falls toward the lower end of the stated age range, which may also limit the generalizability of the findings to broader age groups. The authors also do not provide a convincing rationale for excluding participants with prior knowledge of the content area or those who scored more than 3 out of 6 on the pretest in phase 1 of the first experiment. Excluding participants on this basis could compromise the findings, as participants with less prior knowledge may not find the DGBL content as relevant as those with more, which would affect both motivation and outcomes. Because screening decisions directly affect how instruction and feedback influence motivation and learning, the lack of an adequate explanation for the exclusion of volunteers raises questions about the study’s transparency. One positive aspect of the design is that the screening questions were developed in collaboration with a physician, adding validity to the scientific nature of the game content; however, this single strength does not offset the contradictions and missing explanations that weaken the study’s overall validity.
Despite the seemingly replicable nature of this study, several contradictions appear in the presentation of its methods and procedures. In Experiment 1, the Materials section states that directions were delivered orally to both groups by a pedagogical agent, although one group was told the game was educational while the other was told it was for entertainment. This suggests that the only variable manipulated was the framing of those instructions. Delivering directions exclusively through one modality, however, limits the study’s transferability because different delivery modes (e.g., auditory versus visual) may influence comprehension, attention, and motivation differently. The Procedures section, by contrast, indicates that participants read the instructions, creating confusion about whether directions were given orally or in writing. This inconsistency undermines the clarity of the research design and raises doubts about what participants actually experienced during the simulation.
Another inconsistency emerges in the description of the ASTRA simulation procedures. The authors state that “at no point in this second phase were participants told that their knowledge and motivation would be assessed” (p. 159). Because the framing manipulation depended entirely on what participants were told, this statement makes it unclear how instructions were differentiated between the education and entertainment groups, as well as how these orientations were expected to affect learning and motivation. To more effectively examine how instructional framing influences outcomes, a stronger design would have included three groups receiving instructions through the same modality: one with neutral framing, one with learning-oriented framing, and one with entertainment-oriented framing. Such a structure would enhance internal validity by isolating the impact of framing while controlling for delivery mode.
The second experiment also lacks procedural clarity. While it sought to determine how feedback influenced learning strategies under entertainment- and learning-oriented conditions, the authors provide insufficient detail about the methodology and implementation. Specifically, the absence of information about the screening process leaves questions about sampling, participant equivalence, and the overall internal validity of the experiment. Greater procedural transparency would be necessary to fully interpret the findings and assess their generalizability to other DGBL contexts.
A further methodological weakness lies in the knowledge-transfer assessment following the ASTRA simulation. Participants answered eight questions—four requiring paraphrasing of simulation content and four requiring inferential reasoning. While these measures could gauge how digital game-based learning affects learning, the limited number of items constrains the strength and generalizability of the data. Moreover, in the Results section, the authors classify scores as “recall” and “knowledge” rather than “paraphrasing” and “inferring,” creating inconsistency in terminology that could confuse readers and obscure what the experiment was actually measuring.
Although quiz score variances were equal across groups in the recall section, participants who received learning-oriented instructions outperformed others in the knowledge category. However, the study also reports that instruction type did not significantly influence motivation, suggesting either a weak manipulation or the use of measures that lacked sensitivity to detect meaningful motivational differences. Together, these factors raise concerns about both the reliability of the data collection process and the validity of the conclusions drawn regarding instructional framing and motivation.
Section 4: Interpretation and Implications of Results
Erhel and Jamet (2013) acknowledge that the ASTRA simulation used in their study involved low interactivity since participants had limited opportunities to engage with the material beyond selecting responses or answering quiz questions. The authors recognize that this lack of interactivity may have hindered learner engagement and prevented deeper cognitive processing or motivational effects that are typical in more immersive game environments. While they identify this as a design limitation, their discussion of its implications is brief.
The authors also report that participants achieved very high scores on the recall test, resulting in a “ceiling effect” that limited the usefulness of feedback and reduced the instrument’s precision in detecting differences between groups. Because learners made few errors, feedback that was intended to influence learning transfer had only a minimal effect on assessment outcomes. Erhel and Jamet (2013) acknowledge this issue and explain its impact on their findings, linking it directly to the weak effect of feedback on learning performance. However, their discussion of the issue remains shallow. They do not consider redesigning the quiz to better distinguish different levels of knowledge acquisition, nor do they adequately explain the influence of ceiling effects. As a result, the limitation is identified but insufficiently explored.
A third limitation noted by Erhel and Jamet (2013) is that their study relied entirely on offline data, such as post-tests and questionnaires, without collecting process-oriented or behavioral data from the digital environment. The authors suggest that including log files or activity traces could have offered deeper insights into participants’ engagement and the time they spent processing feedback. The authors correctly recognize that this affected their ability to interpret how instruction type and feedback affected learning. Their explanation is transparent and theoretically grounded, but, again, it lacks detail and proposed solutions.
The authors report that, for the recall quiz scores, there was no significant difference between the learning-instruction and entertainment-instruction groups. For the knowledge questionnaire, paraphrase-type questions showed no significant instruction effect, but inference-type questions did, with the learning-instruction group significantly outperforming the entertainment-instruction group. Regarding motivational outcomes, there were no significant differences between groups. The authors acknowledge that, contrary to their expectations, instruction type did not influence memorization (paraphrase-type questions) and did not affect motivation. They note that the only significant effect of instruction type was on inference questions, favoring the learning-oriented instructions, and they interpret this to mean that learning-oriented instruction likely promotes deeper processing. However, because the researchers manipulated only instruction type in this experiment, their motivational findings could be strengthened by additional manipulations or more sensitive measures. Additionally, in the discussion, the authors do not explore possible reasons why instruction type failed to affect motivation or paraphrase scores, other than acknowledging that their manipulation may have been too weak.
In Section 3.2, the authors again report that there was no significant difference between the learning-instruction and entertainment-instruction groups in recall quiz scores. For the knowledge questionnaire, paraphrase-type questions again showed no difference; however, on the inference-type questions, the entertainment-instruction group significantly outperformed the learning-instruction group. Regarding motivational outcomes, there were no significant differences except that performance-avoidance goal scores were significantly higher in the learning-instruction group. The authors interpret these findings to mean that the combination of entertainment-oriented instructions with knowledge-of-correct-response (KCR) feedback in Experiment 2 produced better inference outcomes than learning-oriented instructions. They also note that entertainment instructions appear to generate less fear of failure than learning instructions, though they admit a need for further study.
The experiment’s explanation lacks examples, operational definitions, and quantitative detail. The authors mention constructs such as motivation, achievement goals, intrinsic motivation, and deep versus surface learning, but they do not define these terms, measure them comprehensively, or discuss how they relate to one another. Much of the content presented in this section, such as references to cognitive processes and goal orientations, would have been more appropriately placed in the literature review, where it could have provided conceptual grounding for the study.
The authors also contradict themselves by stating that there has been “extensive research on the effects of instruction type on cognitive processes,” while their literature review provided little or no coverage of such research. In this section, they introduce several new theoretical ideas, including flow theory, motivation, achievement goals, and deep learning; however, they discuss them only in vague terms, using phrases like “significantly affected” without reporting data, definitions, or statistical specifics and without clearly connecting them to the research questions or hypotheses.
Additionally, the authors fail to connect instruction type (educational vs. entertainment framing) with performance outcomes or to explain how these instruction types might influence cognitive processes such as attention, information selection, organization, or integration (Mayer, 2014). This missing connection is critical, as the framing of a game as “educational” or “entertaining” would likely influence intrinsic motivation and cognitive investment differently. Without this theoretical link, readers are left without an understanding of why one type of instruction might promote deeper learning or higher motivation than the other.
Moreover, the authors provide very little interpretation or discussion of the significance of their findings. The results of the first experiment are summarized without clear explanation of what the findings mean in relation to prior research, established theories, or DGBL instructional design. The study also fails to clarify how or why the “educational” versus “entertainment” framing led to different learning or motivational outcomes. Overall, the authors’ conclusions are underdeveloped, and the limited interpretation weakens the study’s contribution to both motivational and instructional theory.
Erhel and Jamet’s (2013) study on digital game-based learning has direct relevance to my work developing sex and relationship education (SRE) for autistic youth. Their findings show how the way learning is framed, as either educational or entertaining, and the type of feedback provided can significantly shape both motivation and learning outcomes. For my research, this reinforces the importance of designing digital learning environments that balance clarity and structure with curiosity and play. For autistic learners, that balance can make sensitive topics like relationships, consent, and emotional boundaries feel both safe and engaging.
The study also connects with the motivational and instructional theories guiding my work. Keller’s ARCS model and Deci and Ryan’s Cognitive Evaluation Theory emphasize building motivation through relevance, confidence, and autonomy. All of these can be strengthened through adaptive, supportive feedback rather than corrective or evaluative feedback. Gee’s (2003) principles of digital game-based learning highlight how interactive, low-stakes challenges can promote deeper reflection, while Hrastinski’s (2009) theory of online participation reminds me that engagement can also look like thoughtful observation or reflection, not just outward activity. This idea is especially relevant for autistic students who may prefer asynchronous or less socially demanding forms of participation and assessment.
For future curriculum design and research, this study opens the door to exploring how different instructional framings influence learner comfort and curiosity. Overall, Erhel and Jamet’s work reminds me that how we present and respond to learning experiences matters just as much as what we teach. This is especially important when designing inclusive, affirming education for neurodiverse learners.
References
All, A., Nuñez Castellar, E. P., & Van Looy, J. (2015). Towards a conceptual framework for assessing the effectiveness of digital game-based learning. Computers & Education, 88, 29–37. https://doi.org/10.1016/j.compedu.2015.04.012
Bishop, J. L., & Verleger, M. A. (2013). The flipped classroom: A survey of the research. In Proceedings of the ASEE National Conference on Engineering Education (pp. 1–18). ASEE.
Bruner, J. S. (1966). Toward a theory of instruction. Harvard University Press.
Coleman, T. E., & Money, A. G. (2020). Student-centred digital game-based learning: A conceptual framework and survey of the state of the art. Higher Education, 79, 415–457. https://doi.org/10.1007/s10734-019-00417-0
Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. Harper & Row.
Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156–167. https://doi.org/10.1016/j.compedu.2013.02.019
Gagné, R. M. (1985). The conditions of learning and theory of instruction (4th ed.). Holt, Rinehart and Winston.
Gee, J. P. (2003). What video games have to teach us about learning and literacy. Palgrave Macmillan.
Jonassen, D. H. (1999). Designing constructivist learning environments. In C. M. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory (Vol. II, pp. 215–239). Lawrence Erlbaum Associates.
Lea, S. J., Stephenson, D., & Tennant, M. (2003). Higher education students’ attitudes to student-centred learning: Beyond “educational bulimia”? Studies in Higher Education, 28(3), 321–334. https://doi.org/10.1080/03075070309293
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43–52. https://doi.org/10.1207/S15326985EP3801_6
Piaget, J. (1970). Science of education and the psychology of the child. Orion Press.
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68
Skinner, B. F. (1953). Science and human behavior. Macmillan.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
Woo, J.-C. (2014). Digital game-based learning supports student motivation, cognitive success, and performance outcomes. Educational Technology & Society, 17(3), 291–307.