Frontline Learning Research Vol.8 No. 3 Special Issue (2020) 1 - 9
ISSN 2295-3159

The Promise and Pitfalls of Self-report: Development, research design and analysis issues, and multiple methods.

Luke K. Fryera, Daniel L. Dinsmoreb

aThe University of Hong Kong, Hong Kong
b University of North Florida, USA

Abstract

As a prelude to this special issue on the promise and pitfalls of self-report, this article addresses three issues critical to its current and future use. The development of self-report is framed in Vertical (improvement) and Horizontal (diversification) terms, making clear the role of both paths for continued innovation. The ongoing centrality of research design and analysis in ensuring that self-reported data is employed effectively is reviewed. Finally, the synergistic use of multiple methods is discussed. This article concludes with an overview of the SI's contributions and a summary of the SI's answers to its three central questions: a) In what ways do self-report instruments reflect the conceptualizations of the constructs suggested in theory related to motivation or strategy use? b) How does the use of self-report constrain the analytical choices made with that self-report data? c) How do the interpretations of self-report data influence interpretations of study findings?

Keywords: self-report, multiple methods, vertical and horizontal development, research design and analyses

Corresponding email: fryer@hku.hk Doi: https://doi.org/10.14786/flr.v8i3.623

1. This SI’s Mission

While self-report measures are ubiquitous across and often central to educational research, they are also often denigrated for a range of reasons. For instance, the reliability of the measures and the validity of the resultant score interpretations are often called into question (e.g., Veenman et al., 2006). This has led, in some corners, to calls for moratoria on the use of self-report. However, rather than discarding or ignoring data generated from self-report measures of cognitive processing, motivation, emotions and beliefs, research is needed to determine when and if self-report measures can contribute to our collective understanding of theory surrounding these constructs. For example, relying solely on self-report to study regulatory processes has contributed little to our understanding of self-regulated learning (Dinsmore et al., 2008); however, in other instances self-report may be the only viable means of unearthing covert constructs, such as self-efficacy (e.g., Zimmerman, 2000).

This special issue examines the accuracy of interpretations and conclusions drawn from self-reports regarding individuals’ metacognitive and cognitive processing, affect and beliefs, as well as the analytic choices made with such data. These questions are addressed by an international group of experts examining these constructs from different theoretical and analytical perspectives. The current Special Issue as a whole brings into focus three issues that are often noted but rarely specifically discussed: Vertical versus Horizontal development of measurement approaches, the critical role of research design and analyses, and the complex role of utilizing multiple measurement methods.

2. Vertical and Horizontal Innovation: Both are critical

The first topic to be addressed in this Special Issue is how self-report approaches are advancing both Vertically (i.e., improving current methods) and Horizontally (i.e., developing new methods). As an analogy, a vibrant city in the late 19th and early 20th century bustled with horses, horse-drawn trams, and cars coexisting along the city thoroughfares. Figuring out the best way to get across the city depended on many factors, such as a person’s wealth or the time at which they were trying to make their transit. Similarly, the research literature is replete with different vehicles to transport oneself from Point A to Point B – namely, how to measure the processes described in this special issue to better understand theory and, ultimately, student learning. As with the city, the best mode for measuring these constructs depends on many factors. However, unlike the advances in transportation technology, the advances in self-report methods, and the pressure to modernize them, have been weak at best.

This advancement of methods (or lack thereof) for measuring latent constructs critical to educational research (self-report included) can be framed by Horizontal versus Vertical conceptions of development. This is a well-established framework for understanding growth (e.g., economic innovation; Bondarev & Greiner, 2019) and change (e.g., natural selection; Lawrence, 2005) in a broad array of fields. Horizontal growth refers to innovating towards entirely new approaches, while Vertical growth refers to refining and enhancing current methods. This framework fits the current era, as educational research is flooded with new (Horizontal) means of measuring students’ cognitive processing (e.g., eye tracking; Chauliac et al., 2020) and metacognitive processing (e.g., trace data; Rogiers et al., 2020). This Horizontal drive for innovation in measurement continues to push into complex areas such as emotion (facial recognition: Chiu et al., 2019; Dingle et al., 2016; skin conductance: Järvenoja et al., 2018; Lehikoinen et al., 2019) and motivation (neuroscience: Hidi, 2016; Mayer, 2017). The considerable momentum behind this drive for alternatives to self-report arises in large part from a longstanding dissatisfaction with self-report’s intra-psychic nature and the general lack of Vertical development in these measures.

Given the attention that the Horizontal development of these measures has garnered and the comparative lack of Vertical development, this special issue addresses the many areas of Vertical development that are possible. In other words, this special issue forges new inroads towards further development of self-report measurement across a range of processes. Not only do these empirical pieces suggest that Vertical development is possible, each starts to take us down this path and provides evidence that the journey is likely to be fruitful. From the empirical multimethod studies (e.g., van Halem et al., 2020; Rogiers et al., 2020) to the theoretically rich commentaries (Pekrun, 2020; Van Meter, 2020; Winne, 2020), this Special Issue suggests that self-report measures are a unique, valuable – and therefore irreplaceable – source of information about many critical aspects of the learning processes under study here. Clearly, the conclusions drawn from these analyses show that self-report remains critical to our understanding of learning and that educational researchers need to push harder for constant Vertical innovation such as this. It is safe to say, however, that many researchers have not felt this obligation. As noted in Fryer and Nakao’s (2020) contribution, the primary self-report instrument for the majority of educational research is a tool invented in the 1920s. On paper or smartphones, a Likert scale is a Likert scale. We can and should be striving to improve the tools of our trade, as is already being done widely across the technology industry (e.g., Google; Lawless & Riel, 2020).

Vertical advances in self-report can take many forms, a few of which are presented in this Special Issue. Addressing perceived weaknesses in the format by either adding to it (Durik & Jenkins, 2020) or changing it (Fryer & Nakao, 2020) are both rungs on the ladder of Vertical innovation. Addressing how self-report tools are used and how their results are analysed (Moeller et al., 2020) is another means of climbing further up those rungs. A scan of leading journals suggests that the latter approach to Vertical innovation is expanding (e.g., Gillet et al., 2019; Yuen et al., 2019), while the former – innovation in the format itself – remains almost unknown. Both are necessary if measurement of constructs critical to education (self-report or otherwise) is to continue to improve.

3. Not compounding self-report error: It is just common sense

The limitations sections of educational research articles are replete with apologies. Three apologies, in our experience, vie for most prevalent: a) the self-reported nature of the data, b) the cross-sectional nature of the research design and, hinging on the first two, c) the rigor of the analyses. It is time researchers stopped apologising for the first, when it is necessary and useful, and did something about the latter two. The weaknesses of self-report have been well known for some time. As the present Special Issue has confirmed, however, self-report also has its unique strengths, which are often left unmentioned in critiques. Self-report is an important part of research in many areas (e.g., motivation), but it should not be the only measurement tool. Supplementing self-report with other observed measures has the potential to improve our understanding of the complex interrelations between the variables described in this Special Issue and learning. This balanced approach to measurement should bring an end to self-report’s inclusion in limitations sections. The unresolved issue for the field is that many researchers continue to compound the weaknesses of self-report with cross-sectional designs and inappropriate analyses. These are at least partially linked, as authors, struggling to get published, are desperate to use popular analytical methods. The best example of this misuse is structural equation modelling (SEM) with cross-sectional data sets. The often small sample sizes utilised mean that researchers are forced to use mean-based SEM rather than latent-based SEM (for an introductory overview of each and their differences, see Kline, 2011). This pairing of common analytical missteps compounds the inherent weaknesses of self-report: like building a tall, narrow building in an earthquake-prone area.
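To make this distinction concrete, here is a minimal sketch in classical test theory terms (our own illustration, not a formula drawn from any contribution in this issue). Each item on a multi-item scale mixes the latent construct with item-specific error; a mean-based composite carries that error into any structural estimate, whereas a latent-based model separates it out.

```latex
% Measurement model for item i of a k-item scale measuring latent construct \eta
x_i = \lambda_i \eta + \varepsilon_i, \qquad i = 1, \dots, k
% A mean-based composite folds the item errors into the observed score
\bar{x} = \frac{1}{k}\sum_{i=1}^{k} x_i = \bar{\lambda}\,\eta + \bar{\varepsilon}
% Consequence (Spearman's correction for attenuation): correlations between two
% such composites shrink with the scales' reliabilities \rho_{xx} and \rho_{yy}
r_{\bar{x}\bar{y}} = r_{\eta_x \eta_y}\,\sqrt{\rho_{xx}\,\rho_{yy}}
```

Latent-based SEM estimates the latent correlation directly by modelling the error terms; mean-based approaches estimate the attenuated composite correlation, which is one reason subtle but meaningful effects can disappear.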

These two research design issues amplify the self-report concerns cited previously in separate but connected ways. First, educational research generally seeks to explain learning or learning-related processes; processes which are by their very nature developmental and require longitudinal examination of that development. Research using snapshots of the learning experience can make meaningful contributions to educational research, but only if the limitations of these static designs are taken seriously and appropriate analyses employed. Ginns et al. (2018) is an example of exactly this kind of theoretically robust, carefully structured cross-sectional research. Second, self-report within educational research is generally employed to measure latent constructs. It therefore makes sense to employ analyses that treat self-report data as representing latent constructs. This seems especially pertinent to survey self-report, which generally measures constructs with multiple items. For fine-grained analysis of subtle aspects of the learning process, or of interventions with nudge-like effects (small but meaningful over time), the error introduced by averaging across multiple survey items is a serious and too often ignored issue. Novice readers, and those skimming through articles, are prone to conflate mean-based and latent-based SEM. Path analysis, in addition to its inherent mistreatment of latent variables, often prevents fully forward analyses due to a lack of degrees of freedom, resulting in researchers picking and choosing which regression paths to model. This additional author-induced source of error is akin to the file-drawer bias (i.e., you do not get the whole picture). Readers of articles that utilize mean-based SEM are too often presented with a cropped picture, showing only the connections which support the researcher’s aims. What is commonly referred to as path analysis is just one example of how self-reported data can be mishandled in ways that exacerbate its inherent weaknesses.
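The cost of averaging across noisy items can also be shown with a short simulation. The sketch below is purely illustrative (hypothetical loadings and a made-up latent correlation, written in Python for convenience); it is not an analysis of any data set in this issue.

```python
# Minimal simulation: averaging noisy survey items attenuates the estimated
# relation between two constructs relative to their true latent correlation.
# All values are hypothetical illustrations, not data from this Special Issue.
import numpy as np

rng = np.random.default_rng(0)
n, k, true_r = 2000, 5, 0.50          # respondents, items per scale, latent correlation

# Correlated latent scores for two constructs (e.g., "interest" and "strategy use")
cov = np.array([[1.0, true_r], [true_r, 1.0]])
latent = rng.multivariate_normal([0.0, 0.0], cov, size=n)

def noisy_items(latent_scores: np.ndarray, loading: float = 0.7) -> np.ndarray:
    """Generate k items per person: loading * latent score + unique item error."""
    error_sd = np.sqrt(1 - loading ** 2)
    return loading * latent_scores[:, None] + error_sd * rng.standard_normal((n, k))

x_mean = noisy_items(latent[:, 0]).mean(axis=1)   # mean-based composite, construct X
y_mean = noisy_items(latent[:, 1]).mean(axis=1)   # mean-based composite, construct Y

observed_r = np.corrcoef(x_mean, y_mean)[0, 1]
print(f"True latent correlation:   {true_r:.2f}")
print(f"Correlation of item means: {observed_r:.2f} (attenuated by item-level error)")
```

With five items loading at .70 on each construct, the composite-based correlation recovers roughly .41 of a true .50 latent correlation: exactly the kind of shrinkage that can mask nudge-like intervention effects.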

It is important to restate that cross-sectional designs can make a limited contribution to research, but researchers need to acknowledge their limitations and not draw conclusions that their data do not support. Researchers seeking to make a strong contribution to an area of educational research where self-reported measures are an important part of quality research design should strive to employ designs that can capture developmental processes, and analyses that recognise latent constructs for what they are: unseen. For a detailed and balanced discussion of this issue, we encourage a careful review of Martin (2011).

4. The promise of multiple methods

As discussed previously, many of the papers in this special issue use multiple measures to present a more complete picture of the complex interrelations between constructs. However, we should be careful to distinguish between the aims of engaging in this process and the analyses of these multiple measures, which have often been referred to as triangulation (Godfroid & Spino, 2015). We see three potential paths here: using measurements to identify the same aspect or aspects of a construct, using measurements to identify complementary aspects of a construct, or both. We offer an analogy here to help the reader better understand these paths.

The first path, identifying the same aspect, would be akin to using multiple measurements of sound to identify the pitches (how high or low a note sounds) of the notes in a melody (i.e., the part of a tune you might hum). One might use a well-trained ear and an electronic tuner to do this. If both the listener’s ear and the tuner are accurate, they should agree on the pitches – maybe the tune starts and ends on “middle C”. Similarly, when examining one of the constructs in this Special Issue, say metacognition, this first path would be akin to saying that our multiple measurements are indeed measuring the same aspect of metacognition – that they should agree. This approach would often be analysed using a multitrait-multimethod analysis (MTMM; cf. Campbell & Fiske, 1959). Here, we expect different techniques used to measure the same construct to “agree” more than the same technique used to measure different constructs. For example, if retrospective self-report and physiological measurements are used to measure both reading comprehension and mathematical achievement, the self-report and physiological measurements of the same construct (e.g., reading comprehension) should correlate more closely than the two self-report measurements of the two different constructs. The latter would be an example of a methods effect, while the former would demonstrate that both measurements are measuring the same aspect or construct.
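For readers who prefer to see this logic laid out concretely, the short sketch below simulates the comparison just described. Everything in it is hypothetical (made-up traits, methods and effect sizes, written in Python); it simply illustrates the convergent-validity check implied by an MTMM matrix.

```python
# Sketch of the logic behind a multitrait-multimethod (MTMM) check
# (cf. Campbell & Fiske, 1959): same-trait/different-method correlations should
# exceed different-trait/same-method correlations. The data are simulated purely
# for illustration; the traits, methods and effect sizes are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500

# Two latent traits (e.g., reading comprehension and maths achievement), weakly related
traits = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.2], [0.2, 1.0]], size=n)
# A shared bias for each measurement method (e.g., self-report, physiological)
method_bias = rng.standard_normal((n, 2)) * 0.4

measures = pd.DataFrame({
    "reading_selfreport": traits[:, 0] + method_bias[:, 0] + rng.standard_normal(n) * 0.5,
    "reading_physio":     traits[:, 0] + method_bias[:, 1] + rng.standard_normal(n) * 0.5,
    "maths_selfreport":   traits[:, 1] + method_bias[:, 0] + rng.standard_normal(n) * 0.5,
    "maths_physio":       traits[:, 1] + method_bias[:, 1] + rng.standard_normal(n) * 0.5,
})

mtmm = measures.corr().round(2)
print(mtmm)

# Convergent validity: same trait measured by different methods
print("Same trait, different methods:", mtmm.loc["reading_selfreport", "reading_physio"])
# Methods effect: different traits measured by the same method
print("Different traits, same method:", mtmm.loc["reading_selfreport", "maths_selfreport"])
```

With these simulated values, the same-trait/different-method correlation (roughly .7) clearly exceeds the different-trait/same-method correlation (roughly .25), the pattern Campbell and Fiske treat as evidence of convergent validity rather than a methods effect.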

However, the melody is often not the only aspect of a musical composition. In a symphony, for instance, the melody is often accompanied by other lines of music as well (which would be harder for the novice to hum). Thus, it might be necessary to use different techniques to identify the sounds present. While a well-trained ear would be able to pick out the chords composed by these multiple musical lines, a simple tuner would not. This issue gets even more complex when thinking about a Bach fugue, for example, which layers multiple melodies and countermelodies to create a rich tapestry of sound (cf. J. S. Bach’s Toccata and Fugue in D minor, BWV 565). This more complex conceptualization describes the second path here – are we using multiple measurements to better describe the rich symphony of a process at play? In other words, are there multiple aspects of a particular construct that some measurements are better at tapping than others? For instance, if an MTMM analysis demonstrated that two different measurements of the same construct did not correlate well, does that mean they are inaccurate, or does that mean that the construct under investigation is multi-faceted in the same way that Bach’s fugues are multi-faceted?

The third path – and the one that we recommend – is considering both of these routes as multiple measurements are brought together. In other words, when do our measurements measure the same aspect of a construct, and when do they measure different aspects of that construct? For example, although strategy use is considered a construct within a domain, different aspects of that strategy use (i.e., quantity, quality, and conditional use) have been demonstrated to relate to learning in different ways (Dinsmore, 2017). Thus, how can we operationalize our theoretical conceptions of strategy use in meaningful ways to build and use theory? This is particularly important as we think about the development (e.g., changes) of these processes as they unfold over time. Like our Toccata and Fugue in D minor example, it is quite possible that we begin with a simple melody, which then morphs into a more complex interweaving of voices as the piece develops.

5. Empirical contributions

To explore how we can improve self-report measurements or use them in concert with other measurements, eight empirical studies were conducted. These studies examined the validity of score interpretations and the future of self-report measurement. Each study addressed at least two of the Special Issue’s three central questions: 1. In what ways do self-report instruments reflect the conceptualizations of the constructs suggested in theory related to motivation or strategy use? 2. How does the use of self-report constrain the analytical choices made with that self-report data? 3. How do the interpretations of self-report data influence interpretations of study findings?

Durik and Jenkins (2020) test the role of certainty in self-report surveys. They build on literature connecting attitude to behaviour, seeking a new perspective on the relationship between interest and behaviour. Their paper tests the relationship between reported level of interest and certainty about that self-report. This is then extended to an examination of the connections between certainty and related behaviour. Durik and Jenkins’s study is a rare attempt at Vertical innovation, with interesting preliminary implications for survey methods and interest theory. This research needs to be followed up with different participants and variations on their research design.

Chauliac et al. (2020) employed eye tracking to assess the cognitive processes participants undertake while completing a quantitative questionnaire. They aimed to establish linkages between participants’ eye movements and their questionnaire-answering behaviour. This research yielded no simple answers, but it lays a foundation for further research into the processes underlying questionnaire responses – namely, helping to figure out whether the questionnaires and eye movements were measuring similar or different aspects of the underlying reading processes.

Making a case for multimethod approaches that recognise both the value and the weaknesses of self-report, Vriesema and McCaslin (2020) bring survey self-report and observations together in their article. Their results suggest clear alignment between self-report and classroom observations of student groups at ages as young as grade three. Their findings support the use of self-report as part of robust research designs for a broad range of ages.

Rogiers et al. (2020) employed think-aloud protocols to further explore person-centered survey self-report findings regarding secondary school students' text-learning strategies. Results from this combination of retrospective and concurrent approaches to self-report pointed to the validity of the self-reports. The latter approach provided an additional, nuanced, often ignored perspective on the frequency and sequence of students’ strategies. The article reviews how this pair of self-report methods offers researchers a unique bifocal perspective on the student learning experience.

Iaconelli and Wolters (2020) address an area of survey research which is often noticed but rarely engaged with: insufficient effort responding (IER) to surveys. Their research tests whether IER is a meaningful threat to survey data validity. As important as their findings, which point to IER as more nuisance than threat, are their recommendations for survey researchers when reporting such findings.

Toward Vertical innovation of survey self-report in this mobile age, Fryer and Nakao (2020) present an experimental test of four survey formats (Likert, Visual Analogue Scale, Slide, and Swipe). A series of analyses on the resulting data set encourages more work with continuous formats like Slide and Swipe. Nearly a century on from the inception of formats like Likert and VAS, the authors suggest it is time for researchers to look up and embrace our touch-based future.

Van Halem et al. (2020) presented a study triangulating survey self-reports of self-regulated learning (SRL) with online traces of students’ learning behaviours. They confirm that aptitude-based self-reports alone cannot accurately capture complex SRL. Their findings suggest that self-report measures and online trace data regarding SRL are complementary in predicting students’ study success. Results demonstrate that both perspectives explain a unique proportion of the variance in students’ academic performance.

Moeller et al. (2020) aimed to make students’ course feedback more meaningful to instructors. They did this through research design and analyses that separate the broader learning situation from the individual’s reported experience. This separation makes it possible to track the subjective and objective development of learning experiences across a course. In addition to demonstrating how their methods might support individualised learning, Moeller et al.’s study highlights the critical role of multiple methods and of including objective measures in self-report-centered research.

6. The Commentaries

In addition to these empirical contributions, three international experts have weighed in on how this Special Issue's articles make substantive contributions to the extant research literature. Each focuses on at least two of the three guiding questions for the special issue. Winne (2020) takes a conceptual approach by focusing on what self-report data are. While there is much discussion in the literature about our conceptualizations of constructs, there is much less discussion about how we conceptualize the measurements themselves. Winne tackles this thorny issue, arguing – and we agree – that without a better conceptualization of self-reports, there is little evidence that participants can get better at responding to them. In turn, the better participants are at responding to these types of measurements, the better the interpretations of these data will be. Pekrun (2020) makes the case for the importance of self-report data and, like Winne, touches on what they are. Pekrun extends this discussion by focusing primarily on how to improve the validity of score interpretations of self-report data. Finally, Van Meter (2020) tackles all three questions. At the heart of her commentary is a deep dive into when self-reports are useful and how they can be leveraged to best help us build and refine theory. Her theoretically driven set of conditions for when and how to use self-report offers younger and more experienced researchers alike a useful framework to guide their choices of self-report measurements.

7. Implications of the Special Issue

This brings us full circle back to how the empirical and commentary articles have together addressed this Special Issue’s focal points. The eight empirical contributions stretched across the theoretical domains of self-regulation, interest and cognitive processing strategies, but still presented a coherent picture of the validity and future of self-report.

1. In what ways do self-report instruments reflect the conceptualizations of the constructs suggested in theory related to motivation or strategy use?

A common theme across Rogiers et al. (2020), van Halem et al. (2020) and Vriesema and McCaslin (2020) is that retrospective survey self-reports of attitudes and dispositions are an important, often unique, part of understanding future learning experiences and outcomes. However, for more comprehensive, dynamic conceptualisations to be drawn, additional online measurement is critical. This online measurement might be self-report (e.g., think-aloud protocols) or observed (e.g., trace data or classroom observations); both perspectives have the potential to expand our understanding of students’ strategies and motivations for learning. The answer to the SI’s question is therefore that instruments do matter, and the path toward more robust conceptualizations is multimethod research designs. Any questions about whether those methods should be self-report or not can be set aside.

2. How does the use of self-report constrain the analytical choices made with that self-report data?

The wide variety of contributions to this special issue demonstrates that it is the broader question of research design that determines analytical choices as much as, or more than, how self-report is used. Experimental designs, repeated measures, variable- and person-centered analyses, and an array of mixed-methods arrangements exemplify the full range of analytical tools available to researchers. Researchers are strongly encouraged to focus less on the well-known issues with self-report, and instead look to the designs within which it is embedded and the analyses employed.

3. How do the interpretations of self-report data influence interpretations of study findings?

While each of this Special Issue's articles addresses this question in some form, the syntheses offered by the three commentaries address it best. All three commentaries point out the need to clearly understand the core construct (e.g., Pekrun, 2020), the measurement itself (Winne, 2020), and the conditional nature of its use (Van Meter, 2020). It is critical that interpretations of self-report data are situated within theoretical frameworks of the core constructs being measured. For instance, interpretations of self-report data around interest (e.g., Durik & Jenkins, 2020) will be qualitatively different from those around feedback (Moeller et al., 2020). In other words, the self-report itself must change to allow better interpretations; even survey formats must remain open to innovation (e.g., Fryer & Nakao, 2020).

8. Concluding thoughts

The impetus for this special issue was born of our frustration, as younger scholars, with the tools used to study the covert, complex processes at its heart. Having seen self-report used poorly, and having heard scholars at all stages of their careers call for a moratorium on self-report, we wanted to expand the discussion beyond simply deciding whether we should forge ahead with the same old tools in the same old way or abandon them altogether. Rather, we wanted a vehicle in which scholars could reflect on the way these tools are used and use them more appropriately. We were fortunate to have a number of scholars willing to contribute high-quality empirical studies to this effort. The three commentaries then provided excellent avenues for extending these conversations and, hopefully, spurring deeper conversations about these thorny issues. We hope that readers of this special issue will be as satisfied with the result as we were in helping to curate it.

References


Bondarev, A., & Greiner, A. (2019). Endogenous growth and structural change through vertical and horizontal innovations. Macroeconomic Dynamics, 23, 52-79. https://doi.org/10.1017/S1365100516001115
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.
Chauliac, M., Catrysse, L., Gijbels, D., & Donche, V. (2020). It is all in the surv-eye: Can eye tracking data shed light on the internal consistency in self-report questionnaires on cognitive processing strategies? Frontline Learning Research, 8(3), 26-39. https://doi.org/10.14786/flr.v8i3.48
Chiu, M. H., Liaw, H. L., Yu, Y. R., & Chou, C. C. (2019). Facial micro‐expression states as an indicator for conceptual change in students' understanding of air pressure and boiling points. British Journal of Educational Technology, 50, 469-480. https://doi.org/10.1111/bjet.12597
Dingle, G. A., Hodges, J., & Kunde, A. (2016). Tuned In emotion regulation program using music listening: Effectiveness for adolescents in educational settings. Frontiers in Psychology, 7, 859. https://doi.org/10.3389/fpsyg.2016.00859
Dinsmore, D. L. (2017). Towards a dynamic, multidimensional model of strategic processing. Educational Psychology Review, 29, 235-268. https://doi.org/10.1007/s10648-017-9407-5
Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual lens on metacognition, self-regulation, and self-regulated learning. Educational Psychology Review, 20, 391-409. https://doi.org/10.1007/s10648-008-9083-6
Durik, A. M., & Jenkins, J. S. (2020). Variability in certainty of self-reported interest: Implications for theory and research. Frontline Learning Research, 8(3), 85-103. https://doi.org/10.14786/flr.v8i3.491
Fryer, L. K., & Nakao, K. (2020). The future of survey self-report: An experiment contrasting Likert, VAS, Slide, and Swipe touch interfaces. Frontline Learning Research, 8(3), 10-25. https://doi.org/10.14786/flr.v8i3.501
Gillet, N., Morin, A. J., Huyghebaert, T., Burger, L., Maillot, A., Poulin, A., & Tricard, E. (2019). University students' need satisfaction trajectories: A growth mixture analysis. Learning and Instruction, 60, 275-285. https://doi.org/10.1016/j.learninstruc.2017.11.003
Ginns, P., Martin, A. J., & Papworth, B. (2018). Student learning in Australian high schools: Contrasting personological and contextual variables in a longitudinal structural model. Learning and Individual Differences, 64, 83-93. https://doi.org/10.1016/j.lindif.2018.03.007
Godfroid, A., & Spino, L. A. (2015). Reconceptualizing reactivity of think‐alouds and eye tracking: Absence of evidence is not evidence of absence. Language Learning, 65, 896-928. https://doi.org/10.1111/lang.12136
Hidi, S. (2016). Revisiting the role of rewards in motivation and learning: Implications of neuroscientific research. Educational Psychology Review, 28(1), 61-93. https://doi.org/10.1007/s10648-015-9307-5
Iaconelli, R., & Wolters, C. A. (2020). Insufficient effort responding in surveys assessing self-regulated learning: Nuisance or fatal flaw? Frontline Learning Research, 8(3), 104-125. https://doi.org/10.14786/flr.v8i3.521
Lawless, K. A., & Riel, J. (2020). Exploring the utilization of the big data revolution as a methodology for exploring learning strategy in educational environments. In D.L. Dinsmore, L. K. Fryer, & M. M. Parkinson (Eds.), Handbook of strategies and strategic processing, (pp.296-316). New York: Routledge.
Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York: Guilford Press.
Lawrence, J. G. (2005). Horizontal and vertical gene transfer: The life history of pathogens. Contributions to Microbiology, 12, 255-271.
Martin, A. J. (2011). Prescriptive statements and educational practice: What can structural equation modeling (SEM) offer? Educational Psychology Review, 23, 235-244. https://doi.org/10.1007/s10648-011-9160-0
Mayer, R. E. (2017). How can brain research inform academic learning and instruction? Educational Psychology Review, 29(4), 835-846. https://doi.org/10.1007/s10648-016-9391-1
Moeller, J., Viljaranta, J., Kracke, B., & Dietrich, J. (2020). Disentangling objective characteristics of learning situations from subjective perceptions thereof, using an experience sampling method design. Frontline Learning Research, 8(3), 63-84. https://doi.org/10.14786/flr.v8i3.529
Pekrun, R. (2020). Self-report is indispensable to assess students’ learning. Frontline Learning Research, 8(3), 185-193. https://doi.org/10.14786/flr.v8i3.627
Rogiers, A., Merchie, E., & Van Keer, H. (2020). Opening the black box of students’ text-learning processes: A process mining perspective. Frontline Learning Research, 8(3), 40-62. https://doi.org/10.14786/flr.v8i3.527
Van Halem, N., van Klaveren, C., Drachsler, H., Schmitz, M., & Cornelisz, I. (2020). Tracking patterns in self-regulated learning using students’ self-reports and online trace data. Frontline Learning Research, 8(3), 140-163. https://doi.org/10.14786/flr.v8i3.497
Van Meter, P. (2020). Measurement and the study of motivation and strategy use: Determining if and when self-report measures are appropriate. Frontline Learning Research, 8(3), 174-184. https://doi.org/10.14786/flr.v8i3.631
Veenman, M. V., Van Hout-Wolters, B. H., & Afflerbach, P. (2006). Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning, 1, 3-14. https://doi.org/10.1007/s11409-006-6893-0
Vriesema, C. C., & McCaslin, M. (2020). Experience and meaning in small-group contexts: Fusing observational and self-report data to capture self and other dynamics. Frontline Learning Research, 8(3), 126-139. https://doi.org/10.14786/flr.v8i3.493
Winne, P. H. (2020). A proposed remedy for grievances about self-report methodologies. Frontline Learning Research, 8(3), 164-173. https://doi.org/10.14786/flr.v8i3.625
Yuen, A. H., Cheng, M., & Chan, F. H. (2019). Student satisfaction with learning management systems: A growth model of belief and use. British Journal of Educational Technology, 50(5), 2520-2535. https://doi.org/10.1111/bjet.12830
Zimmerman, B. J. (2000). Self-efficacy: An essential motive to learn. Contemporary Educational Psychology, 25, 82-91. https://doi.org/10.1006/ceps.1999.1016