Frontline Learning Research Vol. 8 No. 3 Special Issue (2020) 174-184
ISSN 2295-3159

Commentary: Measurement and the Study of Motivation and Strategy Use: Determining If and When Self-report Measures are Appropriate

Peggy N. Van Meter

Pennsylvania State University, USA

Abstract

The goal of this special issue is to examine the use of self-report measures in the study of motivation and strategy use. This commentary reviews the articles contained in this special issue to address the primary objective of determining if and when self-report measures contribute to understanding these major constructs involved in self-regulated learning. Guided by three central questions, this review highlights some of the major, emergent themes regarding the use of self-report. The issues addressed include attention to evidence for construct validity, the need to consider broad methodological factors in the collection and interpretation of self-report data, and the innovations made possible by modern tools for administering and analyzing self-report measures. Conclusions forward a set of conditions for the use of self-report measures, which center on the role of theoretically-driven choices in both the selection of self-report measures and analysis of the data these measures generate.

Keywords: self-report, self-regulated learning, motivation, strategies, strategy use

Corresponding author: pnv1@psu.edu. DOI: https://doi.org/10.14786/flr.v8i3.631

Student learning can be understood through a lens of self-regulation, which explains learning as involving dynamic, cyclic processes that are both self- and goal-directed (Zimmerman, 1990). The self-directed component of this definition is central to understanding models of self-regulated learning (SRL) because these models posit that learning is influenced by the learner’s own choices and abilities to apply effortful, effective learning processes. As such, the study of motivation and strategy use is a centrepiece in the study of SRL. If one is to understand a learner’s choices and abilities, then one must understand the motives and strategic operations on which these rest. One must, that is, be able to answer questions such as, “What factors influence learners’ motivational states?”, “Which strategies do learners apply?”, and “How do motivation and strategy use influence learning?”. Although each article in this special issue contributes empirical evidence that advances our understanding of motivational and strategic processes, and of how we might answer these questions, the articles differ with regard to the specific constructs of interest. Vriesema and McCaslin (2020), for example, report on the processes of identity formation in small group learning. Moeller et al. (2020) study the emotions and beliefs that constitute interest in class activities, while Rogiers et al. (2020) examine how learner profiles are associated with dynamic strategy use during text learning. Altogether, these articles give insights into a variety of SRL constructs associated with motivation and strategy use, and each can be interpreted in the context of the corresponding construct-specific research literature.

The focus of this special issue, however, is not on these constructs per se, but rather on how these constructs are measured and how they are understood through the lens of that measurement. Specifically, the purpose of this special issue is to examine the use of self-report measures and address the challenge of determining “when and if self-report measures can contribute to our collective understanding of theory surrounding constructs” (Fryer & Dinsmore, 2020). The articles in this issue represent different ways of answering that call. Articles by Fryer and Nakao (2020), Iaconelli and Wolters (2020), and Chauliac et al. (2020), for example, adopt a measurement approach and focus on factors that can influence the reliability and validity of self-report data. Other articles, namely those by Durik and Jenkins (2020) and Moeller et al. (2020), explore methods to enhance the evidentiary value of self-report data. A final grouping of articles sought to establish the need for self-report data by demonstrating the benefits of using these instruments in pursuit of theoretically compelling components of learning. Included in this grouping are articles by Vriesema and McCaslin (2020), van Halem et al. (2020), and Rogiers et al. (2020).

Despite these differences, what unites these articles is shared attention to the set of central questions that drive this special issue. Specifically, author teams were tasked with addressing some combination of three questions that concern the utility of self-reports for the study of motivation and strategy use. These central questions ask about (1) the alignment of self-report methodology and theoretical conceptualizations of constructs, (2) the influence of self-report methodology on the interpretation of study results, and (3) the connection between self-report methodology and analytic choices. The articles in this issue present data obtained through particular programs of research and, as such, each article offers its own view on the answers to these questions. The goal of this commentary is to look across those specifics and offer a more synthetic perspective, one that draws across constructs and methodologies to highlight themes around these questions and draw conclusions about what this body of articles suggests for the future of self-report use. Toward that end, my comments will admittedly overlook differences with regard to the specific constructs represented in this set of papers and instead treat each as representative of the set of constructs associated with SRL. The remainder of this commentary is organized around the three central questions and addresses some of the major themes that emerged from the articles in this special issue.

In what ways do self-report instruments reflect the conceptualization of the constructs suggested in theory related to motivation and strategy use?

On the face of it, this is a rather straightforward question about an aspect of construct validity. That is, do the measures align with, and therefore reflect, the theoretical conceptualizations of the constructs (Edwards & Bagozzi, 2000)? Construct validity is critical to the relationship between measurement and theory because it is measurement that provides the operational definition of a construct. Whereas theoretical descriptions of a construct may be abstract and difficult to pin down, an operational definition is the specific way in which the construct is measured, including the exact prompts to which participants respond and the ways that data are collected. Consequently, if one wants to know what is meant by theoretical terms such as identity formation (Vriesema & McCaslin, 2020) or interest (Fryer & Nakao, 2020), one need only look to how those constructs are operationalized. In this regard, establishing this aspect of construct validity requires three elements: (1) a clear articulation of the theoretical conceptualization, (2) a clear articulation of the measurement methodology, and (3) a coherent mapping between the conceptualization and the methodology. Efforts toward establishing construct validity can also feed into a cycle of theory and measurement refinement. That is, confidence in the validity of a measure is increased when obtained data behave in theoretically consistent and predictable ways, but innovations in measurement can also reveal evidence of phenomena that stimulate refinement and development of theoretical accounts.

The articles in this special issue provide a number of examples of how this form of validity can be established when using self-report measures. Most specifically, the authors achieve this by carefully and explicitly defining the constructs of interest in the context of guiding theoretical frameworks, and by tying these definitions to the measurement instrument. While several articles provide examples of how this can be done, just two will be presented as illustrations here. First is the study by Rogiers et al. (2020), in which they examined the text learning strategies of middle school students. The purpose of this study was to “fully map and understand individual students’ learning” (p. 1), and students studying an expository text provided the research context for this mapping. SRL is the theoretical framework that guides this research and, consistent with the definitions used throughout this issue, Rogiers et al. defined SRL as involving adaptive, flexible strategy use in dynamic, iterative phases. Further, Rogiers et al. stated that there are individual differences in how learners employ strategies and in their perceptions of this strategy use. Most central to the theoretical conceptualization, Rogiers et al. also reasoned that these individual differences could provide insight into the dynamic, adaptive ways that learners employ strategies during learning. Their use of two different self-report measures follows from this conceptualization. First, participants thought aloud while engaged in the text learning task, with the resulting protocols revealing the dynamic strategic processes employed during the task. Second, after reading, participants completed a self-report survey, which queried the task-specific cognitive and metacognitive strategies used during study. This survey measure identified meaningful individual differences and served to group participants into different profiles of strategy use (e.g., integrated strategy user). The value of both forms of self-report data was realized by using the profiles of strategy use to guide interpretation of the think aloud data. In brief, consistent with theoretical conceptualizations, Rogiers et al. were able to use self-report measures to demonstrate that different types of strategy users employ dynamic SRL processes in different ways.
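To make the general logic of this combined approach concrete, the following minimal sketch groups learners into strategy-use profiles from post-reading survey scores and then tabulates coded think-aloud events by profile. It is illustrative only: the file names, column names, subscales, and the use of k-means clustering are assumptions for this sketch, not the procedure reported by Rogiers et al. (2020).

```python
# Illustrative sketch: survey-based strategy-use profiles used to organize
# think-aloud strategy codes. All names and the clustering choice are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical survey file: one row per student (student_id plus subscale scores).
surveys = pd.read_csv("strategy_survey.csv")
survey_items = ["summarizing", "highlighting", "monitoring", "planning"]  # assumed subscales

z_scores = StandardScaler().fit_transform(surveys[survey_items])
surveys["profile"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z_scores)

# Hypothetical think-aloud file: one row per coded strategy event during reading.
think_aloud = pd.read_csv("think_aloud_codes.csv")  # columns: student_id, strategy_code
merged = think_aloud.merge(surveys[["student_id", "profile"]], on="student_id")

# How often each profile used each strategy while reading.
print(merged.groupby(["profile", "strategy_code"]).size().unstack(fill_value=0))
```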

A second example of how articles demonstrate the connection between conceptualizations of a construct and self-report measures of that construct can be found in Moeller et al.’s (2020) study of situational interest in a college course. This article defined interest as a motivational and emotional state that fluctuates over time, and measured these fluctuations as situational expectancy and task value (i.e., expectancy-value; Eccles & Wigfield, 2002). Moreover, the authors argued that, at any point in time, these states are a function of (1) stable personal traits, (2) situational personal perceptions, and (3) objective components of the situation. To follow from this theoretical conception, then, measures of interest must capture and distinguish all three sources of this variance. The use of a self-report interest survey, which was administered periodically during class, was a logical choice in this context because survey responses allow the capture of individuals’ perceptions. It was the manner in which Moeller et al. employed the survey, however, that permitted the strong connection between the theoretical conceptualization of situational interest and the self-report measure. While the reader is referred to that article for a full explanation of the methodology, a short summary will suffice here: Course students completed the survey at multiple time points, with multiple students intentionally sampled at each time point. This sampling pattern then permitted examination of both objective evaluations (i.e., group means) and personal perceptions (i.e., deviations from the mean). Ultimately, the use of the self-report measure was validated when Moeller et al. were able to parse the variance in individuals’ time-point interest reports into the three theoretically predicted sources of variance.
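The group-mean versus personal-deviation split described above can be illustrated with a minimal sketch. The file and column names are assumptions for illustration; this is not Moeller et al.’s (2020) analysis code.

```python
# Minimal sketch of splitting momentary interest ratings into a situation-level
# (time-point mean) component and a person-level (deviation) component.
import pandas as pd

ratings = pd.read_csv("situational_interest.csv")  # hypothetical: student_id, time_point, interest

# Situation-level (objective) component: mean interest at each time point.
ratings["situation_mean"] = ratings.groupby("time_point")["interest"].transform("mean")

# Person-level (subjective) component: each student's deviation from that mean.
ratings["personal_deviation"] = ratings["interest"] - ratings["situation_mean"]

print(ratings.head())
```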

In sum, the articles in this special issue demonstrate that self-report measures can not only reflect conceptualizations of constructs but can do so in theoretically compelling ways. These efforts toward construct validity feed the mutually reinforcing cycle of theory development and methodological refinement. Rogiers et al.’s finding that profiles based on learners’ perceptions of strategy use relate to the dynamic application of those strategies, for instance, furthers understanding of individual differences and SRL. At the same time, Moeller et al.’s study advances theory regarding the personal and objective sources of situational interest, a motivational construct central to understanding SRL. In the context of this special issue, however, where the challenge is to determine when and if self-report data contributes to the understanding of constructs, there is another layer to the question of how self-report reflects theoretical conceptualizations. Specifically, in this context, it is not sufficient to show that some measurement choice is consistent with theoretical definitions or even that the self-report data accounts for some theoretically interesting variance. Instead, this task calls on us to consider when and if self-report data provides insight into some phenomenon that is not gained by another measurement approach. In other words, we are challenged to show not only that self-report measures can reflect conceptualizations of motivation and strategy use, but also that some self-report methodology is uniquely suited to doing so.

A partial response to this challenge can be obtained by pointing back to the constructs of interest. Specifically, when the construct of interest is a learner’s perception of intra-psychic states (e.g., beliefs, motivations), then it is sensible to suggest that the best way to uncover these perceptions is to ask the learner (Fryer & Nakao, 2020). In addition to this argument, however, articles in this special issue lay out an even more convincing reason for using self-report measures. Namely, self-reports are a justifiable measurement tool because data from these measures offer unique explanatory power when it comes to understanding motivation and strategy use. Again, two studies from this special issue can be used to illustrate this point. The first example is the study by Vriesema and McCaslin (2020), which sought to understand the processes of identity formation in small group settings. Guided by a co-regulation theoretical model, the authors collected self-report data on students’ perceptions of how they engaged with the members of their group, as well as pre- and post-group anxiety and emotional adaptation profiles. Observational data of group interactions was also collected and analyzed to show the actual interaction patterns that took place in the groups over a series of six lessons. The analysis of these data demonstrates that more is learned about identity formation and co-regulation from both self-report and observational data than from either source alone. While pre-group self-reports of emotional adaptation were predictive of some co-regulation styles, for example, certain co-regulation styles were predictive of post-group emotional adaptation profiles.

The value of self-report methodology is also demonstrated in the study by van Halem et al. (2020), which shows that data obtained from these measures offer unique explanatory insights. In this study, trace data was collected over a period of eight weeks as students in a college statistics course accessed online course resources. Using the theoretical framework of SRL, the authors point out that these trace data provide insights into behavioral aspects of learning; these traces are “observable evidence of particular cognitions…where a cognitive process is applied” (p. 3). At the same time, however, these traces do not indicate just what those cognitive processes are. One student, for example, may access some resource because it covers content from a class that was missed, while another student may access that same resource because they did not understand the content when it was covered in class. To gain insight into the processes underlying these behavioral traces, van Halem et al. had participants complete a self-report survey of SRL behaviors (i.e., the Motivated Strategies for Learning Questionnaire; MSLQ; Pintrich et al., 1993) in the fourth week of the course. At the end of the course, analyses showed that, when both trace and self-report data were included, some MSLQ sub-scales accounted for variance in grades above and beyond that accounted for by the behavioral data. In short, like the research on identity formation in small groups, this study shows that a self-report measure can explain important aspects of motivation and strategy use that would not be captured in the absence of the measure.
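The “above and beyond” logic can be sketched as an incremental-variance comparison: fit a model of grades on trace indicators, then add self-report subscales and compare explained variance. The variable names below are assumptions for illustration and the model is a simplification, not the authors’ analysis.

```python
# Sketch of an incremental-variance check: do self-report (MSLQ-style) subscales
# add explained variance in grades beyond behavioural trace indicators?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("course_data.csv")  # hypothetical: grade, trace counts, subscale scores

trace_only = smf.ols("grade ~ resource_views + video_minutes + quiz_attempts", data=df).fit()
with_selfreport = smf.ols(
    "grade ~ resource_views + video_minutes + quiz_attempts + self_efficacy + metacog_regulation",
    data=df,
).fit()

print(f"R2, trace only:          {trace_only.rsquared:.3f}")
print(f"R2, trace + self-report: {with_selfreport.rsquared:.3f}")
```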

In summary, the research teams represented in this special issue demonstrate three specific ways in which self-report instruments reflect theoretical conceptualizations of motivation and strategy use. First, across the set of articles, one can see that self-report instruments and methodologies can operationalize constructs in theoretically consistent ways. Second, these measures can generate data that not only behaves in theoretically predictable ways, but also offers refinements to the conceptualization of constructs. Third, self-report measures can reflect conceptualizations by capturing patterns and variance in motivation and strategy use that are not obtained through other means. While these answers justify the use of self-report from a conceptual standpoint, they cannot be completely disentangled from more specific methodological choices associated with the use of self-report. The two remaining questions posed by this special issue provide the opportunity to address some of these points.

How do the interpretations of self-report data influence interpretations of study findings?

This second question, which asks how the interpretations of self-report data influence the interpretations of study findings, is similar to the first question in that it can be understood as addressing an aspect of validity. Namely, validity is not determined by some measure itself but rather by the degree to which the interpretations drawn from the scores on that measure can be justified (Messick, 1995). In this respect, the interpretations of a study’s findings are valid when the data sources on which those findings are based have been interpreted in valid ways. This logical argument then calls for a particular view on the question that frames this section: To understand how self-report data influences the interpretation of study findings, we must understand the factors that influence the reliability and validity of the data derived from these measures. Also similar to the previous section, there are two different perspectives we can take on this question. The first perspective concerns the factors that may influence the reliability and, consequently, the validity of scores from self-report measures. This perspective is primarily concerned with potential sources of error in self-report measurement of motivation and strategy use. The second perspective takes a more conceptual view and considers the ways in which the methodologies of collecting self-report data influence the interpretations of that data. This perspective draws attention to the broader theoretical and contextual factors that influence how scores can be interpreted.

With respect to the first perspective, several studies in this special issue examine sources of error in self-reports and how those sources can be understood or reduced. One potential source of error, which is examined in the study by Fryer and Nakao (2020), is the format of the response scales and interfaces used to record participants’ responses. This examination was prompted by the body of work on survey instruments, which suggests that the response scales themselves can impact the nature and reliability of scores. Participants in this study were graduate students enrolled in a course on teaching and, throughout the course, these participants responded to surveys assessing their interest in class activities. To examine response formats as a potential source of measurement error, this study had participants complete self-report surveys that asked the same questions but used four different interfaces: labelled categorical scales, visual analog scales (VAS), swipe, and slider. These surveys were administered at six time points throughout the semester so that all participants responded using each of the interfaces and, at any one time point, all four interfaces were used. This design permitted comparisons across the different interfaces to determine if any significant differences in response patterns could be tied to differences in the interface. On the whole, the results suggest that response interfaces are not a significant source of error. Each of the measurement methods yielded acceptable levels of reliability, and there were no differences in either the mean scores across the measures within the six time points or the factor structures of the measures. Although details in the findings lead Fryer and Nakao (2020) to suggest that the swipe method shows promise and the VAS method is the weakest, the totality of the data indicates that scores obtained from each of the response formats and interfaces can be validly interpreted.
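One routine check in this kind of design is internal consistency of the same scale under each interface. The sketch below computes Cronbach’s alpha per interface; the data layout and column names are assumptions for illustration, not Fryer and Nakao’s (2020) materials.

```python
# Rough sketch: Cronbach's alpha for one interest scale, computed separately for
# each response interface (categorical, VAS, swipe, slider).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = pd.read_csv("interest_responses.csv")  # hypothetical: interface, item1..item4
for interface, group in responses.groupby("interface"):
    alpha = cronbach_alpha(group[["item1", "item2", "item3", "item4"]])
    print(f"{interface}: alpha = {alpha:.2f}")
```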

Another potential source of error, one that has been suggested throughout the history of self-report surveys, is insufficient effort on the part of respondents. According to this view, the results of self-report surveys are tainted by participants who either do not put forth the cognitive effort to answer survey questions or bias the results by responding in unserious ways. Two articles in this special issue address this concern by examining data related to participants’ survey response patterns. First, as part of a larger study, Chauliac et al. (2020) collected eye tracking data while college students responded to a task-specific survey on processing strategies (i.e., the Inventory of Learning Styles; Vermunt & Donche, 2017). The time and frequency of eye fixations on any given question were interpreted as indicators of effort, while the consistency of an individual’s within-scale responses was taken as an indicator of within-person reliability. Analyses showed a relationship between fixations and response variability wherein participants who spent the most time on an item were also most likely to show only small degrees of variation in item responses. In other words, these participants showed patterns indicative of reliable responding. By contrast, participants who spent the least amount of time were most likely to either select the same categorical response for each scale item or show extreme variability; i.e., poor reliability.

Chauliac et al.’s (2020) findings are complemented by the research of Iaconelli and Wolters (2020), which also examined indicators of insufficient effort responding but extends this work by presenting techniques to detect these participants in large-scale data collections. This study involved nearly 300 college students in a course designed to improve their SRL and, at three different times in the course, all participants completed self-report surveys tapping into their dispositions, beliefs, and behaviors. Consistent with Chauliac et al. (2020), Iaconelli and Wolters posit that the validity of a measure is threatened if respondents exert too little effort while answering questions. Further, they posit that these response patterns can be detected by examining indicators of effort (i.e., time) and consistency in response patterns. While the reader is referred to the article itself for details on these indicators, there are three main conclusions relevant here. First, there are some participants who show insufficient effort. Second, these participants comprise only a small percentage of the total sample, and their inclusion, at least in a large data set, does not significantly alter either the mean or the internal consistency of the data set. Third, although each of the three surveys had some participants who gave insufficient effort, this did not appear to be a stable individual difference. Instead, if a participant did exhibit insufficient effort, this tended to occur on only one of the three surveys.
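The general form of such detection can be sketched with two common flags: unusually fast responding and invariant within-scale answers. The thresholds, file name, and columns below are assumptions for illustration, not the specific indicators used by Iaconelli and Wolters (2020).

```python
# Illustrative flags for insufficient effort responding: speed and invariance.
import pandas as pd

df = pd.read_csv("survey_with_times.csv")  # hypothetical: seconds_per_item, item1..item10
items = [f"item{i}" for i in range(1, 11)]

too_fast = df["seconds_per_item"] < 2.0        # assumed speed cutoff
invariant = df[items].nunique(axis=1) == 1     # same response chosen for every item
flagged = df[too_fast | invariant]

print(f"Flagged {len(flagged)} of {len(df)} respondents "
      f"({100 * len(flagged) / len(df):.1f}%)")
```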

As summarized here, the articles in this issue report evidence that self-report surveys can and do provide reliable indicators of variables associated with motivation and strategy use. Neither the response format nor a lack of respondent effort introduced sufficient error variance to call into question the interpretations that are drawn from these instruments. Under these conditions, then, we can conclude that study findings based on these self-report measures can be interpreted in the intended ways. The second perspective on this question of how self-reports influence study findings, however, encourages us to look at a broader set of factors that influence the validity of self-report data interpretations. These broader factors include the totality of the context in which the measure is administered, including theoretically-motivated methodological factors. To illustrate this, consider the article by Durik and Jenkins (2020), which ultimately concludes that the person-domain context must be taken into account when interpreting the results from self-report surveys of learner interest. In a pilot study and two experiments, college students responded to self-report surveys that assessed their interest in different domains (e.g., math and psychology) and also indicated the likelihood that they would pursue future learning in these domains (Studies 1 and 2). The purpose of this research was to explore how self-report can be used to better explain the relationship between interest and behavior and, toward that goal, Durik and Jenkins had participants also respond to questions gauging their certainty in the interest ratings they provided. Factor analyses showed that interest and certainty comprise separate factors, indicating that respondents are able to distinguish these two beliefs. Analyses also showed that the level of certainty moderated the relationship between reported interest and future behavior, with the interest-behavior relationship markedly stronger for participants with high levels of certainty. Altogether, the data presented in this article show that, at least in the study of interest, (1) participants’ ability to provide accurate, predictive self-reports depends on how certain they are about these reports and (2) certainty varies with exposure to the domain. Considering this in light of the question of how self-report influences the interpretation of study findings, this research highlights the need to attend to the broader context in which the measure is administered; in this case, the context of the person-domain relationship.
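A moderation pattern of this kind is typically tested with an interaction term: the interest-behavior link should strengthen as certainty rises. The sketch below shows that general form; the variable names are assumptions for illustration and the model is a simplification, not Durik and Jenkins’ (2020) analysis.

```python
# Sketch of a moderation test: interest predicting intended pursuit, moderated by certainty.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("interest_certainty.csv")  # hypothetical: interest, certainty, pursue_intention

# Mean-center predictors so main effects are interpretable alongside the interaction.
for col in ("interest", "certainty"):
    df[col + "_c"] = df[col] - df[col].mean()

model = smf.ols("pursue_intention ~ interest_c * certainty_c", data=df).fit()
print(model.summary().tables[1])  # a positive interaction term is consistent with moderation
```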

Another methodological factor that emerged as important to the interpretation of study results is the timing of self-report administration. Although there are different types of self-reports possible, each requires participants to respond to some query on the basis of their memory for the relevant information (see the articles by Chauliac et al. and Iaconelli and Wolters, this issue, for a discussion of these models), and each is administered prospectively, concurrently, or retrospectively. In this respect, self-report responses provide a snapshot in time (Durik & Jenkins, 2020). Yet, because effective learning processes are understood as dynamic, flexible, and adaptive, a challenge to the use of self-report data is the need to show how a snapshot can shed light on active motivational and strategic operations. This special issue presents one possible answer: Researchers can enhance the validity of interpretations by attending to the timing with which self-report measures are administered and incorporating this timing into the interpretation of study findings.

To illustrate this point, consider the study by van Halem et al. (2020) in which trace data was collected from students as they accessed online materials throughout an eight-week statistics course. Participants also completed a self-report measure of SRL (i.e., the MSLQ) in the fourth week. As described previously, study results showed that both the trace and self-report data accounted for variance in students’ final course grades. Additionally, however, van Halem et al. also examined relations between trace and self-report data for each of the eight course weeks and found that the relationships were strongest in the weeks preceding completion of the self-report survey and weak in the periods thereafter. In short, the timing of the self-report measure influenced the nature of the resulting data and thus must be incorporated into the interpretation of study findings: Self-reports can be accurate snapshots of students’ memories for what they have done in a course but are not necessarily prognosticators of future behavior, at least not with respect to SRL as measured by the MSLQ.

The influence of timing in the administration of self-report measures is also demonstrated in the study by Rogiers et al. (2020). As explained in the previous section, middle school students in this study thought aloud while reading expository text and, immediately after reading, completed a self-report survey of the strategies used. In this respect, both concurrent and retrospective self-report measures were used, with the retrospective survey placed close in time and in direct reference to the just-completed SRL event. Again, this timing influences how the data can be interpreted. First, because the survey immediately followed the SRL event, results can be interpreted as valid representations of learners’ perceptions of their strategy use; second, the concurrent think alouds reveal the pattern in which strategies were used. Finally, Rogiers et al. were able to use the profiles that emerged from the retrospective self-reports to guide data mining and uncover differences in how individuals deploy SRL processes. The timing matters here because it is the time-based relationship of the two self-report measures that permits the data and study findings to be interpreted in this way.

The two methodological factors covered here, person-domain relations and timing, are just two of the contextual factors addressed in this special issue that should be considered when interpreting study findings. Vriesema and McCaslin’s research on identity formation in groups, for example, demonstrates that this development must be understood in the context of the specific group’s dynamics; i.e., individuals are nested in groups. Exactly how data is collected should also be taken into consideration. Iaconelli and Wolters (2020) show this in their examination of insufficient effort responding. Recall that these authors found that, while insufficient effort responding does occur, these occurrences have a negligible effect on the data set. These authors, however, were careful to point out that the surveys were completed as part of homework assignments in the participants’ course on SRL. As a result, it is possible that insufficient effort responding was infrequent in this study because participants had a high degree of investment; higher, that is, than one might expect from study participants who complete a survey only to receive extra credit for participation (e.g., Durik & Jenkins, 2020).

Taken as a whole, the current set of articles provides at least two important insights into the ways that the interpretation of self-report data can and should be used to interpret study findings. First, self-report data sources can be trusted to provide reliable and valid indicators of studied constructs. Although there is some error in these measures, evidence culled from these studies provides confidence that this is no greater a problem for self-report measures than for other types of data collection methods that rely on human responses. Second, the broader contextual and methodological factors of measure administration matter. Although the studies reported here shed light on some of these factors, no doubt there are many more that warrant attention. In summary, though, one can conclude that the methodology around the administration of self-report measures influences the interpretation of the resulting data and, consequently, the interpretation of study findings.

How does the use of self-report constrain the analytical choices made with that self-report data?

This final question can be understood to specifically address data analytic concerns rather than issues related to construct conceptualizations and the interpretation of findings covered in the first two questions. With these boundaries in mind, the short answer to this question is that the self-report nature of this data does not place constraints on analytic choices above and beyond what must be considered with other data sources: constraints such as scales of measurement, distributions, and floor or ceiling effects. Indeed, what is most striking in relation to this question is the range of creative and innovative ways in which self-report data can be collected and analyzed. Of course, it has long been argued that a chief benefit of self-report data is the ability to collect data from large sample sizes, and this benefit is only increasing with technological advances in digital delivery systems (Fryer & Nakao, 2020). Beyond this ease, however, the articles in this issue highlight two valuable connections between the use of self-report data and subsequent analytic choices.

The first connection that emerged is how advances in both measurement delivery and statistical analytic tools permit self-report data to be collected and analyzed in increasingly creative and sophisticated ways. As authors in this special issue have noted, self-report measures have traditionally been delivered in paper-and-pencil form and the resulting data treated in aggregated, variable-centered ways: Group means attest to some agreed-upon (i.e., averaged) descriptor of a construct, and scores are subjected to traditional forms of comparison and correlation. By contrast, today’s researcher has access to much more sophisticated tools to deliver measures as well as to parse variance and model data patterns. Consider, for example, the study by Moeller et al. (2020) that examined both personal and objective contributions to fluctuating states of interest. Thus far, this commentary has described this research in terms of the studied construct and empirical findings. An examination of the study methods, however, illustrates how innovation in the delivery and analysis of self-reports is expanding our understanding of motivation and strategy use in SRL. Specifically, these authors leveraged online delivery mechanisms and innovative experience sampling methods to collect self-report data from groups of participants in context and over time. Once collected, the application of cross-classified multilevel modelling permitted participants’ time-point self-report data to be parsed into the three sources of variance predicted by the theoretical framework of situational interest used in this study. In short, Moeller et al. were able to apply modern research tools to the collection and analysis of data in a manner that advances theoretical understanding. Moeller et al.’s work is not the only illustration of the ways that self-report data can be meaningfully analyzed given the tools currently available. The studies by both Iaconelli and Wolters (2020) and Fryer and Nakao (2020), for example, show how the online delivery of self-report measures can yield not only participants’ responses but also indicators of invested effort (i.e., time). Others took advantage of recent techniques to detect patterns within data sets and used these methods to mine for dynamic, iterative SRL cycles (Rogiers et al., 2020); classify participants according to profiles of individual differences and group co-regulation dynamics (Vriesema & McCaslin, 2020); and disentangle the relationships between interest, certainty, and behavior (Durik & Jenkins, 2020).
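To indicate what a crossed (student by time point) variance decomposition of momentary ratings can look like in practice, the sketch below fits a simple model with crossed variance components. It is a rough illustration under assumed file and column names and a deliberately simplified specification; it is not the model reported by Moeller et al. (2020).

```python
# Very rough sketch: decompose momentary interest into person, time-point, and
# residual variance using crossed variance components in a mixed model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experience_samples.csv")  # hypothetical: student_id, time_point, interest
df["whole_sample"] = 1  # single group so student and time point enter as crossed components

model = smf.mixedlm(
    "interest ~ 1",
    data=df,
    groups="whole_sample",
    vc_formula={"student": "0 + C(student_id)", "timepoint": "0 + C(time_point)"},
).fit()

# The summary reports variance attributable to persons, time points, and the residual.
print(model.summary())
```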

Altogether, this special issue shows that the choice to use self-report data opens the door to a great number of analytic choices, but the most promising of these may be the use of self-report data alongside other, complementary data sources. As previously summarized, self-report data can have unique explanatory power when combined with other data sources in the study of motivation and strategy use. That previous discussion, however, was narrowly focused on how self-report reflects conceptualizations of theoretical constructs and did not address methodological and analytic dimensions of this point. With respect to the current question, though, one can see significant potential in the use of self-report measures in conjunction with additional measures of motivation and strategy use. For instance, self-report measures can play an important role in mixed methods SRL research in which qualitative process data can be combined with quantitative scores derived from self-report surveys. Just such an approach is demonstrated in the studies by both Rogiers et al. (2020) and Vriesema and McCaslin (2020), in which qualitative process data was collected and coded in addition to the administration of self-report surveys. Ultimately, these data sources were combined to shed light on how self-reported individual differences related to SRL processes in either individual (Rogiers et al., 2020) or group (Vriesema & McCaslin, 2020) settings. In addition to these two studies, articles in this special issue show other ways of combining self-report survey responses with additional forms of process data, such as eye fixations (Chauliac et al., 2020), trace data (van Halem et al., 2020), and response times (Iaconelli & Wolters, 2020).

In sum, this section can be closed by returning to the answer offered at its opening: the self-report nature of some data does not place constraints on analytic choices above and beyond what must be considered with other data sources. Instead, what does constrain both the choice of measures and the data analysis methods are the theoretically-based conceptualizations of the construct and the questions that drive the research. As the articles in this issue show, self-report data can be analyzed in a wide variety of ways, with innovations paving the way for breakthroughs in both measurement and theory. Certainly, one must be concerned with the psychometric properties of scores and the match between the data set and the assumptions of a particular analysis. Beyond these constraints, however, self-report data has been, and can continue to be, analyzed in ways that yield relevant insights into the individual differences and processes of motivation and strategy use.

Conclusion and Final Remarks

The articles in this special issue shed light on a number of theoretical constructs associated with motivation and strategy use, but the main objective of this collection is to examine the self-report methodology used to study these constructs. The task for the articles in this issue, including this commentary, was to use three organizing questions to “determine when and if” (Fryer & Dinsmore, 2020) self-report measures positively contribute to the study of theoretical SRL constructs. This final section addresses this task by considering first the question of “if” self-report measures can contribute, followed by the question of “when” this might be true.

The question of “if” self-report measures can be used calls for answers to two relatively straightforward questions: (1) Is there evidence that scores on self-report measures can be reliable and valid indicators of motivational and strategic constructs? and (2) Is there evidence that self-report measures provide explanatory power in the study of motivational and strategic constructs? Across all of the articles in this special issue, the answer to both of these questions is, “Yes.” One source of evidence in support of this affirmative response is found in demonstrations that self-report measures yield scores with acceptable reliability and adequate psychometric properties. Iaconelli and Wolters (2020), for example, showed that insufficient effort responding had little impact on a full data set, and Moeller et al. (2020) showed how theoretically-driven analysis can increase the amount of variance explained in self-report data. Additional evidence from this set of articles comes from the repeated demonstrations that self-report measures play an important role in capturing and understanding theoretical constructs. This body of research advances our understanding of motivation and strategy use, and this is due, in large part, to the use of self-report measures. From the data presented in these studies, we learned about SRL phenomena such as situational fluctuations in motivational states, the relationship between group dynamics and identity formation, individual differences in the dynamic application of strategy use, and the role of domain exposure and certainty in understanding the influence of interest on behavior. In short, the research in this special issue supports the conclusion that self-report measures do indeed have an important role to play in the study of motivation and strategy use. This is true whether one is focused specifically on psychometric measurement properties or on theoretically-driven conceptualizations.

The second part of our task, determining “when” self-report measures contribute to the study of motivation and strategy use, raises questions about the conditions under which self-report may or may not be appropriate. Indeed, the articles in this special issue raised concerns about several limitations of self-report measures. For example, because self-report measures capture a snapshot of a learner’s perceptions, these instruments may be better at explaining a learner’s past than predicting that learner’s future (e.g., van Halem et al., 2020). Likewise, when self-reports are in the form of surveys, they capture variance associated with motivation and strategy use but do not effectively capture dynamic aspects of SRL (e.g., Vriesema & McCaslin, 2020). Finally, like any other measure, self-reports are not immune to potential sources of error such as individual differences (Moeller et al., 2020; Durik & Jenkins, 2020) and insufficient effort responding (e.g., Chauliac et al., 2020). These limitations notwithstanding, it is possible to draw conclusions about when self-reports are likely to advance the study of SRL constructs. Specifically, there are three conditions under which self-report measures can be effectively used: (1) when the measure aligns with theoretically-driven conceptualizations of the construct, (2) when measure selection is driven by alignment with theoretically-driven research questions, and (3) when measure administration, data analysis, and the interpretation of results are grounded in theoretically-driven choices. In sum, self-report measures can contribute to the study of motivation and strategy use when a close coupling of the measure and relevant theory allows for a mutually beneficial cycle of refinement and development.

With these recommendations in mind, I will close with one final thought that points out the primary weakness of this commentary; namely, the choice to synthesize across the specific constructs studied in each article and group them under the broad umbrella of SRL. While this choice permitted general conclusions to be drawn about the use of self-report in the study of motivation and strategy use, it also meant that attention was not paid to possible construct-measurement interactions, that is, interactions in which the conditions for how and when self-report measures are best used vary according to the construct under study. Reading this set of papers raises a number of interesting questions about these possible interactions: for example, whether the degree of certainty influences the prognostic ability of self-reported strategy use in the same way that it influences measures of interest; whether rates of insufficient effort responding are consistent across SRL constructs; and how the methods used to evaluate the effects of classroom activities on interest could be used to evaluate how those activities stimulate strategic processes. Despite the lack of attention given here to such construct-measurement interactions, their exploration offers a direction for future research. Carrying out this work has the potential to shed light on not only the use of self-report measurement tools, but also the theoretical conceptualizations in which they are grounded.


References


Chauliac, M., Catrysse, L., Gijbels, D., & Donche, V. (2020). It is all in the surv-eye: Can eye tracking data shed light on the internal consistency in self-report questionnaires on cognitive processing strategies? Frontline Learning Research, 8(3), 26–39. https://doi.org/10.14786/flr.v8i3.489
Durik, A., & Jenkins, J. (2020). Variability in certainty of self-reported interest: Implications for theory and research. Frontline Learning Research, 8(3), 86–104. https://doi.org/10.14786/flr.v8i3.49
Eccles, J. S., & Wigfield, A. (2002). Motivational beliefs, values, and goals. Annual Review of Psychology, 53(1), 109-132. https://doi.org/10.1146/annurev.psych.53.100901.135153
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between constructs and measures. Psychological Methods, 5(2), 155. https://doi.org/10.1037/1082-989X.5.2.155
Fryer, L. K., & Dinsmore, D. L. (2020). The promise and pitfalls of self-report: Development, research design and analysis issues, and multiple methods. Frontline Learning Research, 8(3), 1–9. https://doi.org/10.14786/flr.v8i3.623
Fryer, L. K., & Nakao, K. (2020). The future of survey self-report: An experiment contrasting Likert, VAS, slide, and swipe touch interfaces. Frontline Learning Research, 8(3), 10–25. https://doi.org/10.14786/flr.v8i3.501
Iaconelli, R., & Wolters, C. A. (2020). Insufficient effort responding in surveys assessing self-regulated learning: Nuisance or fatal flaw? Frontline Learning Research, 8(3), 105–127. https://doi.org/10.14786/flr.v8i3.521
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50(9), 741. https://doi.org/10.1037/0003-066X.50.9.741
Moeller, J., Dietrich, J., Viljaranta, J., & Kracke, B. (2020). Disentangling objective characteristics of learning situations from subjective perceptions thereof, using an experience sampling method design. Frontline Learning Research, 8(3), 63–85. https://doi.org/10.14786/flr.v8i3.529
Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801-813. https://doi.org/10.1177/0013164493053003024
Rogiers, A., Merchie, E., & van Keer, H. (2020). Opening the black box of students’ text-learning processes: A process mining perspective. Frontline Learning Research, 8(3), 40–62. https://doi.org/10.14786/flr.v8i3.527
van Halem, N., van Klaveren, C. P. B. J., Drachsler, H., Schmitz, M., & Cornelisz, I. (2020). Tracking patterns in self-regulated learning using students’ self-reports and online trace data. Frontline Learning Research, 8(3), 142-164. https://doi.org/10.14786/flr.v8i3.497
Vermunt, J. D., & Donche, V. (2017). A learning patterns perspective on student learning in higher education: state of the art and moving forward. Educational Psychology Review, 29(2), 269-299. https://doi.org/10.1007/s10648-017-9414-6
Vriesema, C. C., & McCaslin, M. (2020). Experience and meaning in small-group contexts: Fusing observational and self-report data to capture self and other dynamics. Frontline Learning Research, 8(3), 128–141. https://doi.org/10.14786/flr.v8i3.493
Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25(1), 3-17. https://doi.org/10.1207/s15326985ep2501_2