Frontline Learning Research Vol.6 No. 2 (2018) 81 - 91
ISSN 2295-3159

The Future of Learning by Searching the Web: Mobile, Social, and Multimodal

Yvonne Kammerera, Saskia Brand-Gruwelb, Halszka Jarodzkab

aLeibniz-Institut für Wissensmedien, Tübingen, Germany
b Welten Institute – Research Centre for Learning, Teaching and Technology, Open University of the Netherlands, The Netherlands

Article received 6 February 2018 / revised 16 April / accepted 25 August / available online 11 October

Abstract

Recent technological developments related to the World Wide Web, including mobile computing, social media, and online videos, are shaping the way we learn. As we argue in the present commentary, however, the majority of educational psychological research that has examined how individuals learn by searching the Web has not kept pace with these developments. Therefore, the goal of this commentary is to discuss how recent technological developments might affect how learners acquire knowledge through Web search and to provide a corresponding research agenda. Specifically, we focus on the use of mobile devices and digital assistants, social networking sites, and online videos, and the opportunities and challenges they present to learners. In addition, we suggest that future research should study the ongoing learning processes during Web search in greater detail. We believe that examining the research questions raised in the present commentary will uniquely contribute to the literature on Web-based searching and learning.

Keywords: Internet; World Wide Web; Web search; learning; new technologies

Info: Mail: y.kammerer@iwm-tuebingen.de. DOI: https://doi.org/10.14786/flr.v6i2.343

Introduction

Providing vast amounts of information in various representation formats (e.g., text, pictures, video, audio), the World Wide Web, or the Web for short, has become a major medium for learning in both formal and informal learning settings. By learning we mean the process of building and refining knowledge structures. Basically, people can learn anything on the Web – and they indeed do so. They search the Web to complete school or university assignments about course-related issues (e.g., Purcell, Heaps, Buchanan, & Friedrich, 2013) or to inform themselves about socio-scientific issues such as climate change (e.g., Horrigan, 2006), political debates (e.g., Epstein & Robertson, 2015), or health and nutrition issues (e.g., Amante, Hogan, Pagoto, English, & Lapane, 2015), and also to learn procedural skills (e.g., Eickhoff, Teevan, White, & Dumais, 2014).

Individuals no longer do this only at home at their desktop PCs or laptops, but anywhere and anytime (e.g., while travelling) by using mobile devices like smartphones and tablets with ubiquitous Internet access (e.g., Ito, 2009). Moreover, AI (artificial intelligence)-powered digital assistants such as Apple’s Siri, Amazon’s Alexa, or the Google Assistant even allow users to search by voice and to receive spoken responses. In addition to using search engines, people also increasingly inform themselves through social networking sites, for instance about political (e.g., Smith, 2013) or health issues (e.g., Vance, Howe, & Dellavalle, 2009). These technological developments are about to substantially change the way users acquire knowledge on the Web. However, the majority of educational psychological research within this field has not kept pace with these developments. In the present commentary, we argue that this must change for our research to remain relevant in today’s rapidly changing technological society.

Past research on learning by searching the Web

A growing body of quantitative research concerning how individuals in formal or informal learning situations search the Web (for recent overviews, see, e.g., García-Rodicio, 2015; Salmerón, Strømsø, Kammerer, Stadtler, & Van den Broek, 2018; Walhout, Oomen, Jarodzka, & Brand-Gruwel, 2017) shows the relevance of this topic in the educational sciences. Commonly, learning by searching the Web is described as a sequence of processing steps that unfold in an iterative manner (e.g., Brand-Gruwel, Wopereis, & Walraven, 2009; Gerjets, Kammerer, & Werner, 2011; Kiili et al., 2018; Leu, Kinzer, Coiro, Castek, & Henry, 2013): first, identifying and defining the information need and generating respective search terms; second, locating relevant information resources by evaluating and selecting links from search engine result pages (SERPs); third, scanning and briefly evaluating the information presented in the resources; fourth, thoroughly processing and extracting content from resources identified as useful; and fifth, comparing, integrating, and synthesizing information from several resources to prepare the final task outcome (in the user’s mind or externally). A major research goal has been to investigate how individuals select and evaluate potentially useful websites from the innumerable information resources available on the Web in order to construct a coherent mental representation of the issue at hand (cf. Brand-Gruwel et al., 2009; Rouet & Britt, 2011). Increased time spent on relevant and reliable websites and reflections regarding the credibility of information providers have been shown to be positively related to learning outcomes (García-Rodicio, 2015; Goldman et al., 2012; List, Grossnickle, & Alexander, 2016; Mason, Ariasi, & Boldrin, 2011; Wiley et al., 2009). A common finding, however, is also that – especially if prior knowledge about the search topic is low – individuals tend to focus on the first few search results presented by a search engine and mainly evaluate whether or not the search results address the issue they are looking for (e.g., Brand-Gruwel, Kammerer, van Meeuwen, & Van Gog, 2017; Salmerón, Kammerer, & García-Carrión, 2013; Walhout et al., 2017). Furthermore, particularly younger students (e.g., from grades 5 to 7) seem to rely more on superficial cues such as highlighted keywords when accessing websites rather than on the underlying semantic information contained in the search result descriptions (Keil & Kominsky, 2013; Rouet et al., 2011). Moreover, students of all ages spontaneously tend to pay rather little attention to source features during Web search, that is, to information about who provides the content and for what reason (e.g., Gerjets et al., 2011; Kiili, Laurinen, & Marttunen, 2008; Paul, Macedo-Rouet, Stadtler, & Rouet, 2017; Stadtler & Bromme, 2007; Walraven, Brand-Gruwel, & Boshuizen, 2009; Wiley et al., 2009). Since information published on the Web does not necessarily undergo any quality control, this creates the risk of acquiring one-sided, biased, or even false information (cf. Goldman et al., 2012).

Notably, however, the research cited above has focused mostly on learning from text-based websites that students accessed via search engines on desktop PCs or laptops. Given the technological developments described in the introduction, it is of vital importance to now go a step further and investigate how learners select and evaluate information resources when using such new devices and media formats. The goal of the present commentary is to formulate a corresponding research agenda. Specifically, we will focus on the use of mobile devices and digital assistants, the use of social networking sites, and the use of online videos, and the opportunities and challenges they present to learners. In addition, we argue that future research should study the ongoing learning processes during Web search in greater detail.

1. Searching by using mobile devices and digital assistants – anywhere and anytime

Apart from desktop PCs and laptops, individuals nowadays increasingly use mobile devices like smartphones and tablets to inform themselves on the Web, for instance when completing school assignments (Purcell, Heaps, Buchanan, & Friedrich, 2013). Due to the smaller screen size, which allows only a few search results and only smaller parts of websites to be displayed at a time, mobile searches might be even more perfunctory and superficial than searches on desktop computers or laptops. Previous large-scale studies indicate that in mobile searches users click on very few results and often seem to read only the information provided in the search result snippets rather than opening the linked web page (e.g., Kamvar & Baluja, 2006). Further descriptive findings by Djamasbi, Hall-Phillips, and Yang (2013) suggest that paid search results (i.e., ads that are presented above the actual search results) might attract more attention on mobile phones than on desktop computers. Kim, Thomas, Sankaranarayana, Gedeon, and Yoon (2015) compared users’ search behavior when using a search engine interface presented on a small screen (to simulate the screen of a mobile device) or on a large screen (i.e., a regular computer screen) to conduct simple fact-finding tasks. Results showed that participants had greater difficulty extracting information from search results pages on the smaller screens. However, in terms of the accuracy of finding correct answers, no differences between the screen conditions were found. Yet, it is reasonable to assume that for more complex learning tasks that require comparing and integrating information from various websites rather than finding a single correct answer, screen size should matter more.

Another emerging technology is AI-powered digital assistants such as Apple’s Siri, Amazon’s Alexa, or the Google Assistant. Instead of a list of different search results (that potentially could be evaluated, compared, and integrated), they provide their users with only a single spoken response to their query. This means that only one result from the SERP is selected and presented to the users, which makes them even more dependent on the search engine algorithm.

Research Agenda

2. Evaluating information encountered on social networking and question-and-answering sites

Seeking answers from members of one’s social networks or on social question-and-answering (Q&A) sites has become a fast and popular way to obtain information on various topics. Many individuals trust information from social networking sites (SNS) even more than information provided by search engines (Morris, Teevan, & Panovich, 2010). Yet, how individuals learn from SNS and Q&A sites is still an underexplored topic. Given that the intentional or unintentional spread of misinformation by humans or by social bots is an increasing problem in social media, considering author credibility is crucial when reading messages on SNS and Q&A sites. Studies indicate that both school students and adult users rely more on answers from (self-declared) experts than from non-experts, at least when several alternative messages are provided (Winter & Krämer, 2012; Salmerón, Macedo-Rouet, & Rouet, 2016). However, whereas university students seem to prefer answers from expert authors that refer to an external source of information, primary school students seem to prefer expert messages that report a personal experience (Salmerón et al., 2016). Yet, the actual learning outcomes have not been examined in these studies, nor did the study participants have strong opinions on the topics addressed in the task materials.

Previous research has shown that individuals’ pre-existing attitudes or opinions about a topic can heavily bias their information retrieval. Individuals seem to preferentially access websites that provide attitude-consistent rather than counterattitudinal information (e.g., Frost, Casey, Griffin, Raymundo, Farrell, & Carrigan, 2015; Knobloch-Westerwick, Johnson, & Westerwick, 2015; Schwind & Buder, 2012; Schwind, Buder, Cress, & Hesse, 2012), to judge proattitudinal information as more trustworthy (e.g., Bråten, Salmerón, & Strømsø, 2016; Schwind & Buder, 2012; Strømsø, Bråten, & Stenseth, 2017; Van Strien, Kammerer, Brand-Gruwel, & Boshuizen, 2016), and also to remember it better (e.g., Frost et al., 2015; Maier & Richter, 2014; Schwind et al., 2012).

Such predominant exposure to and reliance on attitude-consistent information might be further promoted by personalized web algorithms that present to each user information in line with his or her previous interests, thereby creating so-called “filter bubbles” that isolate users from opposing information (Pariser, 2011). Furthermore, online social networks can create “echo chambers” (e.g., Bakshy, Messing, & Adamic, 2015; Boutyline & Willer, 2017), in which like-minded people share and reinforce their personal views and spread one-sided or even false information. However, it should also be noted that some recent studies indicate that the problem of echo chambers and filter bubbles on the Internet might be overstated (e.g., Dubois & Blank, 2018; Haim, Graefe, & Brosius, 2018).
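To make the personalization mechanism referred to above more concrete, the following toy sketch in Python re-ranks candidate items by their similarity to a user’s previous clicks, so that attitude-consistent content keeps rising to the top. All data, names, and the similarity measure are hypothetical illustrations; this is not the algorithm of any actual search engine or social networking site.

```python
# A toy illustration of a similarity-based personalization mechanism (hypothetical;
# not the algorithm of any actual platform): candidate items are re-ranked by how
# similar they are to the user's previous clicks, so attitude-consistent content
# tends to stay on top.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def personalized_rank(results, click_history):
    """Order candidate results by their TF-IDF cosine similarity to the click history."""
    vectorizer = TfidfVectorizer().fit(results + click_history)
    result_vectors = vectorizer.transform(results)
    profile_vector = vectorizer.transform([" ".join(click_history)])
    scores = cosine_similarity(result_vectors, profile_vector).ravel()
    # Highest-scoring (most profile-similar) results come first.
    return [result for _, result in sorted(zip(scores, results), reverse=True)]

# Illustrative data only.
click_history = ["climate change is exaggerated", "doubts about climate models"]
results = [
    "scientific consensus on human-caused climate change",
    "why some people doubt climate change models",
    "overview of new climate policy proposals",
]
print(personalized_rank(results, click_history))
```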

Research Agenda

3. The use of multimodal learning materials and the acquisition of procedural knowledge

The Web is a huge repository of multimodal materials (i.e., materials combining visual and auditory information, such as videos). Adolescents, for instance, frequently use the video-sharing platform YouTube when completing school assignments (Purcell et al., 2013). Apart from materials conveying declarative knowledge, e.g., in the form of online lectures or documentaries, a plethora of online tutorials and how-to videos also exists on the Web to convey procedural knowledge. Decades of research have shown that this is indeed a very efficient way of learning (Renkl, 2014; Van Gog & Rummel, 2010). Furthermore, research from the field of multimedia learning indicates that particularly for procedural tasks and skill acquisition, the use of visual materials in addition to text might be important for learning (Van Genuchten, Scheiter, & Schüler, 2012). The authors found that the multimedia effect, that is, the finding that individuals learn better from text and pictures than from text alone (cf. Mayer, 2009), was larger in procedural tasks than in conceptual and causal tasks. On the other hand, however, pictures can also be misleading in that they lead the learner to be less critical about the accompanying text (Isberner, Richter, Maier, Knuth-Herzig, Horz, & Schnotz, 2013; Lenzner, Schnotz, & Mueller, 2013; McCabe & Castel, 2008; Ögren, Nyström, & Jarodzka, 2016).

While there exists a lot of research on how individuals learn with single instructional videos (e.g., Merkt, Weigand, Heier, & Schwan, 2011; Merkt & Schwan, 2014) or how they integrate textual and pictorial information within documents (e.g., Eitel, Scheiter, & Schüler, 2013; Schüler, Arndt, & Scheiter, 2015), previous research that examined learning during Web search has mostly focused on textual documents. Only recently have researchers begun to address this topic. For instance, Salmerón, Sampietro, Delgado, Ziegelstein, and Fajardo (2017) examined how primary school students evaluate information from textual websites and personal online videos. Their results indicate that students evaluated the trustworthiness of personal videos more critically than that of textual websites and better remembered who provided the information when working with two modalities (i.e., one text-based resource and one video resource) rather than only one (i.e., two texts or two videos). Learners’ integration processes across a set of multimodal (i.e., videos) or multimedia (i.e., text and picture) information resources, however, seem to be comparable to integration processes across purely textual information resources, as indicated by recent research by List (in press) and Schüler (in press).

Research Agenda

4. Analysis of the learning sequence and learning increment during Web search

A final issue that we would like to address is a methodological one. Most research on learning during Web search measures some kind of learning outcome, in the form of a knowledge test (and respective knowledge gains from pre- to post-test) or an essay, for which the number of arguments raised or the breadth and depth of topic coverage are analyzed (cf. Wilson & Wilson, 2013). In addition, process measures during the search are increasingly investigated by means of navigation logs, eye-tracking data, or verbal protocols (see, e.g., Argelagós, Brand-Gruwel, Jarodzka, & Pifarré, 2018). Typical measures are the number and type of search results selected from a SERP or the time spent on relevant or reliable as compared to irrelevant or unreliable websites (e.g., García-Rodicio, 2015; Goldman et al., 2012; Kammerer & Gerjets, 2014; Mason, Junyent, & Tornatora, 2014; Wiley et al., 2009), the number and type of search results fixated on a SERP or the respective total fixation time on search results (e.g., Brand-Gruwel et al., 2017; Hautala, Kiili, Kammerer, Loberg, Hokkanen, & Leppänen, 2018; Kammerer & Gerjets, 2012; Walhout et al., 2017), the order in which the search results are visually inspected (e.g., Kammerer & Gerjets, 2014; Şendurur & Yildirim, 2015), or the number and kind of verbal utterances (e.g., about the type or credibility of sources) made by individuals during search result selection or while reading the websites (e.g., Anmarkrud et al., 2014; Bråten, Ferguson, Strømsø, & Anmarkrud, 2014; Gerjets et al., 2011; Greene, Yu, & Copeland, 2014; Greene, Copeland, Deekens, & Seung, 2018; Kammerer, Bråten, Gerjets, & Strømsø, 2013; Mason, Boldrin, & Ariasi, 2010a, 2010b; Mason, Ariasi, & Boldrin, 2011; Walraven et al., 2009). However, how the navigation sequence (or navigation path) in which certain kinds of websites are accessed affects the learning outcome has hardly been examined yet. Research on learning with instructional hypertext-based learning environments has shown that the navigation path, that is, the order in which pages of the hypertext system are accessed, indeed affects learning outcomes. For instance, a coherent navigation path (i.e., subsequently accessing web pages that are semantically related) is beneficial for learning, particularly when prior knowledge about the topic is low (e.g., Salmerón, Cañas, Kintsch, & Fajardo, 2005; Salmerón, Kintsch, & Cañas, 2006).
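As a concrete illustration of how such a navigation path might be analyzed, the following sketch in Python quantifies the semantic coherence of a visited-page sequence as the mean cosine similarity between the TF-IDF vectors of consecutively visited pages. The measure and the toy data are illustrative assumptions and not the operationalization used in the cited studies.

```python
# A minimal sketch (hypothetical operationalization) of navigation-path coherence:
# the mean cosine similarity between the TF-IDF vectors of consecutively visited
# pages, computed from a navigation log of page texts in visiting order.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def path_coherence(page_texts):
    """Mean pairwise similarity of consecutively visited pages (higher = more coherent)."""
    if len(page_texts) < 2:
        return float("nan")
    vectors = TfidfVectorizer().fit_transform(page_texts)
    similarities = [
        cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
        for i in range(vectors.shape[0] - 1)
    ]
    return sum(similarities) / len(similarities)

# Toy navigation log: page texts in the order they were visited.
visited_pages = [
    "causes of climate change and greenhouse gases",
    "greenhouse gases and carbon dioxide emissions",
    "celebrity news and movie reviews",
]
print(round(path_coherence(visited_pages), 2))
```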

Furthermore, previous educational psychological research has hardly investigated how learning increases during Web search and how this in turn influences subsequent search behavior. First empirical evidence from the field of Web and data science indicates that the knowledge users gain during a search session continuously affects their ongoing search and evaluation processes, such as their use of query terms and their selection and evaluation of websites (Eickhoff et al., 2014; Gadiraju, Yu, Dietze, & Holtz, 2018). Eickhoff et al. (2014), for instance, examined, for both declarative and procedural searches, how query complexity and the time spent reading a website increased over the course of a search. Moreover, Westerwick, Kleinman, and Knobloch-Westerwick (2013) showed that prior attitudes affected the selection of attitude-consistent and attitude-inconsistent websites differently over the course of users’ Web search. Thus, future educational psychological research should also use more fine-grained analyses, e.g., by splitting up the search session into several time intervals.
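To illustrate what such a fine-grained, interval-based analysis might look like, the following sketch in Python splits a toy query log into equal time intervals and computes the mean query length per interval as a crude proxy for query complexity. The log format and the proxy measure are illustrative assumptions rather than the measures used by Eickhoff et al. (2014).

```python
# A minimal sketch of an interval-based analysis (illustrative log format and proxy
# measure, not taken from the cited studies): a query log is split into equal time
# intervals and the mean query length per interval serves as a crude proxy for
# query complexity.
from statistics import mean

# Toy log: (seconds since session start, query string).
query_log = [
    (5, "climate change"),
    (40, "climate change causes"),
    (95, "greenhouse gas emissions per country 2017"),
    (160, "how do greenhouse gases trap heat in the atmosphere"),
]

def query_length_by_interval(log, session_length, n_intervals=3):
    """Mean number of query terms issued within each equal-sized time interval."""
    width = session_length / n_intervals
    buckets = [[] for _ in range(n_intervals)]
    for timestamp, query in log:
        index = min(int(timestamp // width), n_intervals - 1)
        buckets[index].append(len(query.split()))
    return [mean(bucket) if bucket else None for bucket in buckets]

print(query_length_by_interval(query_log, session_length=180))  # e.g., [2.5, 6, 9]
```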

Research Agenda

Conclusion

In sum, we argue that with the proposed research agenda our research can keep pace with recent technological developments such as mobile computing and digital assistants, social networking sites, and instructional online videos. Empirically examining how these developments shape learners’ search and evaluation behavior on the Web is vitally important for educational psychological research. We believe that examining the research questions raised in the present commentary will uniquely contribute to the literature on Web-based searching and learning. Moreover, the answers can provide educational insights into how to foster students’ Web search practices, which are essential skills for twenty-first century learning (e.g., Kiili et al., 2018; Van de Oudeweetering & Voogt, 2018). For instance, research might provide insights regarding which digital tools and formats are particularly suited for specific learning purposes and specific types of learners. Finally, with technology increasingly doing the thinking for users, from an educational perspective it is of great importance to make students aware of how the technology and the underlying algorithms work and how to use digital media and tools in a meaningful, reflective, and self-regulated way.

Keypoints

References


Amante, D. J., Hogan, T. P., Pagoto, S. L., English, T. M. & Lapane, K. L. (2015). Access to care and use of the Internet to search for health information: results from the US national health interview survey. Journal of Medical Internet Research, 17, e106. doi:10.2196/jmir.4126
Anmarkrud, Ø., Bråten, I., & Strømsø, H.I. (2014). Multiple-documents literacy: Strategic processing, source awareness, and argumentation when reading multiple conflicting documents. Learning and Individual Differences, 30, 64-74. doi:10.1016/j.lindif.2013.01.007
Argelagós, E., Brand-Gruwel, S., Jarodzka, H., & Pifarré, M. (2018). Unpacking cognitive skills engaged in web-search: how can log files, eye movements, and cued-retrospective reports help? An in-depth qualitative case study. International Journal of Innovation and Learning, 24, 152-175. doi:10.1504/IJIL.2018.10014361
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348, 1130-1132. doi:10.1126/science.aaa1160
Boutyline, A., & Willer, R. (2017). The social structure of political echo chambers: Variation in ideological homophily in online networks. Political Psychology, 38, 551-569. doi:10.1111/pops.12337
Brand-Gruwel, S., Kammerer, Y., Van Meeuwen, L., & Van Gog, T. (2017). Source evaluation of domain experts and novices during Web search. Journal of Computer Assisted Learning, 33, 234-251. doi:10.1111/jcal.12162
Brand-Gruwel, S., Wopereis, I., & Walraven, A. (2009). A descriptive model of information problem solving while using internet. Computers & Education, 53, 1207-1217. doi:10.1016/j.compedu.2009.06.004
Bråten, I., Ferguson, L. E., Strømsø, H. I., & Anmarkrud, Ø. (2014). Students working with multiple conflicting documents on a scientific issue: Relations between epistemic cognition while reading and sourcing and argumentation in essays. British Journal of Educational Psychology, 84, 58-85. doi:10.1111/bjep.12005
Bråten, I., Salmerón, L., & Strømsø, H. I. (2016). Who said that? Investigating the plausibility-induced source focusing assumption with Norwegian undergraduate readers. Contemporary Educational Psychology, 46, 253-262. doi:10.1016/j.cedpsych.2016.07.004
Djamasbi, S., Hall-Phillips, A., & Yang, R. R. (2013). An examination of ads and viewing behavior: An eye tracking study on desktop and mobile devices. In Proceedings of the 19th Americas Conference on Information Systems (Vol. 1, pp. 350–355). Chicago, IL: AIS Electronic Library. Retrieved from https://digitalcommons.wpi.edu/uxdmrl-pubs/36
Dubois, E., & Blank, G. (2018). The echo chamber is overstated: the moderating effect of political interest and diverse media. Information, Communication & Society, 21, 729-745. doi:10.1080/1369118X.2018.1428656
Eickhoff, C., Teevan, J., White, R., & Dumais, S. T. (2014). Lessons from the journey: a query log analysis of within-session learning. In Proceedings of the Seventh ACM International Conference on Web Search and Data Mining (pp. 223-232). New York, NY: ACM. doi:10.1145/2556195.2556217
Eitel, A., Scheiter, K. & Schüler, A. (2013). How inspecting a picture affects processing of text in multimedia learning. Applied Cognitive Psychology, 27, 451-461. doi:10.1002/acp.2922
Epstein, R., & Robertson, R.E. (2015). The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. Proceedings of the National Academy of Sciences, 112: E4512–E4521. doi:10.1073/pnas.1419828112
Frost, P., Casey, B., Griffin, K., Raymundo, L., Farrell, C., & Carrigan, R. (2015). The Influence of Confirmation Bias on Memory and Source Monitoring. The Journal of General Psychology, 142, 238-252. doi:10.1080/00221309.2015.1084987
Gadiraju, U., Yu, R., Dietze, S., & Holtz, P. (2018). Analyzing knowledge gain of users in informational search sessions on the web. In C. Shah & N. J. Belkin (Eds.), Proceedings of the 2018 Conference on Human Information Interaction & Retrieval (CHIIR) (pp. 2-11). New Brunswick, NJ: ACM. doi:10.1145/3176349.3176381
García-Rodicio, H. (2015). Students’ evaluation strategies in a Web research task: Are they sensitive to relevance and reliability? Journal of Computing in Higher Education, 27, 134–157. doi:10.1007/s12528-015-9098-1
Gerjets, P., Kammerer, Y., & Werner, B. (2011). Measuring spontaneous and instructed evaluation processes during web search: Integrating concurrent thinking-aloud protocols and eye-tracking data. Learning and Instruction, 21, 220-231. doi:10.1016/j.learninstruc.2010.02.005
Goldman, S.R., Braasch, J.L.G., Wiley, J., Graesser, A.C., & Brodowinska, K. (2012). Comprehending and learning from Internet sources: Processing patterns of better and poorer learners. Reading Research Quarterly, 47, 356–381. doi:10.1002/RRQ.027
Greene, J.A., Yu, S.B., & Copeland, D.Z. (2014). Measuring critical components of digital literacy and their relationships with learning. Computers and Education, 76, 55-69. doi:10.1016/j.compedu.2014.03.008
Greene, J. A., Copeland, D. Z., Deekens, V. M., & Seung, B. Y. (2018). Beyond knowledge: Examining digital literacy's role in the acquisition of understanding in science. Computers & Education, 117, 141-159. doi:10.1016/j.compedu.2017.10.003
Haim, M., Graefe, A., & Brosius, H.-B. (2018). Burst of the Filter Bubble?: Effects of personalization on the diversity of Google News. Digital Journalism, 6, 330–343. doi:10.1080/21670811.2017.1338145
Hautala, J., Kiili, C., Kammerer, Y., Loberg, O., Hokkanen, S., & Leppänen, P. H. (2018). Sixth graders’ evaluation strategies when reading Internet search results: an eye-tracking study. Behaviour & Information Technology, 37, 761-773. doi:10.1080/0144929X.2018.1477992
Horrigan, J. (2006). The internet as a resource for news and information about science. Pew Research Center’s Internet & American Life Project. Retrieved from http://www.pewinternet.org/Reports/2006/The-Internet-as-a-Resource-for-News-and-Information-about-Science.aspx
Isberner, M.-B., Richter, T., Maier, J., Knuth-Herzig, K., Horz, H. & Schnotz, W. (2013). Comprehending conflicting science-related texts: graphs as plausibility cues. Instructional Science, 41, 849–872. doi:10.1007/s11251-012-9261-2
Ito, M. (Ed.). (2009). Hanging out, messing around, and geeking out: Kids living and learning with new media. Cambridge, MA: MIT Press.
Kammerer, Y., Bråten, I., Gerjets, P. & Strømsø, H.I. (2013). The role of Internet-specific epistemic beliefs in laypersons’ source evaluations and decisions during Web search on a medical issue. Computers in Human Behavior, 29, 1193-1203. doi:10.1016/j.chb.2012.10.012
Kammerer, Y. & Gerjets, P. (2012). Effects of search interface and Internet-specific epistemic beliefs on source evaluations during Web search for medical information: An eye-tracking study. Behaviour & Information Technology, 31, 83-97. doi:10.1080/0144929X.2011.599040
Kammerer, Y., & Gerjets, P. (2014). The role of search result position and source trustworthiness in the selection of web search results when using a list or a grid interface. International Journal of Human-Computer Interaction, 30, 177-191. doi:10.1080/10447318.2013.846790
Kamvar, M., & Baluja, S. (2006). A large scale study of wireless search behavior: Google mobile search. In Proceedings of the SIGCHI conference on Human Factors in computing systems (pp. 701-709). New York, NY: ACM. doi:10.1145/1124772.1124877
Keil, F.C., & Kominsky, J.F. (2013). Missing links in middle school: developing use of disciplinary relatedness in evaluating Internet search results. PLoS ONE, 8: e67777. doi:10.1371/journal.pone.0067777
Kiili, C., Laurinen, L. & Marttunen, M. (2008). Students evaluating Internet sources: From versatile evaluators to uncritical readers. Journal of Educational Computing Research, 39, 75–95. doi:10.2190/EC.39.1.e
Kiili, C., Leu, D. J., Utriainen, J., Coiro, J., Kanniainen, L., Tolvanen, A., Lohvansuu, K. & Leppänen, P. H. (2018). Reading to Learn From Online Information: Modeling the Factor Structure. Journal of Literacy Research, 50, 304-334. doi:10.1177/1086296X18784640
Kim, J., Thomas, P., Sankaranarayana, R., Gedeon, T., & Yoon, H. J. (2015). Eye-tracking analysis of user behavior and performance in Web search on large and small screens. Journal of the Association for Information Science and Technology, 66, 526–544. doi:10.1002/asi.23187
Knobloch-Westerwick, S., Johnson, B. K., & Westerwick, A. (2015). Confirmation bias in online searches: Impacts of selective exposure before an election on political attitude strength and shifts. Journal of Computer-Mediated Communication, 20, 171-187. doi:10.1111/jcc4.12105
Lenzner, A., Schnotz, W. & Mueller, A. (2013). The role of decorative pictures in learning. Instructional Science, 41, 811–831. doi:10.1007/s11251-012-9256-z
Leu, D. J., Kinzer, C. K., Coiro, J., Castek, J., & Henry, L. A. (2013). New literacies: A dual level theory of the changing nature of literacy, instruction, and assessment. In D. E. Alvermann, N. J. Unrau, & R. B. Ruddell (Eds.), Theoretical models and processes of reading (6th ed., pp. 1150–1181). Newark, DE: International Reading Association. doi:10.1177/002205741719700202
List, A. (in press). Strategies for comprehending and integrating texts and videos. Learning and Instruction. doi:10.1016/j.learninstruc.2018.01.008
List, A., Grossnickle, E. M. & Alexander, P. A. (2016). Undergraduate students' justifications for source selection in a digital academic context. Journal of Educational Computing Research, 54, 22–61. doi:10.1177/0735633115606659
Maier, J., & Richter, T. (2014). Fostering multiple text comprehension: How metacognitive strategies and motivation moderate the text-belief consistency effect. Metacognition and learning, 9, 51-74. doi:10.1007/s11409-013-9111-x
Mason, L., Ariasi, N. & Boldrin, A. (2011). Epistemic beliefs in action: Spontaneous reflections about knowledge and knowing during online information searching and their influence on learning. Learning and Instruction, 21, 137-151. doi:10.1016/j.learninstruc.2010.01.001
Mason, L., Boldrin, A. & Ariasi, N. (2010a). Epistemic metacognition in context: Evaluating and learning online information. Metacognition and Learning, 5, 67-90. doi:10.1007/s11409-009-9048-2
Mason, L., Boldrin, A. & Ariasi, N. (2010b). Searching the Web to learn about a controversial topic: Are students epistemically active? Instructional Science, 38, 607-633. doi:10.1007/s11251-008-9089-y
Mason, L., Junyent, A. A. & Tornatora, M. C. (2014). Epistemic evaluation and comprehension of web-source information on controversial science-related topics: Effects of a short-term instructional intervention. Computers & Education, 76, 143-157. doi:10.1016/j.compedu.2014.03.016
Mayer, R. E. (2009). Multimedia principle. In R. E. Mayer (Ed.), Multimedia learning (2nd ed., pp. 223–241). New York, NY: Cambridge University Press.
McCabe, D. & Castel, A. (2008). Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition, 107, 343–352. doi:10.1016/j.cognition.2007.07.017
Merkt, M., & Schwan, S. (2014). How does interactivity in videos affect task performance? Computers in Human Behavior, 31, 172-181. doi:10.1016/j.chb.2013.10.018
Merkt, M., Weigand, S., Heier, A., & Schwan, S. (2011). Learning with videos vs. learning with print: The role of interactive features. Learning and Instruction, 21, 687-704. doi:10.1016/j.learninstruc.2011.03.004
Morris, M. R., Teevan, J., & Panovich, K. (2010, April). What do people ask their social networks, and why?: a survey study of status message q&a behavior. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 1739-1748). New York, NY: ACM. doi:10.1145/1753326.1753587
Ögren, M., Nyström, M. & Jarodzka, H. (2016). There's more to the multimedia effect than meets the eye: Is seeing pictures believing? Instructional Science, 1-25. doi:10.1007/s11251-016-9397-6
Pariser, E. (2011). The filter bubble: what the Internet is hiding from you. New York: Penguin Press.
Paul, J. M., Macedo-Rouet, M., Stadtler, M., & Rouet, J.-F. (2017). Why attend to source information when reading online? The perspective of ninth grade students from two different countries. Computers and Education, 113, 339-354. doi:10.1016/j.compedu.2017.05.020
Purcell, K., Heaps, A., Buchanan, J. & Friedrich, L. (2013). How teachers are using technology at home and in their classrooms. Pew Research Center’s Internet & American Life Project. Retrieved from http://www.pewinternet.org/2013/02/28/how-teachers-are-using-technology-at-home-and-in-their-classrooms/
Renkl, A. (2014). Towards an instructionally-oriented theory of example-based learning. Cognitive Science, 38, 1-37. doi:10.1111/cogs.12086
Rouet, J.-F., & Britt, M. A. (2011). Relevance processes in multiple document comprehension. In M.T. McCrudden, J. P. Magliano, & G. Schraw (Eds.), Text relevance and learning from text (pp. 19–52). Greenwich, CT: Information Age Publishing.
Rouet, J.-F., Ros, C., Goumi, A., Macedo-Rouet, M., & Dinet, J. (2011). The influence of surface and deep cues on primary and secondary school students' assessment of relevance in Web menus. Learning and Instruction, 21, 205-219. doi:10.1016/j.learninstruc.2010.02.007
Salmerón, L., Cañas, J. J., Kintsch, W., & Fajardo, I. (2005). Reading strategies and hypertext comprehension. Discourse Processes, 40, 171–191. doi:10.1207/s15326950dp4003_1
Salmerón, L., Kammerer, Y., & García-Carrión, P. (2013). Searching the Web for conflicting topics: Page and user factors. Computers in Human Behavior, 29, 2161-2171. doi:10.1016/j.chb.2013.04.034
Salmerón, L., Kintsch, W., & Cañas, J. J. (2006). Reading strategies and prior knowledge in learning with hypertext. Memory & Cognition, 34, 1157–1171. doi:10.3758/BF03193262
Salmerón, L., Macedo-Rouet, M., & Rouet, J.-F. (2016). Multiple viewpoints increase students' attention to source features in social question and answer forum messages. Journal of the Association for Information Science and Technology, 67, 2404–2419. doi:10.1002/asi.23585
Salmerón, L., Sampietro, A., Delgado, P., Ziegelstein, K., & Fajardo, I. (2017, October). Young students’ evaluation of multiple and multimodal documents. Paper presented at the Annual Workshop on Multiple Documents Comprehension. Leibniz-Institut für Wissensmedien, Tübingen, Germany.
Salmerón, L., Strømsø, H. I., Kammerer, Y., Stadtler, M., & Van den Broek, P.W. (2018). Comprehension processes in digital reading. In M. Barzillai, J. Thomson, S. Schroeder, & P.W. Van den Broek (Eds.) Learning to read in a digital world (pp. 91-120). Amsterdam: John Benjamins Publishing Company.
Schüler, A. (in press). The integration of information in a digital, multi-modal learning environment. Learning and Instruction. doi:10.1016/j.learninstruc.2017.12.005
Schüler, A., Arndt, J., & Scheiter, K. (2015). Processing multimedia material: Does integration of text and pictures result in a single or two interconnected mental representations? Learning and Instruction, 35, 62-72. doi:10.1016/j.learninstruc.2014.09.005
Schwind, C., & Buder, J. (2012). Reducing confirmation bias and evaluation bias: When are preference-inconsistent recommendations effective – and when not? Computers in Human Behavior, 28, 2280-2290. doi:10.1016/j.chb.2012.06.035
Schwind, C., Buder, J., Cress, U., & Hesse, F.W. (2012). Preference-inconsistent recommendations: An effective approach for reducing confirmation bias and stimulating divergent thinking? Computers & Education, 58, 787-796. doi:10.1016/j.compedu.2011.10.003
Şendurur, E., & Yildirim, Z. (2015). Students’ web search strategies with different task types: an eye-tracking study. International Journal of Human-Computer Interaction, 31, 101-111. doi:10.1080/10447318.2014.959105
Smith, A. (2013). Civic engagement in the digital age. Pew Research Center’s Internet & American Life Project. Retrieved from http://www.pewinternet.org/2013/04/25/civic-engagement-in-the-digital-age/
Stadtler, M., & Bromme, R. (2007). Dealing with multiple documents on the WWW: The role of metacognition in the formation of documents models. International Journal of Computer-Supported Collaborative Learning, 2, 191-210. doi:10.1007/s11412-007-9015-3
Strømsø, H. I., Bråten, I., & Stenseth, T. (2017). The role of students’ prior topic beliefs in recall and evaluation of information from texts on socio-scientific issues. Nordic Psychology, 69, 127-142. doi:10.1080/19012276.2016.1198270
Van de Oudeweetering, K., & Voogt, J. (2018). Teachers’ conceptualization and enactment of twenty-first century competences: exploring dimensions for new curricula. The Curriculum Journal, 29, 116-133. doi:10.1080/09585176.2017.1369136
Van Genuchten, E., Scheiter, K., & Schüler, A. (2012). Examining learning from text and pictures for different task types: Does the multimedia effect differ for conceptual, causal, and procedural tasks? Computers in Human Behavior, 28, 2209–2218. doi:10.1016/j.chb.2012.06.028
Van Gog, T. & Rummel, N. (2010). Example-based learning: Integrating cognitive and social-cognitive research perspectives. Educational Psychology Review, 22, 155-174. doi:10.1007/s10648-010-9134-7
Van Strien, J. L. H., Kammerer, Y., Brand-Gruwel, S., & Boshuizen, H. P. A. (2016). How attitude strength biases information processing and evaluation on the web. Computers in Human Behavior, 60, 245-252. doi:10.1016/j.chb.2016.02.057
Vance, K., Howe, W. & Dellavalle, R.P. (2009). Social internet sites as a source of public health information. Dermatologic Clinics, 27, 133-136. doi:10.1016/j.det.2008.11.010
Walhout, J., Oomen, P., Jarodzka, H. & Brand-Gruwel, S. (2017). Effects of task complexity on online search behavior of adolescents. Journal of the Association for Information Science and Technology, 68, 1449-1461. doi:10.1002/asi.23782
Walraven, A., Brand-Gruwel, S., & Boshuizen, H. P. (2009). How students evaluate information and sources when searching the World Wide Web for information. Computers & Education, 52, 234-246. doi:10.1016/j.compedu.2008.08.003
Westerwick, A., Kleinman, S.B., & Knobloch‐Westerwick, S. (2013). Turn a blind eye if you care: Impacts of attitude consistency, importance, and credibility on seeking of political information and implications for attitudes. Journal of Communication, 63, 432-453. doi:10.1111/jcom.12028
Wiley, J., Goldman, S., Graesser, A., Sanchez, C., Ash, I. & Hemmerich, J. (2009). Source evaluation, comprehension, and learning in internet science inquiry tasks. American Educational Research Journal, 46, 1060-1106. doi:10.3102/0002831209333183
Wilson, M. J., & Wilson, M. L. (2013). A comparison of techniques for measuring sensemaking and learning within participant-generated summaries. Journal of the American Society for Information Science and Technology, 64, 291–306. doi:10.1002/asi.22758
Winter, S., & Krämer, N. C. (2012). Selecting science information in Web 2.0: How source cues, message sidedness, and need for cognition influence users' exposure to blog posts. Journal of Computer-Mediated Communication, 18, 80–96. doi:10.1111/j.1083-6101.2012.01596.x