Scientific Reasoning and Argumentation: Advancing an Interdisciplinary Research Agenda in Education

Frank Fischera,  Ingo Kollara, Stefan Uferb, Beate Sodiana, Heinrich Hussmannc, Reinhard Pekruna, Birgit Neuhausd, Birgit Dornere, Sabine Pankofere, Martin Fischerf, Jan-Willem Strijbosa, Moritz Heenea & Julia Eberlea,d

a Ludwig Maximilians University of Munich, Department of Psychology, Germany

b Ludwig Maximilians University of Munich, Department of Mathematics, Germany

c Ludwig Maximilians University of Munich, Department of Informatics, Germany

d Ludwig Maximilians University of Munich, Department of Biology, Germany

e Katholische Stiftungsfachhochschule München - University of Applied Sciences, Germany

 f Ludwig Maximilians University of Munich, University Hospital, Institute for Medical Education, Germany

a-f Munich Center of the Learning Sciences, Germany

                                                                                                          

Article received 24 February 2014 / revised 1 April 2014 / accepted 19 May 2014 / available online 16 June 2014

Abstract

Scientific reasoning and scientific argumentation are highly valued outcomes of K-12 and higher education. In this article, we first review the main topics and key findings of three different strands of research, namely research on the development of scientific reasoning, research on scientific argumentation, and research on approaches to support scientific reasoning and argumentation. Building on these findings, we outline current research deficits and address five aspects that exemplify where and how research on scientific reasoning and argumentation needs to be expanded. In particular, we suggest grounding future research in a conceptual framework with three epistemic modes (advancing theory building about natural and social phenomena, artefact-centred scientific reasoning, and science-based reasoning in practice) and eight epistemic activities (problem identification, questioning, hypothesis generation, construction and redesign of artefacts, evidence generation, evidence evaluation, drawing conclusions, as well as communicating and scrutinising scientific reasoning and its results). We further propose addressing the domain specificities and domain generalities of scientific reasoning and argumentation, as well as approaches to their facilitation. Finally, we argue for investigating the role of epistemic emotions, the role of the social context, and the influence of digital technologies on scientific reasoning and argumentation.

Keywords: scientific reasoning; argumentation; epistemic emotions; collaboration; technology

Corresponding author: Frank Fischer, frank.fischer@psy.lmu.de

http://dx.doi.org/10.14786/flr.v2i3.96


1. Problem

Participating in the knowledge society and benefiting from the unprecedented open access to a vast volume of scientific knowledge require a broad set of skills and abilities that have lately been labelled 21st century skills (e.g., Trilling & Fadel, 2009). These include the skills and abilities to use scientific concepts and methods to understand how scientific knowledge is generated in different scientific disciplines, to evaluate the validity of science-related claims, to assess the relevance of new scientific concepts, methods, and findings, and to generate new knowledge using these concepts and methods. The acquisition of these complex competencies is considered a main goal and outcome of K-12 and higher education. However, contemporary knowledge about what constitutes these competencies and how they can be facilitated is scattered across different research disciplines.

In order to develop a better understanding of these competencies, we propose to build on three existing strands of research: first, research on the development of scientific reasoning (e.g., Koslowski, 2012); second, research on the processes and products of scientific argumentation (e.g., Chinn & Clark, 2013) from educational psychology, education, science education, and other subject education disciplines; and third, research on the broad range of approaches to support and facilitate scientific reasoning and argumentation (SRA) in educational contexts (e.g., Furtak, Seidel, Iverson, & Briggs, 2012). In this article, we first provide an overview of the main topics and key findings of these three strands of research. Building on these findings, we outline the deficits of existing research and address five aspects that exemplify where and how research on SRA needs to be expanded.

2. Key Findings of Previous Research

2.1 Development of Scientific Reasoning

Research on scientific reasoning amongst laypeople has its roots in developmental psychology. Inhelder and Piaget (1958) took scientific rationality as the model of ideal human reasoning, that is, of a person who reflects on theories, builds hypothetical models of reality, critically and exhaustively tests for all possible main and interaction effects between variables, and objectively and systematically evaluates evidence with respect to a claim. In a series of studies, they showed that the scientific reasoning of preadolescent children was severely deficient, whereas significant improvement took place in adolescence. These findings led them to posit the stage of “formal operational thought” as the highest stage of cognitive development. This view has since been heavily criticised, as it adequately captures neither adult reasoning nor its development (Kuhn & Franklin, 2006).

Neither lay adults nor professional scientists conform to a model of domain-general, ideal scientific rationality. Rather, adult reasoning abilities are heavily dependent on domain-specific knowledge and context (e.g., Kruglanski & Gigerenzer, 2011). This holds not only for laypersons; professional scientists are equally influenced by their prior knowledge and theoretical biases (Dunbar, 1995). Similarly, children’s scientific reasoning is context and task dependent and does not differ fundamentally from adult scientific reasoning (Koslowski, 1996, 2012; see Zimmerman, 2000, 2007).

The “layperson as scientist” metaphor, which focuses on processes of intentional knowledge seeking to test theories and hypotheses and to evaluate evidence with respect to a hypothesis or theory (Kuhn & Franklin, 2006), has proved to be a productive framework for research into scientific reasoning. However, broad models of scientific reasoning that incorporate early competencies are only now emerging (Kuhn & Franklin, 2006; Sodian & Bullock, 2008). For example, Kuhn (1991) showed that differentiating theory and evidence poses a major problem for many lay adults in complex, real-world argumentation. However, even young elementary school children can differentiate hypothetical beliefs from evidence and identify a conclusive research design to test a hypothesis (Sodian, Zaitchik, & Carey, 1991). Third graders distinguish controlled from confounded experiments (Bullock & Ziegler, 1999). Even pre-schoolers possess basic data evaluation competencies (Koerber, Sodian, Thoermer, & Nett, 2005; Koerber & Sodian, 2009). Thus, neither children nor adults appear to lack a basic understanding of the relationship between hypothetical beliefs and empirical evidence. Rather, in complex theory evaluation tasks, both children and adults appear to lack an understanding of mechanisms, as well as the methodological knowledge to provide and judge evidence-based arguments (e.g., Koslowski, 2012).

A meta-conceptual understanding of the nature of scientific knowledge has been identified as a major source of developmental progress. Understanding progresses from an undifferentiated Level 1 (science as activities and effects) through an intermediate Level 2 (science as providing explanations via testable claims) to a Level 3 understanding (science as a cyclical and cumulative process of theory, testing, and revision), with children rarely displaying Level 2 and even adults rarely articulating a coherent Level 3 understanding (e.g., Carey & Smith, 1993). However, even elementary school students’ understanding of the nature of science can be improved through instructional support (e.g., Sodian, Jonen, Thoermer, & Kircher, 2006). Moreover, an advanced meta-conceptual understanding of science in childhood has been found to predict strategy acquisition in adolescence (Bullock, Sodian, & Koerber, 2009).

Recent developmental research with elementary school students supports a model of scientific reasoning as a complex set of interrelated abilities, consisting of four major components: “understanding the nature of science”, “understanding theories”, “designing experiments”, and “interpreting data” (e.g., Koerber, Sodian, Kropf, Mayer, & Schwippert, 2011). Apart from general cognitive abilities, students’ problem-solving skills and spatial abilities have been shown to have a major impact on these scientific reasoning competencies. Moreover, scientific reasoning has been shown to be a construct separate from measures of intelligence and reading skills in elementary school students (Mayer, Sodian, Koerber, & Schwippert, 2014).

2.2 Scientific Argumentation

While developmental research is mainly interested in the developmental trajectories of an individual’s scientific reasoning, educational and science education research on scientific argumentation has focused on the externalised processes and products of scientific reasoning within social contexts (e.g., the science classroom; Osborne, 2010). The interest in scientific argumentation is sparked by the view that argumentation is related to both the learning of core content and the acquisition of general argumentation skills (Chinn & Clark, 2013). Previous research has pursued two main goals: (a) the identification of students’ deficits during their engagement in scientific argumentation in social contexts, and (b) the design and development of effective scaffolding approaches to improve students’ argumentation.

With respect to students’ deficits in scientific argumentation, some studies focused on the structural quality of student-generated arguments, for example on the use of evidence (e.g., McNeill, 2011), qualifiers (Stegmann, Wecker, Weinberger, & Fischer, 2012) or warrants (Kollar, Fischer, & Slotta, 2007). A recurring finding has been that students tend to make claims without justifications. In socio-scientific debates, they typically do not spontaneously refer to scientific concepts and information (Sadler, 2004). Other studies have shown that students often have problems producing arguments of high content quality (e.g., Kelly & Takao, 2002). A third set of studies revealed that students often exhibit poor dialogic or social quality of argumentation, as reflected in the social exchange and co-construction of arguments. For example, students have been found to refrain from challenging others’ arguments (Weinberger, Stegmann, & Fischer, 2010). This might be related to the recurring finding that students have difficulties recognising contrasting argumentative positions (Sadler, 2004) and are often not successful in integrating the perspectives of different learners within a group or community (Noroozi, Weinberger, Biermans, Mulder, & Chizari, 2012).

2.3 Intervention Studies

How students can effectively be supported in their acquisition of SRA-related skills has been the subject of a large body of intervention-based research, including long-term and short-term interventions, technology-based and teacher-based scaffolding, laboratory as well as field studies, and studies at the school and university levels (e.g., Kollar et al., 2007; McNeill, Lizotte, Krajcik & Marx, 2006). Overall, this research shows that SRA can be substantially advanced by making it an explicit topic of instruction (see Osborne, 2010). This applies both to increasing students’ abilities to engage in activities of scientific knowledge generation (or epistemic activities) and to helping them develop a more sophisticated understanding of the nature of science. Current research on instructional approaches focuses on immersing learners in scientific practices (see Cavagnetto, 2010), which typically involves student engagement in research-related activities and debates. Three prototypical instructional approaches are inquiry learning, problem-based learning, and design-based learning. Inquiry learning engages students in more or less authentic activities of hypothesis formulation, generation of evidence, and drawing conclusions (Chinn & Malhotra, 2002). Inquiry learning has proved to be an effective instructional approach to advance science learning, especially when combined with teacher-led activities (e.g., Furtak et al., 2012). Similarly, in problem-based learning, students are confronted with complex problems and expected to find explanations and solutions that are based on scientific concepts and methods (e.g., Dochy, Segers, van den Bossche & Gijbels, 2003). Design-based learning (e.g., Kolodner, 2007) engages students in inter-linked cycles of research and design with the goal of arriving at an optimal design of a concrete product, such as a miniature car that can go from one end of the classroom to the other.

In all of these approaches that aim to immerse students in authentic SRA processes, providing students with structural support has been found to be crucial. This scaffolding may be directed at individual learners, small groups, or whole classrooms. For individual learning, hints, prompts, sentence starters, and guiding questions that help students focus their attention on the critical aspects of SRA have been found to be effective (see Quintana, Reiser, Davis, Krajcik, Fretz, Duncan, & Soloway, 2005). A hypothesis scratchpad, for example, helped students formulate better hypotheses than those produced by students whose hypothesis formation was unscaffolded (van Joolingen & de Jong, 1993). For small-group collaboration, several studies showed that the quality of SRA can be raised substantially through collaboration scripts (see Fischer, Kollar, Stegmann, & Wecker, 2013), which assign roles to learners and sequence their epistemic activities. For instance, a social-discursive peer review script has been shown to enhance student argumentation. Detailed process analyses revealed that social-discursive argumentation during the peer review process mediated the effects of scaffolding by the script on the improvement of (individual) argumentation skills (Stegmann et al., 2012). A related form of structuring collaboration is peer assessment (e.g., Cho, Schunn, & Wilson, 2006; Strijbos & Sluijsmans, 2010), which is also a crucial aspect of the contemporary scientific process. Peer assessment can be used to help collaborators uncover incongruence in their respective SRA processes when scrutinising scientific claims and evidence. This incongruence can subsequently foster the refinement of target processes through critical reflection (Nicol, Thomson, & Breslin, 2014). Finally, studies demonstrated that teachers can be successfully empowered to help students gain scientific argumentation skills (e.g., Erduran, Simon, & Osborne, 2004). For instance, research on classroom scripts has shown that epistemic activities can be facilitated if teachers combine scaffolding at different social levels in the classroom (plenary, group, individual; e.g., Mäkitalo-Siegl, Kohnle, & Fischer, 2011). Moving even beyond the boundaries of single classrooms, knowledge building communities have been successfully implemented in schools around the globe to engage students in argumentative processes of joint knowledge construction (Scardamalia & Bereiter, 2006).

3. Deficits of Prior Research and Directions for Advancing Studies on SRA

Research on the development of scientific reasoning, as well as research on scientific argumentation, has progressed substantially over the last two decades (see Nussbaum, 2011; Zimmerman, 2007). However, there are still important research gaps, which lead us to argue for more systematic and interdisciplinary research on SRA. We propose that future research should (a) expand the range of epistemic modes and epistemic activities, (b) investigate domain-specific aspects of SRA more systematically, (c) examine the role of emotions in SRA, (d) consider the social context of SRA in a more systematic way, and (e) explore the influence of digital technologies on SRA. Each of these suggestions is elaborated upon in the following.

3.1 Expanding the Range of Epistemic Modes and Epistemic Activities

3.1.1 Epistemic Modes

People engage in SRA with different motivations. For example, a researcher may strive to contribute to theory building in a domain, while practitioners try to find solutions for problems in their professional practice by applying scientific concepts or methods. We argue that these different motivations have not yet been systematically reflected in research on SRA in educational contexts. Stokes (1997) suggested a widely accepted classification according to which approaches to scientific reasoning vary in their primary goals along two orthogonal dimensions: understanding and use. Pure basic research is characterised by its primary goal of advancing scientific understanding of natural and social phenomena, regardless of its usefulness in practice. Stokes used Niels Bohr’s scientific approach – with no emphasis on the use and societal uptake of his theoretical advances – to characterise this type of research. In contrast, pure applied research emphasises the use of scientific knowledge without the aim of advancing theory building and understanding. Stokes exemplified this kind of research with the work of Thomas A. Edison, who brought electricity to a whole country by using scientific knowledge and methods, but without being concerned about generalisation and theory building beyond this practical challenge. A third class that Stokes (1997) identified is the scientific approach that combines the goals of understanding and use, which he termed “use-inspired basic research” and exemplified with Louis Pasteur’s work. Pasteur started from problems in practice (e.g., how to make food last longer) and conducted systematic research to solve them, but simultaneously strove for generalised theoretical explanations.

We suggest that Stokes’ classification of research approaches can be used to inform the differentiation of three distinct modes of SRA. In the first mode, (1) SRA is used to advance theory building about natural and social phenomena. When learners apply this mode, they aim to generate and test hypotheses in order to develop and improve scientific theories and explanations of social and natural phenomena. This epistemic mode thus supports students in learning the scientific knowledge of a domain, in understanding how it is created, and in seeing how they themselves can contribute to knowledge creation by engaging in scientific research.

A second SRA mode may be labelled (2) science-based reasoning and argumentation in practice. In this mode, learners aim at developing solutions for contextualised problems using scientific concepts, theories, and methods. Based on information about the problem and the state of research as they know it, learners generate one or more solution approaches and evaluate them in light of scientific knowledge and methods, but also against the standards of the practice under consideration. In this way, learners take on the role of scientifically knowledgeable practitioners rather than that of basic researchers. For example, teacher education students may develop a concept to help 4th graders improve their reading abilities, based both on practice-based observations of their students’ possibly poor reading abilities and on prior scientific theories and empirical studies on how to effectively support students with reading difficulties (e.g., Reciprocal Teaching; Palincsar & Brown, 1984). Another example is the application of mathematics to solve practical problems (e.g., predicting the development of sprint world records by describing historical data with an appropriate mathematical function), typically referred to as “mathematical modelling” (Galbraith, Henn, & Niss, 2007). The difference between science-based reasoning in practice and mere problem solving is that the result is not only the solution of a problem, but also an argument based on scientific theory.
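To make the sprint-record example more tangible, one function family that might be fitted to the historical record data is a bounded exponential approach to a limiting time (this is our illustration of the modelling step, not a model prescribed by the cited literature):

r(t) = c + a e^{-b (t - t_0)},  with a, b > 0,

where r(t) denotes the predicted record in year t, t_0 a reference year, c the limiting time that the records approach, and a and b the size and speed of the remaining improvement. Estimating a, b, and c from the historical data and examining the residuals then turns the prediction into an argument that can be defended or criticised on scientific grounds, which is precisely what distinguishes science-based reasoning in practice from mere problem solving.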

The third SRA mode we would like to introduce is (3) artefact-centred SRA. This mode is realised when students engage in iterative processes that involve the concurrent development of an artefact and of a scientific theory or explanation of why the artefact works or does not work (i.e., why a given problem can or cannot be solved by the use of the artefact), through repeated cycles of prototype design, testing, and analysis of test results. For example, Kolodner (2007) reports on a science curriculum unit in which students are supposed to build miniature cars from a given set of materials. Based on concepts from physics (e.g., friction and force), the students’ task is to design a car that would travel from one end of the classroom to the other. In this way, the students’ reasoning and argumentation resembles that of researchers in engineering and technology. This mode of scientific reasoning differs from “science-based reasoning in practice” with respect to its thrust towards generalisation and theory building. Nevertheless, in educational contexts, both modes have the potential to address students’ competence in understanding and engaging in scientific knowledge creation activities, as well as their competence in addressing practical problems through the application of scientific concepts and methods.

3.1.2 Epistemic Activities

The three epistemic modes imply an extended notion of SRA that also calls for considering a comprehensive set of scientific activities. Students in educational contexts need to learn how these activities work and how to engage in them. We suggest distinguishing eight epistemic activities, all of which may occur in each of the three epistemic modes. Yet both the weight attributed to each activity and the way it is performed may differ across the three modes. In the following, we describe each of these activities along with one example of how the activity may be performed in one of the three epistemic modes.

(1) Problem identification. Many scientific reasoning processes are driven by concrete problems. Depending on the epistemic mode, such problems might be practical real-world problems (see Kolodner, 2007), but also scientific problems that cannot be solved with the available theoretical concepts and methods. Becoming aware that available explanations do not appropriately explain phenomena is a starting point both for the advancement of science as an abstract body of knowledge and for the individual learner advancing his or her understanding of the world. Thus, to engage in SRA, one first needs to perceive a mismatch or shortcoming in the available explanation of a particular problem. During this epistemic activity, a problem representation is built from an analysis of the situation. A medical student may, for example, be confronted with a patient who reports a diverse set of illness symptoms (exemplifying the epistemic mode of science-based SRA in practice). Based on medical knowledge, which in medical experts is typically encapsulated in so-called “illness scripts” (Charlin, Boshuizen, Custers, & Feltovich, 2007), the student will try to identify which parts of the patient’s descriptions are relevant for the diagnostic process and which are not. In this way, the actual biomedical problem is gradually concretised and then determines further action.

(2) Questioning. Based on the representation developed during problem identification, one or more initial questions are identified for the subsequent reasoning process (see White & Frederiksen, 1998). Later on, this question might be refined to allow for a systematic search for evidence. To exemplify how a mathematics student may be confronted with questioning in the epistemic mode of advancing theory building about natural and social phenomena, we refer to the following famous problem formulated by Euler in 1741 (the Seven Bridges of Königsberg; solution proved by Hierholzer & Wiener, 1873): In a given arrangement of points and lines between these points (e.g., a set of crossings and streets in a city), how can we determine whether an “Euler-Walk” along adjacent lines is possible, that is, a walk that passes each line exactly once (e.g., a sightseeing walk through the city)? The problem here is a classification problem (how to describe the objects that have a given property).

(3) Hypothesis generation. During hypothesis generation, students derive possible answers to the question from plausible models, available theoretical frameworks, or empirical evidence they are aware of (Klahr & Dunbar, 1988). If the student’s prior knowledge does not allow for predictions, the question might be refined or – alternatively – an exploratory approach to evidence generation may be adopted to derive a hypothesis from patterns in this evidence. This process involves formulating the hypothesis according to scientific standards. In biology, a learner may, for example, aim at developing an answer to the question of how the memory of honey bees develops. Based on prior research, the learner may hypothesise that glutamate plays a role in this process, since glutamate has been shown to be important for human memory development. To substantiate this hypothesis, a further search for corresponding literature may be necessary, e.g., concerning the question of whether glutamate has also been found in other insects.

(4) Construction and redesign of artefacts. Scientific reasoning often includes the construction of some kind of artefact, be it a prototype object developed by an engineer or an axiomatic system describing a new mathematical structure. Typically, this construction will be based on current theoretical knowledge. Following its construction, the artefact is submitted to a test in an authentic environment (see Kolodner, 2007). For example, teacher students may be given the task of developing a computer-based collaborative learning environment that effectively scaffolds the interaction of small groups of learners in order to raise the individuals’ learning outcomes (exemplifying the epistemic mode of artefact-centred SRA). For that purpose, a prototype of the learning environment (e.g., based on the collaboration script approach; Fischer et al., 2013) may be built that – based on theoretical reasoning and prior empirical evidence – seems promising for achieving this goal.

(5) Evidence generation. Evidence generation includes various approaches. One approach is to conduct hypothetico-deductive experimental studies, that is, the systematic, theory-driven variation of one or more variables by the learner in consecutive trials, while repeatedly observing the same outcome variables. Evidence generation may also follow an inductive approach of observing, comparing and describing phenomena to draw conclusions about structures and functions, for example in evolutionary biology or sociology. Another approach is observing the synchronous or sequential co-occurrence of phenomena, which is frequently applied in the natural sciences (e.g., when studying climate models), but also in the social sciences (e.g., in longitudinal studies). Finally, most natural and social sciences use deductive reasoning – within more or less elaborate theories – to generate evidence for or against a claim. In the mathematical example by Euler described above, a first approach to gathering (exploratory and, in the mathematical sense, preliminary) evidence would be to study single examples of point-line configurations and test whether they admit an Euler-Walk. Comparing configurations that admit such a walk with some that do not might lead to a first hypothesis about the characteristic difference between the two (hypothesis generation). Studying more, and perhaps extreme, examples will add further (still preliminary) evidence to support the hypothesis, possibly leading to its revision or refinement. Finally, starting from a set of basic assumptions about such line configurations (described by the axioms of mathematical graph theory), a deductive chain of arguments can be constructed which shows that configurations admitting an Euler-Walk have the hypothesised property, and vice versa. Constructing such a line of deductive arguments, which derives that a conjecture follows from the axioms of a mathematical theory, is in fact the main mode of evidence generation in mathematics. Nevertheless, other kinds of evidence also play a major role in mathematical reasoning, such as counter-examples that disprove a general conjecture (e.g., Zazkis & Chernoff, 2008).
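To illustrate the exploratory step in this example, the following short Python sketch (our illustration; the function names are hypothetical and not part of the original example) tests small point-line configurations by brute force and, for each configuration, counts the points that are met by an odd number of lines – exactly the kind of comparison from which the characteristic difference between walkable and non-walkable configurations can be conjectured.

from itertools import permutations
from collections import Counter

def has_euler_walk_bruteforce(edges):
    # Exploratory test: try every ordering of the lines (and both directions
    # of the first line) and check whether one ordering chains into a single
    # walk that uses each line exactly once.
    for order in permutations(range(len(edges))):
        for flip_first in (False, True):
            a, b = edges[order[0]]
            current = a if flip_first else b
            ok = True
            for idx in order[1:]:
                x, y = edges[idx]
                if x == current:
                    current = y
                elif y == current:
                    current = x
                else:
                    ok = False
                    break
            if ok:
                return True
    return False

def count_odd_degree_points(edges):
    # Count the points that are met by an odd number of lines.
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sum(1 for d in degree.values() if d % 2 == 1)

# The seven bridges of Koenigsberg: four land masses A, B, C, D and seven lines.
koenigsberg = [("A", "C"), ("A", "C"), ("A", "D"),
               ("B", "C"), ("B", "C"), ("B", "D"),
               ("C", "D")]
# A simple path of three lines, which obviously admits an Euler-Walk.
simple_path = [("1", "2"), ("2", "3"), ("3", "4")]

for name, config in [("Koenigsberg", koenigsberg), ("simple path", simple_path)]:
    print(name,
          "| Euler-Walk possible:", has_euler_walk_bruteforce(config),
          "| points with odd degree:", count_odd_degree_points(config))

Running such comparisons across a handful of configurations suggests the classical conjecture that an Euler-Walk is possible only if at most two points have odd degree, which the deductive argument from graph theory then establishes in general.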

(6) Evidence evaluation. The aim of evidence evaluation is to assess the degree to which a certain piece of evidence supports a claim or theory. What counts as evidence will differ both with respect to the epistemic mode in which SRA is realised and with respect to the domain under study. Observational studies (Shafto, Kemp, Bonawitz, Coley, & Tenenbaum, 2008), for example, might be considered the best available evidence in one discipline (e.g., astronomy) but less valuable than experimental studies in another (e.g., psychology, engineering; Kolodner, 2007). Deductions from a theoretical framework constitute the crucial acceptance criterion in mathematics, whereas in psychology or the natural sciences they serve an auxiliary role as predictions, derived from theoretical assumptions, about the outcomes of an experiment. Even though an “experimentum crucis” is not viable in most disciplines, cumulative evidence from several experimental or observational studies is necessary to sustain a claim. An example from medical education in the epistemic mode of science-based SRA in practice would be a medical student aiming to find the right diagnosis for a patient’s health problem in a case-based simulation environment. Evidence evaluation in this example refers to the accumulating evidence from the patient’s history, physical examinations, and additional laboratory and technical tests. Optimally, this evidence is interpreted in light of candidate diagnoses that have already been set up during hypothesis generation. Here, the development of encapsulated, experiential knowledge in the form of illness scripts (Charlin et al., 2007) has been identified as crucial for arriving at a sound evaluation of the collected evidence.

(7) Drawing conclusions. Since different kinds of evidence can be generated within the scientific reasoning process, drawing conclusions is not restricted to reconsidering an initial claim in light of experimental results. Different pieces of evidence must often be integrated by weighing each single piece according to the method by which it was generated and by the rules and criteria of the discipline. In the case of a teacher student developing a scaffolded computer-supported learning environment, drawing conclusions means critically analysing data and observations from an experiment or a field trial in which the environment was used, and deriving consequences as to whether the environment (or specific features of it) needs to be re-designed or may be used as originally planned in further trials. To arrive at such a conclusion, a multitude of data sources typically needs to be considered (e.g., individual knowledge tests, verbal protocols, data on students’ motivation).

(8) Communicating and scrutinising. Individual scientific reasoning processes and their results are typically shared with and scrutinised by others (Shavelson & Towne, 2002). Persons involved in scientific reasoning are almost constantly engaged in conversations and discussions in work groups or peer groups. These interactions might influence scientific reasoning all the way from problem identification to knowledge-based interventions in practice situations. Thus, social-discursive and dialogic argumentation is an integral component of many scientific reasoning processes and should be included when analysing and facilitating SRA in educational contexts (e.g., Clark, Sampson, Weinberger & Erkens, 2007; Sampson & Clark, 2009). In the biology example on the memory of honey bees, communicating and scrutinising may play a double role. On the one hand, if groups of learners work on the honey bee problem, communication within the team is necessary to ensure that the research process is carried out in a rigorous way, including arriving at a sound explanation for the phenomenon under investigation. On the other hand, the research process and its outcomes are typically shared with the broader community, e.g., in the form of plenary presentations.

 

3.2 Domain-Specific Aspects of SRA Need to be Investigated More Systematically

While research on SRA has focused on commonalities across domains, investigations of differences in SRA between disciplines have been rare (e.g., Herrenkohl & Cornelius, 2013). In addition, the set of domains under consideration has so far been small and seemingly arbitrary. One crucial question is what role domain-specific conceptual knowledge plays in successful SRA (e.g., Chinnappan, Ekanayake & Brown, 2011; Schunn & Anderson, 1999). Domain-specific conceptual knowledge is, for example, necessary to build a mental representation of the problem situation and to identify aspects of the situation that offer scientifically accessible questions. Moreover, the process of scientific reasoning differs across domains, with respect to both the nature and the weight of the epistemic activities involved. For example, engineers enact the epistemic activity of “problem identification” by starting their design process with a clear problem for which the initial stage, the solution stage, and the constraints are all well-defined. Natural scientists and social scientists do not necessarily have such well-defined initial and solution stages – for them, therefore, the epistemic activities of “questioning” and “hypothesis generation” play a major role. Regarding the epistemic activity of “evidence evaluation”, scientific disciplines vary considerably in what is regarded as acceptable evidence to support a scientific claim. While many natural sciences rely upon hypothetico-deductive methods, many social sciences accept inductive comparisons as methods of evidence evaluation. In (pure) mathematics, the only acceptable evidence is a chain of deductive arguments within a theory; all other kinds of evidence are regarded as informal. Thus, transferring criteria for evidence evaluation from one discipline to another appears problematic. Moreover, it is unclear whether exposure to one domain-specific approach to scientific reasoning influences the nature of evidence evaluation skills in other domains (given that K-12 education, as well as teacher education, immerses students in various domains).

Although the nature of epistemic activities varies across disciplines, approaches to fostering students’ scientific reasoning have typically focused on single domains and developed in different directions. While research from developmental psychology and science education has predominantly focused on hypothesis generation and on evidence generation and evaluation processes, research from mathematics education has focused on meta-cognitive aspects to improve students’ self-regulated problem solving (for example, when searching for mathematical proofs; Chinnappan & Lawson, 1996).

Although the existence of domain-dependent differences in SRA can hardly be doubted, we contend that the three epistemic modes and the eight epistemic activities are relevant to a broad range of disciplines. In other words, there may also be aspects of SRA skill that are similar across domains (such as skills for structuring a problem situation, experimentation, or deductive reasoning). However, since disciplines might differ substantially in the relative weights of the modes and activities, and thus in the specific knowledge, skills and attitudes that students are supposed to develop when learning SRA, a more representative selection of disciplines seems key for investigating their particularities in future research.

Finally, existing approaches to facilitation have typically proven effective for only one specific domain, in the context of one epistemic mode, for only some specific epistemic activities, and for learners with only some specific learning prerequisites. The extent to which these approaches to facilitation are domain-specific is an important question, as is the extent to which they can be generalised across epistemic modes, domains, epistemic activities, and different learners (see Klahr, Zimmerman & Jirout, 2011). Future research should thus invest effort in identifying domain-specific and domain-general aspects of SRA and their facilitation.

3.3 The Role of Emotions in SRA Requires Investigation

Cognition is intricately interwoven with emotions. Emotions are defined as systems of interrelated component processes, including subjective, physiological, and behavioural components (e.g., uneasy and nervous feelings, physiological activation, and an anxious facial expression in the case of anxiety; Shuman & Scherer, 2014). Cognitive appraisals of situational demands and of one’s own competencies are known to shape human emotion. Emotions, in turn, are prime drivers of the motivation to solve problems and can profoundly impact the quality and outcomes of cognitive processes (e.g., Moors, Ellsworth, Scherer & Frijda, 2013; Pekrun, 2006). It seems likely that this is also true for SRA. Without emotions such as surprise, curiosity triggered by contradictory findings, joy about solving scientific problems, or pride in one’s accomplishments, scientists would likely not be motivated to engage in scientific discovery, and students would lack motivation to learn science (Pekrun, Hall, Goetz & Perry, in press). Furthermore, these emotions are known to regulate attention, memory processes, and different modes of cognitive problem solving, such as analytical versus holistic ways of approaching problems, which are critically important for SRA (Fiedler & Beier, 2014). Systematic research examining the links between emotions and scientific reasoning, however, is as yet largely lacking (see Sinatra, Broughton & Lombardi, 2014). We propose investigating five groups of emotions that seem relevant for scientific reasoning.

(1) Epistemic emotions. As noted, epistemic activities, such as generating hypotheses, are at the core of scientific reasoning in a broad range of domains. Typically, these activities are accompanied by emotions triggered by the epistemic quality of problem-related information and mental activity. A prototypical case is cognitive incongruity triggering surprise, awe, curiosity, or confusion, and joy when the incongruity is resolved. As proposed by philosophers (Brun, Doğuoğlu, & Kuenzle, 2008; Morton, 2010), these emotions can be called epistemic emotions (Pekrun & Stevens, 2011).

(2) Achievement emotions. Achievement emotions are emotions that relate to activities or outcomes that are judged according to competence-related standards of quality (Pekrun, 2006). In many learning situations, scientific reasoning activities and the outcomes of these activities are judged for their achievement quality. Depending on the perceived importance of success and failure, scientific reasoning can induce strong achievement emotions, such as hope and pride or anxiety, shame, and hopelessness.

(3) Topic emotions. During scientific reasoning, emotions can be triggered by the contents of the problem to be solved. An example is the anxiety experienced when dealing with issues of climate change or genetically modified food. In contrast to epistemic emotions, topic emotions do not directly pertain to the process of scientific reasoning; however, they can strongly influence engagement in reasoning (Ainley, 2006).

(4) Social emotions. Scientific reasoning is often situated in social contexts. By implication, scientific reasoning can induce a multitude of social emotions related to other people. These include both social achievement emotions, such as admiration, envy, contempt, or empathy related to the success and failure of others, and non-achievement emotions, such as love or hate in relationships with collaborators in the reasoning process (Weiner, 2007).

(5) Incidental emotions and moods. When engaging in scientific reasoning, a person can continue to experience emotions that relate to external events, such as current stress or problems in their family. These emotions do not relate to the reasoning process itself, but nonetheless have the potential to strongly influence the quality of reasoning and of learning to reason – for example, when a student’s worries about their parents’ divorce are brought into the science classroom.

All five classes of emotion can play a role in all epistemic activities. However, it seems likely that different emotions are more typical for some of these activities than for others. For example, epistemic emotions are likely to be triggered by mental activities that can involve impasses and cognitive incongruity, such as “problem identification” or “evidence evaluation”, whereas social emotions are of primary importance in collaborative reasoning processes and for the communication of the results of scientific reasoning.

Furthermore, emotions of all five classes can profoundly influence the scientific reasoning process and its outcomes. The impact of these emotions on reasoning can be mediated by various cognitive and motivational processes, e.g., intrinsic and extrinsic motivation to engage, or deep versus shallow information-processing strategies (e.g., Clore & Huntsinger, 2009). As a consequence, positive activating emotions may typically support high-quality reasoning, whereas some negative emotions may be detrimental. However, for many emotions and task conditions, the effects on reasoning performance are likely to be more complex. Thus, we argue that studying the role of emotions in and during SRA is an important task for future research.

3.4 The Social Context of SRA Should be Considered More Systematically

Scientific reasoning and argumentation are typically situated in a social context (Dunbar, 1995). Some epistemic activities are collaborative in nature, such as discussing the results of scientific reasoning with peers or communicating them to the broader public. Other epistemic activities are not collaborative in nature but may benefit from collaboration (Chi, 2009; Duschl, 2008). We propose two strands of future research that have the potential to improve our understanding of the social aspects of SRA.

(1) Collaborative knowledge construction. Extensive research has been carried out on the cognitive and social mechanisms of knowledge construction in groups and collectives. Research on knowledge construction in pairs and small groups has often been conducted within a joint problem-solving paradigm. This line of research focuses on how pairs and groups, in contrast to individuals, work on complex science-related problems (e.g., Okada & Simon, 1997) and on how groups develop joint strategies and norms for SRA beyond just learning the domain content associated with the task (e.g., Roschelle & Teasley, 1997). Research on dialogic education (Wegerif, 2007) and argumentative classroom discourse (Osborne, 2010) focuses on the structure and content of discussions in groups and collectives, and on the conditions under which the (scientific) quality of the argumentation in these discussions evolves. In contrast to the perspectives on joint problem solving and dialogic argumentation, which analyse the micro-mechanisms of knowledge construction, research on communities of practice emphasises processes of knowledge creation, participation, and identity in collectives of people sharing goals or interests (Lave & Wenger, 1991). In knowledge community approaches, domain knowledge acquisition by individuals is seen rather as a by-product of qualitative changes in the participation pattern, from legitimate peripheral to more “core” participation. Research is needed on which forms of participation in the epistemic activities of certain scientific communities effectively advance students’ SRA skills.

(2) Distributed, shared and collective cognition. Approaches to distributed and shared cognition share the assumption that reasoning in real-world tasks cannot be understood by focusing on isolated individuals alone. In real-world tasks, individuals collaborate with others in solving problems and making decisions, but they also use tools that allow them to act much more intelligently than they could without them. The distributed cognition approach suggests a systemic perspective for the analysis of complex social and socio-technical tasks (e.g., Salomon & Perkins, 1998). Research on transactive memory systems (Wegner, 1987) addresses the cognitive interdependence that develops when group members collaborate for some time and specialise in specific areas of which the other members are aware. A transactive memory system is thus characterised by a collaborative division of labour for learning, remembering, and communicating knowledge (e.g., Hollingshead, Gupta, Yoon, & Brandon, 2011), which seems crucial for most epistemic activities. The shared mental models perspective (e.g., Mohammed & Dumville, 2001; Wu & Keysar, 2007) addresses the question of which kind of knowledge (e.g., knowledge about the task vs. knowledge about the team) is needed and the extent to which group members need overlapping (shared) information as opposed to unique (unshared) information to perform well as a team. Research on cognitive convergence (Teasley et al., 2008) or knowledge convergence (Fischer & Mandl, 2005) focuses on the similarity and dissimilarity of cognitive representations in collaborative situations, as well as on their changes through collaboration.

In the context of SRA, it is an interesting open question to what extent divergent versus convergent cognitive representations among the individuals in a group support different epistemic activities. It seems plausible to hypothesise that divergent knowledge in a group is particularly supportive of epistemic activities such as “evaluating evidence” and “scrutinising arguments”. Furthermore, a recurring result from prior research is that the knowledge learners acquire through collaboration is surprisingly dissimilar (Miyake, 1986). This might be especially relevant for educational settings where students engage in collaborative learning to develop SRA skills. Research on expert-layperson communication (Bromme, Jucks & Runde, 2005) has shown that large differences in domain expertise may have detrimental effects on communication and understanding. Measures to support expert-layperson communication have shown positive effects (e.g., Nückles & Stürz, 2006). In the context of SRA, such knowledge differences exist, for example, between scholars acting as teachers and students in their early years, but also when scientific outcomes are communicated to wider audiences. An open question is how different disciplines try to overcome the detrimental effects of, and make optimal use of, large knowledge differences between scholars and students.

3.5 The Influence of Digital Technologies on SRA Needs Further Research

Studies show that digital technologies affect reasoning and learning depending on how they are used. For instance, a study by Sparrow, Liu and Wegner (2011) revealed that digital technologies increasingly become “external memories” integral to people’s reasoning. Their findings show that the availability of externally stored information changes cognitive processing dramatically, depending on the person’s assumptions about its later accessibility. It is plausible to assume that the availability of digital technology affects SRA in a similar way. Moreover, this should be true for all three epistemic modes, but in different ways. When advancing theory about natural and social phenomena, technology is typically used for data collection and visualisation (e.g., computer simulations; Gijlers & de Jong, 2009) as well as for analysis, including not only statistical analysis but also analysis based on language and logic (e.g., Rosé, Wang, Arguello, Stegmann, Weinberger & Fischer, 2008). When applying science-based reasoning in practice, technology is often used to provide access to the knowledge base and theories of the respective domain (e.g., Sparrow et al., 2011). In the epistemic mode of artefact-centred scientific reasoning, technology often acts as the core enabler for building prototypes or simulating features of a design artefact (e.g., Wiethoff, Schneider, Rohs, Butz & Greenberg, 2012). Across the three modes, research has generated evidence that communication and collaboration can be substantially enhanced by digital technologies (see Stegmann et al., 2012). Furthermore, awareness tools, i.e., tools capable of capturing and mirroring the quality of group processes via external representations, have shown strong potential to support scientific argumentation (Janssen & Bodemer, 2013; Streng, Stegmann, Boring, Böhm, Fischer & Hussmann, 2010).

We propose that future research should investigate how technologies shape SRA. First, research should more systematically address how available and easily accessible technologies influence scientific reasoning in the different epistemic modes and activities. A co-evolutionary perspective on the mutual influence of technology development and scientific reasoning seems promising; for example, such research could examine how access to scientific information through the internet affects the SRA of practitioners. Second, research should investigate the effects of technological tools specifically designed to facilitate certain epistemic activities in SRA. Prior research on computer simulations and computer-supported collaboration will be informative for the formulation of design principles for the development of technology-based scaffolds.

 

4. Conclusions

SRA is considered one of the core competences in knowledge societies (Trilling & Fadel, 2009). Knowledge of the structure and generality of these competences, of their emotional, social and technological conditions, and of how they can be facilitated appears key for a promising re-design of curricula and interventions in schools, higher education and vocational practice to foster the development of SRA. As a starting point for the necessary interdisciplinary research, we suggest the following broad definition of SRA: Scientific reasoning and argumentation include the knowledge and skills involved in different epistemic activities (problem identification, questioning, hypothesis generation, construction and redesign of artefacts, evidence generation, evidence evaluation, drawing conclusions, as well as communicating and scrutinising scientific reasoning and its results) in the context of three different epistemic modes (advancing theory building about natural and social phenomena, science-based reasoning in practice, and artefact-centred scientific reasoning). Scientific reasoning and argumentation are assumed to consist of domain-specific as well as domain-general components, and to depend on emotional, social, instructional/facilitative, and technological conditions.

We proposed a research agenda on the analysis and facilitation of SRA in educational contexts, which significantly broadens our perspective beyond basic experimental research.

Based on Stokes’ (1997) model of scientific knowledge production, we suggested three epistemic modes of SRA: (1) advancing theory building about natural and social phenomena, (2) science-based reasoning in practice, and (3) artefact-centred scientific reasoning. In a broad range of domains, all three epistemic modes play a role. Students thus need to learn to understand how scientific knowledge is developed in their domains of study, and how it can be applied to address practical problems. To an extent that differs vastly between domains and study programmes, students are also expected to learn to participate in processes of scientific research (Trilling & Fadel, 2009).

We further identified eight epistemic activities, some of which have received only marginal or narrowly focused consideration in research on SRA, mainly in the experimental paradigm: (1) Problem identification, (2) Questioning, (3) Hypothesis generation, (4) Construction and redesign of artefacts, (5) Evidence generation, (6) Evidence evaluation, (7) Drawing conclusions, and (8) Communicating and scrutinising. We do not claim that this process typology is exhaustive, nor do we intend to conceal that others have developed alternative typologies (e.g., van Joolingen, de Jong, Lazonder, Savelsbergh & Manlove, 2005; White & Frederiksen, 1998). Instead, it is proposed as a starting point for an interdisciplinary research agenda, to be modified in further theoretical discussion and on the basis of findings from empirical studies. Based on this framework, we suggest five further areas in research on SRA that require more systematic investigation.

First, research should investigate the differences between disciplines regarding how epistemic modes and activities are employed and to what extent knowledge generated within them is considered as evidence for or against theories. We suggest that it is crucial to advance our understanding of SRA by determining which aspects are domain-general and which are specific to a single domain or group of domains (see Schunn & Anderson, 1999).

Second, commonalities and differences between disciplines are also likely to exist with respect to measures of intervention and facilitation. On the one hand, some of the interventions developed for a specific domain and context might prove generalizable to some extent to other contexts and domains. On the other hand, domain-independent instructional approaches might well be differentially effective in different domains (see Klahr et al., 2011). In addition, we suggest building a coherent conceptual framework for integrating the diverse research findings from intervention research across domains. Chi’s (2009) ICAP model might be a promising starting point in this respect to integrate the available evidence and guide future research on SRA interventions. ICAP classifies learning activities based on their underlying cognitive processes into interactive, constructive, active and passive. The model predicts the best learning outcomes for interactive learning activities, followed by constructive, active and passive activities.

Third, research on SRA displays a strong cognitive bias. However, it seems likely that most scientific reasoning processes are triggered, modulated, or followed by emotions (see Shuman & Scherer, 2014). Thus far, there is no systematic research on emotions in the context of SRA, which is striking because, for example, curiosity is widely regarded as a major driving force of any scientific endeavour (Pekrun & Stevens, 2011).

Fourth, scientific reasoning is increasingly recognised as a social epistemic practice rather than a purely individual activity (Dunbar, 1995). However, prior research on SRA has examined the social context in which SRA occurs only in a rather unsystematic way. We therefore suggest considering constructs from research fields that are more advanced in this respect, such as peer assessment (Cho et al., 2006; Strijbos & Sluijsmans, 2010) or research on collaboration scripts (Fischer et al., 2013), as starting points for addressing the social aspects of scientific reasoning.

Fifth, recent years have seen an expansion of digital technology in nearly every sector of society, including research and related fields of practice. We argue that the effects of digital technologies on SRA practices need to be examined more systematically. Important questions include how digital technologies are used to support scientific reasoning and how technologies can be designed to support students in SRA (see for example Gijlers & de Jong, 2009).

Given the amount of research in the fields of scientific reasoning and scientific argumentation described at the outset of this article, the field might benefit from an integrative view that combines these so far largely separate strands of research. We strongly believe that concerted and interdisciplinary research efforts can lead to a better understanding of what SRA skills are, how they develop, and how their development can be supported effectively. The outcomes of this research may subsequently inform educational practice and help educate citizens who are able to participate in science-related societal debates and to make more systematic use of scientific knowledge and skills.

Keypoints

 

Acknowledgements

This work was funded by the Elite Network of Bavaria.

References

Ainley, M. (2006). Connecting with learning: Motivation, affect and cognition in interest processes. Educational Psychology Review, 18(4), 391-405. doi: 10.1007/s10648-006-9033-0

Bromme, R., Jucks, R., & Runde, A. (2005). Barriers and biases in computer-mediated expert-layperson-communication. In R. Bromme, F. W. Hesse, & H. Spada (Eds.), Barriers and biases in computer-mediated knowledge communication (pp. 89-118). New York: Springer.

Brun, G., Doğuoğlu, U., & Kuenzle, D. (Eds.). (2008). Epistemology and emotions. Aldershot, UK: Ashgate.

Bullock, M., Sodian, B., & Koerber, S. (2009). Doing experiments and understanding science. Development of scientific reasoning from childhood to adulthood. In W. Schneider, & M. Bullock (Eds.), Human development from early childhood to early adulthood: Findings from a 20 year longitudinal study (pp. 173-198). New York, NY: Psychology Press.

Bullock, M., & Ziegler, A. (1999). Scientific reasoning: Developmental and individual differences. In F. E. Weinert, & W. Schneider (Eds.), Individual development from 3 to 12: Findings from the Munich Longitudinal Study (pp. 38-54). Cambridge: Cambridge University Press.

Carey, S., & Smith, C. (1993). On understanding the nature of scientific knowledge. Educational Psychologist, 28(3), 235-251. doi: 10.1207/s15326985ep2803_4

Cavagnetto, A. R. (2010). Argument to foster scientific literacy. A review of argument interventions in K-12 science contexts. Review of Educational Research, 80(3), 336-371. doi: 10.3102/0034654310376953

Charlin, B., Boshuizen, H. P., Custers, E. J., & Feltovich, P. J. (2007). Scripts and clinical reasoning. Medical Education, 41(12), 1178-1184. doi: 10.1111/j.1365-2923.2007.02924.x

Chi, M. T. (2009). Active, constructive, interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science, 1(1), 73-105. doi: 10.1111/j.1756-8765.2008.01005.x

Chinn, C., & Clark, D. B. (2013). Learning through collaborative argumentation. In C. E. Hmelo-Silver, C. A. Chinn, C. K. K. Chan, & A. M. O'Donnell (Eds.), International handbook of collaborative learning (pp. 314-332). New York: Routledge.

Chinn, C. A., & Malhotra, B. A. (2002). Epistemologically authentic inquiry in schools: A theoretical framework for evaluating inquiry tasks. Science Education, 86(2), 175-218. doi: 10.1002/sce.10001

Chinnappan, M., Ekanayake, M. B., & Brown, C. (2011). Specific and general knowledge in geometric proof development. SAARC Journal of Educational Research, 8, 1-28.

Chinnappan, M., & Lawson, M. J. (1996). The effects of training in the use of executive strategies in geometry problem solving. Learning and Instruction, 6(1), 1-17. doi: 10.1016/S0959-4752(96)80001-6

Cho, K., Schunn, C. D., & Wilson, R. W. (2006). Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives. Journal of Educational Psychology, 98(4), 891-901. doi: 10.1037/0022-0663.98.4.891

Clark, D. B., Sampson, V., Weinberger, A., & Erkens, G. (2007). Analytic frameworks for assessing dialogic argumentation in online learning environments. Educational Psychology Review, 19(3), 343-374. doi: 10.1007/s10648-007-9050-7

Clore, G. L., & Huntsinger, J. R. (2009). How the object of affect guides its impact. Emotion Review, 1, 39-54. doi: 10.1177/1754073908097185

Dochy, F., Segers, M., Van den Bossche, P., & Gijbels, D. (2003). Effects of problem-based learning: A meta-analysis. Learning and Instruction, 13(5), 533-568. doi: 10.1016/S0959-4752(02)00025-7

Dunbar, K. (1995). How scientists really reason: Scientific reasoning in real-world laboratories. In R. J. Sternberg, & J. Davidson (Eds.), Mechanisms of insight (pp. 365-395). Cambridge, MA: MIT Press.

Duschl, R. (2008). Science education in three-part harmony: Balancing conceptual, epistemic, and social learning goals. Review of Research in Education, 32(1), 268-291. doi: 10.3102/0091732X07309371

Erduran, S., Simon, S., & Osborne, J. (2004). TAPping into argumentation: developments in the application of Toulmin's argument pattern for studying science discourse. Science Education, 88(6), 915-933. doi: 10.1002/sce.20012

Euler, L. (1741). Solutio problematis ad geometriam situs pertinentis. Commentarii academiae scientiarum Petropolitanae, 8, 128-140.

Fiedler, K., & Beier, S. (2014). Affect and cognitive processes. In R. Pekrun, & L. Linnenbrink-Garcia (Eds.), International handbook of emotions in education (pp. 36-55). New York: Taylor & Francis.

Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56-66. doi: 10.1080/00461520.2012.748005

Fischer, F., & Mandl, H. (2005). Knowledge convergence in computer-supported collaborative learning: The role of external representation tools. Journal of the Learning Sciences, 14(3), 405-441. doi: 10.1207/s15327809jls1403_3

Furtak, E. M., Seidel, T., Iverson, H., & Briggs, D. C. (2012). Experimental and quasi-experimental studies of inquiry-based science teaching. A meta-analysis. Review of Educational Research, 82(3), 300-329. doi: 10.3102/0034654312457206

Galbraith, P. L., Henn, H.-W., & Niss, M. (2007). Modelling and applications in mathematics education. New York, NY: Springer.

Gijlers, H., & de Jong, T. (2009). Sharing and confronting propositions in collaborative inquiry learning. Cognition and Instruction, 27(3), 239-268. doi: 10.1080/07370000903014352

Herrenkohl, L. R. & Cornelius, L. (2013). Investigating Elementary Students' Scientific and Historical Argumentation. Journal of the Learning Sciences, 22(3), 413-461. doi: 10.1080/10508406.2013.799475

Hierholzer, C., & Wiener, C. (1873). Ueber die Möglichkeit, einen Linienzug ohne Wiederholung und ohne Unterbrechung zu umfahren. Mathematische Annalen, 6(1), 30-32. doi: 10.1007/BF01442866

Hollingshead, A. B., Gupta, N., Yoon, K., & Brandon, D. P. (2011). Transactive memory theory and teams: Past, present, and future. In E. Salas, S. M. Fiore, & M. P. Letzky (Eds.), Theories of team cognition: Cross-disciplinary perspectives (pp. 421-455). New York: Routledge.

Inhelder, B., & Piaget, J. (1958). The growth of logical thinking from childhood to adolescence: An essay on the construction of formal operational structures. London: Routledge & Kegan Paul.

Janssen, J., & Bodemer, D. (2013). Coordinated computer-supported collaborative learning: Awareness and awareness tools. Educational Psychologist, 48(1), 40-55. doi: 10.1080/00461520.2012.749153

Kelly, G. J., & Takao, A. (2002). Epistemic levels in argument: An analysis of university oceanography students' use of evidence in writing. Science Education, 86(3), 314-342. doi: 10.1002/sce.10024

Klahr, D., & Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48. doi: 10.1207/s15516709cog1201_1

Klahr, D., Zimmerman, C., & Jirout, J. (2011). Educational interventions to advance children’s scientific thinking. Science, 333(6045), 971-975. doi: 10.1126/science.1204528

Koerber, S., & Sodian, B. (2009). Reasoning from graphs in young children. Preschoolers’ ability to interpret and evaluate covariation data from graphs. Journal of Psychology of Science & Technology, 2(2), 73-86. doi: 10.1891/1939-7054.2.2.73

Koerber, S., Sodian, B., Kropf, N., Mayer, D., & Schwippert, K. (2011). Die Entwicklung des wissenschaftlichen Denkens im Grundschulalter. Theorieverständnis, Experimentierstrategien, Dateninterpretation. Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie, 43(1), 16-21. doi: 10.1026/0049-8637/a000027

Koerber, S., Sodian, B., Thoermer, C., & Nett, U. (2005). Scientific reasoning in young children: Preschoolers' ability to evaluate covariation evidence. Swiss Journal of Psychology 64(3), 141-152. doi: 10.1024/1421-0185.64.3.141

Kollar, I., Fischer, F., & Slotta, J. D. (2007). Internal and external scripts in computer-supported collaborative inquiry learning. Learning and Instruction, 17(6), 708-721. doi: 10.1016/j.learninstruc.2007.09.021

Kolodner, J. L. (2007). The roles of scripts in promoting collaborative discourse in learning by design. In F. Fischer, I. Kollar, H. Mandl, & J. M. Haake (Eds.), Scripting computer-supported collaborative learning - cognitive, computational and educational approaches (pp. 237-262). New York: Springer.

Koslowski, B. (1996). Theory and evidence: The development of scientific reasoning. Cambridge, MA: MIT Press/Bradford Books.

Koslowski, B. (2012). Scientific reasoning: Explanation, confirmation bias, and scientific practice. In G. J. Feist, & M. E. Gorman (Eds.), Handbook of the psychology of science (pp. 151-192). New York, NY: Springer.

Kruglanski, A. W., & Gigerenzer, G. (2011). Intuitive and deliberate judgments are based on common principles. Psychological Review, 118(1), 97-109. doi: 10.1037/a0020762

Kuhn, D. (1991). The skills of argument. New York: Cambridge University Press.

Kuhn, D., & Franklin, S. (2006). The second decade: What develops (and how)? In D. Kuhn, & R. Siegler (Eds.), Handbook of child psychology: Vol. 2. Cognition, perception, and language (pp. 517-550). Hoboken, NJ: Wiley.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.

Mäkitalo-Siegl, K., Kohnle, C., & Fischer, F. (2011). Computer-supported collaborative inquiry learning and classroom scripts: Effects on help-seeking processes and learning outcomes. Learning and Instruction, 21(2), 257-266. doi: 10.1016/j.learninstruc.2010.07.001

Mayer, D., Sodian, B., Koerber, S., & Schwippert, K. (2014). Scientific reasoning in elementary school children: Assessment and relations with cognitive abilities. Learning and Instruction, 29, 43-55. doi: 10.1016/j.learninstruc.2013.07.005

McNeill, K. L. (2011). Elementary students' views of explanation, argumentation, and evidence, and their abilities to construct arguments over the school year. Journal of Research in Science Teaching, 48(7), 793-823. doi: 10.1002/tea.20430

McNeill, K. L., Lizotte, D. J., Krajcik, J., & Marx, R. W. (2006). Supporting students' construction of scientific explanations by fading scaffolds in instructional materials. Journal of the Learning Sciences, 15(2), 153-191. doi: 10.1207/s15327809jls1502_1

Miyake, N. (1986). Constructive interaction and the iterative process of understanding. Cognitive Science, 10, 151-177. doi: 10.1207/s15516709cog1002_2

Mohammed, S., & Dumville, B. C. (2001). Team mental models in a team knowledge framework: Expanding theory and measurement across disciplinary boundaries. Journal of Organizational Behavior, 22(2), 89-106. doi: 10.1002/job.86

Moors, A., Ellsworth, P. C., Scherer, K. R., & Frijda, N. H. (2013). Appraisal theories of emotion: State of the art and future development. Emotion Review, 5(2), 119-124. doi: 10.1177/1754073912468165

Morton, A. (2010). Epistemic emotions. In P. Goldie (Ed.), The Oxford handbook of philosophy of emotion (pp. 385–399). Oxford, United Kingdom: Oxford University Press.

Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: a peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102-122. doi: 10.1080/02602938.2013.795518

Noroozi, O., Weinberger, A., Biemans, H. J., Mulder, M., & Chizari, M. (2012). Argumentation-Based Computer Supported Collaborative Learning (ABCSCL): A synthesis of 15 years of research. Educational Research Review, 7(2), 79-106. doi: 10.1016/j.edurev.2011.11.006

Nückles, M., & Stürz, A. (2006). The assessment tool: A method to support asynchronous communication between computer experts and laypersons. Computers in Human Behavior, 22(5), 917-940. doi: 10.1016/j.chb.2004.03.021

Nussbaum, E. M. (2011). Argumentation, dialogue theory, and probability modeling: Alternative frameworks for argumentation research in education. Educational Psychologist, 46(2), 84-106. doi: 10.1080/00461520.2011.558816

Okada, T., & Simon, H. A. (1997). Collaborative discovery in a scientific domain. Cognitive Science, 21(2), 109-146. doi: 10.1207/s15516709cog2102_1

Osborne, J. (2010). Arguing to learn in science: The role of collaborative, critical discourse. Science, 328(5977), 463-466. doi: 10.1126/science.1183944

Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1(2), 117-175. doi: 10.1207/s1532690xci0102_1

Pekrun, R. (2006). The control-value theory of achievement emotions: Assumptions, corollaries, and implications for educational research and practice. Educational Psychology Review, 18, 315-341. doi: 10.1007/s10648-006-9029-9

Pekrun, R., Hall, N. C., Goetz, T., & Perry, R. P. (in press). Boredom and academic achievement: Testing a model of reciprocal causation. Journal of Educational Psychology.

Pekrun, R., & Stephens, E. J. (2011). Academic emotions. In K. R. Harris, S. Graham, T. Urdan, S. Graham, J. M. Royer, & M. Zeidner (Eds.), APA educational psychology handbook (Vol. 2, pp. 3-31). Washington, DC: American Psychological Association.

Quintana, C., Reiser, B. J., Davis, E. A., Krajcik, J., Fretz, E., Duncan, R. G., & Soloway, E. (2004). A scaffolding design framework for software to support science inquiry. Journal of the Learning Sciences, 13(3), 337-386. doi: 10.1207/s15327809jls1303_4

Roschelle, J., & Teasley, S. D. (1997). The construction of shared knowledge in collaborative problem solving. In C. O'Malley (Ed.), Computer supported collaborative learning (Vol. 128, pp. 69-97). Berlin: Springer.

Rosé, C. P., Wang, Y. C., Arguello, J., Stegmann, K., Weinberger, A., & Fischer, F. (2008). Analyzing collaborative learning processes automatically: Exploiting the advances of computational linguistics in computer-supported collaborative learning. International Journal of Computer-Supported Collaborative Learning, 3, 237-271. doi: 10.1007/s11412-007-9034-0

Sadler, T. D. (2004). Informal reasoning regarding socio-scientific issues: A critical review of research. Journal of Research in Science Teaching, 41(5), 513-536. doi: 10.1002/tea.20009

Salomon, G., & Perkins, D. N. (1998). Individual and social aspects of learning. Review of Research in Education, 23, 1-24. doi:10.3102/0091732X023001001

Sampson, V., & Clark, D. (2009). The impact of collaboration on the outcomes of scientific argumentation. Science Education, 93(3), 448-484. doi: 10.1002/sce.20306

Scardamalia, M., & Bereiter, C. (2006). Knowledge building: Theory, pedagogy, and technology. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 97-119). New York: Cambridge University Press.

Schunn, C. D., & Anderson, J. R. (1999). The generality/specificity of expertise in scientific reasoning. Cognitive Science, 23(3), 337-370. doi: 10.1016/S0364-0213(99)00006-3

Shafto, P., Kemp, C., Bonawitz, E. B., Coley, J. D., & Tenenbaum, J. B. (2008). Inductive reasoning about causally transmitted properties. Cognition, 109(2), 175-192. doi: 10.1016/j.cognition.2008.07.006

Shavelson, R. J., & Towne, L. (Eds.). (2002). Scientific research in education. Washington, DC: National Academy Press.

Shuman, V., & Scherer, K. R. (2014). Concepts and structures of emotions. In R. Pekrun, & L. Linnenbrink-Garcia (Eds.), International handbook of emotions in education (pp. 13-35). New York: Taylor & Francis.

Sinatra, G. M., Broughton, S. H., & Lombardi, D. (2014). Emotions in science education. In R. Pekrun, & L. Linnenbrink-Garcia (Eds.), International handbook of emotions in education (pp. 415-436). New York: Taylor & Francis.

Sodian, B., & Bullock, M. (2008). Scientific reasoning – where are we now? Cognitive Development, 23(4), 431-434. doi: 10.1016/j.cogdev.2008.09.003

Sodian, B., Jonen, A., Thoermer, C., & Kircher, E. (2006). Die Natur der Naturwissenschaften verstehen: Implementierung wissenschaftstheoretischen Unterrichts in der Grundschule. In M. Prenzel, & L. Allolio-Näcke (Eds.), Untersuchungen zur Bildungsqualität von Schule. Abschlussbericht des DFG-Schwerpunktprogramms (pp. 147-160). Münster: Waxmann.

Sodian, B., Zaitchik, D., & Carey, S. (1991). Young children's differentiation of hypothetical beliefs from evidence. Child Development, 62, 753-766. doi: 10.1111/j.1467-8624.1991.tb01567.x

Sparrow, B., Liu, J., & Wegner, D. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776-778. doi: 10.1126/science.1207745

Stegmann, K., Wecker, C., Weinberger, A., & Fischer, F. (2012). Collaborative argumentation and cognitive elaboration in a computer-supported collaborative learning environment. Instructional Science, 40(2), 297-323. doi: 10.1007/s11251-011-9174-5

Stokes, D. E. (1997). Pasteur’s quadrant: Basic science and technological innovation. Washington, DC: Brookings Institution Press.

Streng, S., Stegmann, K., Boring, S., Böhm, S., Fischer, F., & Hussmann, H. (2010). Measuring effects of private and shared displays in small-group knowledge sharing processes. In E. Hvannberg, M. K. Lárusdóttir, A. Blandford, & J. Gulliksen (Eds.), Proceedings of the 6th Nordic Conference on Human-Computer Interaction (NordiCHI 2010) (pp. 789-792). New York, NY: ACM.

Strijbos, J. W., & Sluijsmans, D. (2010). Unravelling peer assessment: Methodological, functional, and conceptual developments. Learning and Instruction, 20(4), 265-269. doi: 10.1016/j.learninstruc.2009.08.002

Teasley, S., Fischer, F., Dillenbourg, P., Kapur, M., Chi, M., Weinberger, A., & Stegmann, K. (2008). Cognitive convergence in collaborative learning. In Proceedings of ICLS 2008 (Vol. 3, pp. 360–367). International Society of the Learning Sciences.

Trilling, B., & Fadel, C. (2009). Twenty-first century skills: Learning for life in our times. San Francisco, CA: Jossey-Bass.

van Joolingen, W. R., & de Jong, T. (1993). Exploring a domain through a computer simulation: traversing variable and relation space with the help of a hypothesis scratchpad. In D. Towne, T. de Jong, & H. Spada (Eds.), Simulation-based experiential learning (pp. 191-206). Berlin: Springer.

van Joolingen, W. R., de Jong, T., Lazonder, A. W., Savelsbergh, E. R., & Manlove, S. (2005). Co-Lab: research and development of an online learning environment for collaborative scientific discovery learning. Computers in Human Behavior, 21, 671-688. doi: 10.1016/j.chb.2004.10.039

Wegerif, R. (2007). Dialogic education and technology: Expanding the space of learning. New York: Springer.

Wegner, D. M. (1987). Transactive memory: A contemporary analysis of the group mind. In B. Mullen, & G. R. Goethals (Eds.), Theories of group behavior (pp. 185-208). New York: Springer.

Weinberger, A., Stegmann, K., & Fischer, F. (2010). Learning to argue online: Scripted groups surpass individuals (unscripted groups do not). Computers in Human Behavior, 26(4), 506-515. doi: 10.1016/j.chb.2009.08.007

Weiner, B. (2007). Examining emotional diversity in the classroom: An attribution theorist considers the moral emotions. In P. A. Schutz, & R. Pekrun (Eds.), Emotion in education (pp. 75-88). San Diego, CA: Academic Press.

White, B. Y., & Frederiksen, J. R. (1998). Inquiry, modelling, and metacognition: Making science accessible to all students. Cognition and Instruction, 16(1), 3-118. doi: 10.1207/s1532690xci1601_2

Wiethoff, A., Schneider, H., Rohs, M., Butz, A., & Greenberg, S. (2012). Sketch-a-TUI: low cost prototyping of tangible interactions using cardboard and conductive ink. In S. N. Spencer (Ed.), Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction (pp. 309-312). New York, NY: ACM.

Wu, S., & Keysar, B. (2007). The effect of information overlap on communication effectiveness. Cognitive Science, 31(1), 169-181. doi: 10.1080/03640210709336989

Zazkis, R., & Chernoff, E. J. (2008). What makes a counterexample exemplary? Educational Studies in Mathematics, 68, 195-208. doi: 10.1007/s10649-007-9110-4

Zimmerman, C. (2000). The development of scientific reasoning skills. Developmental Review, 20, 99-149. doi: 10.1006/drev.1999.0497

Zimmerman, C. (2007). The development of scientific thinking skills in elementary and middle school. Developmental Review, 27, 172-223. doi: 10.1016/j.dr.2006.12.001