Beyond Evidence-Based Belief Formation:

How Normative Ideas Have Constrained Conceptual Change Research

Stellan Ohlsson

University of Illinois at Chicago, Chicago, United States

 

Article received 4 September 2013 / revised 13 December 2013 / accepted 13 December 2013 / available online 20 December 2013

 

 

Abstract

The cognitive sciences, including psychology and education, have their roots in antiquity. In the historically early disciplines like logic and philosophy, the purpose of inquiry was normative. Logic sought to formalize valid inferences, and the various branches of philosophy sought to identify true and certain knowledge. Normative principles are irrelevant for descriptive, empirical sciences like psychology. Normative concepts have nevertheless strongly influenced cognitive research in general and conceptual change research in particular. Studies of conceptual change often ask why students do not abandon their misconceptions when presented with falsifying evidence. But there is little reason to believe that people evolved to conform to normative principles of belief management and conceptual change. When we put the normative traditions aside, we can consider a broader range of hypotheses about conceptual change. As an illustration, the pragmatist focus on action and habits is articulated into a psychological theory that claims that cognitive utility, not the probability of truth, is the key variable that determines belief revision and conceptual change.

Keywords: Belief formation; Belief revision; Cognitive utility; Conceptual change; Descriptive vs. normative inquiry; Pragmatism

Corresponding author: Stellan Ohlsson, University of Illinois, Chicago, stellan@uic.edu

http://dx.doi.org/10.14786/flr.v1i2.58



Cognitive scientists pride themselves on their interdisciplinary approach, drawing upon anthropology, artificial intelligence, evolutionary biology, linguistics, logic, neuroscience, philosophy, psychology, and yet other disciplines in their efforts to understand human cognition. Interdisciplinary research strategies have paid off in the natural sciences. For example, in the middle of the 20th century, research on the border between biology and chemistry resulted in spectacular advances, including the determination of the structure of DNA (Watson & Crick, 1953). It is plausible that an interdisciplinary approach will pay off in the study of cognition as well.

But the cognitive sciences exhibit principled differences that might get in the way of interdisciplinary efforts. Inquiry into cognition was originally rooted in the desire for human betterment. The first cognitive disciplines, including logic, epistemology, and linguistics, were normative disciplines. Logicians wanted to systematize valid inferences, as opposed to whatever inferences, including fallacious ones, people actually make; philosophers sought to identify criteria for certain knowledge, as opposed to describing all knowledge[1]; and early linguists were more concerned with codifying correct grammar than with cataloguing grammatical errors. Historically, these and related disciplines mixed normative and descriptive elements in a way that is quite foreign to the contemporary conception of a natural or social science. In this respect, they resembled aesthetics, ethics, and legal scholarship more than biology, chemistry, and physics as practiced since the Scientific Revolution (Butterfield, 1957; Osler, 2000).

Because the normative disciplines were historically prior, the concepts, practices, and tools of normative inquiry became part of the intellectual infrastructure of the self-consciously descriptive sciences like neuroscience and experimental psychology that became established in the latter half of the 19th century. Concepts like abstraction, association, and imagery are obvious examples of such imports. This intellectual inheritance helped the new sciences get started, in part by suggesting questions and problems (How and when are associations formed?). But normative and descriptive disciplines are different enough in their goals and methods so that it is reasonable to ask whether that inheritance has had a negative impact as well.

In this article, I argue that certain normative ideas have led the study of cognition in general and cognitive change in particular down an unproductive path. The types of cognitive changes I have in mind are those that psychologists call conceptual change, belief revision, and theory change (Carey, 2009; Duit & Treagust, 2003; Nersessian, 2001; Thagard, 1992; Vosniadou, Baltas & Vamvakoussi, 2007). For purposes of this article, I use these terms as near-synonyms. When a collective label is needed, I call them non-monotonic change processes (Ohlsson, 2011).

The argument proceeds through seven steps. In Section 1, I elaborate on the distinction between normative and descriptive inquiry. Section 2 highlights the role of normative concepts in what I call the Ideal-Deviation paradigm, a particular style of research that is common in cognitive psychology. In Section 3 I show that this research paradigm is also present, albeit implicitly, in conceptual change research. From a normative perspective, people ought to base their concepts and beliefs on evidence and revise them when they are contradicted by new evidence. The assumption -- sometimes implicit, sometimes explicit -- that people’s cognitive systems are designed to operate in this way has focused researchers’ attention on the deviations of human behaviour from normatively correct belief management (using the latter term as a convenient shorthand for “belief formation and belief revision”). But the normative perspective is irrelevant to the scientific study of cognitive change; hence, so are the deviations between the norms and actual cognitive processing. But if the deviations are irrelevant, so are our explanations for them. To go beyond the current state of conceptual change research, we need to make explicit the influence of the normative perspective, identify the constraints it has imposed on theory development, and relax those constraints. When we cultivate a resolutely descriptive stance, the space of possible theories of conceptual change expands. As a first step towards a new theory, Section 4 argues that the notion that people form concepts and beliefs on the basis of evidence might be fundamentally incorrect. The problem of why people do not revise their misconceptions when confronted with contradictory evidence then dissolves, and other questions move to the foreground. In Section 5, I outline an approach to conceptual change that is inspired by the pragmatist notion that concepts and beliefs are tools for successful action. According to this perspective, the key variable that drives conceptual change is not the strength of the relevant evidence or the probability of truth, but cognitive utility. Section 6 answers two plausible objections to this view, and Section 7 outlines some of its implications. Section 8 recapitulates the argument. Although the concept of utility is not itself new, the critique of conceptual change research as mired in normative ideas is stated here for the first time, and the conjecture that utility can replace probability of truth as the key theoretical variable in conceptual change has not been previously proposed.

 

1.       Normative versus descriptive inquiry

Descriptive sciences aim to provide accurate theories about the way the world is. As I use it, the word “descriptive” does not stand in contrast to “theoretical” or “explanatory” but encompasses all the empirical and theoretical practices of the natural and social sciences as we now conceive them; the term “empirical” could have been used instead, but is sometimes understood as standing in contrast to “theoretical.” The goal of descriptive science is to provide an account of reality that is intersubjectively valid and reflects the world as it is, independent of human judgments or wishes. Descriptive sciences are essentially concerned with adapting theories and concepts to data.  The descriptive sciences constitute what we today call “science.”

Normative disciplines, in contrast, investigate how things ought to be. They are essentially concerned with conformity to standards of goodness. A large portion of what we have in our heads consists of more or less explicit normative knowledge. Aesthetics, epistemology, ethics, etiquette, law, literary criticism, logic, rhetoric, and several other disciplines ask what the appropriate standards are, or should be, in some area of human endeavour; how one decides whether some instance does or does not conform to the relevant standards; and why particular instances conform, or fail to conform. The state of theory in these disciplines varies widely, from formalized theories of valid inferences in logic to the obviously culture-dependent rules of good manners, and the highly controversial theories of literary criticism.

As these nutshell definitions are meant to illustrate, descriptive and normative disciplines are so different from each other that the distinction seems impossible to overlook. But the separation of normative and descriptive inquiry was in fact long in coming; for example, psychology was included among “the moral sciences” well into the 19th century. The distinction was not fully articulated and accepted in Western thought until the 20th century, supported by, among other influences, the logical positivists’ emphasis on the distinction between fact and value. (For criticisms of the distinction, see, e.g., Köhler, 1938/1966, and Putnam, 2002, among others.) It is nevertheless anachronistic to think of earlier generations of scholars as having confused descriptive and normative inquiry. The situation is better described by saying that they had not yet distinguished them.

Astronomy provides an example of research in an era when the distinction was not yet fully articulated (Kuhn, 1957; Margolis, 1987, 1993). Some ancient astronomers adopted the normative idea that planets ought to move in perfectly circular orbits, because the heavenly bodies were perfect beings, perfect beings ought to move in perfect orbits, and the circle is the most perfect geometric figure. Astronomers then spent two millennia explaining the deviations of the observed planetary orbits from the normatively specified orbits using the Ptolemaic construct of epicycles, instead of exploring other hypotheses about the geometric shape of the orbits (Frank, 1952). Research in the natural sciences is no longer constrained by normative ideas in this way. Section 2 shows that psychology, in contrast, has not yet outgrown its normative inheritance.

2.       The Ideal-Deviation Paradigm

Normative principles have generated a psychological research paradigm that I refer to as the Ideal-Deviation Paradigm. Although only a subset of psychological research conforms to this paradigm, the paradigm has had a strong and largely negative impact on research in cognitive psychology in general and research on conceptual change in particular. A line of research that follows this paradigm proceeds through the following general steps (examples to follow):

 

(a) Choose a normative theory. How ought the mind to carry out such-and-such a process, or perform such-and-such a task?

(b) Construct or identify a situation or task environment in which that theory applies, and derive its implications for normatively correct behaviour.

(c) Recruit human subjects and observe their behaviour in the relevant situation.

(d) Describe the deviations of the observed behaviour from the normatively correct behaviour.

(e) Hypothesize an explanation for the observed deviations.

(f) Test the explanation in further empirical experiments.

 

Readers who are familiar with cognitive psychology will have no difficulty in thinking of instances of the Ideal-Deviation Paradigm. The prototypical example is research on logical inference (Evans, 2007). In this area, researchers originally used logic as developed by logicians – primarily the logics of syllogistic and propositional inferences – as the relevant normative theory. The reasoning problems presented to human subjects include Wason’s famous 4-Card Task (a.k.a. the Selection Task; Wason & Johnson-Laird, 1972). Propositional logic prescribes a particular pattern of responses to this task, and the deviations of human responses from the prescribed pattern have been replicated in dozens, perhaps hundreds, of experimental studies. Researchers have proposed and debated a wide range of explanations for the observed deviations (Johnson-Laird, 2006; Klauer, Stahl, & Erdfelder, 2007).

Research on decision making is a second instance of the Ideal-Deviation Paradigm. The Subjective Expected Utility (SEU) theory and the mathematics of probability provide a normative theory for how to choose among competing options. When people are confronted with choices that involve probabilistic outcomes in laboratory settings, they deviate from the normatively correct behaviour in a variety of ways. The availability and representativeness biases are examples. In the former, human judgments about the probability of an event (e.g., an airline crash) are influenced by the ease with which the person can retrieve an example of such an event from memory. In the latter case, human judgments are influenced by the similarity of a sample to the population from which the sample was drawn. In this field, too, researchers have proposed, debated, and experimentally investigated multiple explanations for these and other observed deviations (Kahneman, 2011).

The key point for present purposes is that the Ideal-Deviation Paradigm mixes normative and descriptive elements in a way that is foreign to the way we now think of scientific research. To highlight this point, imagine biochemists in the 1950s deciding that a particular protein molecule ought to fold itself into such-and-such a three-dimensional structure, for, say, aesthetic reasons. Imagine also that they observe that the actual shape of the molecule deviates from this normatively specified structure, and then spend their time and theoretical energy explaining why the protein deviates in such-and-such a way from the supposedly correct structure, instead of explaining why it folds together the way it actually does. No such investigation could survive peer review for a contemporary chemistry journal.

In short, given our current conception of scientific research, there is no justification for the normative element in the Ideal-Deviation Paradigm. A descriptive theory of how people think or learn must be based on accounts of the actual processes occurring in people’s heads when they draw inferences, make decisions, and revise their knowledge, regardless of whether those processes are similar to, or different from, normatively correct processes. Comparing empirical observations to a normative theory contributes nothing to that enterprise. Section 3 argues that normative conceptions are nevertheless at the centre of contemporary research in conceptual change.

 

3.       Ideal-Deviation in conceptual change

The Ideal-Deviation Paradigm has strongly impacted psychological research on belief revision, conceptual change, theory change, and related processes. The impact is not immediately obvious, because the relevant normative theory is less precise and less explicit than the normative theories that underpin studies of logical reasoning and decision making. The normative theory of belief management can be summarized in four principles:

Principle 1: Grounding. Beliefs and concepts ought to be based on evidence. In this context, “based on” means “derived from.” The derivation is typically understood to be some form of induction across qualitative observations and/or aggregation of quantitative data. To adopt a belief for which one has no evidence is deplorable, even irresponsible, and a belief that is not grounded in evidence is dismissed as a guess, prejudice, or mere speculation.

Principle 2: Graded conviction. Beliefs ought to be held with a conviction that is proportional to the strength of the relevant evidence. For example, hearsay provides weaker evidence than direct observation; an anecdote provides weaker evidence than a study based on a representative sample; and a correlational study provides weaker evidence for a causal relation than an experimental study. The strength of one’s convictions ought to reflect such differences in the nature and extent of the relevant evidence. For purposes of quantitative comparisons, the conviction with which a belief is held can be conceptualized as an estimate of its probability of being true.

Principle 3: Belief-belief conflicts. When two beliefs or informal theories contradict each other, the person ought to choose to believe the one that is backed by the stronger evidence. The theory with the strongest support ought to have priority in the control of behaviour, including both discourse and action. To hold contradictory beliefs (P & not-P) is to be inconsistent and hence irrational.

Principle 4: Belief-evidence conflicts. When beliefs are contradicted by new evidence, they ought to be revised so as to be consistent with both the old and the new evidence. Failure to do so makes a person “closed minded”, “irrational”, “rigid minded”, or a victim of “robust misconceptions.”

These four principles are mere common sense; this is how a rational agent ought to manage his or her beliefs. There seems to be little gain in giving such vacuous verities the status of principles. But my purpose is to make explicit what is normally too embedded in our conceptual infrastructure to be visible.

Elements of the normative theory of belief management, masquerading as descriptive statements, can be found throughout the cognitive sciences. For example, Allport (1958/1979) proposed the contact theory of racial prejudice. The key idea was that negative racial stereotypes would be diminished if a person with such a stereotype were subjected to frequent contacts with members of the relevant ethnic group. The hypothesis was that the contacts would provide evidence against the negative stereotypes and pave the way for other, more positive opinions. In the philosophy of science, Kuhn (1970) described theory change as a consequence of the accumulation of anomalies. In educational psychology, Posner et al. (1982) hypothesized that students have to be dissatisfied with their current beliefs about scientific phenomena before they are prepared to revise them, and that being confronted with evidence to the contrary is the key source of dissatisfaction. In developmental psychology, Gopnik and Meltzoff (1997) embraced Principle 4, designating belief-evidence conflicts as the main drivers of cognitive change: “Theories may turn out to be inconsistent with the evidence, and because of this theories change.” (p. 39) Although other processes are involved as well, the processing of counterevidence is the most important: “Theories change as a result of a number of different epistemological processes. One particularly critical factor is the accumulation of counterevidence to the theory.”  (p. 39)

Paradoxically, cognitive scientists confidently assert these variations of Principle 4, while they simultaneously and in parallel assert that people deviate from Principle 4. In discipline after discipline, researchers have observed that people do not always and necessarily revise their beliefs when confronted with contradictory evidence. The predictions of the contact theory of racial prejudice were not verified and the theory had to be reformulated (Pettigrew, 1998). Likewise, Strike and Posner (1992) found that students retain their misconceptions even after instruction that is directly aimed at confronting those misconceptions with contradicting evidence. “One of the most important findings of the misconception literature…is that misconceptions are highly resistant to change” (Strike & Posner, 1992, p. 153). In a review paper, Limón (2001) wrote that “…the most outstanding result of the studies using the cognitive conflict strategy is the lack of efficacy for students to achieve a strong restructuring and, consequently, a deep understanding of the new information.” (p. 364) Indeed, the deviation of student behaviour from Principle 4 is the very phenomenon that created conceptual change as a distinct field of research, at least within educational research.

Consistent with the Ideal-Deviation Paradigm, researchers have responded to the finding that people do not (necessarily) adapt their beliefs to contradictory evidence by proposing various explanations for this deviation. For example, Rokeach (1960, 1970) proposed that belief systems have a hierarchical structure, and that change becomes more and more difficult as one moves from the periphery to the centre. As a result, most changes are peripheral and central principles are hardly ever affected by evidence. Political and religious principles are cases in point. The philosopher Imre Lakatos has proposed a similar theory to explain theory change in science (Lakatos, 1980). Festinger (1957/1962) launched a long-lasting line of research in social psychology that centred on a set of mechanisms for reducing what he called cognitive dissonance. Cognitive mechanisms for dissonance reduction process contradictory evidence without any fundamental revision of the relevant beliefs.

More recently, cognitive psychologists have added yet other explanations. The category shift theory of Chi (2005, 2008) and co-workers explains the robustness of misconceptions as a consequence of the inheritance of characteristics from the (frequently inappropriate) ontological category to which a phenomenon has been assimilated. Vosniadou and Brewer (1992) and Vosniadou and Skopeliti (2013) explain the deviations as a consequence of the synthesis of prior (and frequently inaccurate) mental models into more comprehensive (but sometimes equally inaccurate) mental models. Sinatra and co-workers have added motivational and emotional variables as additional sources of explanation (Broughton, Sinatra, & Nussbaum, 2013; Sinatra & Pintrich, 2003). Yet other perspectives on conceptual change have been proposed (see, e.g., Rakison & Poulin-Dubois, 2001; Shipstone, 1984). Ohlsson (2011, Chap. 9) provides a more extensive comparative analysis of these and related types of explanations.

In short, although the normative theory of belief formation is less explicit than the normative theories that underpin studies of logical reasoning and decision making, research on conceptual change closely follows the Ideal-Deviation paradigm. The basic structure of conceptual change research is that (a) students ought to revise their misconceptions when confronted with contradictory evidence, (b) the empirical evidence indicates that they do not in fact do so, and therefore (c) we need to explain why they do not do so. But this research enterprise is only meaningful if one accepts Principles 1 - 4 as relevant for the study of conceptual change. Section 4 prepares for a new approach to conceptual change by arguing that people do not base their concepts and beliefs on evidence. If so, Principles 1-4 are irrelevant for understanding conceptual change.

 

4.       The irrelevance of evidence

At first glance, the normative theory of belief management seems highly relevant for understanding human cognition. If people do not adapt their beliefs to reality, how do they get through their day? Surely the deviations from rational belief management uncovered in various areas of cognitive research are relatively minor slips of a fundamentally rational cognitive system for building and maintaining a veridical belief base? Such slips might be due, for example, to cognitive capacity limitations or emotional biases.

This view is plausible but difficult to evaluate. We know very little about how people form and revise beliefs in natural settings, because there are few relevant empirical studies. What follows are some informal observations and examples. In conjunction, they suggest a radical conclusion: The principle that people base their beliefs on evidence might be fundamentally incorrect rather than an optimistic idealization or a partial truth.

An adult person has a large belief base in memory, at least if the term “belief” is applied broadly enough to include not only the deep principles that tend to be the object of analysis, but also local, concrete facts. For example, I have multiple beliefs about the public transportation system in the city where I live: that there are buses and subway trains; that there are multiple subway lines; where they go; how long a trip is likely to take; how much it costs; the location of stations; and so on. This small domain of experience is likely to encompass several hundred, perhaps even several thousand, beliefs, most of which are likely to be accurate. The view that a belief ought to be derived from, or based on, observational evidence works well with respect to such concrete, particular matters. For example, the belief that there is a subway station at the corner of X and Y streets might very well be acquired by no more complicated a process than walking down X street and encountering that very station at the crossing with Y street. Such routine belief formation events can plausibly be attributed to direct observation and in that sense conform to Principles 1 - 4.

The direct observation account of belief formation quickly runs into difficulties when the belief is general. For example, most adults have a variety of beliefs about economic, political, and social affairs. Informal observations indicate that a significant proportion of such beliefs are not based on any evidence whatsoever. Will austerity economics stimulate the economy or depress the markets by robbing consumers of their ability to consume? Quite a few adults are prepared to offer a point of view about this issue, and, just as obviously, very few of them have access to relevant quantitative data or other observational evidence.

This is not an isolated instance. Consider the range of controversial socio-political and economic issues in the public discourse: gun control, surveillance by intelligence organizations, same-sex marriage, drone strikes on foreign soil, the benefits of universal health care – a large proportion of adults have beliefs regarding many such issues, but almost none of those beliefs are based on evidence. That is, a person who holds a belief on such an issue did not, as a rule, induce it from multiple historical examples or derive it from statistical data or other types of observational evidence. Most people cannot give any coherent or detailed account of why, how, or even when they adopted any particular belief. If people operated according to Principle 1, they ought to answer almost every question about socio-political and economic issues by saying, “I don’t know; I don’t have an opinion on that; I don’t have enough information.”

If general beliefs are not formed by induction from observations, by what processes are they formed instead? Informal reflection on everyday life suggests that we form general beliefs by accepting what someone else tells us, either in face-to-face conversation or via media. The notion of evidence does not enter into this belief formation process in any prominent way, because we do not normally and as a rule question or doubt what we are being told. It is enough to hear someone say it for us to encode it as veridical. Gabbay and Woods (2001) call this the Ad Ignorantiam Rule: “Human agents tend to accept without challenge the utterances and arguments of others except where they know or think they know or suspect that something is amiss.” (p. 150) The reason for this rule is probably that we tend to communicate with people we trust, and access sources that we have already judged as reliable. But this does not support the normative principle, because it is not obvious that we base our judgments about the trustworthiness of a source on anything that would qualify as evidence.

Another hypothesis is that belief formation is internal to the cognitive system. Many of our beliefs appear to arrive in the belief base as consequences of already adopted beliefs. For example, I believe that public education is an essential social institution. I also believe that nations that invest in education will fare better than those that do not. It would be an exaggeration to say that I have evidence for the second belief. After all, what counts as evidence as to what will happen in the future? It seems more accurate to say that I have adopted the second belief because it follows from the first. If public education is essential, nations underfund it at their peril. Intra-mental derivations of this sort can hardly be characterized as evidence-based. First, the question of evidence is merely pushed one step backwards, because the derived belief cannot be said to be evidence-based unless the beliefs it is derived from are themselves evidence-based. Second, the internal derivations are influenced by factors that are themselves unrelated to truth, such as a desire for consistency, instrumental gain, and various types of biases.

Consider next Principle 2, that beliefs ought to be held with graded convictions that reflect the strength of the evidence. Are people sensitive to the relative strength of the evidence? That is, do people in general and as a rule hold their beliefs more strongly when they are supported by more evidence and less strongly when the support is weaker? A thorough answer to this question would require extensive data collection and some way of measuring the strength of the relevant evidence. However, it is noteworthy that the beliefs that people hold with the greatest conviction tend to be their religious beliefs, and church leaders and followers alike insist that religious beliefs are, and should be, based on faith, not evidence. The fact that faith-based beliefs are held more strongly than other classes of beliefs is inconsistent with the idea that our brains are programmed, in some deep and fundamental way, to base beliefs on evidence.

Other examples of conviction levels that do not seem to reflect the strength of the available evidence include those that pertain to beliefs regarding climate change and the value of vaccinations. At one time, it was rational to be sceptical regarding the reality of climate change; now, the evidence is overwhelming (Oreskes, 2004). Nevertheless, some people continue to believe that the climate is not changing. The controversy over vaccinations exhibits a similar pattern. Although caution was once rational, there are now multiple, large-scale studies that show conclusively that there is nothing wrong with the common vaccines that are given to children, or with the way they are administered. People do not get sick from vaccines; they get sick from germs. However, anti-vaccine activists continue to claim that vaccines are harmful and they have many followers (Offit, 2011).

Finally, consider Principle 3, namely that theory-theory conflicts are to be resolved with reference to the relative strengths of the evidence for the competing theories. Do people consistently side with the view that has the strongest evidence? Consider the issue whether human beings are fundamentally evil and require discipline in order to behave themselves, or fundamentally good, so all they need is an opportunity to blossom in a natural way. Every news story about yet another serial killer is evidence for the former view; every heartwarming news story about someone who goes out of their way to make a difference for people around them is evidence for the latter view. Every war produces novel atrocities, but every natural catastrophe – forest fire, hurricane, tsunami – generates a fresh batch of stories about individual heroism and self-sacrifice. Anybody who attends to the news has as much evidence for one view as for the other. Given that there is much evidence for either view, those of us who hold a strong opinion on the issue of human nature must have resolved the conflict between these two theories at least partially on the basis of something other than the evidence.

To summarize, informal observations suggest that people do not, in general, induce their beliefs from observational evidence. Although we often base concrete beliefs about particular objects and events on direct observation, we appear to form general beliefs through ubiquitous encoding of communications by trusted sources and by deriving them from other, already adopted beliefs. These hypothetical but plausible belief formation processes are not inductive in nature, and the contribution of what we normally call evidence to each is weak. In addition, people show few signs of holding their beliefs with a conviction that is proportional to the strength of the supporting evidence, of resolving conflicts among competing beliefs by comparing the relative strength of the supporting evidence, or of revising their beliefs when they encounter contradictory evidence. These observations suggest that the normative theory of belief management, taken as a descriptive theory, is fundamentally wrong rather than merely an optimistic idealization.

But if so, why do the observed deviations of human behaviour from Principles 1-4 deserve our attention? The consequence of abandoning Principles 1-4 is that conceptual change researchers no longer need to explain why misconceptions are robust in the face of contradictory evidence. If there is no reason to expect students to revise their beliefs when confronted with new evidence, then the absence of such revisions is not puzzling. Explanations for why misconceptions are robust become obsolete, not in the sense of being falsified, but in the sense of being answers to a question we do not need to ask. The problem of why people do not revise their beliefs is not so much solved as dissolved. However, the task of formulating a scientific theory of belief formation and belief revision that can support effective pedagogical practices remains. In Section 5, I propose that some of the ideas put forward by the American pragmatist philosophers can serve as a starting point for a new approach to conceptual change.

 

5.       A Pragmatist Approach

At the end of the 19th century and the beginning of the 20th, American scholars, led by William James, Charles Sanders Peirce, and John Dewey, tried to reformulate the classical philosophical problems about knowledge, meaning, and truth in terms of action instead of observation. They claimed that the meaning of a concept or belief resides in the set of actions or “habits” to which it gives rise. “The essence of belief is the establishment of a habit, and different beliefs are distinguished by the different modes of action to which they give rise.” (Peirce, 1878, pp. 129-130) The truth of a belief is tied to the outcomes of executing those habits. They stopped short of claiming that, “what works is what is true”, but some of their contemporaries stated their ideas even more boldly than they did themselves (Schiller, 1905).

Pragmatism did not flourish as a philosophical, i.e., normative, theory. Its impact faded after the demise of its most charismatic leaders. Although it is once again receiving serious attention from philosophers (Stich, 1983), my purpose is not to revive philosophical pragmatism. Instead, I intend to mine this strand of thought for an approach to cognitive change that does not begin with the assumption that people decide what to believe by estimating the probability of truth. The question is what they estimate instead.

 

5.1     Cognitive Utility As the Basis for Cognition

The pragmatist emphasis on action fits well with psychological theories of cognition. There is broad consensus on certain general features of what cognitive psychologists have come to call the cognitive architecture, i.e., the information processing machinery that underpins the higher cognitive processes (Polk & Seifert, 2002). At the centre of the cognitive architecture there is a limited-capacity working memory, connected to separate long-term memory stores for declarative and practical (skill) knowledge. The working memory receives input from sensory systems, and holds information that is being processed in reasoning and decision making. The purpose of the cognitive system is to generate behaviour that satisfies the person’s current goal. In the process, the system makes endless, lightning-quick choices: Which goal to pursue next (planning); which part of the environment to attend to next (attention allocation); which interpretation of perceptual input to prefer (perception); which memory structure to activate next (retrieval); which inference to carry out next (reasoning); and which change, if any, to make in the system’s knowledge base at any given time (learning).

The pragmatist stance invites the hypothesis that the variable that guides the never-ending choices is the cognitive utility of the relevant knowledge structures. To articulate this idea, imagine that each knowledge structure (concept or belief) in memory is associated with a numerical value that measures its past usefulness. When there is a choice to be made among knowledge structures, the one with the higher utility is preferred and gets to control discourse and action. If a knowledge structure is instrumental in generating a particular action, and if that action is successful, then the utility of that knowledge structure is adjusted upwards; if the action is unsuccessful, it is adjusted downwards. Each application of a knowledge structure is an opportunity for that structure to accrue utility (or to lose some of it, in the case of unsuccessful action). Over time, the cognitive utility associated with a knowledge structure will stabilize at some asymptotic value that estimates its usefulness in general. The distribution of utility values over the belief base represents the person’s experience of the world, as filtered through action rather than perception.
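To make this bookkeeping concrete, the following minimal Python sketch illustrates the idea. The class name, the fixed adjustment step, and the example beliefs are illustrative assumptions rather than part of the theory's formal statement; the theory itself only posits that utilities rise after successful action and fall after unsuccessful action.

```python
# Minimal illustration of utility bookkeeping for knowledge structures.
# All names and numbers are invented for illustration.

class KnowledgeStructure:
    def __init__(self, label, utility=0.0):
        self.label = label
        self.utility = utility   # running estimate of past usefulness

def choose(applicable):
    """Prefer the applicable structure with the highest utility."""
    return max(applicable, key=lambda s: s.utility)

def adjust(structure, success, step=0.1):
    """Nudge utility upwards after success, downwards after failure."""
    structure.utility += step if success else -step

beliefs = [KnowledgeStructure("impetus-style rule of thumb", 0.4),
           KnowledgeStructure("inertia-based rule of thumb", 0.1)]
acting_on = choose(beliefs)        # the higher-utility belief controls action
adjust(acting_on, success=True)    # the action worked, so its utility rises
```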

The cognitive utility hypothesis is not novel. A construct of this sort has been incorporated into the ACT-R model of the cognitive architecture proposed by John R. Anderson and co-workers (Anderson, 2007). In ACT-R, cognitive skills are encoded in sets of goal-situation-action rules (skill elements) that specify an action to be considered when certain conditions are satisfied by the current situation. In each cycle of operation, the architecture retrieves all the rules that have their conditions satisfied. It then selects one of those rules to be executed; that is, its action is taken. The action usually changes the current situation, and  the cycle starts over with a renewed evaluation of which rules have their conditions satisfied in the changed situation.

 In ACT-R, the utility u of rule i determines its probability of being selected for execution. In simplified form, that probability is given by

Eq. (1) Prob(i) = u(i) / [u(1) + u(2) + … + u(j)],

where the denominator is the sum of the utilities of all rules whose conditions are satisfied by the current situation. The probability that a particular rule i will be selected is thus proportional to its share of the total utility represented by the currently satisfied rules. The probability of being chosen for execution is thus a dynamic quantity that depends on context and changes from moment to moment as cognitive processing unfolds.

If rule i is selected and executed on operational cycle n, its utility is adjusted upwards or downwards, depending on the outcome. The adjustment is given by the equation

Eq. (2) u(i, n) = u(i, n-1) + α[R(i, n) – u(i, n-1)],

in which i is the relevant rule, n is the operational cycle, and R is the reward or feedback from the environment about the success of the executed action (reinforcement in the behaviourist sense). The magnitude

R(i, n) – u(i, n-1)

is the reward the rule realized in cycle n, R(i, n), over and above the utility it already possessed in the previous cycle of operation, u(i, n-1). The rate parameter α controls the proportion of that reward increment that is to be added to the current utility of the rule, u(i, n-1), to compute the utility of the rule in the following cycle, u(i, n). The reader is recommended to consult the original source for further technical details (Anderson, 2007, pp. 159-164).
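For readers who prefer running code, here is a short Python sketch that implements the two equations directly. The rule names, the reward value, and the learning-rate setting are illustrative assumptions; only the selection and update formulas themselves come from the simplified ACT-R account quoted above.

```python
import random

def selection_probabilities(utilities):
    """Eq. (1): each currently satisfied rule is selected with a probability
    proportional to its share of the total utility."""
    total = sum(utilities.values())
    return {rule: u / total for rule, u in utilities.items()}

def select_rule(utilities):
    """Sample one rule according to the probabilities of Eq. (1)."""
    probs = selection_probabilities(utilities)
    rules = list(probs)
    return random.choices(rules, weights=[probs[r] for r in rules], k=1)[0]

def update_utility(u_prev, reward, alpha=0.2):
    """Eq. (2): u(i, n) = u(i, n-1) + alpha * [R(i, n) - u(i, n-1)]."""
    return u_prev + alpha * (reward - u_prev)

# Illustrative run: two rules have their conditions satisfied; one is
# selected, executed, and rewarded, and its utility moves toward the reward.
utilities = {"rule_a": 2.0, "rule_b": 1.0}
print(selection_probabilities(utilities))      # rule_a: 0.667, rule_b: 0.333
chosen = select_rule(utilities)
utilities[chosen] = update_utility(utilities[chosen], reward=3.0)
print(chosen, round(utilities[chosen], 2))
```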

In the ACT-R theory, utility values are associated with skill elements (rules), and there is a separate system of theoretical quantities that pertain to the learning and application of declarative knowledge elements. To make the utility construct relevant for belief formation we have to hypothesize that utility values are associated with declarative knowledge structures (beliefs, concepts, informal theories) instead of (or in addition to) skill elements. Furthermore, the relation between cognitive utility and belief (subjective truth) has to be specified. One possible hypothesis is that there is a threshold such that, when the utility of a particular belief rises above that threshold, the person feels that the belief is true. A cognitive system that operates in this way would be significantly different from ACT-R and other cognitive systems described in the cognitive literature (Polk & Seifert, 2002).

A key question is what degree of utility new information will be assigned when it is first encoded into memory. At the outset, the new knowledge structure has no track record of supporting successful action, so one might decide that its initial utility is zero. This causes a paradox: If it is zero, it will always have lower utility than any competitor with even a modest track record, so it will never be activated or chosen, and therefore never have an opportunity to accrue utility. To an outside observer, it will appear as if the learner did not encode the new information, because his or her discourse and action continue to be guided by other knowledge structures.

There are multiple solutions to this theoretical problem. In Anderson’s ACT-R theory, the initial value is indeed set to zero, but a knowledge structure (rule) can be created multiple times, and each time the utility value is increased. Other solutions are possible. The initial value can be hypothesized to be random, or equal to the mean of the utility values of all knowledge structures in memory. There might be situations in which competing older knowledge structures do not apply, but the new one does, and those situations afford the newer knowledge structure opportunities to accrue utility.
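The alternatives listed in this paragraph can be written down as competing initialization rules. The sketch below (the function name, policy labels, and numerical settings are hypothetical) simply enumerates them; it is a notational aid under stated assumptions, not a claim about which rule is correct.

```python
import random
import statistics

def initial_utility(existing, policy="mean", creation_count=1, increment=0.5):
    """Candidate starting utilities for a newly encoded knowledge structure.

    existing       -- utilities of the structures already in memory
    policy         -- which of the solutions discussed in the text to apply
    creation_count -- how many times the structure has been (re)created
    increment      -- hypothetical bump per re-creation under the ACT-R-like policy
    """
    if policy == "zero":
        # ACT-R-style: start at zero, but each re-creation raises the value.
        return increment * (creation_count - 1)
    if policy == "random":
        return random.uniform(0.0, max(existing, default=1.0))
    if policy == "mean":
        return statistics.mean(existing) if existing else 0.0
    raise ValueError(f"unknown policy: {policy}")

print(initial_utility([0.8, 1.2, 2.2], policy="mean"))                     # 1.4
print(initial_utility([0.8, 1.2, 2.2], policy="zero", creation_count=3))   # 1.0
```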

Many scenarios that seem like straightforward instances of truth-based processing are equally well or better understood in terms of utility. For example, suppose that my eyes itch. I might have dry eyes, or I might suffer from an allergy attack. I decide to take an antihistamine pill. The itch disappears. In a logic-inspired analysis, the belief that I am suffering from an allergy outbreak is a hypothesis the truth of which is unknown. The connection between the belief that I have an allergy and the prediction that the itch will disappear is a step-by-step chain of inferences. The disappearance of the itch is an observation that verifies the hypothesis, and my estimate of the probability that I have an allergy increases as specified by, for example, Bayesian principles.

This account has weaknesses. One weakness is that I am not aware of any lengthy reasoning process to arrive at a testable prediction. The process that connects the belief “I have an allergy attack” with the fact that my itch stopped is a process of problem solving and planning (what should I do about my itchy eyes?), not a process of propositional inference. Another weakness is that the envisioned process is an instance of a logical fallacy: If P, then Q in conjunction with Q does not imply P. This is Popper’s classical critique of verificationism. But if the truth-based account has a logical fallacy at its core, how can people function? The utility-based account avoids this problem by postulating a direct link between the action outcome and the relevant belief: The action of taking the antihistamine worked, so my disposition to act on the allergy belief in the future is increased.
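The contrast between the two accounts can be made numerically explicit. In the toy comparison below, the priors, likelihoods, reward, and learning rate are all invented for illustration; the point is only that the Bayesian reading needs a likelihood model linking hypothesis to observation, whereas the utility reading needs only a success signal for the action that the belief guided.

```python
# Truth-based reading of the itchy-eyes episode: the disappearance of the
# itch is treated as evidence and the probability of "I have an allergy"
# is updated by Bayes' rule (all numbers are illustrative assumptions).
prior = 0.5                       # P(allergy)
p_relief_if_allergy = 0.9         # P(itch stops | allergy, antihistamine)
p_relief_if_dry_eyes = 0.3        # P(itch stops | dry eyes, antihistamine)
posterior = (p_relief_if_allergy * prior) / (
    p_relief_if_allergy * prior + p_relief_if_dry_eyes * (1 - prior))

# Utility-based reading: the action guided by the allergy belief succeeded,
# so the belief's utility is bumped directly (Eq. (2) form, alpha = 0.2).
utility = 0.5
utility = utility + 0.2 * (1.0 - utility)   # reward 1.0 for a successful action

print(round(posterior, 2), round(utility, 2))   # 0.75 vs. 0.6
```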

As the example illustrates, the difference between an account in terms of evidence, inference, and truth, on the one hand, and an account in terms of utility and action, on the other, can be subtle. How does that difference affect how we view conceptual change? The pragmatist stance focuses attention on action, the output side of the cognitive system, instead of perception, the input side. What the learner does matters more than what he or she hears or sees. Passive reception of information will not in and of itself have any cognitive consequences. Unless the learner retrieves a knowledge structure and uses it to decide what to do next, that knowledge structure cannot accrue utility and hence might remain dormant, even though the new information has been encoded accurately.

In the pragmatist perspective, new information does not replace the old. In a logic-based theory, two different beliefs can be mutually incompatible, which implies that a person cannot embrace both. The Earth is either round or flat; it is impossible to believe both assertions at once. However, the fact that knowledge structure i has utility u(i) is not incompatible with the fact that knowledge structure j has utility u(j). The belief that the Earth is flat might be useful for mapmaking purposes, while the belief that the Earth is round might be more useful for the purpose of circumnavigation. Many tasks in real life admit of multiple solutions, varying with respect to goal satisfaction, efficiency, and range of applicability. Evaluating beliefs with respect to their cognitive utility is thus very different from evaluating them with respect to their truth.

Falsification by contradictory evidence is, in principle, a one-shot affair. A single application of Modus Tollens is logically sufficient to bring down a belief and even an entire theory. But utility-based belief revision is necessarily a gradual matter. Once the utility rises to the point where a new knowledge structure is chosen to be the basis for action on at least some occasions, belief change is contingent on the outcomes of the resulting actions. The utility of structure i might be steadily rising with each application, while the utility of some competing knowledge structure j is gradually dropping. Eventually, the utility of the newer knowledge structure will surpass that of the older, competing structures, and rise above the threshold of belief. If changes in utility values are incremental, then this process is necessarily gradual.
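A toy simulation makes the gradualness visible. In the sketch below, the reward values, the learning rate, and the belief threshold are illustrative assumptions; under them, the newer structure needs several successful applications before it overtakes the older one and crosses the threshold.

```python
def update(u, reward, alpha=0.2):
    """Incremental utility update of the Eq. (2) form."""
    return u + alpha * (reward - u)

old_u, new_u = 1.0, 0.2     # the older structure starts with a big lead
threshold = 0.9             # hypothetical threshold of subjective belief

for cycle in range(1, 21):
    # Assume the newer structure tends to succeed and the older one to fail
    # in the situations the learner is now encountering.
    new_u = update(new_u, reward=1.2)
    old_u = update(old_u, reward=0.4)
    print(f"cycle {cycle:2d}: old = {old_u:.2f}, new = {new_u:.2f}")
    if new_u > old_u and new_u > threshold:
        print("the newer structure now controls discourse and action")
        break
```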

The most radical difference between a truth-based and a utility-based account of cognition pertains to the trigger of conceptual change. In the truth-based account, it is the failure of the older knowledge that drives belief revision. Change happens because already acquired concepts and beliefs have been found to be false, triggering dissatisfaction and a search for more veridical concepts and beliefs to replace them. If there is no failure, there is no push for change. In the utility-based account, on the other hand, new information need not wait for falsification or dissatisfaction with prior beliefs. It is the success of the newer concepts and beliefs that drives the change. No dissatisfaction with the old belief is required, only a recognition that the newer belief is an even more useful basis for action. Change is driven by success, not failure (Ohlsson, 2009, 2011). However, before the utility-based perspective can be adopted, some plausible objections must be dealt with; this is the task of Section 6.

 

6.       Two objections

The purpose of this section is to address two objections that must have occurred to the reader. The first is that evolution through natural selection ought to have pushed human cognition in the direction of the normative theory of belief management (Principles 1-4), and the second is that the behaviour of scientists appears to conform to the normative theory.

 

6.1     Natural Selection for Truthfulness?

One might argue that the shift from estimates of the probability of truth to estimates of cognitive utility is unimportant. After all, how can a belief be useful unless it is, in fact, true? If only true beliefs are useful, then the selective pressures that drove the evolution of human cognition must have pushed the belief management processes in the learner’s head to conform at least approximately to Principles 1- 4. How could our hunter-gatherer ancestors have survived unless their beliefs corresponded to reality? The instrumental value of veridicality in the struggle for survival implies that the human cognitive architecture is designed to derive beliefs from evidence.

But natural selection cannot have operated directly on the truthfulness of beliefs. The probability of surviving long enough to mate and to raise the resulting offspring to reproductive age is a function of how the individual behaves, not of how he or she thinks. What mattered during human evolution cannot have been the truth of beliefs per se, but the effectiveness of human behaviour. Consistent selection in the direction of effective action would create a utility-based rather than truth-based system.

The distinction between truth and utility would be of minor importance, if the two were perfectly correlated. However, false beliefs can lead to successful action. For example, it does not matter what belief one has about the causes of severe weather, as long as that belief implies that when storm clouds gather, it is time to seek shelter. The belief that lightning is a sign of the anger of the gods and the belief that it is an electrical discharge are equally good reasons to get out of the way. The belief that a certain medical condition is caused by an evil spirit and that the spirit can be exorcised by ingesting a certain herb can be as successful as an account of the disease in terms of bacteria, white blood cells, etc., if the relevant herb contains traces of, for example, an antibiotic substance.

An even stronger example is provided by the 14th century physicist Buridan’s impetus theory of mechanical motion (Claggett, 1959; Robin & Ohlsson, 1989). A central principle in this theory says that to keep an object in motion requires the continuous application of force, the opposite of the principle of inertia that is at the centre of Newtonian mechanics. However, the impetus principle holds on the surface of the Earth due to the universal presence of friction. If the goal is to keep an object moving, or to make it move further or faster, the impetus concept is as useful a guide to action as the theory that physicists teach (apply more force).

In short, truth and utility are only partially correlated, and evolution has no way of selecting for the truth of beliefs directly, but only for the success of an individual’s struggle for survival. Evolutionary considerations thus support rather than contradict the hypothesis that utility is the key variable in belief formation.

 

6.2     The Behaviour of Scientists

The reader would be excused for thinking that the present author is engaged in a self-defeating enterprise: To use evidence and arguments to make the reader believe that people do not use evidence and arguments when deciding what to believe. This article is itself an attempt to base belief in this matter on evidence. More generally, scientists do base their theories on evidence and scientists are people, so it seems unreasonable to claim that this is not a common cognitive capability.

Gopnik and Meltzoff (1997) have emphasized this connection between the procedures of scientific knowledge creation and individual belief formation: “The central idea of [our] theory is that the processes of cognitive development in children are similar to, indeed perhaps even identical with [sic], the processes of cognitive development in scientists.” (p. 3) Indeed, they have stated their hypothesis quite clearly: “…the most central parts of the scientific enterprise, the basic apparatus of explanation, prediction, causal attribution, theory formation and testing, and so forth, is not a relatively late cultural invention but is instead a basic part of our evolutionary endowment.” (pp. 20-21) The consequence is that cognition can be explained with normatively correct processes such as Bayesian inference (Gopnik et al., 2004).

The utility-based view explored in this article does not deny that people can acquire the higher-order cognitive skills needed to engage in the methods and procedures of science. It does claim that those methods and procedures are acquired. Scientists are professional theorizers; they engage in belief formation (a.k.a. hypothesis testing) deliberately and on purpose, with a high degree of awareness. To be able to do this, they undergo a multi-year training process called graduate school. They are supported by a wide variety of tools such as special-purpose statistical software that embody the principles of the normative view of belief management. Furthermore, scientific research takes place within a social context, the scientific discipline, that enforces adherence to the normative theory. For example, a scientist who revises his or her theory to improve its fit to empirical data (Principle 4) is more admired than someone who continues to advocate a favourite theory in the face of counterevidence. The behaviour of scientists shows that people can acquire the high-level skills needed to function at least approximately as prescribed by the normative theory. But this does not imply that the basic processes of the cognitive architecture conform to the normative theory.

To cast the procedures of science as a description of conceptual change in the individual is to confuse two levels of description: the level of the basic processes of the cognitive architecture (the “basic part of our evolutionary endowment”), on the one hand, and the level of acquired higher-order strategies and skills, on the other. The arguments put forth in this paper concern the basic processes. I know of no reason to believe that “the most central parts of the scientific enterprise, the basic apparatus of explanation, prediction, causal attribution, theory formation and testing” is part of our “evolutionary endowment.” The late arrival of science in human history, its invention by one culture at one time, and the extensive training individuals need to conduct scientific research make it highly implausible that anything like the “basic apparatus” of science is among our “evolutionary endowment.” Instead, the cognitive apparatus of science is precisely “a relatively late cultural invention.”

The relation between the basic processes of cognitive change and the procedures of science is the opposite of the one claimed by Gopnik and Meltzoff (1997). Rather than scientific practices explaining how cognitive change happens in children and lay adults, the relationship should be construed the other way around: A theory of the basic cognitive processes should explain how it is possible to acquire the higher-order strategies for belief management that approximate the normative theory in Principles 1-4. The utility-based perspective has other implications as well, three of which are discussed in Section 7.

7.       Implications

If we adopt the utility-based perspective, what follows? From the point of view of basic research on conceptual change, it implies a re-evaluation of existing theoretical constructs, methodologies, and applications. Traditionally, research on conceptual change and belief formation has been perception-centric: The focus has been on what the learner sees and hears, and how he or she processes the perceived information. The utility-based perspective, in contrast, implies a need to focus on what the learner does, when and where he or she succeeds or fails, and on what information is activated, retrieved, and used to guide action. A learning trajectory is primarily to be defined in terms of tasks undertaken, and only secondarily in terms of information encountered. As a side effect of such a re-focusing, the traditional concepts, tools, and puzzles regarding truth inherited from philosophy and logic will become comparatively less important.

The perception-centric bias of cognitive research in general and cognitive studies for education in particular is driven, in part, by the practicalities of psychological experimentation. The experimenter controls the subject’s task environment, so he or she can create complex but well-specified conditions and contrasting situations by varying the stimulus. Such variations can easily be described in research reports. The subjects’ behaviours, on the other hand, are only easy to report and interpret in an intersubjectively valid way if they consist of simple, easy-to-record events, like pushing a button or placing a mark on a rating scale. The pragmatist perspective implies that this style of empirical inquiry runs the risk of eliminating from the researchers’ consideration the central subject matter of cognition, namely complex, temporally extended, hierarchically structured, and dynamically coordinated sequences of actions in the service of human goals and objectives. The pragmatist perspective implies a need for a period of methodological innovation in which researchers develop new techniques to record and interpret complex behaviours.

From the point of view of instructional application, the utility-based account poses multiple challenges: How to stimulate students to encode knowledge that they have no reason to believe, and that is only tangentially relevant to their own actions? How to design situations in which the new knowledge presented in the course of instruction, but not the learners’ prior knowledge, applies, so that the new knowledge can accrue utility? How to provide learners with multiple opportunities to apply new knowledge without resorting to mind-numbing drill and practice? These questions are quite different from the questions of why misconceptions are robust, what evidence will convince a student that his or her misconceptions are in fact inaccurate, or how to train students to pay attention to evidence, so pursuing them will likely lead educational researchers in novel directions.
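The core of the utility-based account can be made concrete with a small computational sketch. The following toy simulation is a deliberately minimal illustration, not a model proposed in the conceptual change literature; the belief names, success rates, initial utilities, and the proportion of novel tasks are all invented for the purpose of the example. It implements three of the assumptions discussed above: a belief accrues utility only when it is applied and the resulting action succeeds; some tasks are arranged so that only the newly taught conception applies, giving it an opportunity to accrue utility; and on all other tasks the learner acts on whichever belief has accumulated the most utility.

import random

# Two competing beliefs: an entrenched misconception and a newly taught conception.
# "utility" is the accumulated record of successful use; "success_rate" is how often
# the belief works when acted upon (all values are hypothetical, chosen for illustration).
beliefs = {
    "misconception":  {"utility": 10.0, "success_rate": 0.3},
    "new_conception": {"utility": 1.0,  "success_rate": 0.9},
}

def apply_belief(name):
    """Act on the named belief; a successful outcome raises that belief's utility."""
    belief = beliefs[name]
    if random.random() < belief["success_rate"]:
        belief["utility"] += 1.0

familiar_choices = []
for task in range(300):
    if random.random() < 0.4:
        # A task designed so that only the newly taught conception applies,
        # giving it an opportunity to accrue utility.
        apply_belief("new_conception")
    else:
        # On familiar tasks the learner acts on whichever belief has accumulated
        # more utility; success, not evidence or truth, is what is being tallied.
        dominant = max(beliefs, key=lambda n: beliefs[n]["utility"])
        apply_belief(dominant)
        familiar_choices.append(dominant)

print(familiar_choices[:10])   # with these parameters: almost always the misconception
print(familiar_choices[-10:])  # by the end: almost always the new conception
print({name: round(b["utility"], 1) for name, b in beliefs.items()})

The behaviour of this sketch mirrors the claims defended in this paper: the shift in which belief guides action is gradual, it is driven by the successes of the better idea rather than by the failures of the misconception, and the old belief is never deleted, merely out-competed.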

8.       Conclusion

Throughout the history of science, interdisciplinary work has often been innovative and path-breaking. At the beginning of conceptual change research, there was every reason to believe that drawing upon a variety of disciplines was a productive way to proceed. However, researchers (including the present author) overlooked the distinction between normative and descriptive disciplines, fell into the Ideal-Deviation paradigm, and spent their theoretical energies explaining the main observed deviation from the normative theory: that students do not revise their prior conceptions when confronted with counterevidence. But the normative idea that people ought to base their beliefs on evidence is irrelevant for the empirical study of cognition. There is little or no evidence that people base any of their beliefs on evidence, and considerable evidence that they do not. If they do not, then it is no surprise that science courses fail to impact students’ beliefs about scientific phenomena, and efforts to explain this supposed phenomenon are unnecessary.

To make progress in understanding conceptual change, researchers need to adopt a resolutely naturalistic approach that makes no normatively inspired assumptions about belief formation and belief revision. The pragmatist view that cognition evolved to support successful action, and that beliefs are evaluated on the basis of their cognitive utility rather than their probability of being true, is an alternative starting point for conceptual change research. The utility-based perspective implies that action is necessary for conceptual change, that old and new beliefs are not mutually incompatible, that conceptual change is necessarily gradual, and that change is driven not by the failures of misconceptions but by the successes of better ideas. A research program that articulates this perspective would replace the traditional perception-centric bias of psychological research with an action-centric approach that foregrounds the cognitive consequences of complex actions.

 

References

Allport, G. W. (1958/1979). The nature of prejudice (2nd ed.). Reading, MA: Addison-Wesley.

Anderson, J. R. (2007). How can the human mind occur in the physical universe? (pp. 159-165). New York: Oxford University Press.

Broughton, S. H., Sinatra, G. M., & Nussbaum, E. M. (2013). “Pluto has been a planet my whole life!” Emotions, attitudes, and conceptual change in elementary students’ learning about Pluto’s reclassification. Research in Science Education, 43, 529-550.

Butterfield, H. (1957). The origins of modern science 1300-1800 (revised ed.). Indianapolis, IN: Hackett.

Carey, S. (2009). The origin of concepts. New York: Oxford University Press.

Chi, M. T. H. (2005). Commonsense conceptions of emergent processes: Why some misconceptions are robust. The Journal of the Learning Sciences, 14, 161-199.

Chi, M. T. H. (2008). Three types of conceptual change: Belief revision, mental model transformation, and categorical shift. In S. Vosniadou (Ed.), Handbook of research on conceptual change (pp. 61-82). Hillsdale, NJ: Erlbaum.

Claggett, M. (1959). The science of mechanics in the Middle Ages. Madison, WI: University of Wisconsin Press.

Duit, R., & Treagust, D. F. (2003). Conceptual change: A powerful framework for improving science teaching and learning. International Journal of Science Education, 25(6), 671-688.

Evans, J. St. B. T. (2007). Hypothetical thinking: Dual processes in reasoning and judgment. New York: Psychology Press.

Festinger, L. (1957/1962). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.

Frank, P. (1952). The origin of the separation between science and philosophy. Proceedings of the American Academy of Arts and Sciences, 80(2), 115-139.

Gabbay, D., & Woods, J. (2001). The new logic. Logic Journal of the Interest Group in Pure and Applied Logics, 9, 141-174.

Gopnik, A., & Meltzoff, A. N. (1997). Words, thoughts, and theories. Cambridge, MA: MIT Press.

Gopnik, A., Glymour, C., Sobel, D. M., Schulz, L. E., & Kushnir, T. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111, 3-32.

Johnson-Laird, P. N. (2006). How we reason. New York: Oxford University Press.

Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus, and Giroux.

Klauer, K. C., Stahl, C., & Erdfelder, E. (2007). The abstract selection task: New data and an almost comprehensive model. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 680-703.

Köhler, W. (1938/1966). The place of value in a world of facts. New York: Liveright.

Kuhn, T. S. (1957). The Copernican revolution: Planetary astronomy in the development of Western thought. New York: Random House.

Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of Chicago Press.

Lakatos, I. (1980). Philosophical papers (vol. 1): The methodology of scientific research programmes. Cambridge, UK: Cambridge University Press.

Limón, M. (2001). On the cognitive conflict as an instructional strategy for conceptual change: A critical appraisal. Learning and Instruction, 11, 357-380.

Margolis, H. (1987). Patterns, thinking, and cognition: A theory of judgment. Chicago, IL: University of Chicago Press.

Margolis, H. (1993). Paradigms and barriers: How habits of mind govern scientific beliefs. Chicago, IL: University of Chicago Press.

Nersessian, N. J. (2008). Creating scientific concepts. Cambridge, MA: MIT Press.

Offit, P. A. (2011). Deadly choices: How the anti-vaccine movement threatens us all. New York: Basic Books.

Ohlsson, S. (2009). Resubsumption: A possible mechanism for conceptual change and belief revision. Educational Psychologist, 44, 20-40.

Ohlsson, S. (2011). Deep learning: How the mind overrides experience. New York: Cambridge University Press.

Oreskes, N. (2004). Beyond the ivory tower: The scientific consensus on climate change. Science, 306, 1686.

Osler, M. J. (Ed.). (2000). Rethinking the scientific revolution. Cambridge, UK: Cambridge University Press.

Peirce, C. S. (1878). How to make our ideas clear. Popular Science Monthly, 12, 286-302. [Reprinted in N. Houser and C. Kloesel (Eds.), The essential Peirce: Selected philosophical writings (vol. 1, pp. 124-141). Bloomington, IN: Indiana University Press.]

Pettigrew, T. F. (1998). Intergroup contact theory. Annual Review of Psychology, 49, 65-85.

Polk, T. A., & Seifert, C. M. (Eds.). (2002). Cognitive modeling. Cambridge, MA: MIT Press.

Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of a scientific conception: Toward a theory of conceptual change. Science Education, 66, 211-227.

Putnam, H. (2002). The collapse of the fact/value dichotomy and other essays. Cambridge, MA: Harvard University Press.

Rakison, D. H., & Poulin-Dubois, D. (2001). Developmental origin of the animate-inanimate distinction. Psychological Bulletin, 127(2), 209-228.

Robin, N., & Ohlsson, S. (1989). Impetus then and now: A detailed comparison between Jean Buridan and a single contemporary subject. In D. E. Herget (Ed.), The history and philosophy of science in science teaching. Proceedings of the First International Conference (pp. 292-305). Tallahassee, FL: Florida State University, Science Education & Dept. of Philosophy.

Rokeach, M. (1960). The open and closed mind. New York: Basic Books.

Rokeach, M. (1970). Beliefs, attitudes, and values: A theory of organization and change. San Francisco, CA: Jossey-Bass.

Schiller, F. C. S. (1905). The definition of ‘pragmatism’ and ‘humanism’. Mind, 14, 235-240.

Shipstone, D. M. (1984). A study of children’s understanding of electricity in simple DC circuits. European Journal of Science Education, 6, 185-198.

Sinatra, G. M., & Pintrich, P. R. (Eds.). (2003). Intentional conceptual change. Mahwah, NJ: Lawrence Erlbaum.

Stich, S. P. (1983). From folk psychology to cognitive science: The case against belief. Cambridge, MA: MIT Press.

Strike, K. A., & Posner, G. J. (1992). A revisionist theory of conceptual change. In R. A. Duschl and R. J. Hamilton (Eds.), Philosophy of science, cognitive psychology, and educational theory and practice (pp. 147-176). New York: State University of New York Press.

Thagard, P. (1992). Conceptual revolutions. Princeton, NJ: Princeton University Press.

Vosniadou, S., Baltas, A., & Vamvakoussi, X. (Eds.). (2007). Reframing the conceptual change approach to learning and instruction. Amsterdam, The Netherlands: Elsevier Science.

Vosniadou, S., & Brewer, W.F. (1992). Mental models of the earth: A study of conceptual change in childhood. Cognitive Psychology, 24, 535-585.

Vosniadou, S., & Skopeliti, I. (2013). Conceptual change from the framework theory side of the fence. Science & Education. doi:10.1007/s11191-013-9640-3

Wason, P. C., & Johnson-Laird, P. N. (1972). Psychology of reasoning: Structure and content. London, UK: B. T. Batsford.

Watson, J. D., & Crick, F. H. C. (1953). A structure for deoxyribose nucleic acid. Nature, 171, 737-738.



[1] Indeed, many philosophers insist that a belief does not qualify as knowledge unless it is true and certain.