Frontline Learning Research Vol. 12 No. 1 (2024) 16 - 33
ISSN 2295-3159

The development of visual expertise in a virtual environment: A case of maritime pilots in training

Charlott Sellberg1, Elin Nordenström2 & Roger Säljö2

1University of Oslo, Norway
2University of Gothenburg, Sweden

Article received 4 January 2023 / Article revised 9 October 2023 / Accepted 10 December 2023 / Available online 12 January 2024

Abstract

This study connects to an ongoing discussion about the limits and affordances of simulators as realistic and relevant contexts for professional learning, in this case the development of visual expertise. Earlier studies of simulator-based maritime pilot training conclude that there are risks associated with so-called negative skills transfer due to a lack of photorealism in simulator environments. The aim of this study is to carefully examine how visual expertise develops in and through training in a simulated environment. Through a practice-based approach to the development of visual expertise, and by using qualitative interaction analysis of video-recorded training sessions, the analytical focus is directed towards maritime pilot trainees’ talk about imperfections and inconsistencies in the virtual environment during exercises in a high-fidelity bridge simulator. Considering the multi-layered nature of the maritime pilot’s visual expertise, the findings show that the maritime pilots in training noticed and adapted to the specific methodological and technological challenges of manoeuvring a simulated vessel. During such reflection-in-action, they also commented on and explored the differences between navigating in a simulator, on the one hand, and navigating on board a ship, on the other. Instead of concluding that there is a risk of negative skills transfer following from the differences between the two contexts of navigation, we argue that the challenges introduced by the representations encountered when training in a virtual environment may add to the expertise of the trainees and lead to enriched conceptual, methodological, and technical knowledge regarding the specificities of visually demanding and ambiguous navigation situations. In this way, the study contributes to bringing our understanding of learning in virtual environments to the frontline of learning research.

Keywords: Visual expertise, simulator fidelity, skills transfer, conceptual change

Corresponding author email: charlott.sellberg@iped.uio.no DOI: https://doi.org/10.14786/flr.v12i1.1217

1. Introduction

A maritime pilot is an expert navigator, specialised in supporting maritime officers in manoeuvring their vessels in challenging maritime territories. Through intimate knowledge of the fairway, and experience of manoeuvring many different types of vessels, the maritime pilot contributes to ensuring that maritime and environmental safety can be maintained when vessels operate in pilotage-obliged waters (see, e.g., Lützhöft & Nyce, 2006). In addition to skills in ship manoeuvring, navigation, and seamanship, pilots also need the ability to interact with various types of technologies, cultures and crews, as each ship is unique in terms of equipment and instruments. Central to the expertise of the maritime pilot is the skilled perception, interpretation and evaluation of the domain’s visual materials (see Gegenfurtner et al., 2019). Building on the pioneering work on “professional vision” by Goodwin (1994), visual expertise is described as the “superior performance of professionals in processing domain-specific visual information” (Gegenfurtner et al., 2022, p. 3). This ability is particularly important in visually intense domains, such as medicine, aviation, and, as in this case, maritime navigation, where the skilled perception, interpretation, and evaluation of critical visual elements by professionals is crucial.

Traditionally, maritime pilot training has been based on apprenticeship on board ships, where the skills and competencies specified above have been developed through years of experience travelling a specific territory. Contemporary maritime pilot training instead combines periods of apprenticeship with training on board miniature model ships as well as simulator-based training. While these novel methods are claimed to offer unique and, in some respects, better training opportunities, for example by including exercises in handling risky and unusual situations in safe environments (Kim et al., 2021), there are also concerns expressed about how well such simulated activities make efficient learning possible. Studies conducted in various safety-critical domains warn that a lack of fidelity, i.e., resemblance to the real environment, and other shortcomings of the simulator may lead to trainees learning how to deal with the inaccurate conditions of the simulator environment rather than the conditions that apply in their future work settings, so-called negative skills transfer (see, e.g., Hontvedt, 2015; Petersen et al., 2022; Schul et al., 2019; Taber, 2013). In a study of simulation-based training of helicopter underwater escape performance, for instance, Taber (2013) stresses that “a specific skill practiced in a simulator may not be effective in a real helicopter if for example the actual helicopter windows are different than those used in simulation” (p. 184). In Petersen et al. (2022), a treatment group of novice surgeons used a virtual reality (VR) vitreoretinal simulator for pre-training of basic surgical skills. Overall, the VR training had no significant effect on the performance curve compared to traditional pre-training, in either a positive or a negative sense. However, as noted by Petersen et al. (2022), the control group performed better than the treatment group in one of the investigated modules. Some frustration was observed among the participants in the treatment group when moving from simple to more complex tasks, which is interpreted as a possible indication of negative skills transfer. Schul et al. (2019) take the debate a step further, arguing that negative skills transfer connected to the lack of anatomic accuracy and realistic tissue behaviours of the urological simulators used in training was the reason behind the high numbers of injuries and infections resulting from catheterisation in real clinical practice. In the context of maritime pilot training, Hontvedt (2015) shows how imperfections and inconsistencies in the visual lookout of the simulator environment came into conflict with the trainees’ professional vision and forced them to change their way of working in order to adapt to the shortcomings of the simulator. In line with these results, Hontvedt (2015) warns that less experienced trainees might adopt inaccurate work practices due to poor photorealism. Hontvedt (2015) argues that “lack of fidelity may harm the logic of the actual work task”, and that the training participants were shifting their focus “from performing within a simulated work environment to simply manipulating the simulated model” (p. 82). Following this, Hontvedt (2015) emphasises the importance of proper instructional guidance to maintain focus on the learning objectives of the simulation exercises.

While such a warning and recommendation can be considered justified and worthy of consideration, especially in relation to the training of inexperienced trainees, it should be noted that studies explicitly concerned with the development of visual expertise in various professional domains have shown that the general idea of skills transfer ‒ whether positive or negative ‒ might be too simplistic a way of analysing professional learning in technological work settings (e.g., Gegenfurtner et al., 2009; Lehtinen et al., 2020; Nivala et al., 2012). Using the metaphor of different layers of conceptual change for professional learning in a biomedical setting, Lehtinen et al. (2020) show how experts learn new ways of conceptualising the unfamiliar conditions of working with new visualisation technologies for diagnosis. In this case, learning goes beyond simply transferring knowledge between work settings. Rather, it is a matter of adapting to new methods and working conventions, while the basic principle, the biomedical concept these experts needed to understand, remained stable across work settings. In this process, Lehtinen et al. (2020) found that the experts spent significant time analysing the relevant aspects of unfamiliar conditions in order to adapt their familiar working methods to the new visualisation technology. Furthermore, studies of simulation-based training in various domains show that discrepancies between the simulator environment and the work setting, if properly addressed, may provide opportunities for fruitful discussions and learning rather than posing a risk of participants adopting incorrect work methods (e.g., Hindmarsh et al., 2014; Hontvedt & Øvergård, 2020; Rystedt & Sjöblom, 2012; Sellberg, 2017). Sellberg (2017), for example, explores how the embodied activity of ship handling is trained in high-fidelity navigation simulators that, while mimicking many of the features of the bridge of a real ship, lack kinaesthetic and proprioceptive feedback and thereby do not simulate the sense of movement in an authentic way. As shown by the study, these built-in inconsistencies of the simulator environment provided instructional opportunities and thus facilitated rather than hindered the maritime students’ learning of ship sense. Similarly, Hindmarsh et al. (2014), in a study of simulation-based clinical skills training for undergraduate dental students, demonstrate how differences between the simulated environment and the authentic work environment provided teachable moments where the instructors had the opportunity to highlight and explain the appropriate performance of, and the rationale behind, specific occupational procedures. More specifically, this was done by the instructors noting problematic performance by the students occasioned by deficiencies or constraints in the simulator and then intervening, presenting the students with what Weeks (1985, 1996) has termed ‘a contrasting pair’: a description of the incorrect performance of the procedure coupled with a presentation of the preferred way of performing it.

In the present study, a similar way of contrasting non-preferred conduct with a preferred version, as Hindmarsh et al. (2014) and Weeks (1985, 1996) put it, has been observed in training exercises for maritime pilot trainees in a high-fidelity bridge simulator. Unlike in the aforementioned studies, however, it is the training participants themselves who identify problems and present a form of contrasting pairs, thereby transforming into learnable moments those situations where visual deficiencies or peculiarities in the simulator environment prevent them from adopting established ways of working. Through a practice-based perspective on the development of visual expertise, and by taking the multi-layered nature of professional skills into account, this study seeks to further explore such situations where issues of simulator fidelity are addressed. Thus, directing the analytical focus towards maritime pilot trainees’ communication about visual discrepancies between the simulator environment and the “real” setting, the aim is to carefully examine how visual expertise is developed in and through training in a simulated environment. The following research questions are in focus: a) How do maritime pilot trainees identify and handle visual imperfections and inconsistencies in the simulator environment? b) What are the implications of adapting to such shortcomings in the simulator environment for the development of visual expertise?

With the analytical attention directed towards naturally occurring dialogues between trainees, the research design builds on a videography approach (Knoblauch & Schnettler, 2012). Following this approach, the study draws on focused ethnography at a Scandinavian simulator centre. Video records from a course on advanced ship handling in maritime pilot training have been used to conduct qualitative interaction analyses (Luff & Heath, 2019).

Before we proceed to present selected episodes from the video recordings intended to shed light on the phenomenon under study, we briefly outline the differences between the dominant neuropsychological and cognitive perspectives on the development of visual expertise and the practice-based perspective adopted in the current study.

2. A practice-based perspective on the development of visual expertise

The study of visual expertise has a long tradition in science, involving research fields such as cognitive neuroscience, cognitive psychology, and the learning sciences (e.g., Boucheix, 2017; Gegenfurtner & van Merriënboer, 2017; Gegenfurtner et al., 2022). Whilst the unit-of-analysis in cognitive neuroscience is the neurophysiological activity of the individual (e.g., Gegenfurtner et al., 2017), cognitive psychology focuses attention on the details of the individual’s eye movements and/or verbal reports of cognitive processes during visually intensive tasks (e.g., Helle, 2017; Jarodzka & Boshuizen, 2017). At the neural level, findings show that experts display stronger activation patterns in the brain areas associated with encoding and storing visual objects and events, indicating a non-conscious and stimulus-driven indexical relation between a visual element from their work setting and the corresponding mental representation (Gegenfurtner & van Merriënboer, 2017). At a behavioural level, visual perception develops with experience, from slow search-to-find modes towards more efficient, holistic modes of seeing. Moreover, beyond being faster in their visual search, experts exhibit a higher rate of accuracy when decisions are made based on visual elements in their work setting (Gegenfurtner & van Merriënboer, 2017). There are also findings which show that experts, in comparison to novices, are better at verbalising the perceptual features of visual elements in their work setting, and they are also better at conceptualising their perceptual activities (Gegenfurtner & van Merriënboer, 2017). From these findings, we can arrive at an understanding of the development of visual expertise as a process that starts with visual objects and events being stored in the cerebral cortex, after which the perceptual-conceptual relation develops (Gegenfurtner & van Merriënboer, 2017).

These illustrations of the neuropsychological correlates of visual activities are interesting in their own right. However, in order to develop teaching and learning, we need a research agenda for how to study the development of visual expertise, that is, to understand how the emergence of “a good eye” is interactionally accomplished and socially organised within a professional field. In his seminal study on professional vision, Goodwin (1994) shows that visualisation practices evolve through an increasing coordination with the requirements and expectations of a profession. Professional vision is described as the discursive practices that “become the insignia of a profession’s craft: the theories, artifacts, and bodies of expertise that distinguish it from other professions” (Goodwin, 1994, p. 606). While Goodwin’s seminal work on professional vision suggests a research program for the study of concrete practices connected to visuality, recent studies have started to map out the conceptual changes and transitions of expertise involved in learning and developing visual expertise when working with new visualisation technologies (Gegenfurtner et al., 2019; Lehtinen et al., 2020; Nivala et al., 2012). Lehtinen et al. (2020) describe how theories of conceptual change initially were developed to explain the difficulties students face when attempting to understand scientific concepts, often in school settings. In short, the theory is based on the notion of different belief systems, commonly developed through everyday experiences and prior learning in the school system, and on the possibility that innate predispositions come into conflict with scientific theories. Consequently, there might be resistance to learning new scientific concepts or problems of coping with novel task demands. In Lehtinen et al. (2020), the focus was on horizontal conceptual changes, i.e., “increased conceptual knowledge and possible conceptual change” (p. 6), when medical experts developed their diagnostic skills through learning to interpret visualisations from novel x-ray technologies used within their field. They found that the experts could not easily transfer their existing skills to the novel technologies; instead, they had to learn new diagnostic methods in order to make sense of the new visualisations. However, all experts were familiar with the basic scientific principles underlying both of the medical visualisations under study. Thus, the struggles of re-learning were connected with coping with method-specific knowledge and professional practices, rather than with the underlying scientific concepts. In the end, all of the experts in Lehtinen et al. (2020) were able to adapt their method-specific and professional practices to work with the new technology. Moreover, the changed working methods led the experts to develop more advanced visual perceptions, including enriched scientific knowledge of anatomy, detailed technical knowledge of the colour nuances seen in x-rays, as well as new professional practices of diagnosing. This is explained in Lehtinen et al. (2020) as a consequence of “the multi-layer nature of professional skills” (p. 8), involving a system of various layers of conceptual, methodological, and technical knowledge.

In the present study, which aims to examine how visual expertise develops in and through training in a simulated environment, the metaphor of multiple layers of professional skills and the notion of conceptual change serve as starting points for advancing our understanding of how the complex professional practice of maritime piloting can be taught and learned in a virtual environment. In the next section of the article, the empirical case, in terms of setting, participants, method, data, and analytical approach, is presented.

3. The empirical case

Maritime pilot training is a one-year specialisation program, organised by national maritime administrations and undertaken by master mariners with extensive experience of working as marine officers. The training program consists of three parts. First, there is an introduction period in which the applicant goes through a recruitment process for probationary employment as a maritime pilot. After admission to the program, there is a package of basic courses addressing a wide range of professional aspects of working as a maritime pilot, including legal and administrative course content, simulator-based training of teamwork skills, as well as advanced navigation and manoeuvring training. In the last part, the maritime pilot trainee undertakes on-the-job training and takes a simulator-based test of competence. Upon passing this test, the maritime pilot trainee becomes a certified maritime pilot, and the probationary employment transitions into a permanent position.

Figure 1. Maritime pilots in training approaching port in a full-mission bridge simulator.

The course under study takes place in the second part of training and focuses on advanced ship handling. During data collection, the course gathered six maritime pilot trainees for a week of intensive training guided by two highly experienced maritime pilots serving as instructors. The trainees and the instructors in our study are all male, and all have given their written, informed consent to participate in the study. Since the trainees are all experienced master mariners, their ages vary from the late twenties to the late forties. In general, the older trainees have more work experience, approximately 20 years, while the younger trainees have more experience of simulator-based training from their more recent master mariner education. The training activities in this study take place at the simulator centre, which is equipped with a full-mission VTS simulator as well as four high-fidelity full-mission bridge simulators. The full-mission bridge simulators mimic a ship’s bridge with high accuracy in terms of navigation equipment. The simulators are equipped with state-of-the-art technologies used on board ships, such as radars, automated plotting and tracking aids, electronic chart displays, gyro- and magnetic compasses, rudder angle and rate-of-turn indicators, echo sounders and different steering devices. There are also screens with projections of the marine environment, representing the views from the front windows, the bridge wings, and the rear (Figure 1). In these full-mission bridge simulators, the maritime pilot trainees train in pairs when manoeuvring vessels in port areas, in shallow and narrow waters, and during tugboat manoeuvring, operations that are highly demanding for both the ship and its crew.

3.1. Method, data, and analytical approach

Videography can be described as a focused ethnography in which observations of social interaction are documented by video recordings (Knoblauch & Schnettler, 2012). The approach is dedicated to the study of naturally occurring phenomena, that is, instead of designing or controlling the learning activities under study, the purpose is to capture the everyday social interaction that normally takes place in an instructional or any other setting. Before filming started, the first and second authors made two visits to the simulator centre during the autumn of 2021. The aim of the first visit was to introduce ourselves, get a tour of the premises, and discuss a course suitable for our interest in the development of visual expertise in simulated environments. A second visit was then made with the aim of making a detailed plan for collecting video data.

In November 2021, the video data were gathered by the first and second authors during one week of training. Handycam® cameras on tripods were placed in all three bridge simulators in use, providing mainly a view of the participants’ work at the bridge panel and the front window (see Figure 1). Another Handycam® camera was placed in the adjacent briefing/debriefing room where the participants gather before and after each simulated scenario. In order to ensure satisfactory audio uptake of the collaborative discussions before and after each session in the simulator, a shotgun microphone was placed in the briefing/debriefing room. The collected data capture four full days of simulator training and include all stages of training: from the pre-simulation introduction (the so-called briefing), through the simulated scenario, to the post-simulation debriefing. This covers the entire training process in the course, and in total approximately 130 hours of video-recorded simulator-based training were documented.

The video records of simulations form the basis for qualitative interaction analyses. In qualitative interaction analysis, the unit-of-analysis consists of verbal utterances, bodily conduct, and interaction with material and digital objects, observable through turns of talk between participants (Luff & Heath, 2019). In the first step of the analysis, the video recordings were reviewed and catalogued in order to obtain an overview of the entire data corpus (Heath et al., 2010). In the next step of the analysis, the catalogue of simulations was revisited with a focus on identifying trainees’ talk about imperfections and inconsistencies in the simulator environment. In all, 30 episodes on this theme were identified and categorised according to the type of “glitch” discussed by the trainees. The categories of imperfections and inconsistencies noticed by the participants include 1) frozen screens in need of restart (n=7), 2) malfunctioning instruments (n=4), 3) lack of proprioceptive/kinaesthetic feedback (n=3), 4) restricted visibility from bridge wings and aft window (n=11), and 5) lack of depth perception (n=5). It is noteworthy that categories 3-5 mostly occurred during the first two days of training and were associated with tasks where manoeuvring needed to be done with high precision, for example, going to quay. For this study, with its focus on visual expertise, categories 4 and 5 were assessed as most relevant. Three episodes from category 4 were selected for further analysis. The episodes were transcribed with attention to verbal utterances and their intonation, as well as to relevant nonverbal behaviours such as gestures and gaze shifts (see Table 1 for transcript notations). Two of these episodes are presented in the analysis below.

Table 1. Notation system used for transcription

4. Analysis

In our data corpus, two maritime pilots in training work together as a bridge team, consisting of a maritime pilot and the captain of the ship. During the different exercises in the course, the trainees take turns working with each other on the bridge. Episode 1, presented in Section 4.1, concerns a scenario taking place during the second day of simulator training in the course. During the briefing, the trainees were given the task of manoeuvring a cargo vessel in the harbour of Hong Kong, where they are going to quay under good traffic and weather conditions. The overall aim of the exercise is that the trainees should be able to dock their vessel using four manoeuvres. This, in turn, requires a clear and strategic plan as well as full control over the ship’s position and angle to the quay, the ship’s pivot point, and the manoeuvring speed. Figure 2 visualises the simulated vessel’s manoeuvring actions halfway through the scenario. First, the simulated vessel goes to the specified terminal. Second, the trainees position their ship in order to be ready for the third manoeuvre, i.e., going in reverse to quay. The fourth position, not displayed in the visualisation, is to come to quay in position to dock the vessel. Hence, the trainees will have to keep a close eye on both the quay and the vessel lying at anchor.

4.1. Collaboratively making sense of professionally relevant visual materials

In Episode 1, Dan and Matt are preparing to undertake a scenario in which they are docking at the terminal in Hong Kong Harbour. Docking at this or any terminal is a manoeuvring task that requires a continuous visual lookout of the outside surroundings in all directions, forward, aft, port, and starboard, in order to determine the distance to the terminal and to other structures, e.g., other vessels. In order to achieve this, a pilot has several navigational aids at hand. On board a real vessel, the bridge wing is an extended platform located on either side of the ship’s bridge, typically near the outer edges. This structure serves as an extension of the main bridge area and provides additional vantage points for the ship’s officers to observe the surroundings, offering an unobstructed view of the ship’s sides and forward areas. In addition, this scenario involves going in reverse to quay, which makes the lookout through the aft window critical in order to determine the distance to the quay and to the other vessel anchored at the terminal. Electronic navigational aids, such as radar equipment and electronic navigational charts, provide information about the vessel’s surroundings, and they are particularly valuable under conditions of restricted visibility, for instance due to fog, snow or heavy rain.

Dan and Matt are both experienced master mariners, each of whom has approximately 20 years of working experience as a captain. Both can thus in a sense be regarded as experts with a “superior ability to interpret and analyze situations as well as solve problems typical of their [... area] of expertise” (Lehtinen et al., 2020, p. 1). At the same time, however, they are novices in the sense that they are just entering their training as maritime pilots, which involves learning to handle new and more advanced ship operations and mastering new technologies. As part of this training, they are also gaining experience in simulation-based training in an advanced high-fidelity bridge simulator, something these two trainees have limited experience of from their master mariner training in the 1990s. They are thus faced with the challenge of developing different layers of their expertise (Lehtinen et al., 2020), in that they must simultaneously learn new profession-specific skills and manage the functions of a more technically advanced simulator.

In the scenario, Dan takes on the role of pilot, responsible for navigating the ship, and Matt takes on the role of captain. While the first part of the episode (1a) is reproduced to facilitate understanding of the context, our analytical focus is on the second part (1b).

In (1a), we can see how Dan, acting as the pilot, starts to lay out a plan for how to approach the manoeuvring task. The episode starts with Dan saying that he is “just thinking a bit here” while rubbing his temples and sighing (line 01), signalling that he finds the situation challenging. The overall objective of the training, to present scenarios that pose challenges for the trainees, thus seems to have been achieved. Furthermore, Dan’s utterance and non-verbal actions can be seen as responsive to the instructors’ request for the trainees to think out loud while conducting the scenarios, i.e., to say what they do and why, an important element in fostering collaborative learning on the bridge (Hontvedt & Arnseth, 2013). Dan continues to think out loud for the remainder of (1a), describing what manoeuvres he plans to perform and what he wants to achieve with these manoeuvres (lines 03, 05-09), which is met with minimal and affirmative responses from Matt (lines 04, 10). In (1b), however, there is a shift in the activity when Dan switches from reporting on planned manoeuvres to orienting to a potentially problematic situation caused by limitations in the visual lookout of the simulator, inviting Matt to collaboratively make sense of the available visual materials.

In (1b), we can see how Dan initially addresses a problem related to a deficiency in the visual functionality of the simulator: in the simulator, it is not possible to get a visual overview of the quay by looking out through the aft window, which would be possible on a real ship. He states that, to ensure the correct angle of the ship in the accomplishment of the manoeuvring task (line 11), “one would like to look straight aft”, while pointing towards the aft window with his left arm (line 13). That this is the profession’s established way of getting a visual overview of the quay, rather than a personal preference of Dan’s, is suggested by his use of the generic third-person pronoun “one” (Sw. “man”) rather than the first-person pronoun “I” (Sw. “jag”) which he uses initially (line 11).

Matt’s response (lines 15-16) indicates that he understands and is prepared to accept and adapt to the shortcomings of the simulator identified by Dan. Rather than commenting further on how the task should be performed in a “real” work setting, Matt presents an alternative way of performing it which is adapted to the functionality of the simulator environment: if Dan specifies what he needs to see, Matt will provide visual access to it with the help of the camera. The camera Matt refers to is adjustable and provides the representations that can be seen on one of the screens in front of the trainees (see Figures 1 and 3), showing the view from the port side bridge wing. Compared to a real ship, where you get a visual overview of the surroundings by standing on the bridge wing, the visual representation from the bridge wing shown on this screen in the simulator is quite limited. Furthermore, as Dan initially continues to maintain, this would not be the preferred approach “in reality” to get a visual lookout and determine the current position of the ship. He argues that “one would have wanted to look aft” (lines 17-19), while turning around and looking aft.

What Dan has presented so far could be seen as one part of “a contrasting pair” (Weeks, 1985, 1996), the conversational device mentioned in Section 1: a description of the preferred way of performing a particular visually demanding manoeuvring task (lines 13, 17-19). What he produces next, after having announced that he will begin reversing the ship (lines 20-21), could be seen as the other part of the contrasting pair: a description of an alternative, non-preferred way of performing the same task (lines 23-24). As stated by Dan, it would also be possible to use the electronic chart display and information system (ECDIS), which presents navigational information such as ship positions in real time, to obtain the necessary visual information. As we have explained earlier, the option of using this instrument, instead of relying on the visual lookout, could become relevant also in a “real” situation when visibility is restricted by, for example, fog or heavy rain. Using the ECDIS would thus not be an incorrect approach, but as is clear from Dan’s subsequent comment on line 24, “if one should trust it somehow”, trust in this instrument is not to be taken for granted in the current situation. Without going into depth on the topic of trust in automation, it is significant to point to the safety culture in maritime navigation, which prescribes visual lookout in combination with a range of digital instruments in order to triangulate information (see, e.g., Lützhöft & Dekker, 2002).

At this point, Matt presents yet another alternative for gathering visual information on the bridge, one that could help to achieve triangulation: they could use the so-called conning display, an instrument providing an integrated overview of the situation during manoeuvres, including course, speed, depth and rate of turn, on which, Matt claims, “one can see absolutely everything” (line 26). Dan agrees (line 27), and they proceed to reason about how to adjust the instrument to get the visual information needed to complete the task, e.g., distances to the quay as well as to other ships (lines 28-32). Consensus thereby seems to have been reached that the ECDIS and the conning display will be used to deal with the problem of the poor visual lookout through the aft window initially raised by Dan.

Figure 3. Dan and Matt exploring different methods for gathering visual information in the simulator.

In this episode, we can see how the restrictions of the simulator open up opportunities for the trainees to collaboratively explore different navigational methods for completing a visually demanding manoeuvring task. On board a seagoing vessel, and with the weather conditions simulated in the current scenario, the participants would likely only have used the standard navigational method initially presented by Dan. However, in the simulator they are compelled to collaboratively identify and try out alternative strategies to complete the task. In this case, the limitations and affordances of the simulated environment thus occasion explorations that might lead to an extension of the participants' visual expertise (Lehtinen et al., 2020). However, as seen in our next episode, trainees might also hesitate to adapt their working methods to the visual limitations of the simulator environment.

4.2. Drawing on differences in prior experiences for solving the task

In this episode, Bill and Tim are training together in one of the other full-mission bridge simulators, performing the same scenario as Matt and Dan in the previous example. Bill takes the role of pilot and Tim the role of captain of the vessel. While both are novices in their roles as maritime pilots, they have different experiences of working as master mariners as well as of training in a virtual environment. Tim, the younger of the two, has quite recently graduated from a master mariner program and has extensive experience of training in simulated environments. Bill has many years of experience of working at sea but is to be considered a novice when it comes to working in a simulated environment. Thus, to use the words of Lehtinen et al. (2020), while Bill has “acquired a high level of expertise in one specific field [he] must extend the scope of [his] expertise into new fields, such as new technologies that offer opportunities to enrich [his] repertoires of tools and alternative methods of dealing with the work objects” (p. 6) in the current scenario. At the same time, Tim’s experience of both the real-world setting and the simulator environment becomes a valuable resource for solving the task at hand.

The episode begins with Bill presenting a plan for how to perform the manoeuvre to dock at the terminal: he will use one of the cranes stationed in the port (see Figure 4) as a visual reference (line 01). After a prompt “okay” from Tim (line 02), Bill begins to produce what can be heard as a caveat to the successful execution of this plan: “so:: we’ll see how it-” (line 03). However, the utterance is aborted, and he instead proceeds to explain the preferred way of performing the manoeuvre to maintain a proper visual lookout (lines 04-06): “it's like this (.) when one does this manoeuvre n’ come in like this one really wants to hang on the bridge wing so that one sees everything right”. While producing the utterance, he points towards the location where the starboard bridge wing, i.e., the extended platform located on the side of the ship’s bridge, would be on a real ship, thereby showing Tim where he would have liked to position himself in a situation at sea.

Note here the similarities to (1b), which also begins with the pilot, using the generic pronoun “one”, presenting the approach to performing the manoeuvre that would have been preferred in a “real” situation to get a good visual overview. However, unlike in (1b), the trainees in the current episode do not as easily reach consensus on how to proceed. Similar to Dan and Matt, Bill and Tim engage in collaboratively exploring different methods for completing the task and reason about the limits and affordances of the simulated environment in comparison to working on a ship’s bridge. Their willingness to adapt their way of working to the simulator's functions differs, however. As demonstrated by the remainder of (2), Bill, the more experienced master mariner, seems hesitant to adapt his working methods to the simulated environment. Tim, on the other hand, seems more willing to explore its possibilities. Repeatedly directing Bill’s attention towards the available visualisation technologies, he argues for a way of solving the task at hand that is adapted to the simulator environment. In response to Bill's initially stated preference for how to get a visual lookout (lines 04-06), Tim points towards a screen on the port side where the view from the bridge wings can be represented (Figure 4), saying “yeah but then you have-” (line 07), thereby directing Bill’s attention to one of the available resources in the simulator that could compensate for its visual limitations. Bill, however, cuts off the utterance, first delivering a token of agreement, “yeah precisely”, but then proceeding to claim that “but it doesn’t feel that good”, followed by a short laugh (line 08), thus rejecting Tim’s suggestion of an alternative method for gathering visual information. Tim responds by presenting an additional suggestion, explaining how the intended visualisation could be adjusted and thus used to provide the visual information they would need for solving the task at hand (line 09): “but we can point it frontwards if you want?”. Bill, however, maintains his sceptical stance towards making use of the limited visual lookout through the digital visualisation, insisting that it is easier to perform the manoeuvre “if one can see properly so to say” (lines 10, 12).

Up to this point, we have seen how the trainees negotiate which strategies to use to obtain the necessary visual information to solve the task, without reaching consensus. Tim, who has more experience of working in a simulated environment, takes a leading role in exploring the technical functionalities of the simulator, while Bill, relying on his many years of experience as a master mariner, seeks to maintain procedures that would have been preferable in an on-board setting. Similar to the medical professionals observed by Lehtinen et al. (2020), who trained to diagnose patient cases using imaging technologies that were new and unfamiliar to them, Bill has so far shown a preference for applying methods that are familiar to him in the new situation, even though these methods might not be the most efficient ones for this situation. However, given the collaborative reasoning that takes place, we can still assume that the in-situ attention to the shortcomings of the simulator contributes to promoting the visual expertise of both participants. By making the shortcomings a shared topic of discussion, both participants gain access to the discrepancies between a simulated and an on-board perspective on the situation.

Figure 4. Tim pointing towards the screen on the port side where the view from the bridge wing is represented.

In line 14, Bill says “now let’s see” and leans over the bridge panel to look at the radar, stating that “we’re approaching this one there”. Moving his hand to the lever and putting it to a stop, he explains, “I’ll put a stop to the machine then”. Here, Bill shows that he will stop the vessel, and hence end the scenario before completion, since it is time for a scheduled break. Tim responds with an assessment of the situation, “now we’re about halfway through when you do this” (line 15), which is ratified by Bill in the next turn with a “yes” before he continues, “I’ll stop (.) we’ll see what happens”. Here Bill repeats that he will stop, and that the outcome of the actions taken is still uncertain, showing that he is quite unsure of what the correct action would be at this point. However, in line 16, Bill acknowledges that Tim will “get a little guidance for the next run”, referring to the next session after the break, where Tim will act as pilot in the same scenario. Hence, their reasoning about the limits and affordances of the simulated environment during this session serves as a starting point for further exploration in the next run.

4.3. Analytical findings

The findings presented above are in line with those in Hontvedt’s (2015) study, where the pilots repeatedly criticised the fact that navigation tasks in the simulator needed to be carried out with electronic equipment instead of through a visual lookout. However, while Hontvedt sees this as a potential risk of negative skills transfer, our results suggest that the shift in navigational methods in the simulator presents opportunities for professional learning. An important element of this argument is that the participants themselves, in their activities, notice and attend to the differences between navigating in a simulator and navigating on board a ship. The differences thus trigger reflection and problem-solving. In addition, we can see how the trainees continuously connect their manoeuvring in the simulator to their professional experiences of ship handling on board real vessels (see Wiig et al., 2018). While previous studies highlight the need for instructors to facilitate discussions with trainees to avoid pitfalls in training due to a lack of fidelity (e.g., Hindmarsh et al., 2014; Sellberg, 2017), the maritime pilots in training in our materials spontaneously connect the simulated practices to their professional practice without the support of an instructor. Extensive experience of ship handling serves as a resource for identifying and commenting on the limitations of the simulated environment. In other words, the participants are not constrained by the simulation as a fixed environment; rather, they entertain and test hypotheses about differences and similarities between the two settings. Put differently, the participants do not learn by passively subordinating their decisions about how to navigate to the design of the simulated environment; rather, they mobilise their professional experiences as resources for sense-making and for reflecting and commenting on what characterises navigation in the two situations.

Furthermore, when taking the multi-layered nature of visual expertise into account in our analysis, it is important to consider the different dimensions of this task. Rather than viewing the inconsistencies in the simulator as being in conflict with the trainees’ professional vision, we can see that, at the conceptual level, the calculations for determining distance, speed and rate of turn in ship handling are the same in the simulated model as on a real vessel (see Lehtinen et al., 2020). However, the methods used for gathering information, i.e., using instruments rather than relying on the visual lookout, are different in the simulator, where the pilot in training needs to make use of several digital navigation aids. As a result, the task involves attending to different representations for understanding manoeuvring and movement during ship handling. For instance, rather than sensing the movements of the ship and receiving proprioceptive feedback, the trainees need to interpret how the ship moves through rather abstract representations such as numerical values and graphs available through the instruments (see Sellberg, 2017). In previous research, such differences in resources for interpretation between tasks have led experts to develop more advanced visual perceptions, enriched scientific understandings, detailed technical knowledge and new professional practices (Lehtinen et al., 2020).

5. Conclusion and discussion

In this study, examining how visual expertise develops in and through simulator-based training, the metaphor of multiple layers of professional skills and the notion of conceptual change serve as starting points for advancing our understanding of how the complex professional practice of maritime piloting can be taught and learned through experiences generated in virtual environments. Our detailed analysis of the trainees’ talk during training, and our close examination of how they handle imperfections and inconsistencies in situ, show how ship handling in a simulator environment is a different activity from ship handling on board a seagoing vessel. In the simulator, maritime pilots in training make use of a variety of navigational instruments to compensate for, and to adapt to, the shortcomings of the visual lookout in the simulator. These findings are in line with previous studies that warn of negative skills transfer due to the lack of photorealism in simulated environments (Hontvedt, 2015). However, our findings show how the trainees articulate and conceptualise the differences between simulations and work on board a seagoing vessel in ways that support the development of visual expertise (see Lehtinen et al., 2020). In other words, in our materials, the trainees’ discussions about the imperfections of the simulation show that they have learned about such differences, and that their evaluations of information and judgements about how to act are grounded in conceptual control over what differs between the scenarios in the simulator and what happens on a bridge at sea. On some occasions, they also articulate these differences on the basis of their maritime experiences at sea. Instead of warning of negative skills transfer, we argue that the challenges of training in a virtual environment might lead to enriched conceptual, methodological, and technical knowledge and considerations in the context of visually demanding and complex tasks, and broaden participants’ insights into how representations relate to the world. However, for these positive training outcomes to emerge, we want to stress that inconsistencies between the simulator and the work on board a ship need to be reflected on during and/or after training. One way to ensure this is to systematically facilitate reflection on these matters in the post-simulation debriefing that follows each simulated scenario.

It is also important to point out that these results might not be directly applicable to novices training in a simulated environment, as novices have limited experience of the working context and thus might have difficulties noticing inconsistencies between the simulator and the working environment in the first place. Hence, for novices, the simulator instructors’ dedicated work of monitoring their activities in the simulator and explaining inconsistencies when they occur is essential in order to avoid pitfalls in training (Sellberg, 2017). However, if discussed and reflected on, inconsistencies in the simulator environment may provide powerful opportunities for professional learning for novices as well (Hindmarsh et al., 2014; Hontvedt & Øvergård, 2020; Rystedt & Sjöblom, 2012). Additionally, while novices and experts are often seen as representing opposite ends of a spectrum of knowledge and skills, our study shows that the distinction between them is multifaceted, contingent and non-linear.

Today, training in simulators is an integral part of educational programs that prepare trainees for professions with high standards of safety, in settings such as healthcare, aviation, and maritime navigation. In this study, we have taken seriously the concerns raised with respect to the risk of inducing negative skills transfer when making use of simulators in training. As a general message, it is important to make all participants aware of the fact that simulators can never be realistic in all senses of this term, but neither are all ships and their equipment identical. Simulators have other affordances than those that apply to real-life situations, and this is their strength: they provide a context for the development of expertise through deliberate practice. Our study contributes by providing a detailed analysis of simulator-based training as it is practically accomplished in maritime pilot education, thereby advancing our understanding of simulation as a tool for professional learning. As a result, our study shows how and why simulation training and training on board ships mutually support the advancement of the trainees’ visual expertise in their learning trajectory towards mastery of maritime skills. Finally, we argue that the trainees’ ability to handle inconsistencies and imperfections in the simulator is closely related to their prior experience of both training contexts. Hence, learning to simulate is essential in professional education that aims to prepare trainees for work in safety-critical domains.

Keypoints

Acknowledgments

This study is part of the project “Evaluation of eye-tracking as support in simulator training for maritime pilots”, financed by the Swedish Transport Administration between 2020 and 2023. The authors would like to express their warmest gratitude to the maritime pilots in training and their instructors who participated in the study. We are also grateful to the members of the Sociocultural and Dialogical Studies (SDS) seminar at the University of Gothenburg for insightful discussions on an early draft of the manuscript, and to the audience at the AERA annual meeting in Chicago in April 2023 for valuable comments on the submitted study.

References

Bassetti, C. (2021). The tacit dimension of expertise: Professional vision at work in airport security. Discourse Studies, 23(5), 597-615. https://doi.org/10.1177/14614456211020141

Boucheix, J.-M. (2017). The interplay between methodologies, tasks and visualisation formats in the study of visual expertise. Frontline Learning Research, 5(3), 155–166. https://doi.org/10.14786/flr.v5i3.311

Comi, A., Jaradat, S., & Whyte, J. (2019). Constructing shared professional vision in design work: The role of visual objects and their material mediation. Design Studies, 64, 90-123. https://doi.org/10.1016/j.destud.2019.06.003

Garfinkel, H. (2002). Ethnomethodology's program: Working out Durkheim's aphorism. Rowman & Littlefield Publishers.

Gegenfurtner, A., Gruber, H., Holzberger, D., Keskin, Ö., Lehtinen, E., Seidel, T., Stürmer, K., & Säljö, R. (2022). Towards a cognitive theory of visual expertise: Methods of inquiry. In C. Damsa, A. Rajala, G. Ritella & J. Brouwer (Eds.), Re-theorizing learning and research methods in learning research. Routledge.

Gegenfurtner, A., Kok, E., van Geel, K., De Bruin, A., Jarodzka, H., Szulewski, A., & van Merriënboer, J. J. (2017). The challenges of studying visual expertise in medical image diagnosis. Medical Education, 51(1), 97-104. https://doi.org/10.1111/medu.13205

Gegenfurtner, A., Nivala, M., Säljö, R., & Lehtinen, E. (2009). Capturing individual and institutional change: Exploring horizontal versus vertical transitions in technology-rich environments. In U. Cress, V. Dimitrova & M. Specht (Eds.), Learning in the synergy of multiple disciplines (pp. 676-681). Springer.

Gegenfurtner, A., & van Merriënboer, J. J. G. (2017). Methodologies for studying visual expertise. Frontline Learning Research, 5(3), 1–13. https://doi.org/10.14786/flr.v5i3.316

Gegenfurtner, A., Lehtinen, E., Helle, L., Nivala, M., Svedström, E., & Säljö, R. (2019). Learning to see like an expert: On the practices of professional vision and visual expertise. International Journal of Educational Research, 98, 280-291. https://doi.org/10.1016/j.ijer.2019.09.003

Goodwin, C. (1994). Professional vision. American Anthropologist, 96(3), 606–633. doi:10.1525/aa.1994.96.3.02a00100

Heath, C., Hindmarsh, J. & Luff, P. (2010). Video in qualitative research: Analysing social interaction in everyday life. SAGE Publications Ltd.

Helle, L. (2017). Prospects and pitfalls in combining eye-tracking data and verbal reports. Frontline Learning Research, 5(3), 1-12. https://doi.org/10.14786/flr.v5i3.254

Hindmarsh, J., Hyland, L., & Banerjee, A. (2014). Work to make simulation work: ‘Realism’, instructional correction and the body in training. Discourse Studies, 16(2), 247-269. https://doi.org/10.1177/1461445613514670

Hontvedt, M., & Arnseth, H. C. (2013). On the bridge to learn: Analysing the social organization of nautical instruction in a ship simulator. International Journal of Computer-Supported Collaborative Learning, 8, 89-112. https://doi.org/10.1007/s11412-013-9166-3

Hontvedt, M. (2015). Professional vision in simulated environments—Examining professional maritime pilots' performance of work tasks in a full-mission ship simulator. Learning, Culture and Social Interaction, 7, 71-84. https://doi.org/10.1016/j.lcsi.2015.07.003

Hontvedt, M., & Øvergård, K. I. (2020). Simulations at work—A framework for configuring simulation fidelity with training objectives. Computer Supported Cooperative Work (CSCW), 29, 85-113. https://doi.org/10.1007/s10606-019-09367-8

Ivarsson, J. (2017). Visual expertise as embodied practice. Frontline Learning Research, 5(3), 123–138. https://doi.org/10.14786/flr.v5i3.253

Jarodzka, H., & Boshuizen, H. P. (2017). Unboxing the black box of visual expertise in medicine. Frontline Learning Research, 5(3), 167–183. https://doi.org/10.14786/flr.v5i3.332

Kim, T. E., Sharma, A., Bustgaard, M., Gyldensten, W. C., Nymoen, O. K., Tusher, H. M., & Nazir, S. (2021). The continuum of simulator-based maritime training and education. WMU Journal of Maritime Affairs, 20(2), 135-150. https://doi.org/10.1007/s13437-021-00242-2

Knoblauch, H., & Schnettler, B. (2012). Videography: Analysing video data as a ‘focused’ ethnographic and hermeneutical exercise. Qualitative Research, 12(3), 334-356. https://doi.org/10.1177/1468794111436147

Lehtinen, E., Gegenfurtner, A., Helle, L., & Säljö, R. (2020). Conceptual change in the development of visual expertise. International Journal of Educational Research, 100, 101545. https://doi.org/10.1016/j.ijer.2020.101545

Luff, P. K., & Heath, C. (2019). Visible objects of concern: Issues and challenges for workplace ethnographies in complex environments. Organization, 26(4), 578-597. https://doi.org/10.1177/1350508419828578

Lützhöft, M. H., & Nyce, J. M. (2006). Piloting by heart and by chart. The Journal of Navigation, 59(2), 221-237. https://doi.org/10.1017/S0373463306003663

Lützhöft, M. H., & Dekker, S. W. (2002). On your watch: Automation on the bridge. The Journal of Navigation, 55(1), 83-96. doi:10.1017/S0373463301001588

Lymer, G. (2009). Demonstrating professional vision: The work of critique in architectural education. Mind, Culture, and Activity, 16(2), 145-171. https://doi.org/10.1080/10749030802590580

Nivala, M., Rystedt, H., Säljö, R., Kronqvist, P., & Lehtinen, E. (2012). Interactive visual tools as triggers of collaborative reasoning in entry-level pathology. International Journal of Computer-Supported Collaborative Learning, 7(4), 499-518. https://doi.org/10.1007/s11412-012-9153-0

Petersen, S.B., Vestergaard, A.H., Thomsen, A.S.S., Konge, L., Cour, M.L., Grauslund, J. & Vergmann, A.S. (2022). Pretraining of basic skills on a virtual reality vitreoretinal simulator: A waste of time. Acta Ophthalmologica, 100(5). https://doi.org/10.1111/aos.15039

Popova, K. (2018). Ethnomethodological studies of visuality. Ethnographic Studies, 15, 23-37.

Rystedt, H., & Sjöblom, B. (2012). Realism, authenticity, and learning in healthcare simulations: rules of relevance and irrelevance as interactive achievements. Instructional Science, 40, 785-798. https://doi.org/10.1007/s11251-012-9213-x

Schul, A., Gong, A., & Sweet, R. (2019). MP35-13 Development and validation of a high-fidelity urethral catheter simulator. The Journal of Urology. https://doi.org/10.1097/01.JU.0000556003.94745.db

Sellberg, C. (2017). Representing and enacting movement: The body as an instructional resource in a simulator-based environment. Education and Information Technologies, 22, 2311–2332. https://doi.org/10.1007/s10639-016-9546-1

Sellberg, C., & Lundin, M. (2017). Demonstrating professional intersubjectivity: The instructor's work in simulator-based learning environments. Learning, Culture and Social Interaction, 13, 60-74. https://doi.org/10.1016/j.lcsi.2017.02.003

Taber, M. J. (2013). Crash attenuating seats: Effects on helicopter underwater escape performance. Safety Science, 57, 179-186. https://doi.org/10.1016/j.ssci.2013.02.007

Weeks, P. A. (1985). Error-correction techniques and sequences in instructional settings: Toward a comparative framework. Human Studies, 8, 195-233. https://www.jstor.org/stable/20008946

Weeks, P. (1996). A rehearsal of a Beethoven passage: An analysis of correction talk. Research on Language and Social Interaction, 29(3), 247-290. doi: 10.1207/s15327973rlsi29033

Wiig, C., Silseth, K., & Erstad, O. (2018). Creating intercontextuality in students learning trajectories. Opportunities and difficulties. Language and Education, 32(1), 43–59. https://doi.org/10.1080/09500782.2017.1367799