The How of Survey Self-report: VAS-Likert-Slide-Swipe... Same difference?

Luke K. Fryer
Kaori Nakao

Abstract

Self-report is a fundamental research tool for the social sciences. Despite quantitative surveys being the workhorses of the self-report stable, few researchers question their format—often blindly using some form of Labelled Categorical Scale (Likert-type). This study presents a brief review of the current literature examining the efficacy of survey formats, addressing longstanding paper-based concerns and more recent issues raised by computer- and mobile-based surveys. An experiment comparing four survey formats on touch-based devices was conducted. Differences in means, predictive validity, time to complete and centrality were compared. A range of preliminary findings emphasise the similarities and striking differences between these self-report formats. Key conclusions include: A) that the two continuous interfaces (Slide & Swipe) yielded the most robust data for predictive modelling; B) that future research with touch self-report interfaces can set aside the VAS format; C) that researchers seeking to improve on Likert-type formats need to focus on user interfaces that are quick/simple to use. Implications and future directions for research in this area are discussed.

Article Details

How to Cite
Fryer, L. K., & Nakao, K. (2020). The How of Survey Self-report: VAS-Likert-Slide-Swipe... Same difference? Frontline Learning Research, 8(3), 10–25. https://doi.org/10.14786/flr.v8i3.501
Section
Articles
Author Biography

Kaori Nakao, Seinan Gakuin University, Japan

Primary school teacher educator.

References

Adelson, J. L., & McCoach, D. B. (2010). Measuring the Mathematical Attitudes of Elementary Students: The Effects of a 4-Point or 5-Point Likert-Type Scale. Educational and Psychological Measurement, 70(5), 796-807. https://doi.org/10.1177/0013164410366694

Albaum, G. (1997). The Likert scale revisited. Journal of the Market Research Society, 39(2), 1-21. https://doi.org/10.1177/147078539703900202

Austin, P. C., & Brunner, L. J. (2003). Type I error inflation in the presence of a ceiling effect. The American Statistician, 57(2), 97-104. https://doi.org/10.1198/0003130031450

Berger, I., & Alwitt, L. F. (1996). Attitude conviction: a measure of strength and function. Unpublished paper.

Bishop, P. A., & Herron, R. L. (2015). Use and misuse of the Likert item responses and other ordinal measures. International Journal of Exercise Science, 8(3), 297-302.

Bolognese, J. A., Schnitzer, T. J., & Ehrich, E. (2003). Response relationship of VAS and Likert scales in osteoarthritis efficacy measurement. Osteoarthritis and Cartilage, 11(7), 499-507. https://doi.org/10.1016/S1063-4584(03)00082-7

Britannica.com (2019). Likert definition. Retrieved on November 18, 2019 from https://www.britannica.com/topic/Likert-Scale

Couper, M. P., Tourangeau, R., Conrad, F. G., & Singer, E. (2006). Evaluating the effectiveness of visual analog scales: A web experiment. Social Science Computer Review, 24(2), 227-245. https://doi.org/10.1177/0894439305281503

Chauliac, M., Catrysse, L., Gijbels, D., & Donche, V. (2020). It is all in the surv-eye: can eye tracking data shed light on the internal consistency in self-report questionnaires on cognitive processing strategies? Frontline Learning Research, 8(3), 26-39. https://doi.org/10.14786/flr.v8i3.489

DeVellis, R. F. (2012). Scale development: Theory and applications. New York: Sage.

Douven, I. (2018). A Bayesian perspective on Likert scales and central tendency. Psychonomic Bulletin & Review, 25, 1-9. https://doi.org/10.3758/s13423-017-1344-2

Durik, A. M., & Jenkins, J. S. (2020). Variability in certainty of self-reported interest: Implications for theory and research. Frontline Learning Research, 8(2), 86-104. https://doi.org/10.14786/flr.v8i3.491

Foddy, W. (1994). Constructing questions for interviews and questionnaires: Theory and practice in social research. Cambridge: Cambridge University Press.

Fryer, L. K., Thompson, A., Nakao, K., Howarth, M., & Gallacher, A. (2020). Supporting self-efficacy beliefs and interest as educational inputs and outcomes: Framing AI and Human partnered task experience. Learning and Individual Differences. https://doi.org/10.1016/j.lindif.2020.101850

Fryer, L. K., & Dinsmore D.L. (2020). The Promise and Pitfalls of Self-report: Development, research design and analysis issues, and multiple methods. Frontline Learning Research, 8(3), 1–9. https://doi.org/10.14786/flr.v8i3.623

Fryer, L. K., Nakao, K., & Thompson, A. (2019). Chatbot learning partners: Connecting learning experiences, interest and competence. Computers in Human Behavior, 93, 279-289. https://doi.org/10.1016/j.chb.2018.12.023

Fryer, L. K., & Fryer, K. (2019). 情報処理装置、情報プログラムおよびこれを記録した記録媒体、ならびに情報処理方法 [Information processing device, information program and recording medium storing the same, and information processing method; dynamic touch-based interface for survey self-report]. Patent #6585129 (Japan).

Fryer, L. K., Ainley, M., Thompson, A., Gibson, A., & Sherlock, Z. (2017). Stimulating and sustaining interest in a language course: An experimental comparison of Chatbot and Human task partners. Computers in Human Behavior, 75, 461-468. https://doi.org/10.1016/j.chb.2017.05.045

Fryer, L. K., Ainley, M., & Thompson, A. (2016). Modelling the links between students' interest in a domain, the tasks they experience and their interest in a course: Isn't interest what university is all about? Learning and Individual Differences, 50, 157-165. https://doi.org/10.1016/j.lindif.2016.08.011

Hayes, M. H., & Patterson, D. (1921). Experimental development of the graphic rating method. Psychological Bulletin, 18, 98-107.

Howell, J. L., Collisson, B., & King, K. M. (2014). Physics envy: Psychologists’ perceptions of psychology and agreement about core concepts. Teaching of Psychology, 41, 330-334. https://doi.org/10.1177/0098628314549705

Jaeschke, R., Singer, J., & Guyatt, G. H. (1990). A comparison of seven-point and visual analogue scales: Data from a randomized trial. Controlled Clinical Trials, 11, 43-51. https://doi.org/10.1016/0197-2456(90)90031-V

Kuhlmann, T., Dantlgraber, M., & Reips, U.-D. (2017). Investigating measurement equivalence of visual analogue scales and Likert-type scales in Internet-based personality questionnaires. Behavior Research Methods, 49, 2173-2181. https://doi.org/10.3758/s13428-016-0850-x

Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 5-55.

Liu, M. (2017). Labelling and direction of slider questions: Results from web survey experiments. International Journal of Market Research, 59, 601-624. https://doi.org/10.2501/IJMR-2017-033

Liu, M., & Conrad, F. G. (2018). Where Should I Start? On Default Values for Slider Questions in Web Surveys. Social Science Computer Review, 37(2), 248-269. https://doi.org/10.1177/0894439318755336

Matejka, J., Glueck, M., Grossman, T., & Fitzmaurice, G. (2016). The effect of visual appearance on the performance of continuous sliders and visual analogue scales. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems.

Merriam-Webster. (2019). Visual Analogue Scale definition. Retrieved on November 18, 2019 from https://www.merriam-webster.com/dictionary/likert

Muthén, L. K., & Muthén, B. O. (1998-2015). Mplus user's guide. (Sixth ed.). Los Angeles, CA: Muthén & Muthén.

Raykov, T. (2009). Evaluation of scale reliability for unidimensional measures using latent variable modeling. Measurement and Evaluation in Counseling and Development, 42, 223-232. https://doi.org/10.1177/0748175609344096

Rogiers, A., Merchie, E., & Van Keer, H. (2020). Opening the black box of students’ text-learning processes: A process mining perspective. Frontline Learning Research, 8(3), 40-62. https://doi.org/10.14786/flr.v8i3.527

Reed, C. C., Wolf, W. A., Cotton, C. C., & Dellon, E. S. (2017). A visual analogue scale and a Likert scale are simple and responsive tools for assessing dysphagia in eosinophilic oesophagitis. Alimentary Pharmacology & Therapeutics, 45, 1443-1448. https://doi.org/10.1111/apt.14061

Renninger, K., & Hidi, S. (2015). The power of interest for motivation and engagement. New York: Routledge.

Renninger, K., & Schofield, L. S. (2014). Assessing STEM interest as a developmental motivational variable. Paper presented at the annual meeting of the American Educational Research Association, Philadelphia, PA.

Roster, C. A., Lucianetti, L., & Albaum, G. (2015). Exploring slider vs. categorical response formats in web-based surveys. Journal of Research Practice, 11(1), 1.

Vickers, A. J. (1999). Comparison of an ordinal and a continuous outcome measure of muscle soreness. International Journal of Technology Assessment in Health Care, 15, 709-716. https://doi.org/10.1017/S0266462399154102

Voutilainen, A., Pitkäaho, T., Kvist, T., & Vehviläinen‐Julkunen, K. (2016). How to ask about patient satisfaction? The visual analogue scale is less vulnerable to confounding factors and ceiling effect than a symmetric Likert scale. Journal of Advanced Nursing, 72, 946-957. https://doi.org/10.1111/jan.12875

Wetzel, E., & Greiff, S. (2018). The world beyond rating scales: Why we should think more carefully about the response format in questionnaires. European Journal of Psychological Assessment, 34, 1-5. https://doi.org/10.1027/1015-5759/a000469