# Modelling for understanding AND for prediction/classification - the power of neural networks in research


## Abstract

Two commentaries, Edelsbrunner and Schneider (2013) and Nokelainen and Silander (2014), discuss Musso, Kyndt, Cascallar, and Dochy (2013). Several relevant issues are raised, and some important clarifications are made in response to both commentaries. Predictive systems based on artificial neural networks remain the focus of current research, and several advances have improved model building and the interpretation of the resulting neural network models. What is needed is the courage and open-mindedness to explore new paths and rigorously apply new methodologies, which can, sometimes unexpectedly, provide new conceptualisations and tools for theoretical advancement and practical applied research. This is particularly true in educational science and the social sciences, where the complexity of the problems to be solved requires the exploration of both proven and new methods, the latter usually not among the common arsenal of tools of either practitioners or researchers in these fields. This response enriches the understanding of the predictive systems methodology proposed by the authors, clarifies the application of the procedure, and offers a perspective on its place among other predictive approaches.

## Article Details

*Frontline Learning Research*, *2*(5), 67–81. https://doi.org/10.14786/flr.v2i5.135

FLR adopts the Attribution-NonCommercial-NoDerivs Creative Commons License (BY-NC-ND). That is, copyright for articles published in this journal is retained by the authors, with first publication rights granted to the journal. By virtue of their appearance in this open-access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.

## References

Agrawal, A., & Chhatre, A. (2006). Explaining success on the commons: Community forest governance in the Indian Himalaya. World Development, 34, 149-166. doi: 10.1016/j.worlddev.2005.07.013

Aires, F., Prigent, C., & Rossow, W. B. (2004). Neural network uncertainty assessment using Bayesian statistics: A remote sensing application. Neural Computation, 16, 2415-2458. doi: 10.1162/0899766041941925

Al-Deek, H. M. (2001). Which method is better for developing freight planning models at seaports – Neural networks or multiple regression? Transportation Research Record, 1763, 90-97. doi: 10.3141/1763-14

Anders, U., & Korn, O. (1996). Model selection in neural networks. ZEW Discussion Papers, 96-21. Retrieved from http://hdl.handle.net/10419/29449

Ansari, A., Jedidi, K., & Jagpal, H. S. (2000). A hierarchical Bayesian methodology for treating heterogeneity in structural equation models. Marketing Science, 19, 328-347. doi: 10.1287/mksc.19.4.328.11789

Benitez, J. M., Castro, J. L., & Requena, I. (1997). Are artificial neural networks black boxes? IEEE Transactions on Neural Networks, 8, 1156-1164. doi: 10.1109/72.623216

Blackard, J. A. & Dean, D. J. (1999). Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24, 131–151. doi: 10.1016/S0168-1699(99)00046-0

Boekaerts, M., & Cascallar, E. C. (2006). How far have we moved toward the integration of theory and practice in Self-regulation? Educational Psychology Review, 18, 199-210. doi: 10.1007/s10648-006-9013-4

Bridle, J. S. (1992). Neural networks or hidden Markov models for automatic speech recognition: Is there a choice? In P. Laface (Ed.), Speech Recognition and Understanding: Recent Advances, Trends and Applications (pp. 225-236). New York: Springer.

Cascallar, E. C., Boekaerts, M., & Costigan, T. E. (2006). Assessment in the evaluation of self-regulation as a process. Educational Psychology Review, 18, 297-306. doi: 10.1007/s10648-006-9023-2

Cheng, B., & Titterington, D. M. (1994). Neural networks: A review from a statistical perspective. Statistical Science, 9(1), 2-54. doi: 10.1214/ss/1177010638

Chin, W. W., Marcolin, B. L., & Newsted, P. R. (2003). A partial least squares latent variable modelling approach for measuring interaction effects: Results from a Monte Carlo simulation study and an electronic-mail emotion/adoption study. Information Systems Research, 14, 189–217. doi: 10.1287/isre.14.2.189.16018

Decuyper, S., Dochy, F., & Van den Bossche, P. (2010). Grasping the dynamic complexity of team learning: An integrative model for effective team learning in organisations. Educational Research Review, 5, 111-133. doi: 10.1016/j.edurev.2010.02.002

Detienne, K. B., Detienne D. H., & Joshi, S. A. (2003). Neural networks as statistical tools for business researchers. Organizational Research Methods, 6, 236-265. doi: 10.1177/1094428103251907

Donaldson, R. G., & Kamstra, M. (1999). Neural network forecast combining with interaction effects. Journal of the Franklin Institute, 336B, 227-236. doi: 10.1016/S0016-0032(98)00018-0

Dreiseitl, S., & Ohno-Machado, L. (2002). Logistic regression and artificial neural network classification models: A methodology review. Journal of Biomedical Informatics, 35, 352–359. doi: 10.1016/S1532-0464(03)00034-0

Edelsbrunner, P., & Schneider, M. (2013). Modelling for Prediction vs. Modelling for Understanding: Commentary on Musso et al. (2013). Frontline Learning Research, 2, 99-101.

Everson, H. T., Chance, D., & Lykins, S. (1994, April). Exploring the use of artificial neural networks in educational research. Paper presented at the Annual meeting of the American Educational Research Association, New Orleans, Louisiana.

Frey, U. J., & Rusch, H. (2013). Using artificial neural networks for the analysis of social-ecological systems. Ecology and Society, 18, 40. doi: 10.5751/ES-05202-180240

Frigg, R. & Hartmann, S. (2006). Models in science. In E. N. Zalta (Ed.), The Stanford Encyclopaedia of Philosophy. Summer 2006 Edition. Stanford, CA: Stanford University Press.

Garson, G. D. (1991). Interpreting neural-network connection weights. AI Expert, 6, 47-51.

Garson, G. D. (1998). Neural networks. An introductory guide for social scientists. London: Sage Publications Ltd.

Gevrey, M., Dimopoulos, I., & Lek, S. (2003). Review and comparison of methods to study the contribution of variables in artificial neural network models. Ecological Modelling, 160, 249-264. doi: 10.1016/S0304-3800(02)00257-0

Golino, H. F., & Gomes, C. M. (2014). Four Machine Learning methods to predict academic achievement of college students: a comparison study. Manuscript submitted for publication.

Grishman, R., & Sundheim, B. (1996). Message Understanding Conference - 6: A Brief History. In: Proceedings of the 16th International Conference on Computational Linguistics (COLING), I, Copenhagen, 466–471.

Haenlein, M., & Kaplan, A. (2004). A beginner's guide to partial least squares analysis. Understanding Statistics, 3, 283–297. doi: 10.1207/s15328031us0304_4

Hahn, C., Johnson, M. D., Herrmann, A., & Huber, F. (2002). Capturing customer heterogeneity using a finite mixture PLS approach. Schmalenbach Business Review, 54, 243-269.

Hand, D., Mannila, H., & Smyth, P. (2001). Principles of data mining. Cambridge, MA: MIT Press.

Haykin, S. (1994). Neural networks: A comprehensive foundation. New York: Macmillan.

He, S., & Li, J. (2011). Confidence intervals for neural networks and applications to modeling engineering materials. In C. L. P. Hui (Ed.), Artificial Neural Networks – Application. Shanghai, China: InTech. doi: 10.5772/16097

Intrator, O., & Intrator, N. (2001). Interpreting neural-network results: A simulation study. Computational Statistics and Data Analysis, 37, 373–393. doi: 10.1016/S0167-9473(01)00016-0

Kim, J., & Ahn, H. (2009). A new perspective for neural networks: Application to a marketing management problem. Journal of Information Science and Engineering, 25, 1605-1616.

Kyndt, E., Musso, M., Cascallar, E., & Dochy, F. (2011, August). Predicting academic performance in higher education: Role of cognitive, learning and motivation. Symposium conducted at the 14th EARLI Conference, Exeter, UK.

Kyndt, E., Musso, M., Cascallar, E., & Dochy, F. (2015, in press). Predicting academic performance: The role of cognition, motivation and learning approaches. A neural network analysis. In V. Donche & S. De Maeyer (Eds.), Methodological challenges in research on student learning. Antwerp, Belgium: Garant.

Laguna, M., & Marti, R. (2002). Neural network prediction in a system for optimizing simulations. IIE Transactions, 34, 273-282. doi: 10.1080/07408170208928869

Lee, C., Rey, T., Mentele, J., & Garver, M. (2005). Structured neural network techniques for modeling loyalty and profitability. Proceedings of the Thirtieth Annual SAS® Users Group International Conference. Cary, NC: SAS Institute Inc.

Lei, P. W., & Wu, Q. (2007). Introduction to structural equation modelling: Issues and practical considerations. Items – Instructional Topics in Educational Measurement – Fall 2007, NCME Instructional Module, 33-43.

Lek, S., Belaud, A., Baran, P., Dimopoulos, I., & Delacoste, M. (1996). Role of some environmental variables in trout abundance models using neural networks. Aquatic Living Resources, 9, 23-29. doi: 10.1051/alr:1996004

Luft, C. D. B., Gomes, J. S., Priori, D., & Takase, E. (2013). Using online cognitive tasks to predict mathematics low school achievement. Computers & Education, 67, 219-228. doi: 10.1016/j.compedu.2013.04.001

MacKay, D. J. C. (1992). A practical Bayesian framework for backpropagation networks. Neural Computation, 4, 448-472. doi: 10.1162/neco.1992.4.3.448

Marquez, L., Hill, T., Worthley, R., & Remus, W. (1991). Neural network models as an alternative to regression. Proceedings of the IEEE 24th Annual Hawaii International Conference on Systems Sciences, 4, 129-135. doi: 10.1109/HICSS.1991.184052

Monteith, K., Carroll, J., Seppi, K., & Martinez, T. (2011). Turning Bayesian Model Averaging into Bayesian Model Combination. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN) 2011, 2657–2663.

Musso, M. F., & Cascallar, E. C. (2009a). New approaches for improved quality in educational assessments: Using automated predictive systems in reading and mathematics. Journal of Problems of Education in the 21st Century, 17, 134-151.

Musso, M. F., & Cascallar, E. C. (2009b). Predictive systems using artificial neural networks: An introduction to concepts and applications in education and social sciences. In M. C. Richaud & J. E. Moreno (Eds.), Research in behavioural sciences (Vol. I, pp. 433-459). Buenos Aires, Argentina: CIIPME/CONICET.

Musso, M. F., Kyndt, E., Cascallar, E. C., & Dochy, F. (2012). Predicting mathematical performance: The effect of cognitive processes and self-regulation factors. Education Research International, 2012, Article ID 250719. doi: 10.1155/2012/250719

Musso, M. F., Kyndt, E., Cascallar, E. C., & Dochy, F. (2013). Predicting general academic performance and identifying differential contribution of participating variables using artificial neural networks. Frontline Learning Research, 1, 42-71. doi: 10.14786/flr.v1i1.13

Musso, M. F., Boekaerts, M., Segers, M., & Cascallar, E. C. (in preparation). A comparative analysis of the prediction of student academic performance.

Neal, W., & Wurst, J. (2001). Advances in market segmentation. Marketing Research, 13, 14-18.

Nguyen, N., & Cripps, A. (2001). Predicting housing value: A comparison of multiple regression and artificial neural networks. Journal of Real Estate Research, 22, 313-336.

Nokelainen, P., & Silander, T. (2014). Using New Models to Analyse True Complex Regularities of the World: Commentary on Musso et al. (2013). Frontline Learning Research, 2, 78-82. doi: 10.14786/flr.v2i1.107

Olden, J. D., & Jackson, D. A. (2002). Illuminating the "black box": A randomization approach for understanding variable contributions in artificial neural networks. Ecological Modelling, 154, 135-150. doi: 10.1016/S0304-3800(02)00064-9

Olden, J. D., Joy, M. K. & Death, R. G. (2004). An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecological Modelling, 178, 389-397. doi: 10.1016/j.ecolmodel.2004.03.013

Orre, R., Lansner, A., Bate, A., & Lindquist, M. (2000). Bayesian neural networks with confidence estimations applied to data mining. Computational Statistics & Data Analysis, 34, 473-493. doi: 10.1016/S0167-9473(99)00114-0

Perkins, K., Gupta, L., & Tamanna (1995). Predicting item difficulty in a reading comprehension test with an artificial neural network. Language Testing, 12, 34-53. doi: 10.1177/026553229501200103

Pinninghoff Junemann, M. A., Salcedo Lagos, P. A., & Contreras Arriagada, R. (2007). Neural networks to predict schooling failure/success. In J. Mira & J. R. Alvarez (Eds.), Nature Inspired Problem-Solving Methods in Knowledge Engineering, (Part II), (pp. 571–579). Berlin/Heidelberg: Springer-Verlag. doi: 10.1007/978-3-540-73055-2_59

Ramaswami, M. M., & Bhaskaran, R. R. (2010). A CHAID based performance prediction model in educational data mining. International Journal of Computer Science Issues, 7, 10-18.

Ripley, B. D. (1996). Pattern recognition and neural networks. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511812651

Roli, F., Giacinto, G., & Vernazza, G. (2001). Methods for designing multiple classifier systems. In J. Kittler & F. Roli (Eds.), Multiple Classifier Systems, (pp. 78-87). Berlin/Heidelberg: Springer-Verlag. doi: 10.1007/3-540-48219-9_8

Rivals, I., & Personnaz, L. (2000). Construction of confidence intervals for neural networks based on least squares estimations. Neural Networks, 13, 463-484. doi: 10.1016/S0893-6080(99)00080-5

Rokach, L. (2010). Ensemble-based classifiers. Artificial Intelligence Review, 33, 1-39. doi: 10.1007/s10462-009-9124-7

Sahu, A., Runger, G., & Apley, D. (2011). Image denoising with a multi-phase kernel principal component approach and an ensemble version. IEEE Applied Imagery Pattern Recognition Workshop, 1-7.

Schermelleh-Engel, K., Kerwer, M., & Klein, A. G. (2014). Evaluation of model fit in nonlinear multilevel structural equation modelling. Frontiers in Psychology, 5, Article 181, 1-11. doi: 10.3389/fpsyg.2014.00181.

Suppes, P. (1962). Models of Data. In E. Nagel, P. Suppes & A. Tarski (Eds.), Logic, methodology and philosophy of science: Proceedings of the 1960 International Congress. Stanford: Stanford University Press, 252-261.

Thrush, S. F., Coco, G., & Hewitt, J. E. (2008). Complex positive connections between functional groups are revealed by neural network analysis of ecological time series. American Naturalist, 171, 669-677. doi: 10.1086/587069

Tzeng, F. Y., & Ma, K. L. (2005). Intelligent feature extraction and tracking for visualizing large-scale 4D flow simulations. In DVD Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '05). November, 2005.

Weiss, S. M., & Kulikowski, C. A. (1991). Computer systems that learn. San Mateo, CA: Morgan Kaufmann Publishers.

West, P. M., Brockett, P. L., & Golden, L. L. (1997). A comparative analysis of neural networks and statistical methods for predicting consumer choice. Marketing Science, 16, 370-391. doi: 10.1287/mksc.16.4.370

Weston, R., & Gore, P. A. (2006). A brief guide to structural equation modeling. The Counseling Psychologist, 34, 719-751. doi: 10.1177/0011000006286345

White, H., & Racine, J. (2001). Statistical inference, the bootstrap, and neural network modelling with application to foreign exchange rates. IEEE Transactions on Neural Networks, 12, 657-673. doi: 10.1109/72.935080

Wilson, R. L., & Hardgrave, B. C. (1995). Predicting graduate student success in an MBA program: Regression versus classification. Educational and Psychological Measurement, 55, 186-195. doi: 10.1177/0013164495055002003

Yeh, I. C., & Cheng, W. L. (2010). First and second order sensitivity analysis of MLP. Neurocomputing, 73, 2225-2233. doi: 10.1016/j.neucom.2010.01.011

Zambrano Matamala, C., Rojas Díaz, D., Carvajal Cuello, K., & Acuña Leiva, G. (2011). Análisis de rendimiento académico estudiantil usando data warehouse y redes neuronales [Analysis of students' academic performance using data warehouse and neural networks]. Ingeniare. Revista Chilena de Ingeniería, 19, 369-381. doi: 10.4067/S0718-33052011000300007

Zapranis, A., & Livanis, E. (2005). Prediction intervals for neural network models. Proceedings of the 9th WSEAS International Conference on Computers (ICCOMP'05). World Scientific and Engineering Academy and Society (WSEAS). Stevens Point, Wisconsin, USA.