Statistical indices from bifactor models
Abstract
Many instruments are created with the primary purpose of scaling individuals on a single trait. However, psychological traits are often complex and contain domain-specific manifestations; as a result, many instruments produce data that are consistent with both unidimensional and multidimensional structures. Unfortunately, applied researchers often decide on the final structure based solely on fit indices obtained from structural equation models. Given that fit indices generally favor the bifactor model over competing measurement models, it is imperative that researchers make use of the information the bifactor model offers and compute informative indices, including omega reliability coefficients, construct reliability (coefficient H), explained common variance (ECV), and the percentage of uncontaminated correlations (PUC). These indices provide unique information about the strength of both the general and the specific factors, supporting conclusions about dimensionality and about the scoring of scales (and subscales). Herein, we describe these indices and offer a new module that facilitates their computation.
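To make the computations concrete, the sketch below shows how these indices can be obtained from standardized loadings of an orthogonal bifactor solution. It is a minimal Python illustration, not the module offered here; the nine-item loading matrix is hypothetical, and the calculations follow the standard definitions of omega total, omega hierarchical, ECV, PUC, and coefficient H.

```python
import numpy as np

# Hypothetical standardized loadings from an orthogonal bifactor solution:
# 9 items, one general factor and three specific factors (3 items each).
general = np.array([0.60, 0.55, 0.50, 0.65, 0.58, 0.52, 0.70, 0.62, 0.48])
specific = [
    np.array([0.40, 0.35, 0.30]),   # items 1-3 on S1
    np.array([0.45, 0.38, 0.33]),   # items 4-6 on S2
    np.array([0.42, 0.36, 0.31]),   # items 7-9 on S3
]
group_sizes = [len(s) for s in specific]

# Item error variances (1 minus each item's communality), assuming every item
# loads on the general factor and exactly one specific factor.
spec_full = np.concatenate(specific)
errors = 1 - (general**2 + spec_full**2)

# Omega total: proportion of total-score variance due to all common factors.
num_common = general.sum()**2 + sum(s.sum()**2 for s in specific)
omega_total = num_common / (num_common + errors.sum())

# Omega hierarchical: proportion of total-score variance due to the general factor only.
omega_h = general.sum()**2 / (num_common + errors.sum())

# Explained common variance (ECV) attributable to the general factor.
ecv = (general**2).sum() / ((general**2).sum() + (spec_full**2).sum())

# Percentage of uncontaminated correlations (PUC): share of item pairs whose
# correlation reflects the general factor alone (items from different specific factors).
k = len(general)
total_pairs = k * (k - 1) / 2
within_pairs = sum(n * (n - 1) / 2 for n in group_sizes)
puc = (total_pairs - within_pairs) / total_pairs

# Construct replicability (coefficient H) for the general factor.
h_general = 1 / (1 + 1 / np.sum(general**2 / (1 - general**2)))

print(f"omega total = {omega_total:.3f}, omega_H = {omega_h:.3f}")
print(f"ECV = {ecv:.3f}, PUC = {puc:.3f}, H(general) = {h_general:.3f}")
```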
Copyright (c) 2017 Interacciones
This work is licensed under a Creative Commons Attribution 4.0 International License.
The authors retain the copyright and grant the journal the right of first publication, including the right to edit, reproduce, distribute, exhibit, and communicate the work in the country and abroad through print and digital media.
The digital version of the journal is registered under a Creative Commons license: Attribution 4.0 International (CC BY 4.0). Accordingly, this work may be reproduced, distributed, and publicly communicated in digital format, provided that the authors and Interacciones are credited.
Authors may also enter into separate, additional contractual arrangements for the non-exclusive distribution of the version of the article published in this journal (e.g., depositing it in an institutional repository or publishing it in a book), provided that it is clearly indicated that the work was first published in this journal.