What is true score variance?

The concept of true score variance plays a crucial role in human resource management (HRM), especially in evaluating the standardisation and dependability of assessment instruments (Schmidt & Hunter, 1999). Within psychometric testing, an individual's observed test score comprises two elements: the true score and the error score. The true score represents the individual's genuine ability or underlying trait, whereas the error score captures any discrepancies or imprecision in the measurement procedure (Cronbach, 1951).

As the term implies, true score variance refers to the variability of true scores among individuals (Cronbach, 1951). It signifies the degree to which individuals' true scores differ from one another in a given assessment (Anastasi & Urbina, 1997). Understanding this variance is crucial as it helps gauge the dispersion of true scores, which is essential when evaluating the effectiveness and reliability of an assessment tool (Guion, 2011).

High true score variance indicates that the assessment tool effectively captures meaningful differences in individuals' abilities or traits (Salgado, 2017). This allows HR professionals to make more precise and well-informed decisions during selection and assessment (Hogan & Holland, 2003). Conversely, low true score variance implies that the assessment tool may not sufficiently distinguish between individuals, potentially leading to less accurate decision-making (Viswesvaran & Ones, 2004).
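This decomposition can be sketched numerically. In classical test theory the observed score is the true score plus error, so observed variance is (approximately, when true scores and errors are uncorrelated) the sum of true score variance and error variance, and reliability is the proportion of observed variance that is true score variance. A minimal simulation, where all the numbers are illustrative assumptions rather than real test data:

```python
import random
import statistics

random.seed(42)

# Hypothetical classical test theory simulation: observed = true + error
true_scores = [random.gauss(100, 15) for _ in range(1000)]  # genuine ability
errors = [random.gauss(0, 5) for _ in range(1000)]          # measurement error
observed = [t + e for t, e in zip(true_scores, errors)]

var_true = statistics.pvariance(true_scores)
var_observed = statistics.pvariance(observed)

# Reliability: the share of observed variance attributable to true scores
reliability = var_true / var_observed
print(round(reliability, 2))  # close to 15**2 / (15**2 + 5**2) = 0.90
```

With a larger error standard deviation, reliability drops even though the spread of true ability is unchanged, which is why low reliability blurs real differences between candidates.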

References

  • Anastasi, A., & Urbina, S. (1997). Psychological Testing (7th ed.). Prentice Hall.
  • Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334. https://doi.org/10.1007/BF02310555
  • Guion, R.M. (2011). Assessment, Measurement, and Prediction for Personnel Decisions (2nd ed.). Routledge.
  • Hogan, J., & Holland, B. (2003). Using theory to evaluate personality and job-performance relations: A socioanalytic perspective. Journal of Applied Psychology, 88(1), 100-112. https://doi.org/10.1037/0021-9010.88.1.100
  • Salgado, J.F. (2017). Personnel selection. Oxford Research Encyclopedia of Psychology, 1-32. https://doi.org/10.1093/acrefore/9780190236557.013.8
  • Schmidt, F.L., & Hunter, J.E. (1999). Theory testing and measurement error. Intelligence, 27(3), 183-198. https://doi.org/10.1016/S0160-2896(99)00024-0
  • Viswesvaran, C., & Ones, D.S. (2004). Importance of perceived personnel selection system fairness determinants: Relations with demographic, personality, and job characteristics. International Journal of Selection and Assessment, 12(1-2), 172-186. https://doi.org/10.1111/j.0965-075X.2004.00272.x
Define test-retest, parallel forms, internal consistency, and interrater agreement forms of psychometric evidence.

Test-retest, parallel forms, internal consistency, and interrater agreement are essential forms of psychometric evidence for establishing the reliability and standardisation of HRM assessment tools, particularly in selection and evaluation processes (Putka & Sackett, 2010). Each method is critical for assessing the consistency and accuracy of the tests and measurements used in recruitment (Gatewood et al., 2016).

Test-retest assesses a test's stability over time by administering it to the same group of individuals on two separate occasions (Dessler, 2019). The scores from both administrations are compared to determine the consistency of the test results. A high correlation between the two sets of scores indicates good test-retest reliability, meaning the test is stable and consistent over time.

Parallel forms evaluate the equivalence of two different versions of the same test, each containing unique items measuring the same construct or skill (Kaplan & Saccuzzo, 2017). The forms are given to the same group of individuals, and the results are compared. A high correlation between the scores from the two forms signifies that the alternate test versions are equivalent and can be used interchangeably without affecting the consistency of the measurements.

Internal consistency gauges the consistency of items within a single test, assessing the extent to which all items are related and contribute to the overall construct being measured (Cronbach, 1951). Techniques for determining internal consistency include Cronbach's alpha and split-half reliability. High internal consistency indicates that the test effectively and consistently measures a single construct across all items.
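Cronbach's alpha can be computed directly from item-level data: alpha = (k / (k - 1)) × (1 - Σ item variances / total score variance), where k is the number of items. A small sketch with hypothetical Likert-style responses:

```python
import statistics

# Hypothetical responses: rows = 5 respondents, columns = 4 test items
items = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]

k = len(items[0])  # number of items
item_vars = [statistics.pvariance(col) for col in zip(*items)]
total_var = statistics.pvariance([sum(row) for row in items])

# Cronbach's alpha: high values suggest items measure one common construct
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))
```

When items move together (a respondent high on one item is high on the others), the total score variance dwarfs the sum of item variances and alpha approaches 1.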

Interrater agreement refers to the consistency of ratings provided by different evaluators, which is essential for tests involving subjective judgements, such as performance appraisals or interview evaluations (Kaplan & Saccuzzo, 2017). It is assessed by comparing the scores that multiple raters assign to the same set of individuals. High interrater agreement signifies consistent evaluator judgements, reducing the likelihood that rater bias or subjectivity affects the test results.
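A simple sketch of two common agreement statistics, using hypothetical pass/fail interview ratings: raw percentage agreement, and Cohen's kappa, which corrects raw agreement for the agreement two raters would reach by chance alone.

```python
from collections import Counter

# Hypothetical pass/fail ratings from two evaluators for ten candidates
rater_a = ["pass", "pass", "fail", "pass", "fail",
           "pass", "fail", "fail", "pass", "pass"]
rater_b = ["pass", "pass", "fail", "fail", "fail",
           "pass", "fail", "pass", "pass", "pass"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement

# Cohen's kappa: discount the agreement expected by chance
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
kappa = (observed - expected) / (1 - expected)
print(round(observed, 2), round(kappa, 2))  # raw 0.8, kappa about 0.58
```

The gap between the two numbers is the point: 80% raw agreement looks strong, but once chance agreement is discounted, the kappa of roughly 0.58 indicates only moderate consistency between the raters.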

References

  • Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334. https://doi.org/10.1007/BF02310555
  • Dessler, G. (2019). Human Resource Management (16th ed.). Pearson.
  • Gatewood, R.D., Feild, H.S., & Barrick, M.R. (2016). Human Resource Selection (8th ed.). Cengage Learning. Chapters 6-7, pp. 203-276.
  • Kaplan, R.M., & Saccuzzo, D.P. (2017). Psychological Testing: Principles, Applications, and Issues (9th ed.). Wadsworth Publishing.
  • Murphy, K.R., & Davidshofer, C.O. (2004). Psychological Testing: Principles and Applications (6th ed.). Pearson.
  • Putka, D.J., & Sackett, P.R. (2010). Reliability and validity. In J.L. Farr & N.T. Tippins (Eds.), Handbook of Employee Selection. Routledge. pp. 599-612.