Some authors (e.g., Messick, 1995, 'Validity of Psychological Assessment') state that all validation strategies are, in fact, variations of construct validation strategies. What does this mean?

In HRM selection and assessment, validation and criterion measures are crucial for ensuring the accuracy and effectiveness of various recruitment and evaluation methods (Schmidt & Hunter, 2015). Messick (1995) argued in "Validity of Psychological Assessment" that all validation strategies can essentially be seen as forms of construct validation (Kane, 2006). To comprehend this claim, one must first understand construct validation.

Construct validation evaluates whether a test or assessment tool measures the intended theoretical construct or psychological attribute (Cronbach & Meehl, 1955). Its primary goal is to ensure the tool accurately assesses the intended construct without inadvertently measuring unrelated ones (Strauss & Smith, 2009). Construct validation typically involves gathering evidence from multiple sources, such as correlational studies, experimental manipulations, and factor analyses, to support the tool's validity (Zumbo & Chan, 2014).
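To make the correlational strand of this evidence concrete, the sketch below shows convergent and discriminant validity checks in Python. It is a minimal illustration only: the applicant sample, the construct names, and the simulated scores are invented for this example, not drawn from any of the cited studies.

```python
import numpy as np

# Simulate scores for 200 hypothetical applicants: two measures intended to
# tap the same construct (conscientiousness) and one measure of an
# unrelated construct (verbal ability).
rng = np.random.default_rng(42)
conscientiousness_a = rng.normal(50, 10, 200)
conscientiousness_b = 0.7 * conscientiousness_a + rng.normal(0, 7, 200)
verbal_ability = rng.normal(50, 10, 200)

# Convergent evidence: measures of the same construct should correlate highly.
convergent_r = np.corrcoef(conscientiousness_a, conscientiousness_b)[0, 1]

# Discriminant evidence: correlation with an unrelated construct should be low.
discriminant_r = np.corrcoef(conscientiousness_a, verbal_ability)[0, 1]

print(f"Convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")
```

A high convergent correlation alongside a near-zero discriminant correlation is the pattern construct validation looks for; in practice this logic is extended to full multitrait-multimethod matrices and factor-analytic models.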

Messick's assertion implies that all validation strategies, including content and criterion-related validation, can be viewed as variations of construct validation (Kane, 2006). This is because all validation methods aim to provide evidence supporting the tool's ability to accurately measure the intended construct (Schmidt & Hunter, 2015).

For example, content validation focuses on how well a test's items or tasks represent the construct's domain. In contrast, criterion-related validation examines the relationship between test scores and external criteria, such as job performance or other relevant outcomes (Arthur et al., 2001). Both strategies contribute to the overall construct validity of a test by evaluating different aspects of the assessment tool (Schmidt & Hunter, 2015).
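To illustrate the criterion-related strategy, the following sketch estimates a validity coefficient as the Pearson correlation between selection-test scores and later supervisor ratings of job performance. The ten data points are hypothetical, chosen only to show the computation.

```python
import numpy as np
from scipy import stats

# Hypothetical predictive-validation data: selection-test scores at hire and
# supervisor ratings of job performance collected one year later.
test_scores = np.array([62, 71, 55, 80, 68, 74, 59, 77, 65, 70])
job_performance = np.array([3.1, 3.8, 2.6, 4.4, 3.5, 3.9, 2.9, 4.1, 3.2, 3.6])

# The criterion-related validity coefficient is the correlation between
# test scores and the external criterion.
r, p_value = stats.pearsonr(test_scores, job_performance)
print(f"Criterion-related validity: r = {r:.2f} (p = {p_value:.3f})")
```

Under Messick's view, such a coefficient is not a separate kind of validity but one more piece of evidence that the scores mean what the construct theory says they should mean.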

References

  • Arthur, W., Woehr, D.J., & Graziano, W.G. (2001). Personality testing in employment settings: Problems and issues in the application of typical selection practices. Personnel Review, 30(5), 657-676. https://doi.org/10.1108/EUM0000000005978
  • Cronbach, L.J., & Meehl, P.E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281-302. https://doi.org/10.1037/h0040957
  • Kane, M.T. (2006). Validation. In R.L. Brennan (Ed.), Educational Measurement (4th ed., pp. 17-64). American Council on Education/Praeger.
  • Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50(9), 741-749. https://doi.org/10.1037/0003-066X.50.9.741
  • Schmidt, F.L., & Hunter, J.E. (2015). Methods of Meta-Analysis: Correcting Error and Bias in Research Findings (3rd ed.). Sage.
  • Strauss, M.E., & Smith, G.T. (2009). Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1-25. https://doi.org/10.1146/annurev.clinpsy.032408.153639
  • Zumbo, B.D., & Chan, E.K.H. (2014). Validity and Validation in Social, Behavioural, and Health Sciences. Springer.
Describe the BARS approach to indicating job performance.

Behaviourally Anchored Rating Scales (BARS) are a performance appraisal technique employed in HRM to evaluate employee job performance by focusing on specific, observable job behaviours and outcomes (Smith & Kendall, 1963). Introduced by Smith and Kendall (1963), the method aims to provide a more objective and precise assessment of an employee's work performance (DeNisi & Murphy, 2017).

The BARS method identifies essential performance dimensions for a job, including communication, problem-solving, teamwork, customer service, and other job-specific competencies (Cascio & Aguinis, 2018). Behavioural anchors are established within each dimension, signifying various performance levels from poor to excellent (Smith & Kendall, 1963).
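As an illustration of what a finished scale can look like, the sketch below represents one BARS dimension as a mapping from scale points to behavioural anchors. The dimension name, the seven-point range, and the anchor wording are invented for this example, not taken from Smith and Kendall.

```python
# One hypothetical BARS dimension: scale points mapped to behavioural anchors.
customer_service_bars = {
    7: "Calmly resolves an escalated complaint and follows up the next day",
    5: "Answers routine queries accurately and politely",
    3: "Gives correct information but keeps customers waiting unnecessarily",
    1: "Argues with customers and dismisses their concerns",
}

def nearest_anchor(scale: dict[int, str], rating: float) -> str:
    """Return the behavioural anchor closest to a rater's numeric rating."""
    closest_point = min(scale, key=lambda point: abs(point - rating))
    return scale[closest_point]

# A rating of 6.4 falls between two anchors; the nearer one is reported.
print(nearest_anchor(customer_service_bars, 6.4))
```

Representing the scale this way makes the key property of BARS explicit: a numeric rating is always interpretable in terms of a concrete, observable behaviour.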

The development of BARS typically involves the following steps:

1. Identifying crucial job dimensions: Experts such as supervisors or experienced employees determine the most significant performance aspects for a specific job (DeNisi & Murphy, 2017).
2. Collecting critical incidents: Instances of effective and ineffective job behaviours related to each dimension are gathered from sources such as performance appraisals or interviews (Cascio & Aguinis, 2018).
3. Categorising incidents: Incidents are sorted into the relevant performance dimensions, and redundant or irrelevant ones are discarded (Smith & Kendall, 1963).
4. Scaling and anchoring incidents: A group of raters evaluates the effectiveness or quality of the remaining incidents (Cascio & Aguinis, 2018). Based on these ratings, incidents are arranged along a scale from low to high performance, forming the anchors for each dimension; a simplified sketch of this step follows the list.
5. Developing the final BARS instrument: The anchored scales are integrated into a single tool for assessing employee performance (Cascio & Aguinis, 2018).
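Step 4 can be sketched as follows: each incident's mean rating fixes its position on the scale, and incidents the raters disagree about (high standard deviation) are discarded, loosely following the retranslation-and-scaling logic of Smith and Kendall (1963). The incidents, ratings, and agreement cutoff below are invented for illustration.

```python
import statistics

# Hypothetical effectiveness ratings (1 = poor, 7 = excellent) given by five
# raters to three critical incidents within one performance dimension.
ratings = {
    "Resolves an escalated complaint and follows up": [7, 6, 7, 7, 6],
    "Keeps customers waiting unnecessarily": [3, 2, 3, 4, 3],
    "Argues with customers": [1, 2, 1, 6, 1],  # raters disagree on this one
}

MAX_SD = 1.0  # agreement cutoff: drop incidents with a higher rating spread

anchors = {}
for incident, scores in ratings.items():
    mean, sd = statistics.mean(scores), statistics.stdev(scores)
    if sd <= MAX_SD:                     # keep only well-agreed incidents
        anchors[round(mean)] = incident  # mean rating sets the anchor position

for point in sorted(anchors, reverse=True):
    print(f"{point}: {anchors[point]}")
```

In this toy run the third incident is dropped because the raters could not agree on its effectiveness, which mirrors why low-agreement incidents make poor anchors.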

BARS offers several advantages over traditional rating methods, including reduced bias, clearer performance expectations, and better feedback for employee development (Levy & Williams, 2004). However, developing and maintaining BARS can be time-consuming and resource-intensive, particularly for organisations with a wide range of job roles (DeNisi & Murphy, 2017).

References

  • Cascio, W.F., & Aguinis, H. (2018). Applied Psychology in Talent Management (8th ed.). Sage.
  • DeNisi, A.S., & Murphy, K.R. (2017). Performance appraisal and performance management: 100 years of progress? Journal of Applied Psychology, 102(3), 421-433. https://doi.org/10.1037/apl0000085
  • Levy, P.E., & Williams, J.R. (2004). The social context of performance appraisal: A review and framework for the future. Journal of Management, 30(6), 881-905. https://doi.org/10.1016/j.jm.2004.06.005
  • Smith, P.C., & Kendall, L.M. (1963). Retranslation of expectations: An approach to the construction of unambiguous anchors for rating scales. Journal of Applied Psychology, 47(2), 149-155. https://doi.org/10.1037/h0047060