
Respond to colleagues’ posting in one or more of the following ways:

Address the content of each colleague’s analysis and evaluation of the topic and of the integration of the relevant resources.
Extend or constructively challenge your colleagues’ work.

Please note that for each response you must include a minimum of one appropriately cited scholarly reference.

 

POSTING

Reliability and validity are psychometric properties of measurement scales and the standards by which the adequacy and accuracy of measurement procedures are judged in research. It is important to consider reliability and validity when creating a research design, planning methods, and writing up results, especially in quantitative analysis (Taber, 2018). Reliability reflects the extent to which a variable (or a set of variables) is consistent in what it measures. Validity is a check used to ascertain whether an instrument actually reflects the concept it is intended to measure. A researcher needs to know and understand the reliability and validity of their instruments (Green & Salkind, 2017); failing to do so can introduce several types of research bias and seriously undermine the work.

Reliability and validity are closely related, but they mean different things. A measurement can be reliable without being valid; however, if a measurement is valid, it is usually also reliable. Reliability refers to how consistently a method measures something. A measure is considered reliable if the same result can be achieved using the same methods under the same circumstances. Reliability assists the researcher in producing a valid assessment, and validity gives one confidence in making predictions (Green & Salkind, 2017). Even though reliability is essential for a study, it is not sufficient on its own; it must be combined with validity. The instrumentation process and logical implications influence validity, while test length, scoring, and heterogeneity influence reliability. For instance, if we measure the temperature of a liquid sample several times under identical conditions and the thermometer displays the same temperature every time, the results are reliable. In contrast, if the thermometer shows different temperatures each time, even though conditions have been carefully controlled so that the sample’s temperature stays the same, the thermometer is probably malfunctioning; its measurements are unreliable and therefore cannot be valid. Research with high validity produces results that correspond to real properties and characteristics (Tang, 2015). Validity is more challenging to assess than reliability. The methods a researcher uses to collect data must be valid to obtain worthwhile results: the research must measure what it claims to measure.
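
To make the distinction concrete, here is a minimal simulation sketch of the thermometer example (illustrative only; the instruments and numbers are hypothetical, not taken from the cited sources). A consistently biased thermometer is reliable but not valid, while a noisy thermometer is unreliable and therefore cannot be valid.

```python
import numpy as np

# Illustrative simulation (hypothetical numbers): ten repeated readings of a
# liquid sample whose true temperature is 37.0 degrees.
rng = np.random.default_rng(1)
true_temp = 37.0

# Thermometer A: consistently reads 2 degrees too high -> reliable, but not valid.
thermometer_a = true_temp + 2.0 + rng.normal(0, 0.05, 10)
# Thermometer B: fluctuates widely -> unreliable, and therefore cannot be valid.
thermometer_b = true_temp + rng.normal(0, 3.0, 10)

for name, readings in [("A (biased but consistent)", thermometer_a),
                       ("B (noisy)", thermometer_b)]:
    print(name, "mean =", round(readings.mean(), 2),
          "spread (SD) =", round(readings.std(ddof=1), 2))
```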

In my chosen article, Chaudhary et al. (2013) examined the internal consistency of the scale and of each factor using Cronbach’s alpha. The Human Resource Development (HRD) climate survey instrument assessed the level of HRD climate in the organizations under study. The HRD climate questionnaire consists of 38 items rated on a 5-point scale (almost always true, mostly true, sometimes true, rarely true, and not at all true); average scores around 3 indicate a moderate tendency on that dimension in the organization, while scores around 4 indicate a fairly good degree of that dimension. The Cronbach’s alpha value for the 38-item HRD climate survey instrument was 0.953. The Cronbach’s alpha values for the factors were: 0.863 for Management Belief and Commitment to HRD (9 items), 0.868 for Employee Development (10 items), 0.768 for Autonomy, Openness & Authenticity (5 items), 0.788 for Rewards, Performance & Potential Appraisal (6 items), 0.684 for Superior-Subordinate Relationship (4 items), and 0.722 for Trust, Collaboration and Team Spirit (4 items). Item analyses of the responses revealed that removing any item did not improve the Cronbach’s alpha value. According to Bujang et al. (2018), Cronbach’s alpha is generally applied to test the consistency and stability of questionnaires that measure latent variables, and it is also used in questionnaire development and validation. It is considered a simple measure of scale reliability, assuming that multiple items measure the same underlying variable.
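
Because the article reports only the resulting coefficients, a minimal sketch of how such a value could be computed from raw item responses may be helpful; the respondents and responses below are simulated and hypothetical, not Chaudhary et al.’s (2013) data.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha from a respondents-by-items matrix of scale responses.

    Standard formula: alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    """
    k = item_scores.shape[1]                              # number of items in the scale
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 200 simulated respondents answering a 38-item, 5-point scale
# whose items all reflect one shared construct, so alpha comes out high.
rng = np.random.default_rng(0)
shared_construct = rng.normal(size=(200, 1))
responses = np.clip(np.round(3 + shared_construct + rng.normal(scale=0.8, size=(200, 38))), 1, 5)
print(round(cronbach_alpha(responses), 3))
```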

Cronbach’s alpha is an important statistic for evaluating assessments and questionnaires; it helps determine whether a collection of items consistently measures the same characteristic. Cronbach’s alpha quantifies the level of agreement on a standardized 0 to 1 scale (Chaudhary et al., 2013). If the instrument is reliable, there should be substantial covariance among the items relative to the total variance. Cronbach’s alpha is equivalent to taking the average of all possible split-half reliabilities. In other words, it assesses the reliability, or internal consistency, of a set of scale or test items: the extent to which they provide a consistent measure of a concept. If all of the scale items are entirely independent of one another (i.e., they are uncorrelated and share no covariance), then alpha = 0; if all of the items have high covariances, alpha will approach 1 as the number of items in the scale increases. In other words, the higher the alpha coefficient, the more the items share covariance and probably measure the same underlying concept. However, Cronbach’s alpha is neither a measure of dimensionality nor a test of unidimensionality (Green & Salkind, 2017). It is also not a measure of validity, that is, the extent to which a scale records the actual value of the concept researchers are trying to measure without capturing any unintended characteristics. Researchers will need more than a simple reliability test to fully assess how good a scale is at measuring a concept.
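
One way to see the behaviour described above is through the standardized form of alpha, alpha = k * r / (1 + (k - 1) * r), where r is the average inter-item correlation and k is the number of items (a textbook formula, not specific to the cited article). The short sketch below shows that alpha is 0 when items are uncorrelated and climbs toward 1 as more correlated items are added.

```python
def standardized_alpha(k: int, r: float) -> float:
    """Standardized Cronbach's alpha from the number of items k and the average inter-item correlation r."""
    return (k * r) / (1 + (k - 1) * r)

for k in (4, 10, 38):
    print(k, round(standardized_alpha(k, 0.0), 3), round(standardized_alpha(k, 0.3), 3))
# k = 4  -> 0.0 and 0.632
# k = 10 -> 0.0 and 0.811
# k = 38 -> 0.0 and 0.942
```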

Cronbach’s alpha is an excellent tool but insufficient on its own to achieve my research goals. Despite its advantages, Cronbach’s alpha has some limitations that researchers should know. It assumes the items are unidimensional, meaning they measure only one construct or factor; however, this may not always be the case, particularly when the items cover a broad or complex topic. Reliability and validity can be strengthened through data triangulation, which involves combining various perspectives and methods to produce comprehensive study findings (Saunders et al., 2019). They can also be improved by reducing personal bias during data collection.

Reliability and validity should be considered at the very earliest stages of research, when the researcher decides how the data will be collected. When using a tool or technique to collect data, the results must be precise, stable, and reproducible. Appropriate sampling methods and methods of measurement are essential.

References

Bujang, M. A., Omar, E. D., & Baharum, N. A. (2018). A review on sample size determination for Cronbach’s alpha test: A simple guide for researchers. The Malaysian Journal of Medical Sciences, 25(6), 85-99. https://doi.org/10.21315/mjms2018.25.6.9

Chaudhary, R., Rangnekar, S., & Barua, M. (2013). Human resource development climate in India: Examining the psychometric properties of HRD climate survey instrument. Vision, 17(1), 41-52. https://doi.org/10.1177/0972262912469564

Green, S. B., & Salkind, N. J. (2017). Using SPSS for Windows and Macintosh: Analyzing and understanding data (8th ed.). Upper Saddle River, NJ: Pearson.

Saunders, M. N. K., Lewis, P., & Thornhill, A. (2019). Research methods for business students (8th ed.). Pearson Education Unlimited.

Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48(6), 1273-1296. https://doi.org/10.1007/s11165-016-9602-2

Tang, K. (2015). Estimating productivity costs in health economic evaluations: A review of instruments and psychometric evidence. Pharmacoeconomics, 33(1), 31-48. https://doi.org/10.1007/s40273-014-0209-z