Measurement of Reliability


One way to measure reliability is through correlation. Correlation is a measure of the relationship between two variables, rather than of agreement, and it is commonly used to assess reliability and validity.

Understanding correlation:

Two variables- two administrations of the same test, or administrations of equivalent forms

Correlation ranges from +1.00 to -1.00

Perfect positive correlation- +1.00 (the stronger the positive relationship, the more reliable)

Perfect negative correlation- -1.00

No correlation- 0

Correlation can be demonstrated through a scattergram, which is a graphic representation of correlation. The more closely the dots on a scattergram approximate a straight line, the nearer to perfect the correlation.
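
To make this concrete, here is a minimal Python sketch (assuming numpy and matplotlib are available; the two score lists are made up for illustration) that computes a correlation coefficient and draws a scattergram:

    # Sketch: correlation between two hypothetical sets of test scores,
    # displayed as a scattergram.
    import numpy as np
    import matplotlib.pyplot as plt

    scores_1 = np.array([78, 85, 90, 62, 71, 88, 95, 67])   # hypothetical scores
    scores_2 = np.array([80, 82, 93, 65, 70, 85, 97, 64])   # hypothetical scores

    r = np.corrcoef(scores_1, scores_2)[0, 1]   # ranges from -1.00 to +1.00
    print(f"correlation r = {r:.2f}")

    plt.scatter(scores_1, scores_2)              # each dot is one student
    plt.xlabel("First set of scores")
    plt.ylabel("Second set of scores")
    plt.title(f"Scattergram (r = {r:.2f})")      # dots near a straight line = near-perfect correlation
    plt.show()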


Methods of measuring reliability


To build a better understanding of reliability, a video by Daniel Hickey about assessment practices, principles, and possibilities was shown in class. The video explained the methods of measuring reliability.

Test-retest reliability- the consistency/stability of test results over time. This method can run into difficulties, such as the timing of the testing and intervening variables like learning and maturation.
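
As a rough sketch, assuming hypothetical scores from two administrations of the same test to the same students, test-retest reliability could be estimated by correlating the two sets of results:

    # Sketch: test-retest reliability as the correlation between two
    # administrations of the same test (hypothetical scores).
    from scipy.stats import pearsonr

    week_1 = [55, 72, 88, 64, 91, 70, 60, 83]   # first administration
    week_4 = [58, 70, 90, 61, 94, 73, 57, 85]   # same test, same students, weeks later

    r, _ = pearsonr(week_1, week_4)
    print(f"test-retest reliability r = {r:.2f}")
    # Learning or maturation between the two testings can lower this value.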


Equivalent forms reliability- determines whether two forms of a test are comparable. It is much more commonly used for standardized tests, and it involves a more complex mathematical procedure in which scores on the two forms are correlated to produce a reliability coefficient.
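
A minimal sketch, again with made-up scores, of how the two forms' comparability and correlation could be checked:

    # Sketch: equivalent-forms reliability with hypothetical scores on
    # Form A and Form B from the same group of students.
    import numpy as np

    form_a = np.array([40, 35, 48, 29, 44, 38, 42, 31])
    form_b = np.array([41, 33, 47, 31, 45, 36, 40, 33])

    # Comparable forms should show similar means and spreads...
    print(f"mean A/B: {form_a.mean():.1f} / {form_b.mean():.1f}")
    print(f"sd   A/B: {form_a.std(ddof=1):.1f} / {form_b.std(ddof=1):.1f}")

    # ...and a high correlation between them.
    r = np.corrcoef(form_a, form_b)[0, 1]
    print(f"equivalent-forms reliability r = {r:.2f}")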


Split-half reliability- takes all of the available items on a test and splits them into two halves, which are then scored and correlated.
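
A small sketch of that idea, using a hypothetical right/wrong item matrix and an odd/even split; the Spearman-Brown step at the end is a standard correction (not mentioned above) that estimates the reliability of the full-length test:

    # Sketch: split-half reliability on a hypothetical 0/1 item matrix
    # (rows = students, columns = items), split into odd and even items.
    import numpy as np

    items = np.array([
        [1, 0, 1, 1, 0, 1, 1, 0],
        [1, 1, 1, 1, 1, 1, 0, 1],
        [0, 0, 1, 0, 0, 1, 0, 0],
        [1, 1, 1, 1, 1, 1, 1, 1],
        [0, 1, 0, 1, 0, 0, 1, 0],
    ])

    half_1 = items[:, 0::2].sum(axis=1)   # odd-numbered items
    half_2 = items[:, 1::2].sum(axis=1)   # even-numbered items

    r_half = np.corrcoef(half_1, half_2)[0, 1]
    r_full = 2 * r_half / (1 + r_half)    # Spearman-Brown correction
    print(f"half-test r = {r_half:.2f}, corrected full-test estimate = {r_full:.2f}")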


Kuder-Richardson 20- evaluates consistency across the items of an instrument whose answers are scored as right or wrong.
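
A sketch of the KR-20 calculation on a hypothetical right/wrong (0/1) response matrix:

    # Sketch: Kuder-Richardson 20 for dichotomous (0/1) items.
    import numpy as np

    x = np.array([          # rows = students, columns = items (hypothetical)
        [1, 1, 0, 1, 1],
        [1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 1],
        [0, 1, 0, 0, 0],
    ])

    k = x.shape[1]                   # number of items
    p = x.mean(axis=0)               # proportion answering each item correctly
    q = 1 - p                        # proportion answering incorrectly
    total_var = x.sum(axis=1).var()  # variance of the students' total scores

    kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
    print(f"KR-20 = {kr20:.2f}")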


Coefficient Alpha- checks consistency across items of an instrument where credit varies across responses.
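
A sketch of coefficient alpha on a hypothetical matrix of partial-credit item scores (0-3 points per item):

    # Sketch: coefficient (Cronbach's) alpha for items where the credit
    # given varies across responses.
    import numpy as np

    x = np.array([          # rows = students, columns = items (hypothetical 0-3 ratings)
        [3, 2, 3, 2],
        [1, 1, 2, 1],
        [2, 2, 2, 3],
        [0, 1, 1, 0],
        [3, 3, 2, 3],
        [2, 1, 2, 2],
    ])

    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)       # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)   # variance of the total scores

    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print(f"coefficient alpha = {alpha:.2f}")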


Interrater reliability- the consistency of a test across examiners: one person administers and scores the test, a second person rescores it, and the two sets of scores are then correlated to determine how much variability exists between scorers.
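
A minimal sketch, assuming two examiners' hypothetical scores for the same set of student papers:

    # Sketch: interrater reliability as the correlation between two raters'
    # scores for the same papers (hypothetical ratings).
    from scipy.stats import pearsonr

    rater_1 = [4, 3, 5, 2, 4, 3, 5, 1]   # first examiner's scores
    rater_2 = [4, 3, 4, 2, 5, 3, 5, 2]   # second examiner rescores the same papers

    r, _ = pearsonr(rater_1, rater_2)
    print(f"interrater reliability r = {r:.2f}")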


Each of these reliability methods is exercised within the classroom setting according to the specific needs of students. It is necessary to evaluate which type of reliability is best; we learned that this depends on the kind of consistency needed: consistency over time, consistency across the items on a test, or consistency of scores across scorers.

However, a basic assumption of assessment is that errors occur. These errors vary, for example a poor testing environment, errors in the test itself, and student variables (hungry, stressed, tired, etc.). When these errors occur, applying the standard error of measurement (SEM) simplifies the process; the SEM varies by age group and subtest, and when it is applied, seemingly significant discrepancies may turn out not to be significant.
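
A small sketch of how an SEM could be computed and used to put a confidence band around an observed score; the standard deviation, reliability coefficient, and score below are hypothetical:

    # Sketch: standard error of measurement (SEM) and a ~95% band around
    # an observed score (all values hypothetical).
    import math

    sd = 15.0            # standard deviation of the test's scores
    reliability = 0.90   # reliability coefficient from one of the methods above
    observed = 102       # one student's observed score

    sem = sd * math.sqrt(1 - reliability)
    low, high = observed - 1.96 * sem, observed + 1.96 * sem
    print(f"SEM = {sem:.1f}; true score likely between {low:.0f} and {high:.0f}")
    # A difference between two scores smaller than this band may not be significant.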
