
Friday, November 21, 2008

Reliability



Question: What Is Reliability?

Answer: Reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly. For example, if a test is designed to measure a trait (such as introversion), then each time the test is administered to a subject, the results should be approximately the same. Unfortunately, it is impossible to calculate reliability exactly, but there are several different ways to estimate it.

Test-Retest Reliability

To gauge test-retest reliability, the test is administered twice at two different points in time. This kind of reliability is used to assess the consistency of a test across time. This type of reliability assumes that there will be no change in the quality or construct being measured. Test-retest reliability is best used for things that are stable over time, such as intelligence. Generally, reliability will be higher when little time has passed between tests.
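As a minimal sketch, test-retest reliability can be estimated as the Pearson correlation between the two administrations. The scores below are made up for illustration:

```python
# Test-retest reliability as the Pearson correlation between two
# administrations of the same test (scores invented for illustration).

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 17, 11]   # scores at the first administration
time2 = [13, 14, 10, 19, 18, 12]  # same subjects, second administration

print(f"test-retest reliability: {pearson(time1, time2):.2f}")  # 0.98
```

A value close to 1.0, as here, suggests the construct stayed stable between the two testing occasions.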

Inter-rater Reliability

This type of reliability is assessed by having two or more independent judges score the test. The scores are then compared to determine the consistency of the raters' estimates. One way to test inter-rater reliability is to have each rater assign each test item a score. For example, each rater might score items on a scale from 1 to 10. Next, you would calculate the correlation between the two ratings to determine the level of inter-rater reliability. Another means of testing inter-rater reliability is to have raters determine which category each observation falls into and then calculate the percentage of agreement between the raters. So, if the raters agree 8 out of 10 times, the test has an 80% inter-rater reliability rate.
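Both approaches described above can be sketched in a few lines; the ratings here are hypothetical:

```python
# Two ways to estimate inter-rater reliability, using made-up ratings.

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# (1) Correlate two raters' 1-10 scores on the same ten items.
rater1 = [7, 4, 9, 6, 8, 5, 7, 3, 6, 8]
rater2 = [6, 5, 9, 6, 7, 5, 8, 3, 6, 7]
print(f"score correlation: {pearson(rater1, rater2):.2f}")

# (2) Percentage agreement on category judgments.
cats1 = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes", "no", "yes"]
cats2 = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "yes"]
agreement = sum(a == b for a, b in zip(cats1, cats2)) / len(cats1)
print(f"agreement: {agreement:.0%}")  # raters agree 8 of 10 times -> 80%
```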

Parallel-Forms Reliability

Parallel-forms reliability is gauged by comparing two different tests that were created using the same content. This is accomplished by creating a large pool of test items that measure the same quality and then randomly dividing the items into two separate tests. The two tests should then be administered to the same subjects at the same time.
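The random split described above can be sketched as follows; the item names are placeholders, not items from any real instrument:

```python
import random

# Building parallel forms: start from one pool of items that all measure
# the same construct, then randomly split it into two equal-length forms.

pool = [f"item_{i:02d}" for i in range(1, 21)]  # 20 items, one construct
random.seed(42)                                 # fixed seed so the split repeats
shuffled = random.sample(pool, k=len(pool))     # random order, pool unchanged
form_a, form_b = shuffled[:10], shuffled[10:]

print(len(form_a), len(form_b))  # 10 10
```

Both forms would then be given to the same subjects in a single session, and the two total scores correlated to estimate parallel-forms reliability.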

Internal Consistency Reliability

This form of reliability is used to judge the consistency of results across items on the same test. Essentially, you are comparing test items that measure the same construct to determine the test's internal consistency. When you see a question that seems very similar to another test question, it may indicate that the two questions are being used to gauge reliability. Because the two questions are similar and designed to measure the same thing, the test taker should answer both questions the same way, which would indicate that the test has internal consistency.
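One simple way to quantify this idea is split-half reliability: correlate subjects' scores on one half of the items with their scores on the other half, then apply the Spearman-Brown correction to estimate reliability at full test length. A minimal sketch with invented answer data:

```python
# Split-half reliability with Spearman-Brown correction (made-up data).

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Each row is one subject's answers to six items on a 1-5 scale.
answers = [
    [4, 4, 5, 4, 3, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [1, 2, 1, 2, 1, 1],
]
odd  = [sum(row[0::2]) for row in answers]  # items 1, 3, 5
even = [sum(row[1::2]) for row in answers]  # items 2, 4, 6
r_half = pearson(odd, even)
r_full = 2 * r_half / (1 + r_half)  # Spearman-Brown step-up formula
print(f"split-half: {r_half:.2f}, corrected: {r_full:.2f}")
```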
By Kendra Van Wagner
http://psychology.about.com/mbiopage.htm
-------------------------------------------------------------------------------------
Added by: Pamenan Mato Nan Hilang
Internal consistency applies to the consistency among the variables in a summated scale. The rationale for internal consistency is that the individual items or indicators of the scale should all be measuring the same construct and thus be highly intercorrelated (Hair et al. 2006, p. 137).
Because no single item is a perfect measure of a concept, we must rely on a series of diagnostic measures to assess internal consistency.

1. The first measures we consider relate to each separate item, including the item-to-total correlation (the correlation of the item with the summated scale score) and the inter-item correlation (the correlation among items). Rules of thumb suggest that item-to-total correlations exceed 0.50 and that inter-item correlations exceed 0.30.
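These item-level diagnostics can be sketched as below. The three-item data are invented, and for simplicity the total score here includes the item itself (a stricter variant correlates each item with the total of the *remaining* items):

```python
# Item-to-total correlations (rule of thumb: > 0.50) and inter-item
# correlations (rule of thumb: > 0.30) for a hypothetical 3-item scale.

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

items = {
    "q1": [4, 5, 3, 4, 2, 5, 1, 4],
    "q2": [4, 4, 3, 5, 2, 4, 2, 5],
    "q3": [5, 4, 2, 4, 3, 5, 1, 4],
}
n_subjects = len(items["q1"])
totals = [sum(scores[i] for scores in items.values()) for i in range(n_subjects)]

for name, scores in items.items():
    r = pearson(scores, totals)
    print(f"{name} item-to-total: {r:.2f} ({'ok' if r > 0.50 else 'low'})")

names = list(items)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = pearson(items[names[i]], items[names[j]])
        print(f"{names[i]}-{names[j]} inter-item: {r:.2f} "
              f"({'ok' if r > 0.30 else 'low'})")
```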

2. The second type of diagnostic measure is the reliability coefficient, which assesses the consistency of the entire scale, with Cronbach's alpha being the most widely used measure. The generally agreed-upon lower limit for Cronbach's alpha is 0.70, although it may decrease to 0.60 in exploratory research. One issue in assessing Cronbach's alpha is its positive relationship to the number of items in the scale: increasing the number of items, even with the same degree of intercorrelation, will increase the reliability value, so researchers must place more stringent requirements on scales with large numbers of items.
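Cronbach's alpha has a simple closed form, alpha = k/(k-1) * (1 - sum of item variances / variance of total score), and can be computed directly; the item data below are invented:

```python
# Cronbach's alpha for a summated scale (hypothetical 3-item data).
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))

def variance(x):
    """Sample variance (n - 1 denominator)."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

items = [
    [4, 5, 3, 4, 2, 5, 1, 4],  # q1, one score per subject
    [4, 4, 3, 5, 2, 4, 2, 5],  # q2
    [5, 4, 2, 4, 3, 5, 1, 4],  # q3
]
k = len(items)
totals = [sum(col) for col in zip(*items)]  # each subject's summated score

alpha = k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))
print(f"Cronbach's alpha: {alpha:.2f}")  # 0.92, above the 0.70 lower limit
```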

3. Also available are reliability measures derived from confirmatory factor analysis. Included in these measures are the composite reliability and the average variance extracted (AVE).
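Given standardized loadings from a confirmatory factor analysis, both measures reduce to short formulas: CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances) and AVE = mean squared loading. The loadings below are assumed values, not output of any real analysis:

```python
# Composite reliability (CR) and average variance extracted (AVE)
# from standardized factor loadings (assumed values for illustration).
#   CR  = (sum L)^2 / ((sum L)^2 + sum(1 - L^2))
#   AVE = sum(L^2) / k

loadings = [0.78, 0.82, 0.70, 0.75]  # assumed standardized loadings
k = len(loadings)
sum_l = sum(loadings)
sum_err = sum(1 - l ** 2 for l in loadings)  # error variance per indicator

cr = sum_l ** 2 / (sum_l ** 2 + sum_err)
ave = sum(l ** 2 for l in loadings) / k
print(f"CR = {cr:.2f}, AVE = {ave:.2f}")  # CR = 0.85, AVE = 0.58
```

Common rules of thumb look for CR above 0.70 and AVE above 0.50, which these assumed loadings satisfy.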

