Journal of Business and Behavior Sciences

The Consistency and Validity of Online User Ratings of Movie and DVD Quality

INTRODUCTION

Consumer buying behavior has changed considerably with the substantial growth of the Internet. Word-of-mouth now often takes the form of online ratings posted by users of a product. Online reviews are available for many products, from hotels to books (Zhang et al., 2010), and are a major source of information for purchase decisions (Liu, 2006). With this increasing availability of ratings, researchers have begun to use rating information, especially average user ratings, typically referred to as valence, in many ways. For movies, valence has often been related to sales, with some finding that valence predicts sales (Dellarocas & Zhang, 2007) and others finding no relationship between the two (Chintagunta, Gopinath, & Venkataraman, 2010).

While it is possible that this difference in results is due to differences in the movies considered (Purnawirawan et al., 2015), it is also possible that investigators should not so quickly assume that user ratings are reliable and valid sources of information, or that sites presumably measuring quality are valid for the same purposes. Further, low consistency across sites can suggest low reliability, which lowers the correlation between such ratings and the criteria under investigation because the power to discover relationships is weakened (Nunnally, 1978; Traub, 1994). It can also suggest that the validity of the ratings is not strong or that the measures reflect different quality dimensions (Kline, 1993).
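The psychometric reasoning behind this claim can be stated compactly with the standard correction-for-attenuation result from classical test theory; the formula below is that general result, not something derived in the article itself:

\[
r_{xy} \;=\; \rho_{xy}\,\sqrt{r_{xx}\,r_{yy}}
\]

where r_{xy} is the observed correlation between two fallible measures (for example, average user ratings and sales), \rho_{xy} is the correlation between their true scores, and r_{xx} and r_{yy} are the reliabilities of the two measures. As either reliability drops, the observed correlation is pulled toward zero regardless of the strength of the true relationship.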

These problems are, in part, due to the nature of online ratings. They are not collected in controlled settings, and the rating scales used can sometimes be weak; none were developed for the purpose of academic research. Though the focus of this study is on movies, these weaknesses can foster random measurement error and produce rating inconsistency and validity problems across database sources for books, computers, or any other product. Generally, there is very little research on the cross-platform consistency and validity of user ratings.

PRIOR RESEARCH

The reliability of movie ratings has been a concern for quite some time, and the results have varied. Cosley et al. (2003) found test-retest reliability to be relatively strong at .70 for 40 randomly selected movies rated by viewers over time, suggesting at least that users agree with their own earlier ratings. Similarly, Amatriain, Pujol, and Oliver (2009) found strong test-retest reliability of .88 for 100 movies rated a second time at least 15 days after they were first rated.

Thus, it appears users agree with themselves over time, but this is not the same as users agreeing with one another across two different rating platforms. It is possible for individual ratings to be reliable in a test-retest sense while users still differ from one another. For example, agreement between movie critics has been found to be quite weak by some researchers (Agresti & Winner, 1997). Assessing the reliability of individual ratings is a good way to better understand average ratings, but it does not assure that the average ratings will be reliable or valid, and ultimately the average ratings are the predictor of concern (Tett, Jackson, & Rothstein, 1991).

One method for assessing the reliability of average ratings is to check their consistency across platforms. This type of research has been infrequent and has shown varying results for movies. Plucker et al. (2009) found a low-to-moderate correlation of .43 between mean ratings by students and critics. However, they found a moderate correlation of .65 between mean student ratings and mean user ratings on both the IMDb and boxofficemojo.com sites.
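To make that method concrete, the following is a minimal sketch, not taken from the article, of computing cross-platform consistency as a Pearson correlation between mean ratings of the same movies on two platforms; the rating values and platform labels are hypothetical.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical mean user ratings for the same five movies on two platforms.
    # These values are illustrative only, not data from the studies cited above.
    platform_a = np.array([7.1, 6.4, 8.2, 5.9, 7.8])  # e.g., a 1-10 scale
    platform_b = np.array([3.6, 3.1, 4.3, 2.8, 4.0])  # e.g., a 1-5 scale

    # Cross-platform consistency of the average ratings: the Pearson correlation
    # between the two sets of means. Different scale ranges do not matter because
    # correlation is invariant to linear rescaling.
    r, p_value = pearsonr(platform_a, platform_b)
    print(f"cross-platform correlation r = {r:.2f} (p = {p_value:.3f})")

High correlations across a large sample of movies would indicate that the platforms' average ratings order the movies similarly; low correlations would raise the reliability and validity concerns described above.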

One factor that seems to affect the consistency of mean ratings across users is user experience. In the Plucker et al. (2009) study, the average ratings of students who rarely saw movies showed low correlations of .22 with mean critic ratings and of .30 and .29 with mean ratings on the IMDb and boxofficemojo.com platforms, respectively. However, the correlation between mean student ratings and mean critic ratings increased to …
