Collaborative Filtering: The Aim of Recommender Systems and the Significance of User Ratings

Jennifer Redpath, David Glass, Sally McClean, Luke Chen

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

7 Citations (Scopus)

Abstract

This paper investigates the significance of numeric user ratings in recommender systems by considering their inclusion or exclusion in both the generation and evaluation of recommendations. When standard evaluation metrics are used, experimental results show that inclusion of numeric rating values in the recommendation process does not enhance the results. However, evaluating the accuracy of a recommender algorithm requires identifying the aim of the system. Evaluation metrics such as precision and recall evaluate how well a system performs at recommending items that have previously been rated by the user. By contrast, a new metric, known as Approval Rate, is intended to evaluate how well a system performs at recommending items that would be rated highly by the user. Experimental results demonstrate that these two aims are not synonymous and that attempting both with a single algorithm obscures the investigation. The results also show that appropriate use of numeric rating values in the process of calculating user similarity can enhance performance when Approval Rate is used.
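
The contrast between the two evaluation aims can be illustrated with a minimal sketch. The snippet below is not taken from the paper: the abstract does not give the exact definition of Approval Rate, so the `approval_rate` function, its `threshold` parameter, and the toy data are assumptions used only to contrast "recommending previously rated items" (precision/recall) with "recommending items the user would rate highly".

```python
# Illustrative sketch only. Assumption: Approval Rate is taken here to be the
# fraction of recommended items whose held-out rating meets an approval threshold.

def precision_recall(recommended, rated_items):
    """Precision/recall against the set of items the user has previously rated."""
    hits = [i for i in recommended if i in rated_items]
    precision = len(hits) / len(recommended) if recommended else 0.0
    recall = len(hits) / len(rated_items) if rated_items else 0.0
    return precision, recall

def approval_rate(recommended, ratings, threshold=4):
    """Assumed form of Approval Rate: share of recommended (and rated) items
    whose rating is at or above `threshold` (hypothetical parameter)."""
    rated_recs = [i for i in recommended if i in ratings]
    if not rated_recs:
        return 0.0
    approved = [i for i in rated_recs if ratings[i] >= threshold]
    return len(approved) / len(rated_recs)

# Toy usage: held-out ratings on a 1-5 scale and one recommendation list.
ratings = {"a": 5, "b": 2, "c": 4, "d": 1}
recommended = ["a", "b", "e"]
print(precision_recall(recommended, set(ratings)))  # (0.67, 0.5): hit-based accuracy
print(approval_rate(recommended, ratings))          # 0.5: quality of the hits
```

The point of the sketch is that a recommendation list can score well on the hit-based metrics while recommending items the user rated poorly, which is the distinction the paper's Approval Rate metric is intended to capture.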
Original language: English
Title of host publication: Advances in Information Retrieval
Publisher: Springer
Pages: 394-406
ISBN (Print): 978-3-642-12274-3
Publication status: Published (in print/issue) - 2010
