Collaborative Filtering: The Aim of Recommender Systems and the Significance of User Ratings

Jennifer Redpath, David Glass, Sally McClean, Luke Chen

Research output: Chapter in Book/Report/Conference proceeding › Chapter

7 Citations (Scopus)

Abstract

This paper investigates the significance of numeric user ratings in recommender systems by considering their inclusion/exclusion in both the generation and evaluation of recommendations. When standard evaluation metrics are used, experimental results show that inclusion of numeric rating values in the recommendation process does not enhance the results. However, evaluating the accuracy of a recommender algorithm requires identifying the aim of the system. Evaluation metrics such as precision and recall evaluate how well a system performs at recommending items that have been previously rated by the user. By contrast, a new metric, known as Approval Rate, is intended to evaluate how well a system performs at recommending items that would be rated highly by the user. Experimental results demonstrate that these two aims are not synonymous and that for an algorithm to attempt both obscures the investigation. The results also show that appropriate use of numeric rating values in the process of calculating user similarity can enhance the performance when Approval Rate is used.
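The distinction the abstract draws can be illustrated with a small sketch. Precision and recall reward recommending items the user has rated at all, whereas an Approval-Rate-style metric asks what fraction of the rated recommendations were rated *highly*. The paper defines Approval Rate precisely; the threshold-based variant below (with a hypothetical `high_threshold` parameter) is only an illustration of the contrast, not the paper's definition.

```python
def precision_recall(recommended, ratings):
    """Precision/recall: how well do recommendations cover previously rated items?"""
    hits = [item for item in recommended if item in ratings]
    precision = len(hits) / len(recommended) if recommended else 0.0
    recall = len(hits) / len(ratings) if ratings else 0.0
    return precision, recall

def approval_rate(recommended, ratings, high_threshold=4):
    """Illustrative Approval-Rate-style metric: of the recommended items the
    user has rated, what fraction carry a high rating (>= high_threshold)?"""
    rated_recs = [item for item in recommended if item in ratings]
    if not rated_recs:
        return 0.0
    approved = sum(1 for item in rated_recs if ratings[item] >= high_threshold)
    return approved / len(rated_recs)

# Toy example: one user's ratings on a 1-5 scale and a recommendation list.
ratings = {"a": 5, "b": 2, "c": 4, "d": 1}
recs = ["a", "b", "e"]  # "e" is unrated by this user

p, r = precision_recall(recs, ratings)   # 2 of 3 recs rated; 2 of 4 rated items covered
ar = approval_rate(recs, ratings)        # of the rated recs {a, b}, only "a" is rated highly
```

Note how the two measures can diverge: a list full of previously rated but disliked items scores well on precision yet poorly on Approval Rate, which is the non-synonymy of aims the abstract describes.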
Language: English
Title of host publication: Advances in Information Retrieval
Pages: 394-406
DOI: 10.1007/978-3-642-12275-0_35
Publication status: Published - 2010

Fingerprint

Collaborative filtering
Recommender systems

Cite this

@inbook{f67bfcfe7e2242e7a031837f3ac8cb9a,
title = "Collaborative Filtering: The Aim of Recommender Systems and the Significance of User Ratings",
abstract = "This paper investigates the significance of numeric user ratings in recommender systems by considering their inclusion/exclusion in both the generation and evaluation of recommendations. When standard evaluation metrics are used, experimental results show that inclusion of numeric rating values in the recommendation process does not enhance the results. However, evaluating the accuracy of a recommender algorithm requires identifying the aim of the system. Evaluation metrics such as precision and recall evaluate how well a system performs at recommending items that have been previously rated by the user. By contrast, a new metric, known as Approval Rate, is intended to evaluate how well a system performs at recommending items that would be rated highly by the user. Experimental results demonstrate that these two aims are not synonymous and that for an algorithm to attempt both obscures the investigation. The results also show that appropriate use of numeric rating values in the process of calculating user similarity can enhance the performance when Approval Rate is used.",
author = "Jennifer Redpath and David Glass and Sally McClean and Luke Chen",
year = "2010",
doi = "10.1007/978-3-642-12275-0_35",
language = "English",
isbn = "978-3-642-12274-3",
pages = "394--406",
booktitle = "Advances in Information Retrieval",

}

Collaborative Filtering: The Aim of Recommender Systems and the Significance of User Ratings. / Redpath, Jennifer; Glass, David; McClean, Sally; Chen, Luke.

Advances in Information Retrieval. 2010. p. 394-406.


TY - CHAP

T1 - Collaborative Filtering: The Aim of Recommender Systems and the Significance of User Ratings

AU - Redpath, Jennifer

AU - Glass, David

AU - McClean, Sally

AU - Chen, Luke

PY - 2010

Y1 - 2010

N2 - This paper investigates the significance of numeric user ratings in recommender systems by considering their inclusion/exclusion in both the generation and evaluation of recommendations. When standard evaluation metrics are used, experimental results show that inclusion of numeric rating values in the recommendation process does not enhance the results. However, evaluating the accuracy of a recommender algorithm requires identifying the aim of the system. Evaluation metrics such as precision and recall evaluate how well a system performs at recommending items that have been previously rated by the user. By contrast, a new metric, known as Approval Rate, is intended to evaluate how well a system performs at recommending items that would be rated highly by the user. Experimental results demonstrate that these two aims are not synonymous and that for an algorithm to attempt both obscures the investigation. The results also show that appropriate use of numeric rating values in the process of calculating user similarity can enhance the performance when Approval Rate is used.

AB - This paper investigates the significance of numeric user ratings in recommender systems by considering their inclusion/exclusion in both the generation and evaluation of recommendations. When standard evaluation metrics are used, experimental results show that inclusion of numeric rating values in the recommendation process does not enhance the results. However, evaluating the accuracy of a recommender algorithm requires identifying the aim of the system. Evaluation metrics such as precision and recall evaluate how well a system performs at recommending items that have been previously rated by the user. By contrast, a new metric, known as Approval Rate, is intended to evaluate how well a system performs at recommending items that would be rated highly by the user. Experimental results demonstrate that these two aims are not synonymous and that for an algorithm to attempt both obscures the investigation. The results also show that appropriate use of numeric rating values in the process of calculating user similarity can enhance the performance when Approval Rate is used.

U2 - 10.1007/978-3-642-12275-0_35

DO - 10.1007/978-3-642-12275-0_35

M3 - Chapter

SN - 978-3-642-12274-3

SP - 394

EP - 406

BT - Advances in Information Retrieval

ER -