Sentence similarity based on semantic nets and corpus statistics

Yuhua Li, David McLean, Zuhair Bandar, James D. O’Shea, Keeley Crockett

Research output: Contribution to journal › Article › peer-review

734 Citations (Scopus)

Abstract

Sentence similarity measures play an increasingly important role in text-related research and applications in areas such as text mining, Web page retrieval, and dialogue systems. Existing methods for computing sentence similarity have been adopted from approaches used for long text documents. These methods process sentences in a very high-dimensional space and are consequently inefficient, require human input, and are not adaptable to some application domains. This paper focuses directly on computing the similarity between very short texts of sentence length. It presents an algorithm that takes account of semantic information and word order information implied in the sentences. The semantic similarity of two sentences is calculated using information from a structured lexical database and from corpus statistics. The use of a lexical database enables our method to model human common sense knowledge and the incorporation of corpus statistics allows our method to be adaptable to different domains. The proposed method can be used in a variety of applications that involve text knowledge representation and discovery. Experiments on two sets of selected sentence pairs demonstrate that the proposed method provides a similarity measure that shows a significant correlation to human intuition.
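The approach the abstract describes, combining a semantic similarity derived from word-level knowledge with a word order similarity, can be sketched roughly as follows. This is an illustrative toy, not the authors' published algorithm: the hand-set word-similarity table stands in for the lexical-database and corpus-statistics measure, and the match threshold (0.4) and weighting factor (delta = 0.85) are assumed example values.

```python
import math

# Toy stand-in for the knowledge-based word similarity described in the
# abstract (lexical database + corpus statistics). Purely illustrative:
# identical words score 1.0, one hand-set near-synonym pair scores 0.8.
TOY_SIM = {frozenset({"boy", "lad"}): 0.8}

def word_sim(w1, w2):
    if w1 == w2:
        return 1.0
    return TOY_SIM.get(frozenset({w1, w2}), 0.0)

def semantic_vector(words, joint):
    # Each entry: highest similarity between the joint-set word and any
    # word of the sentence.
    return [max(word_sim(j, w) for w in words) for j in joint]

def order_vector(words, joint):
    # Each entry: 1-based position of the best-matching sentence word,
    # or 0 when no word clears the (assumed) 0.4 similarity threshold.
    vec = []
    for j in joint:
        best_i, best_s = 0, 0.0
        for i, w in enumerate(words, start=1):
            s = word_sim(j, w)
            if s > best_s:
                best_i, best_s = i, s
        vec.append(best_i if best_s > 0.4 else 0)
    return vec

def sentence_similarity(s1, s2, delta=0.85):
    w1, w2 = s1.lower().split(), s2.lower().split()
    joint = sorted(set(w1) | set(w2))
    # Semantic similarity: cosine of the two semantic vectors.
    v1, v2 = semantic_vector(w1, joint), semantic_vector(w2, joint)
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    ss = dot / norm if norm else 0.0
    # Word order similarity: normalized difference of position vectors.
    r1, r2 = order_vector(w1, joint), order_vector(w2, joint)
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
    tot = math.sqrt(sum((a + b) ** 2 for a, b in zip(r1, r2)))
    sr = 1.0 - diff / tot if tot else 1.0
    # Weighted combination of semantic and word order information.
    return delta * ss + (1.0 - delta) * sr

print(sentence_similarity("the boy runs", "the lad runs"))
```

With the toy table above, identical sentences score 1.0 and the near-synonym pair scores just below it; any real use would replace `word_sim` with a measure built from a lexical database and corpus, as the abstract proposes.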
Original language: English
Pages (from-to): 1138-1150
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 18
Issue number: 8
Publication status: Published (in print/issue) - Dec 2006

