A Context-based Word Indexing Model for Document Summarization

Pawan Goyal, Laxmidhar Behera, TM McGinnity

Research output: Contribution to journal › Article › peer-review

45 Citations (Scopus)

Abstract

Existing models for document summarization mostly use the similarity between sentences in the document to extract the most salient sentences. The documents as well as the sentences are indexed using traditional term indexing measures, which do not take the context into consideration. Therefore, the sentence similarity values remain independent of the context. In this paper, we propose a context-sensitive document indexing model based on the Bernoulli model of randomness. The Bernoulli model of randomness has been used to find the probability of the co-occurrence of two terms in a large corpus. A new approach has been proposed that uses the lexical association between terms to assign a context-sensitive weight to the document terms. The resulting indexing weights are used to compute the sentence similarity matrix. The proposed sentence similarity measure has been used with the baseline graph-based ranking models for sentence extraction. Experiments conducted over the benchmark DUC data sets show that the proposed Bernoulli-based sentence similarity model provides consistent improvements over the baseline IntraLink and UniformLink methods.
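To make the pipeline described in the abstract concrete, the sketch below is an illustrative approximation, not the paper's formulation: it uses a simplified observed-versus-expected co-occurrence score as a stand-in for the Bernoulli randomness model, cosine similarity over the resulting context-sensitive term weights, and a generic PageRank-style iteration in place of the IntraLink and UniformLink baselines. All function names (cooccurrence_counts, bernoulli_association, rank_sentences, etc.) are hypothetical.

```python
# Illustrative sketch only: context-sensitive term weights from co-occurrence
# statistics, a sentence similarity matrix, and a PageRank-style ranking.
# This is NOT the exact model of the paper; the association score below is a
# simplified surrogate for the Bernoulli model of randomness.
import math
from collections import Counter
from itertools import combinations


def cooccurrence_counts(sentences):
    """Count term and term-pair occurrences at the sentence level."""
    term_counts, pair_counts = Counter(), Counter()
    for sent in sentences:
        terms = set(sent.lower().split())
        term_counts.update(terms)
        pair_counts.update(frozenset(p) for p in combinations(sorted(terms), 2))
    return term_counts, pair_counts


def bernoulli_association(t1, t2, term_counts, pair_counts, n_sents):
    """Score how non-random a co-occurrence is: observed joint count versus the
    expectation under independent Bernoulli occurrences of the two terms."""
    expected = n_sents * (term_counts[t1] / n_sents) * (term_counts[t2] / n_sents)
    observed = pair_counts[frozenset((t1, t2))]
    return math.log(1.0 + observed / (expected + 1e-9))


def context_weights(sentence, term_counts, pair_counts, n_sents):
    """Weight each term by its lexical association with the other terms
    appearing in the same sentence (the context-sensitive part)."""
    terms = set(sentence.lower().split())
    return {t: sum(bernoulli_association(t, u, term_counts, pair_counts, n_sents)
                   for u in terms if u != t)
            for t in terms}


def similarity(w1, w2):
    """Cosine similarity between two sparse term-weight vectors."""
    num = sum(w1[t] * w2[t] for t in set(w1) & set(w2))
    den = (math.sqrt(sum(v * v for v in w1.values()))
           * math.sqrt(sum(v * v for v in w2.values())))
    return num / den if den else 0.0


def rank_sentences(sentences, damping=0.85, iters=50):
    """Rank sentences with a PageRank-style iteration over the similarity matrix."""
    n = len(sentences)
    tc, pc = cooccurrence_counts(sentences)
    weights = [context_weights(s, tc, pc, n) for s in sentences]
    sim = [[similarity(weights[i], weights[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [(1 - damping) / n
                  + damping * sum(sim[j][i] / (sum(sim[j]) or 1.0) * scores[j]
                                  for j in range(n))
                  for i in range(n)]
    # Indices of sentences, most salient first.
    return sorted(range(n), key=lambda i: -scores[i])
```

As a usage example, calling rank_sentences on a list of document sentences returns their indices ordered by salience, from which the top few can be taken as the extractive summary.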
Original language: English
Pages (from-to): 1693-1705
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 25
Issue number: 8
Publication status: Published (in print/issue) - Aug 2013
