From Complex Word Identification to Substitution: Instruction-Tuned Language Models for Lexical Simplification

Tonghui Han, Xinru Zhang, Y Bi, Maurice Mulvenna, Dongqiang Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Lexical-level sentence simplification is essential for improving text accessibility, yet traditional methods often struggle to dynamically identify complex terms and generate contextually appropriate substitutions, resulting in limited generalization. While prompt-based approaches with large language models (LLMs) have shown strong performance and adaptability, they often lack interpretability and are prone to hallucination. This study proposes a fine-tuning approach for mid-sized LLMs to emulate the lexical simplification pipeline. We transform complex word identification datasets into an instruction–response format to support instruction tuning. Experimental results show that our method substantially enhances complex word identification accuracy with reduced hallucinations while achieving competitive performance on lexical simplification benchmarks. Furthermore, we find that integrating fine-tuning with prompt engineering reduces dependency on manual prompt optimization, leading to a more efficient simplification framework.
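The abstract describes converting complex word identification (CWI) datasets into an instruction–response format for instruction tuning. A minimal sketch of what such a transformation might look like is given below; the record schema, field names, and prompt wording are illustrative assumptions, not the authors' actual format.

```python
# Hedged sketch: convert a hypothetical CWI record (sentence plus its
# annotated complex words) into an instruction-response pair suitable
# for instruction tuning. The schema here is assumed, not taken from
# the paper's datasets.
def cwi_to_instruction(record):
    """record: {'sentence': str, 'complex_words': [str]} (illustrative schema)."""
    instruction = (
        "Identify the complex words in the following sentence:\n"
        + record["sentence"]
    )
    response = ", ".join(record["complex_words"])
    return {"instruction": instruction, "response": response}


example = {
    "sentence": "The committee deliberated on the contentious proposal.",
    "complex_words": ["deliberated", "contentious"],
}
pair = cwi_to_instruction(example)
# pair["response"] is "deliberated, contentious"
```

Each resulting pair can then be fed to a standard supervised fine-tuning loop; the same pattern would extend to the substitution step by changing the instruction and target fields.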
Original language: English
Title of host publication: Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
Place of publication: Suzhou, China
Publisher: Association for Computational Linguistics
Pages: 48-58
Number of pages: 10
ISBN (Print): 979-8-89176-340-1
Publication status: Published online - 8 Nov 2025
