Comparing general purpose pre-trained Word and Sentence embeddings for Requirements Classification

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

The recent evolution of NLP has enriched the set of DL-based approaches with a number of general-purpose Large Language Models (LLMs). While these new models have proven useful for generic text handling, their applicability to domain-specific NLP tasks remains doubtful, particularly because of the limited amount of data available in certain domains, such as Requirements Engineering. In this study, different pre-trained embeddings were tested on three requirements classification tasks, in search of a tradeoff between accuracy and computational complexity. The best F1-score results were obtained with BERT (90.36% and 84.23%), with DistilBERT identified as the optimal tradeoff (90.28% and 82.61%).
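The F1-scores reported above are the harmonic mean of precision and recall. The sketch below (illustrative only, not the authors' evaluation code) shows how this metric is computed for a binary classification task:

```python
def f1_score(y_true, y_pred, positive_label=1):
    """Compute the F1-score (harmonic mean of precision and recall)
    for a binary classification task. Illustrative implementation;
    the paper's experiments presumably used a standard library routine."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive_label)
    fp = sum(1 for t, p in zip(y_true, y_pred)
             if t != positive_label and p == positive_label)
    fn = sum(1 for t, p in zip(y_true, y_pred)
             if t == positive_label and p != positive_label)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 3 actual positives, 2 correctly found, 1 false alarm
# precision = 2/3, recall = 2/3, F1 = 2/3
print(f1_score([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))
```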
Original language: English
Title of host publication: Joint Proceedings of REFSQ-2023 Workshops, Doctoral Symposium, Posters & Tools Track and Journal Early Feedback (REFSQ-JP 2023)
Subtitle of host publication: REFSQ Co-Located Events 2023
Publisher: CEUR-WS
Volume: 3378
Publication status: Published online - 20 Apr 2023
Event: 6th Workshop on Natural Language Processing for Requirements Engineering: REFSQ Co-Located Events 2023 - Barcelona, Spain
Duration: 17 Apr 2023 → …

Workshop

Workshop: 6th Workshop on Natural Language Processing for Requirements Engineering
Abbreviated title: NLP4RE
Country/Territory: Spain
City: Barcelona
Period: 17/04/23 → …
