Abstract
The recent evolution of NLP has enriched the set of DL-based approaches with a number of general-purpose Large Language Models (LLMs). While these new models have proven useful for generic text handling, their applicability to domain-specific NLP tasks remains doubtful, particularly because of the limited amount of data available in certain domains, such as Requirements Engineering. In this study, different pre-trained embeddings were tested on three requirements classification tasks, in search of a tradeoff between accuracy and computational complexity. The best F1-score results were obtained with BERT (90.36% and 84.23%), with DistilBERT identified as the optimal tradeoff (90.28% and 82.61%).
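To illustrate the general approach the abstract describes, here is a minimal sketch of feeding pre-trained transformer embeddings into a lightweight requirements classifier, assuming the Hugging Face `transformers` library and scikit-learn. The model name, example requirements, and label scheme are illustrative assumptions, not the paper's actual datasets or pipeline.

```python
# Sketch: frozen pre-trained embeddings + a simple classifier head.
# Assumes Hugging Face `transformers` and scikit-learn; the example
# requirements and labels are hypothetical, not the paper's data.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

requirements = [
    "The system shall encrypt all stored user data.",        # e.g. security
    "The user interface shall respond within two seconds.",  # e.g. performance
]
labels = [0, 1]  # hypothetical class ids

# Encode each requirement; take the first-token ([CLS]-position)
# hidden state as a fixed-length sentence representation.
with torch.no_grad():
    batch = tokenizer(requirements, padding=True, truncation=True,
                      return_tensors="pt")
    embeddings = model(**batch).last_hidden_state[:, 0, :].numpy()

# Train any lightweight classifier on top of the frozen embeddings.
clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
print(clf.predict(embeddings))
```

Swapping `"distilbert-base-uncased"` for a larger checkpoint such as `"bert-base-uncased"` trades additional compute for the accuracy gains reported above.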
Original language | English |
---|---|
Title of host publication | Joint Proceedings of REFSQ-2023 Workshops, Doctoral Symposium, Posters & Tools Track and Journal Early Feedback (REFSQ-JP 2023) |
Subtitle of host publication | REFSQ Co-Located Events 2023 |
Publisher | CEUR-WS |
Volume | 3378 |
Publication status | Published online - 20 Apr 2023 |
Event | 6th Workshop on Natural Language Processing for Requirements Engineering (REFSQ Co-Located Events 2023), Barcelona, Spain. Duration: 17 Apr 2023 → … |
Workshop
Workshop | 6th Workshop on Natural Language Processing for Requirements Engineering |
---|---|
Abbreviated title | NLP4RE |
Country/Territory | Spain |
City | Barcelona |
Period | 17/04/23 → … |