Space transformers: language modeling for space systems

Audrey Berquand, Paul Darm, Annalisa Riccardi

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)
97 Downloads (Pure)

Abstract

The Transformer architecture and transfer learning have radically modified the Natural Language Processing (NLP) landscape, enabling new applications in fields where open-source labelled datasets are scarce. Space systems engineering is a field with limited access to large labelled corpora and a need for enhanced knowledge reuse of accumulated design data. Transformer models such as the Bidirectional Encoder Representations from Transformers (BERT) and the Robustly Optimised BERT Pretraining Approach (RoBERTa) are, however, trained on general corpora. To answer the need for domain-specific contextualised word embeddings in the space field, we propose Space Transformers, a novel family of three models, SpaceBERT, SpaceRoBERTa and SpaceSciBERT, further pre-trained from BERT, RoBERTa and SciBERT respectively on our domain-specific corpus. We collect and label a new dataset of space systems concepts based on space standards. We fine-tune and compare our domain-specific models with their general counterparts on a domain-specific Concept Recognition (CR) task. Our study demonstrates that the models further pre-trained on a space corpus outperform their respective baseline models on the Concept Recognition task, with SpaceRoBERTa achieving a significantly higher overall ranking.
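For readers unfamiliar with the workflow described in the abstract, the sketch below shows one way the fine-tuning step could look with the Hugging Face transformers library: a BERT-style encoder (further pre-trained on a domain corpus) is loaded with a token-classification head and trained on a BIO-tagged concept dataset. This is a minimal illustration, not the authors' code; the model identifier, label set, and dataset variables are placeholders.

```python
# Minimal sketch of fine-tuning an encoder for Concept Recognition (token
# classification). MODEL_NAME is a placeholder: substitute a space-domain
# checkpoint (e.g. a further pre-trained BERT) or any BERT-style encoder.
from transformers import (AutoTokenizer,
                          AutoModelForTokenClassification,
                          TrainingArguments, Trainer)

MODEL_NAME = "bert-base-uncased"          # placeholder for a domain checkpoint
LABELS = ["O", "B-CONCEPT", "I-CONCEPT"]  # illustrative BIO tag set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS))

# Standard fine-tuning hyperparameters for a BERT-sized encoder (assumed,
# not taken from the paper).
args = TrainingArguments(output_dir="cr-model",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=3e-5)

# train_dataset / eval_dataset would be token-classification datasets built
# from the labelled space-standards corpus (not reproduced here).
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset,
#                   eval_dataset=eval_dataset,
#                   tokenizer=tokenizer)
# trainer.train()
```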
Original language: English
Pages (from-to): 133111-133122
Number of pages: 12
Journal: IEEE Access
Volume: 9
DOIs
Publication status: Published - 24 Sept 2021

Keywords

  • language model
  • transformers
  • space systems
  • concept recognition
  • requirements
