SenseBERT: Driving Some Sense into BERT

May 18, 2020

SenseBERT is a pre-trained model to predict not only the masked words but also their WordNet supersenses.


The ability to learn from large unlabeled corpora has allowed neural language models to advance the frontier in natural language understanding. However, existing self-supervision techniques operate at the word form level, which serves as a surrogate for the underlying semantic content. This paper proposes a method to employ weak supervision directly at the word sense level.
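To make this concrete, here is a minimal sketch (not the authors' code, assuming NLTK's WordNet interface) of the kind of weak supervision meant here: every word form maps, with no human annotation, to the WordNet supersenses (lexicographer categories) of its possible senses.

```python
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # fetch WordNet data if not already present

def supersenses(word: str) -> set[str]:
    """Return the supersenses (WordNet lexnames) of all senses of `word`."""
    return {synset.lexname() for synset in wn.synsets(word)}

# "bass" is ambiguous: its senses span supersenses such as noun.animal,
# noun.food and noun.artifact, so the weak label is a set of categories
# rather than a single gold class.
print(supersenses("bass"))
```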

Our model, named SenseBERT, is pre-trained to predict not only the masked words but also their WordNet supersenses. Accordingly, we attain a lexical-semantic level language model, without the use of human annotation. SenseBERT achieves significantly improved lexical understanding, as we demonstrate by experimenting on SemEval Word Sense Disambiguation, and by attaining a state-of-the-art result on the Word in Context task.
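The joint objective can be pictured as follows. This is a hedged sketch in PyTorch, not the released SenseBERT code: alongside the usual masked-word prediction head, a second head predicts the masked word's supersense, and because a word form may admit several supersenses, the weak target is treated here as a soft distribution over the allowed categories. The sizes and class names below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, NUM_SUPERSENSES, HIDDEN = 30522, 45, 768  # illustrative sizes; WordNet defines 45 supersenses

class SenseAwareHeads(nn.Module):
    """Hypothetical pair of output heads over a BERT-style encoder."""

    def __init__(self):
        super().__init__()
        self.word_head = nn.Linear(HIDDEN, VOCAB_SIZE)        # standard masked-LM head
        self.sense_head = nn.Linear(HIDDEN, NUM_SUPERSENSES)  # auxiliary supersense head

    def forward(self, hidden, word_targets, sense_targets):
        # hidden: (n_masked, HIDDEN) encoder states at masked positions
        # word_targets: (n_masked,) gold token ids
        # sense_targets: (n_masked, NUM_SUPERSENSES) soft weak labels, each row summing to 1
        word_loss = F.cross_entropy(self.word_head(hidden), word_targets)
        sense_logits = self.sense_head(hidden)
        sense_loss = -(sense_targets * F.log_softmax(sense_logits, dim=-1)).sum(-1).mean()
        return word_loss + sense_loss

# Usage with random tensors, just to show the shapes involved:
heads = SenseAwareHeads()
h = torch.randn(4, HIDDEN)
w = torch.randint(0, VOCAB_SIZE, (4,))
s = torch.full((4, NUM_SUPERSENSES), 1.0 / NUM_SUPERSENSES)
loss = heads(h, w, s)
```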