Empirical Analysis of Unlabeled Entity Problem in Named Entity Recognition

Published: 12 Jan 2021, Last Modified: 03 Apr 2024, ICLR 2021 Poster
Keywords: Named Entity Recognition, Unlabeled Entity Problem, Negative Sampling
Abstract: In many scenarios, named entity recognition (NER) models severely suffer from the unlabeled entity problem, where the entities of a sentence may not be fully annotated. Through empirical studies performed on synthetic datasets, we find two causes of performance degradation. One is the reduction of annotated entities and the other is treating unlabeled entities as negative instances. The first cause has less impact than the second one and can be mitigated by adopting pretrained language models. The second cause seriously misguides a model during training and greatly affects its performance. Based on the above observations, we propose a general approach, which can almost eliminate the misguidance brought by unlabeled entities. The key idea is to use negative sampling that, to a large extent, avoids training NER models with unlabeled entities. Experiments on synthetic and real-world datasets show that our model is robust to the unlabeled entity problem and surpasses prior baselines. On well-annotated datasets, our model is competitive with the state-of-the-art method.
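As a rough illustration of the negative-sampling idea described in the abstract (not the authors' released implementation; see the linked repository below), the Python sketch enumerates all unannotated spans of a sentence and keeps only a small random subset of them as negative training instances. The `build_training_spans` helper, the `"O"` non-entity label, and the `sample_ratio` hyperparameter are illustrative assumptions.

```python
import random

# Hypothetical label for spans that are not entities.
NON_ENTITY = "O"

def build_training_spans(sentence_len, annotated_spans, sample_ratio=0.35):
    """Collect training instances for a span-based NER model.

    Instead of treating every unannotated span as a negative instance
    (which would include any unlabeled entities), only a small random
    subset of the unannotated spans is sampled as negatives.
    """
    # Keep all annotated spans as positive instances.
    positives = [(i, j, label) for (i, j), label in annotated_spans.items()]

    # Enumerate every candidate span (i, j) that carries no annotation.
    candidates = [
        (i, j)
        for i in range(sentence_len)
        for j in range(i, sentence_len)
        if (i, j) not in annotated_spans
    ]

    # Negative sampling: draw only a few negatives per sentence, so most
    # unlabeled entities are unlikely to be used as (false) negatives.
    num_negatives = min(len(candidates), max(1, round(sample_ratio * sentence_len)))
    negatives = [(i, j, NON_ENTITY) for (i, j) in random.sample(candidates, num_negatives)]

    return positives + negatives

# Example: "John lives in New York" with two annotated entity spans.
print(build_training_spans(5, {(0, 0): "PER", (3, 4): "LOC"}))
```

Sampling a number of negatives that scales with the sentence length keeps some training signal for non-entity spans while bounding the chance that an unlabeled entity is drawn as a false negative.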
One-sentence Summary: This work studies the impacts of the unlabeled entity problem on NER models and how to effectively eliminate them with a general method.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [![github](/images/github_icon.svg) LeePleased/NegSampling-NER](https://github.com/LeePleased/NegSampling-NER)
Data: [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0)