Can you release the hyper-parameters of the NER task? #223
Comments
@Albert-XY I heard some important details are not mentioned in the paper. They used the document context instead of the sentence context to get the BERT activations for the words. I did a hyperparameter sweep and the best result I got on CoNLL-2003 is 0.907 (F1 on test), which is quite far from the numbers reported in the paper.
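For what it's worth, here is a minimal sketch of what "document context" could mean in practice. Splitting the CoNLL file on `-DOCSTART-` and packing neighboring sentences of the same document into one window is my assumption, not something the paper spells out:

```python
# Sketch: build document-level contexts for CoNLL-2003 instead of feeding
# isolated sentences to BERT. Assumes the usual file layout where documents
# are separated by lines starting with "-DOCSTART-".

def read_documents(path):
    """Return a list of documents, each a list of sentences (lists of tokens)."""
    documents, sentences, tokens = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("-DOCSTART-"):
                if sentences:
                    documents.append(sentences)
                sentences, tokens = [], []
            elif not line:
                if tokens:
                    sentences.append(tokens)
                tokens = []
            else:
                tokens.append(line.split()[0])  # first column is the word
    if tokens:
        sentences.append(tokens)
    if sentences:
        documents.append(sentences)
    return documents

def document_windows(document, max_tokens=384):
    """Pack consecutive sentences of one document into windows of at most
    max_tokens words, so each sentence is encoded with surrounding context."""
    window = []
    for sent in document:
        if window and len(window) + len(sent) > max_tokens:
            yield window
            window = []
        window.extend(sent)
    if window:
        yield window
```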
Could you please tell me which hyper-parameters you changed and their values? Did you add an additional output layer for classification, or just use a softmax classifier? Thanks
@Albert-XY I replaced the softmax layer with a CRF layer, and applied dropout with rate 0.1 before the CRF layer. That's the best I can get with fine-tuning. If I feed the BERT activations into a 2-layer BiLSTM-CRF model, I can get 91.7 on the test set, still quite far from the SOTA.
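In case it helps others, here is a minimal sketch of the kind of head being described (dropout 0.1 followed by a CRF over the tag emissions). Using the third-party `pytorch-crf` package together with Hugging Face `transformers` is my choice for illustration, not necessarily what was used in this thread:

```python
import torch.nn as nn
from torchcrf import CRF                      # pip install pytorch-crf
from transformers import BertModel

class BertCrfTagger(nn.Module):
    """BERT encoder -> dropout(0.1) -> linear emissions -> CRF."""
    def __init__(self, num_tags, model_name="bert-base-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.1)
        self.emissions = nn.Linear(self.bert.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        scores = self.emissions(self.dropout(hidden))
        mask = attention_mask.bool()
        if labels is not None:
            # Padded label positions must still hold a valid tag index
            # (e.g. the "O" tag); the mask keeps them out of the score.
            # The CRF returns a log-likelihood, so negate it for a loss.
            return -self.crf(scores, labels, mask=mask, reduction="mean")
        return self.crf.decode(scores, mask=mask)
```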
i also have a similar experience. https://github.com/dsindex/BERT-BiLSTM-CRF-NER i can't find a way to push the test f1 score over 91.3 with the official conlleval.pl.
maybe i missed some important details, or is it a mis-implementation?
@dsindex I heard that they fed the document-level context to the language model to get the activation for each word. I hope the authors could release more details about the NER experiment. @jacobdevlin-google
here is the best result so far.
i tentatively suspect the f1 score (dev f1 96.4, test f1 92.4) from the paper was calculated token-based, not entity-based. [update]
Was the result obtained with the official evaluation script 'conlleval.pl'?
those scores were reported during training.
this score was calculated after converting the predictions. the converted prediction data looks like below.
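For anyone checking their own conversion: conlleval.pl scores files where each line is `token gold_tag predicted_tag`, with a blank line between sentences. A minimal sketch of writing that format (the names and example tags here are illustrative, not taken from the repository above):

```python
def write_conlleval_file(path, sentences):
    """sentences: list of sentences, each a list of (token, gold_tag, predicted_tag).

    Writes the three-column format expected by the official conlleval.pl, e.g.
        West B-MISC B-MISC
        Indian I-MISC I-MISC
    with one blank line between sentences.
    """
    with open(path, "w", encoding="utf-8") as f:
        for sent in sentences:
            for token, gold, pred in sent:
                f.write(f"{token} {gold} {pred}\n")
            f.write("\n")
```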
Your code has a bug: it misses some token labels at prediction time. For example, a line like 'token1 tag' with no predicted label. You can check the predicted results carefully.
oh~ could you point out where in the code the bug comes from? i checked the converted prediction file.
because i removed the '-DOCSTART-' lines, the number of output lines matched. hmm...
The line counts do match, but only because some lines are just '\n' with no label. I found two cases; there should be maybe 40. Check label_test.txt line 1596: the token 'Pakistan' is followed only by '\n', with no label. ... I'm checking the code...
i got 92.16% with the BERT-large model (note that this is not an average).
@dsindex I tried it with the simple classifier described in the paper, but sadly only achieved about 90% F1 (cased BERT-base model, using the official conlleval.pl script). Has anyone had success improving the performance with a simple classifier like in the paper?
i had the same question about it. a simple softmax layer on top of bert did not perform well, far from the reported score. i guess 'autoML' would find a way to get there?
I'm afraid that either an important aspect is missing or, what could also be the case, a different F1-score metric was used in the paper, for example one that computes the score based on tokens and not on spans. NER sadly suffers from poor reproducibility: http://science-miner.com/a-reproducibility-study-on-neural-ner/ I would love to see the official implementation. It's a bit of a pity to have a paper claiming a simple state-of-the-art architecture that cannot be reproduced.
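To make the token-level vs. span-level difference concrete, here is a small sketch using the third-party `seqeval` package (entity-level, conlleval-style scoring) next to plain token-level accuracy; the tags are made up for illustration:

```python
from seqeval.metrics import f1_score  # pip install seqeval

gold = [["B-PER", "I-PER", "O", "B-LOC"]]
pred = [["B-PER", "O",     "O", "B-LOC"]]   # second PER token missed

# Entity-level (span) F1, the same scheme conlleval.pl uses:
# the predicted PER span does not match the gold span, so only LOC counts.
print(f1_score(gold, pred))          # 0.5

# Token-level accuracy looks much friendlier: 3 of 4 tags match.
flat_gold = [t for s in gold for t in s]
flat_pred = [t for s in pred for t in s]
print(sum(g == p for g, p in zip(flat_gold, flat_pred)) / len(flat_gold))  # 0.75
```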
@dsindex Digging up an old thread, but from the paper:
This makes it sound a bit different than having the CRF layer learn the
I wonder how this is done in training? Does it mean the loss corresponding to the '##...' tokens with the 'X' label is never considered during training?
no, it means that '##...' tokens with 'X' tags are also trained along with the actual tags (e.g., PER, LOC, ...).
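As an illustration of that alignment, here is a minimal sketch with a Hugging Face fast tokenizer. Whether continuation pieces get a dedicated 'X' tag or are instead masked out of the loss (e.g. with -100) is a modelling choice; this sketch shows the 'X' variant described above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

words  = ["Wolfgang", "lives", "in", "Berlin"]
labels = ["B-PER",    "O",     "O",  "B-LOC"]

encoding = tokenizer(words, is_split_into_words=True)
aligned = []
previous_word = None
for word_idx in encoding.word_ids():          # maps each piece to its source word
    if word_idx is None:                      # special tokens: [CLS] / [SEP]
        aligned.append("X")
    elif word_idx != previous_word:           # first piece of a word: real tag
        aligned.append(labels[word_idx])
    else:                                     # continuation piece ("##...")
        aligned.append("X")
    previous_word = word_idx

print(list(zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"]), aligned)))
```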
Why are my results all zero? Is a parameter not being passed in? What should I do? Thank you
In order to reproduce the CoNLL score reported in the BERT paper (92.4 bert-base and 92.8 bert-large), one trick is to apply a truecaser on article titles (all-uppercase sentences) as a preprocessing step for the CoNLL train/dev/test sets. This can simply be done with the method below.
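Since the snippet itself is not shown above, here is a minimal sketch of that kind of preprocessing, assuming the third-party `truecase` package; treating a sentence as an "article title" whenever every purely alphabetic token is upper-case is my assumption:

```python
import re
import truecase  # pip install truecase

def truecase_sentence(tokens):
    """If a CoNLL sentence is entirely upper-case (e.g. an article title),
    replace its alphabetic tokens with their truecased forms in place."""
    word_positions = [(i, w) for i, w in enumerate(tokens) if all(c.isalpha() for c in w)]
    uppercase = [w for _, w in word_positions if re.fullmatch(r"[A-Z]+", w)]
    if uppercase and len(uppercase) == len(word_positions):
        recased = truecase.get_true_case(" ".join(uppercase)).split()
        if len(recased) == len(word_positions):
            for (i, _), new_word in zip(word_positions, recased):
                tokens[i] = new_word
    return tokens
```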
Also, i found it useful to use: a very small learning rate (5e-6), a large batch size (128), and a high number of epochs (>40). With these configurations and this preprocessing, I was able to reach 92.8 with bert-large.
thank you so much~! https://github.com/dsindex/ntagger/tree/master/data/conll2003_truecase with the conversion to truecased conll2003 data, i got a consistent improvement. https://github.com/dsindex/ntagger#conll-2003-english-1
but it is very hard to reach 92.8%;;
What are the HPs you are using? ... In my case, I can reach 92.8 on the test with:
to use a large batch_size (96 or 128) for fine-tuning, you can use the method below:
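The snippet itself does not appear above; a common way to get an effective batch of 96-128 on a single GPU is gradient accumulation, so here is a minimal sketch under that assumption (the tiny linear model and random data are stand-ins so the loop runs as written; in practice they would be the BERT tagger and the CoNLL DataLoader):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(10, 2)                                  # stand-in for the tagger
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-6)
criterion = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(torch.randn(256, 10),
                                  torch.randint(0, 2, (256,))),
                    batch_size=8)

accumulation_steps = 16   # micro-batch 8 x 16 accumulation = effective batch 128

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = criterion(model(x), y) / accumulation_steps    # scale so gradients sum correctly
    loss.backward()                                       # gradients accumulate across steps
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```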
i think the differences are a larger batch size, a smaller sequence length, and a final CRF layer.
@dsindex the results were for the fine-tuning approach, not the feature-based approach with an LSTM.
those are not average scores, but a fixed seed yields very similar scores. https://github.com/dsindex/ntagger#conll-2003-english-1
@dsindex Thank u!
Huggingface's
@Albert-Ma, hi, can you get the 92.4 score using the huggingface code?
page not found for that one?
---
language: en
datasets:
- conll2003
---

# bert-large-NER

## Model description

**bert-large-NER** is a fine-tuned BERT-Large model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: location (LOC), organizations (ORG), person (PER) and Miscellaneous (MISC).

Specifically, this model is a *bert-large-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset. If you'd like to use a smaller model fine-tuned on the same dataset, a [bert-base-cased](https://huggingface.co/dslim/bert-base-NER) version is also available.

## Intended uses & limitations

#### How to use

You can use this model with the Transformers *pipeline* for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-large-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-large-NER")

nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"

ner_results = nlp(example)
print(ner_results)
```

#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of results may be necessary to handle those cases.

## Training data

This model was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.

The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:

Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS|Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS|Miscellaneous entity
B-PER|Beginning of a person's name right after another person's name
I-PER|Person's name
B-ORG|Beginning of an organization right after another organization
I-ORG|Organization
B-LOC|Beginning of a location right after another location
I-LOC|Location

### CoNLL-2003 English Dataset Statistics

This dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.

#### # of training examples per entity type

Dataset|LOC|MISC|ORG|PER
-|-|-|-|-
Train|7140|3438|6321|6600
Dev|1837|922|1341|1842
Test|1668|702|1661|1617

#### # of articles/sentences/tokens per dataset

Dataset|Articles|Sentences|Tokens
-|-|-|-
Train|946|14,987|203,621
Dev|216|3,466|51,362
Test|231|3,684|46,435

## Training procedure

This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805), which trained & evaluated the model on the CoNLL-2003 NER task.

## Eval results

metric|dev|test
-|-|-
f1|95.1|91.3
precision|95.0|90.7
recall|95.3|91.9

The test metrics are a little lower than the official Google BERT results, which encoded document context & experimented with CRF.
More on replicating the original results [here](google-research/bert#223).

### BibTeX entry and citation info

```
@article{DBLP:journals/corr/abs-1810-04805,
  author        = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova},
  title         = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding},
  journal       = {CoRR},
  volume        = {abs/1810.04805},
  year          = {2018},
  url           = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint        = {1810.04805},
  timestamp     = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```

```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
  title     = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
  author    = "Tjong Kim Sang, Erik F. and De Meulder, Fien",
  booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
  year      = "2003",
  url       = "https://www.aclweb.org/anthology/W03-0419",
  pages     = "142--147",
}
```
My NER result is not as good as what the paper reported.