7️⃣NER: Token Classification
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
nlp = pipeline(
    "ner",
    model=model,
    tokenizer=tokenizer
)
Some weights of the model checkpoint at dslim/bert-base-NER were not used when initializing BertForTokenClassification: ['bert.pooler.dense.bias', 'bert.pooler.dense.weight']
- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
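The warning above is expected: the BERT pooler weights are unused for token classification. With the pipeline set up, it can be applied directly to raw text. A minimal usage sketch, using the sample sentence from the dslim/bert-base-NER model card:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "My name is Wolfgang and I live in Berlin."
for entity in nlp(example):
    # Each result carries the (sub)word, its BIO tag, and a confidence score;
    # the name should be tagged PER and the city LOC.
    print(entity["word"], entity["entity"], round(entity["score"], 3))
```

Note that `"ner"` returns one prediction per subword token; passing `aggregation_strategy="simple"` to `pipeline()` merges subwords into whole entity spans instead.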