Turkish Named Entity Recognition (NER) Model

This model is a fine-tuned version of the multilingual ModernBERT model "jhu-clsp/mmBERT-base", trained on a reviewed version of a well-known Turkish NER dataset (https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
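
The linked file appears to follow the common CoNLL-style layout (one token and its tag per line, blank lines between sentences). A minimal reader for that layout, assuming the file matches it (the helper name read_conll is ours, not part of the dataset):

def read_conll(path):
    # Parse a CoNLL-style file: "token TAG" per line, blank line = sentence break.
    # The exact column layout of nerdata.txt is an assumption; verify before use.
    sentences, words, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                if words:
                    sentences.append((words, tags))
                    words, tags = [], []
            else:
                token, tag = line.split()[:2]
                words.append(token)
                tags.append(tag)
    if words:  # flush the last sentence if the file lacks a trailing blank line
        sentences.append((words, tags))
    return sentences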

Fine-tuning parameters:

task = "ner"
model_checkpoint = "jhu-clsp/mmBERT-base"
batch_size = 8 
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 8192
learning_rate = 2e-5 
num_train_epochs = 5 
weight_decay = 0.01 
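
For reference, a minimal sketch of how these parameters plug into a standard Hugging Face Trainer token-classification setup. The tokenization/label-alignment helper and the output directory name are our assumptions; only the hyperparameters above come from this card:

from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification,
                          TrainingArguments, Trainer)

model_checkpoint = "jhu-clsp/mmBERT-base"
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
label2id = {label: i for i, label in enumerate(label_list)}
id2label = {i: label for i, label in enumerate(label_list)}

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint, num_labels=len(label_list),
    id2label=id2label, label2id=label2id)

def tokenize_and_align(words, tags):
    # Tokenize pre-split words; label only the first sub-token of each word
    # and mark the rest (and special tokens) with -100 so the loss skips them.
    enc = tokenizer(words, is_split_into_words=True,
                    truncation=True, max_length=8192)
    labels, prev_word = [], None
    for word_id in enc.word_ids():
        if word_id is None or word_id == prev_word:
            labels.append(-100)
        else:
            labels.append(label2id[tags[word_id]])
        prev_word = word_id
    enc["labels"] = labels
    return enc

args = TrainingArguments(
    "mmbert-base-tr-uncased-ner",   # output directory (an assumption)
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=5,
    weight_decay=0.01,
)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset,   # built with tokenize_and_align
#                   data_collator=DataCollatorForTokenClassification(tokenizer))
# trainer.train()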

How to use:

from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/mmbert-base-tr-uncased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/mmbert-base-tr-uncased-ner")
# tokenizer.model_max_length = 512  # Optionally cap the sequence length here (the model's default maximum is 8192)
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("your text here")

Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for details on entity grouping with the aggregation_strategy parameter.
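
As a quick illustration of the grouped output (the sentence is an arbitrary example; the printed scores depend on the model):

# With aggregation_strategy="first", sub-word pieces are merged back into
# whole-word entity spans.
results = ner("Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı.")
for entity in results:
    # Each result is a dict with "entity_group", "word", "score", "start", "end".
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 4))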

Reference test results:

  • accuracy: 0.991023766617932
  • f1: 0.9414858645627877
  • precision: 0.9397695785328861
  • recall: 0.9432084309133489
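
These look like entity-level scores of the kind seqeval computes; a minimal sketch with the evaluate library (the toy predictions/references are placeholders, and seqeval being the exact metric used here is an assumption):

import evaluate

seqeval = evaluate.load("seqeval")
# Placeholder label sequences; in practice these come from the validation set.
predictions = [["B-PER", "I-PER", "O", "B-LOC"]]
references  = [["B-PER", "I-PER", "O", "B-LOC"]]
scores = seqeval.compute(predictions=predictions, references=references)
print(scores["overall_accuracy"], scores["overall_f1"],
      scores["overall_precision"], scores["overall_recall"])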