Beijuka committed
Commit 1b76869 · verified · 1 Parent(s): 40034d7

End of training

Files changed (2)
  1. README.md +103 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,103 @@
---
library_name: transformers
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- named-entity-recognition
- kanuri
- african-language
- pii-detection
- token-classification
- generated_from_trainer
datasets:
- Beijuka/Multilingual_PII_NER_dataset
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multilingual-Davlan/afro-xlmr-base-kanuri-ner-v1
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: Beijuka/Multilingual_PII_NER_dataset
      type: Beijuka/Multilingual_PII_NER_dataset
      args: 'split: train+validation+test'
    metrics:
    - name: Precision
      type: precision
      value: 0.9328358208955224
    - name: Recall
      type: recall
      value: 0.9529860228716646
    - name: F1
      type: f1
      value: 0.9428032683846638
    - name: Accuracy
      type: accuracy
      value: 0.9857189865087199
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# multilingual-Davlan/afro-xlmr-base-kanuri-ner-v1

This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the Beijuka/Multilingual_PII_NER_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0769
- Precision: 0.9328
- Recall: 0.9530
- F1: 0.9428
- Accuracy: 0.9857
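
For quick experimentation, the checkpoint can be loaded with the `transformers` token-classification pipeline. A minimal sketch, assuming the model is published under a Hub repository id matching the name above (the id below is hypothetical; replace it with the actual one):

```python
from transformers import pipeline

# Hypothetical repository id derived from the model name above; replace with the real Hub id.
ner = pipeline(
    "token-classification",
    model="Beijuka/multilingual-afro-xlmr-base-kanuri-ner-v1",
    aggregation_strategy="simple",  # merge word-piece predictions into whole entity spans
)

print(ner("Replace this with a Kanuri sentence containing names or other PII."))
```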

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
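
As a rough reconstruction of the settings above (a sketch assuming the standard `Trainer` API was used; the actual training script is not part of this card, and the output directory is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="afro-xlmr-base-kanuri-ner-v1",  # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",  # fused AdamW, as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```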

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 301  | 0.1158          | 0.8646    | 0.8610 | 0.8628 | 0.9683   |
| 0.2058        | 2.0   | 602  | 0.0876          | 0.8848    | 0.9431 | 0.9130 | 0.9751   |
| 0.2058        | 3.0   | 903  | 0.0854          | 0.9078    | 0.9143 | 0.9110 | 0.9783   |
| 0.0658        | 4.0   | 1204 | 0.1092          | 0.8847    | 0.9383 | 0.9107 | 0.9755   |
| 0.0491        | 5.0   | 1505 | 0.0881          | 0.9046    | 0.9431 | 0.9234 | 0.9782   |
| 0.0491        | 6.0   | 1806 | 0.1227          | 0.9015    | 0.9323 | 0.9166 | 0.9770   |
| 0.0298        | 7.0   | 2107 | 0.1005          | 0.9218    | 0.9461 | 0.9338 | 0.9805   |
| 0.0298        | 8.0   | 2408 | 0.1454          | 0.8970    | 0.9395 | 0.9178 | 0.9774   |
| 0.0164        | 9.0   | 2709 | 0.1301          | 0.9146    | 0.9305 | 0.9225 | 0.9789   |
| 0.0089        | 10.0  | 3010 | 0.1297          | 0.9215    | 0.9425 | 0.9319 | 0.9806   |
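
The precision, recall, and F1 columns are entity-level scores of the kind computed by `seqeval`. A minimal sketch with illustrative labels (the model's actual PII tag set is not listed on this card):

```python
from seqeval.metrics import precision_score, recall_score, f1_score

# Illustrative gold and predicted tag sequences; not taken from the actual dataset.
y_true = [["B-PERSON", "I-PERSON", "O", "B-LOCATION"]]
y_pred = [["B-PERSON", "I-PERSON", "O", "O"]]

print(precision_score(y_true, y_pred))  # 1.0: the one predicted entity is correct
print(recall_score(y_true, y_pred))     # 0.5: one of two gold entities was found
print(f1_score(y_true, y_pred))         # ~0.667: harmonic mean of the two
```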

### Framework versions

- Transformers 4.55.4
- PyTorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d7d5a87749e63b57b85538e3d9d6ab05519dab277e529e413d4db4f5c7b70cbc
+ oid sha256:6b5fdb7768c84696e25b349cb97880420e4b8a8e4f57fde63e9c48e11bb6e6ba
  size 1109925476