Beijuka committed on
Commit 75f4429 · verified · 1 Parent(s): b9f266f

End of training

Files changed (2)
  1. README.md +107 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,107 @@
---
library_name: transformers
license: mit
base_model: Davlan/afro-xlmr-base
tags:
- named-entity-recognition
- hausa
- african-language
- pii-detection
- token-classification
- generated_from_trainer
datasets:
- Beijuka/Multilingual_PII_NER_dataset
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multilingual-Davlan/afro-xlmr-base-hausa-ner-v1
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: Beijuka/Multilingual_PII_NER_dataset
      type: Beijuka/Multilingual_PII_NER_dataset
      args: 'split: train+validation+test'
    metrics:
    - name: Precision
      type: precision
      value: 0.9298021697511167
    - name: Recall
      type: recall
      value: 0.9256670902160101
    - name: F1
      type: f1
      value: 0.9277300222858963
    - name: Accuracy
      type: accuracy
      value: 0.9811780190852254
---

# multilingual-Davlan/afro-xlmr-base-hausa-ner-v1

This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the [Beijuka/Multilingual_PII_NER_dataset](https://huggingface.co/datasets/Beijuka/Multilingual_PII_NER_dataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1152
- Precision: 0.9298
- Recall: 0.9257
- F1: 0.9277
- Accuracy: 0.9812

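As a quick consistency check, the micro-averaged F1 above should be the harmonic mean of the reported precision and recall, so the three metrics can be cross-checked against each other:

```python
# Cross-check the reported metrics: micro-averaged F1 is the harmonic
# mean of precision and recall (values taken from the model-index above).
precision = 0.9298021697511167
recall = 0.9256670902160101
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # agrees with the reported F1 of 0.9277300222858963
```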
## Model description

This model is [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) fine-tuned for token-level detection of personally identifiable information (PII) in Hausa text.

## Intended uses & limitations

The model is intended for PII detection via token classification in Hausa. Its behaviour on other languages, domains, or entity types is not documented here.

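A typical post-processing step for a token-classification model like this is merging per-token BIO predictions into entity spans. A minimal sketch — the label names (`B-NAME`, `I-NAME`, `B-PHONE`) are illustrative only; consult this model's `config.json` (`id2label`) for the actual PII label set:

```python
# Hypothetical post-processing: group BIO-tagged tokens into entity spans.
def bio_to_spans(tokens, labels):
    """Return a list of (entity_type, text) spans from parallel
    token/label sequences in BIO format."""
    spans = []
    current_type, current_tokens = None, []
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_type == label[2:]:
            current_tokens.append(token)
        else:  # "O" or an inconsistent I- tag closes any open span
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type is not None:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

tokens = ["Sunana", "Musa", "Ibrahim", ",", "lambata", "0803"]
labels = ["O", "B-NAME", "I-NAME", "O", "O", "B-PHONE"]
print(bio_to_spans(tokens, labels))
# [('NAME', 'Musa Ibrahim'), ('PHONE', '0803')]
```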
## Training and evaluation data

The model was trained and evaluated on the [Beijuka/Multilingual_PII_NER_dataset](https://huggingface.co/datasets/Beijuka/Multilingual_PII_NER_dataset) (split: train+validation+test, per the model-index metadata above).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 301 | 0.1139 | 0.8862 | 0.8862 | 0.8862 | 0.9694 |
| 0.2008 | 2.0 | 602 | 0.0925 | 0.8741 | 0.9155 | 0.8944 | 0.9729 |
| 0.2008 | 3.0 | 903 | 0.0910 | 0.8901 | 0.9125 | 0.9012 | 0.9747 |
| 0.0686 | 4.0 | 1204 | 0.1056 | 0.8947 | 0.9263 | 0.9102 | 0.9753 |
| 0.0501 | 5.0 | 1505 | 0.0921 | 0.9071 | 0.9305 | 0.9187 | 0.9775 |
| 0.0501 | 6.0 | 1806 | 0.0939 | 0.9062 | 0.9377 | 0.9217 | 0.9789 |
| 0.036 | 7.0 | 2107 | 0.1034 | 0.8926 | 0.9359 | 0.9137 | 0.9769 |
| 0.036 | 8.0 | 2408 | 0.1305 | 0.9019 | 0.9425 | 0.9218 | 0.9779 |
| 0.0219 | 9.0 | 2709 | 0.1320 | 0.9037 | 0.9335 | 0.9184 | 0.9778 |
| 0.0089 | 10.0 | 3010 | 0.1241 | 0.9271 | 0.9065 | 0.9167 | 0.9781 |
| 0.0089 | 11.0 | 3311 | 0.1386 | 0.9184 | 0.9311 | 0.9247 | 0.9791 |
| 0.0056 | 12.0 | 3612 | 0.1482 | 0.9094 | 0.9377 | 0.9233 | 0.9788 |
| 0.0056 | 13.0 | 3913 | 0.1550 | 0.9109 | 0.9311 | 0.9209 | 0.9783 |
| 0.0032 | 14.0 | 4214 | 0.1631 | 0.9078 | 0.9377 | 0.9225 | 0.9792 |

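If you are deciding which checkpoint to keep, the table above reduces to a best-validation-F1 lookup. A small sketch over the (epoch, F1) pairs transcribed from the table:

```python
# (epoch, validation F1) pairs transcribed from the training results table.
results = [
    (1, 0.8862), (2, 0.8944), (3, 0.9012), (4, 0.9102), (5, 0.9187),
    (6, 0.9217), (7, 0.9137), (8, 0.9218), (9, 0.9184), (10, 0.9167),
    (11, 0.9247), (12, 0.9233), (13, 0.9209), (14, 0.9225),
]
best_epoch, best_f1 = max(results, key=lambda row: row[1])
print(best_epoch, best_f1)  # epoch 11 has the best validation F1, 0.9247
```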

### Framework versions

- Transformers 4.55.4
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f8efdb3c2c7eea1feaecb1dd28a081c2d69eab9c7e324772d58b3be034c37feb
+oid sha256:610a7b625661368d1edd0266803008c7e9fc17e4ed0a5a5dccfd9df0d99cf72d
 size 1109925476