Commit 21840bb (verified) by Felladrin · 1 parent: 7502eba

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,151 @@
---
language: code
thumbnail: https://cdn-media.huggingface.co/CodeBERTa/CodeBERTa.png
datasets:
- code_search_net
library_name: transformers.js
base_model:
- huggingface/CodeBERTa-small-v1
pipeline_tag: fill-mask
---

# CodeBERTa-small-v1 (ONNX)

This is an ONNX version of [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).

## Usage with Transformers.js

See the pipeline documentation for `fill-mask`: https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FillMaskPipeline
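
A minimal usage sketch, assuming the `@huggingface/transformers` npm package and a placeholder for this repository's model id (replace it with the actual repo id shown on this page):

```js
import { pipeline } from "@huggingface/transformers";

// Placeholder: substitute this repository's actual model id.
const unmasker = await pipeline("fill-mask", "<this-repo-id>");

// Ask the model to fill in the masked token of a PHP snippet.
const output = await unmasker("public static <mask> set(string $key, $value) {");
console.log(output); // ranked candidates with scores for the <mask> position
```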

---

# CodeBERTa

CodeBERTa is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub.

Supported languages:

```shell
"go"
"java"
"javascript"
"php"
"python"
"ruby"
```

The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`.

Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are 33% to 50% shorter than the same corpus tokenized by gpt2/roberta).
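
As a rough, hedged illustration of that claim (the repo id below is a placeholder, `Xenova/gpt2` is assumed as a convenient GPT-2 tokenizer mirror, and the snippet is arbitrary), one can compare tokenized lengths with Transformers.js:

```js
import { AutoTokenizer } from "@huggingface/transformers";

// Placeholder: substitute this repository's actual model id.
const codeTokenizer = await AutoTokenizer.from_pretrained("<this-repo-id>");
const gpt2Tokenizer = await AutoTokenizer.from_pretrained("Xenova/gpt2");

const snippet = 'def greet(name):\n    return f"Hello, {name}!"';

// The code-trained BPE typically produces noticeably fewer tokens on source code.
console.log("CodeBERTa tokens:", codeTokenizer.encode(snippet).length);
console.log("GPT-2 tokens:", gpt2Tokenizer.encode(snippet).length);
```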

The (small) **model** is a 6-layer, 84M-parameter, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full corpus (~2M functions) for 5 epochs.

### TensorBoard for this training ⤵️

[![tb](https://cdn-media.huggingface.co/CodeBERTa/tensorboard.png)](https://tensorboard.dev/experiment/irRI7jXGQlqmlxXS0I07ew/#scalars)

## Quick start: masked language modeling prediction

```python
PHP_CODE = """
public static <mask> set(string $key, $value) {
    if (!in_array($key, self::$allowedKeys)) {
        throw new \InvalidArgumentException('Invalid key given');
    }
    self::$storedValues[$key] = $value;
}
""".lstrip()
```

### Does the model know how to complete simple PHP code?

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="huggingface/CodeBERTa-small-v1",
    tokenizer="huggingface/CodeBERTa-small-v1"
)

fill_mask(PHP_CODE)

## Top 5 predictions:
#
' function' # prob 0.9999827146530151
'function' #
' void' #
' def' #
' final' #
```

### Yes! That was easy 🎉 What about some Python? (Warning: this is going to be meta.)

```python
PYTHON_CODE = """
def pipeline(
    task: str,
    model: Optional = None,
    framework: Optional[<mask>] = None,
    **kwargs
) -> Pipeline:
    pass
""".lstrip()
```

Results:
```python
'framework', 'Framework', ' framework', 'None', 'str'
```

> This program can auto-complete itself! 😱

### Just for fun, let's try to mask natural language (not code):

```python
fill_mask("My name is <mask>.")

# {'sequence': '<s> My name is undefined.</s>', 'score': 0.2548016905784607, 'token': 3353}
# {'sequence': '<s> My name is required.</s>', 'score': 0.07290805131196976, 'token': 2371}
# {'sequence': '<s> My name is null.</s>', 'score': 0.06323737651109695, 'token': 469}
# {'sequence': '<s> My name is name.</s>', 'score': 0.021919190883636475, 'token': 652}
# {'sequence': '<s> My name is disabled.</s>', 'score': 0.019681859761476517, 'token': 7434}
```

This (kind of) works because code contains comments (which contain natural language).

Of course, the most frequent name for a computer scientist must be undefined 🤓.

## Downstream task: [programming language identification](https://huggingface.co/huggingface/CodeBERTa-language-id)

See the model card for **[`huggingface/CodeBERTa-language-id`](https://huggingface.co/huggingface/CodeBERTa-language-id)** 🤯.

<br>

## CodeSearchNet citation

<details>

```bibtex
@article{husain_codesearchnet_2019,
    title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}},
    shorttitle = {{CodeSearchNet} {Challenge}},
    url = {http://arxiv.org/abs/1909.09436},
    urldate = {2020-03-12},
    journal = {arXiv:1909.09436 [cs, stat]},
    author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
    month = sep,
    year = {2019},
    note = {arXiv: 1909.09436},
}
```

</details>
config.json ADDED
@@ -0,0 +1,28 @@
{
  "_attn_implementation_autoset": true,
  "_name_or_path": "huggingface/CodeBERTa-small-v1",
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.49.0",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 52000
}
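
The `architectures` entry corresponds to a RoBERTa masked-LM head, so the checkpoint can also be loaded without the pipeline wrapper. A hedged sketch (placeholder repo id):

```js
import { AutoTokenizer, AutoModelForMaskedLM } from "@huggingface/transformers";

// Placeholder: substitute this repository's actual model id.
const modelId = "<this-repo-id>";
const tokenizer = await AutoTokenizer.from_pretrained(modelId);
const model = await AutoModelForMaskedLM.from_pretrained(modelId);

const inputs = await tokenizer("My name is <mask>.");
const { logits } = await model(inputs);
console.log(logits.dims); // [1, sequence_length, 52000], matching vocab_size above
```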
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
onnx/model.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab0701ac96e4778d1574c2a25ab2d7ad6b3359c01369bdf335a53a218c3e9250
size 334190701
onnx/model_bnb4.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:512cd7f47b72b4e25c5efb284792660f149f1462dcbae3d2532e459a43ad5656
size 186187506
onnx/model_fp16.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c52d1653d684dfbde3cc9224d0fd631eb49037e8deaa0698e01458350f39f525
size 167182219
onnx/model_int8.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:985639ea19b9cfca4690fac46ea49e2cc0fd8195ba6c8afad02e2820c3f56d28
size 84286447
onnx/model_q4.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8049218fb07a102fcdc9fe6ba6381ac6ca1bba2c051d3ae9b6bde428ef4dcede
size 188878307
onnx/model_q4f16.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b6fb9a2af6fd4b868d2dd5d4d439041e781996c4dfa239021fa7a4547f59bc98
size 105293108
onnx/model_quantized.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:985639ea19b9cfca4690fac46ea49e2cc0fd8195ba6c8afad02e2820c3f56d28
size 84286447
onnx/model_uint8.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e20be90b446bce8ac59e68d2612e5840869c6604cf3d20497d7e2e4d44b9edfd
size 84286447
quantize_config.json ADDED
@@ -0,0 +1,18 @@
{
  "modes": [
    "fp16",
    "q8",
    "int8",
    "uint8",
    "q4",
    "q4f16",
    "bnb4"
  ],
  "per_channel": true,
  "reduce_range": true,
  "block_size": null,
  "is_symmetric": true,
  "accuracy_level": null,
  "quant_type": 1,
  "op_block_list": null
}
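
The `modes` listed above correspond to the quantized ONNX variants in the `onnx/` folder. As a hedged sketch (placeholder repo id; the `dtype` option is a feature of recent Transformers.js releases), a specific variant can be selected when creating the pipeline:

```js
import { pipeline } from "@huggingface/transformers";

// Placeholder: substitute this repository's actual model id.
// dtype selects a variant, e.g. "fp32", "fp16", "q8", "int8", "uint8", "q4", "q4f16" or "bnb4".
const unmasker = await pipeline("fill-mask", "<this-repo-id>", { dtype: "q8" });
const output = await unmasker("My name is <mask>.");
console.log(output);
```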
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "errors": "replace",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "max_len": 512,
  "model_max_length": 512,
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "tokenizer_class": "RobertaTokenizer",
  "trim_offsets": true,
  "unk_token": "<unk>"
}
vocab.json ADDED
The diff for this file is too large to render. See raw diff