---
language: [en, ko]
license: unknown
tags:
- roberta
- sequence-classification
- code
- small
inference: false
library_name: transformers
pipeline_tag: text-classification
datasets:
- dacon
---
|
|
|
|
|
# code-sim-roberta-small
|
|
|
|
|
Weights of RoBERTa-small fine-tuned on a code similarity classification task.
|
|
|
|
|
Task: https://dacon.io/competitions/official/235900/overview/description
|
|
|
|
|
Description: develop an AI algorithm that determines whether two code snippets are similar, i.e., whether they can produce the same output.
|
|
|
|
|
Pretrained model used: "hosung1/roberta_small_mlm_from_scratch"
|
|
|
|
|
Datasets used: provided by Dacon
|
|
|
|
|
## How to use
|
|
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("hosung1/code-sim-roberta-small")
mdl = AutoModelForSequenceClassification.from_pretrained("hosung1/code-sim-roberta-small")
```
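To classify a pair of code snippets, encode them together as a single sentence-pair input and take the argmax of the logits. The sketch below assumes the checkpoint is a binary pair classifier (similar / not similar); the example snippets and label interpretation are illustrative, so check `config.id2label` on the actual model for the real label names.

```python
# Minimal inference sketch (assumption: binary pair classifier;
# inspect mdl.config.id2label for the actual label mapping).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("hosung1/code-sim-roberta-small")
mdl = AutoModelForSequenceClassification.from_pretrained("hosung1/code-sim-roberta-small")
mdl.eval()

# Two illustrative snippets that compute the same result.
code1 = "def add(a, b):\n    return a + b"
code2 = "def plus(x, y):\n    return x + y"

# Encode the pair as one sentence-pair input.
inputs = tok(code1, code2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = mdl(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(pred)  # predicted class index
```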