---
language:
- en
- zh
license: llama3
tags:
- Cantonese
- chat
- Llama3
datasets:
- jed351/cantonese-wikipedia
- lordjia/Cantonese_English_Translation
pipeline_tag: text-generation
model-index:
- name: Llama-3-Cantonese-8B-Instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 66.69
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lordjia/Llama-3-Cantonese-8B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 26.79
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lordjia/Llama-3-Cantonese-8B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 8.23
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lordjia/Llama-3-Cantonese-8B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.82
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lordjia/Llama-3-Cantonese-8B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.48
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lordjia/Llama-3-Cantonese-8B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 27.94
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lordjia/Llama-3-Cantonese-8B-Instruct
      name: Open LLM Leaderboard
---

# Llama-3-Cantonese-8B-Instruct

## Model Overview / 模型概述

Llama-3-Cantonese-8B-Instruct is a Cantonese language model based on Meta-Llama-3-8B-Instruct, fine-tuned with LoRA. It aims to enhance Cantonese text generation and comprehension, supporting tasks such as dialogue generation, text summarization, and question answering.

Llama-3-Cantonese-8B-Instruct係基於Meta-Llama-3-8B-Instruct嘅粵語語言模型,使用LoRA進行微調,旨在提升粵語文本嘅生成同理解能力,支援對話生成、文本摘要同問答等多種任務。
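
For a quick start, the model can be loaded like any other Llama 3 instruct checkpoint with the Hugging Face `transformers` library. The snippet below is a minimal sketch rather than an official example; the dtype, sampling settings, and prompts are illustrative assumptions.

```python
# Minimal usage sketch: load the model with transformers and chat in Cantonese
# using the Llama 3 chat template. Settings here are illustrative, not prescribed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lordjia/Llama-3-Cantonese-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a bf16-capable GPU; use float16/float32 otherwise
    device_map="auto",
)

messages = [
    {"role": "system", "content": "你係一個樂於助人嘅粵語助手。"},
    {"role": "user", "content": "用廣東話介紹下香港有咩好食。"},
]

# Build the prompt with the Llama 3 chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```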

## Model Features / 模型特性

- **Base Model**: Meta-Llama-3-8B-Instruct
- **Fine-tuning Method**: LoRA instruction tuning (see the illustrative LoRA configuration sketch below)
- **Training Steps**: 4,562
- **Primary Language**: Cantonese / 粵語
- **Datasets**:
  - [jed351/cantonese-wikipedia](https://huggingface.co/datasets/jed351/cantonese-wikipedia)
  - [lordjia/Cantonese_English_Translation](https://huggingface.co/datasets/lordjia/Cantonese_English_Translation)
- **Training Tool**: [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
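
The exact LoRA hyperparameters of this fine-tune are not listed here, so the following is only an illustrative sketch of a typical LoRA setup for a Llama 3 8B base using the `peft` library (which LLaMA-Factory builds on); the rank, alpha, dropout, and target modules are assumptions, not the actual training recipe.

```python
# Illustrative only: a typical PEFT LoRA configuration for a Llama-3-8B-Instruct base.
# The values below are common defaults, not the settings used for this model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

lora_config = LoraConfig(
    r=8,                      # low-rank dimension
    lora_alpha=16,            # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```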

## Quantized Version / 量化版本

A 4-bit quantized version of this model is also available: [llama3-cantonese-8b-instruct-q4_0.gguf](https://huggingface.co/lordjia/Llama-3-Cantonese-8B-Instruct/blob/main/llama3-cantonese-8b-instruct-q4_0.gguf).

此模型的4位量化版本也可用:[llama3-cantonese-8b-instruct-q4_0.gguf](https://huggingface.co/lordjia/Llama-3-Cantonese-8B-Instruct/blob/main/llama3-cantonese-8b-instruct-q4_0.gguf)。
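
The GGUF file is intended for llama.cpp-compatible runtimes. As a rough sketch, assuming the `llama-cpp-python` bindings and a locally downloaded copy of the file, it can be used as follows; the file path and generation settings are placeholders.

```python
# Rough sketch: run the 4-bit GGUF with llama-cpp-python.
# The local path is a placeholder; download the .gguf file from the repo first.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama3-cantonese-8b-instruct-q4_0.gguf",
    n_ctx=4096,             # context window; reduce if memory is tight
    chat_format="llama-3",  # explicitly select the Llama 3 chat template
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "用廣東話講個笑話嚟聽下。"}],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```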

## Alternative Model Recommendations / 備選模型推薦

As alternatives, consider the following models, both fine-tuned by LordJia for Cantonese language tasks:

想揾其他選擇嘅話,可以考慮下面兩個模型,都係LordJia針對粵語任務微調嘅:

1. [Qwen2-Cantonese-7B-Instruct](https://huggingface.co/lordjia/Qwen2-Cantonese-7B-Instruct), based on Qwen2-7B-Instruct.
2. [Llama-3.1-Cantonese-8B-Instruct](https://huggingface.co/lordjia/Llama-3.1-Cantonese-8B-Instruct), based on Meta-Llama-3.1-8B-Instruct.

## License / 許可證

This model is licensed under the Llama 3 Community License. Please review the terms before use.

此模型根據Llama 3社區許可證授權。請喺使用前仔細閱讀相關條款。

## Contributors / 貢獻者

- LordJia [https://ai.chao.cool](https://ai.chao.cool/)

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lordjia__Llama-3-Cantonese-8B-Instruct).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 24.16 |
| IFEval (0-Shot)     | 66.69 |
| BBH (3-Shot)        | 26.79 |
| MATH Lvl 5 (4-Shot) |  8.23 |
| GPQA (0-shot)       |  5.82 |
| MuSR (0-shot)       |  9.48 |
| MMLU-PRO (5-shot)   | 27.94 |
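
For reference, the reported Avg. matches the unweighted mean of the six benchmark scores listed above:

```python
# Sanity check: Avg. is the unweighted mean of the six benchmark scores.
scores = [66.69, 26.79, 8.23, 5.82, 9.48, 27.94]
print(round(sum(scores) / len(scores), 2))  # 24.16
```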