lbourdois committed on
Commit d7f2a33 · verified · 1 Parent(s): 9c8f2fd

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1)
  1. README.md +79 -67
README.md CHANGED
@@ -1,67 +1,79 @@
- ---
- base_model: Qwen/Qwen2.5-0.5B-Instruct
- language:
- - en
- library_name: transformers
- license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
- pipeline_tag: text-generation
- tags:
- - chat
- - openvino
- - openvino-export
- ---
- 
- This model was converted to OpenVINO from [`Qwen/Qwen2.5-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using [optimum-intel](https://github.com/huggingface/optimum-intel)
- via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space.
- 
- First make sure you have optimum-intel installed:
- 
- ```bash
- pip install optimum[openvino]
- ```
- 
- To load your model, you can do the following in a Hugging Face Space:
- app.py
- ```python
- import gradio as gr
- from huggingface_hub import InferenceClient  # unused below
- from optimum.intel import OVModelForCausalLM
- from transformers import AutoTokenizer, pipeline
- 
- # Load the model and tokenizer
- model_id = "HelloSun/Qwen2.5-0.5B-Instruct-openvino"
- model = OVModelForCausalLM.from_pretrained(model_id)
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- 
- # Build the text-generation pipeline
- pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
- 
- def respond(message, history):
-     # Merge the current message with the chat history
-     #input_text = message if not history else history[-1]["content"] + " " + message
-     input_text = message
-     # Get the model's response
-     response = pipe(input_text, max_length=500, truncation=True, num_return_sequences=1)
-     reply = response[0]['generated_text']
- 
-     # Return the reply in the new messages format
-     print(f"Message: {message}")
-     print(f"Reply: {reply}")
-     return reply
- 
- # Set up the Gradio chat interface
- demo = gr.ChatInterface(fn=respond, title="Chat with Qwen(通義千問) 2.5-0.5B", description="Chat with HelloSun/Qwen2.5-0.5B-Instruct-openvino!", type='messages')
- 
- if __name__ == "__main__":
-     demo.launch()
- ```
- requirements.txt
- ```
- huggingface_hub==0.25.2
- optimum[openvino]
- ```
- 
- 
- 
+ ---
+ base_model: Qwen/Qwen2.5-0.5B-Instruct
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ library_name: transformers
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
+ pipeline_tag: text-generation
+ tags:
+ - chat
+ - openvino
+ - openvino-export
+ ---
+ 
+ This model was converted to OpenVINO from [`Qwen/Qwen2.5-0.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using [optimum-intel](https://github.com/huggingface/optimum-intel)
+ via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space.
+ 
+ First make sure you have optimum-intel installed:
+ 
+ ```bash
+ pip install optimum[openvino]
+ ```
+ 
+ To load your model, you can do the following in a Hugging Face Space:
+ app.py
+ ```python
+ import gradio as gr
+ from huggingface_hub import InferenceClient  # unused below
+ from optimum.intel import OVModelForCausalLM
+ from transformers import AutoTokenizer, pipeline
+ 
+ # Load the model and tokenizer
+ model_id = "HelloSun/Qwen2.5-0.5B-Instruct-openvino"
+ model = OVModelForCausalLM.from_pretrained(model_id)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ 
+ # Build the text-generation pipeline
+ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
+ 
+ def respond(message, history):
+     # Merge the current message with the chat history
+     #input_text = message if not history else history[-1]["content"] + " " + message
+     input_text = message
+     # Get the model's response
+     response = pipe(input_text, max_length=500, truncation=True, num_return_sequences=1)
+     reply = response[0]['generated_text']
+ 
+     # Return the reply in the new messages format
+     print(f"Message: {message}")
+     print(f"Reply: {reply}")
+     return reply
+ 
+ # Set up the Gradio chat interface
+ demo = gr.ChatInterface(fn=respond, title="Chat with Qwen(通義千問) 2.5-0.5B", description="Chat with HelloSun/Qwen2.5-0.5B-Instruct-openvino!", type='messages')
+ 
+ if __name__ == "__main__":
+     demo.launch()
+ ```
+ requirements.txt
+ ```
+ huggingface_hub==0.25.2
+ optimum[openvino]
+ ```
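
For readers who want to reproduce the conversion described in the README locally rather than through the export space, here is a minimal sketch using optimum-intel; the output directory name is illustrative, and `export=True` asks optimum-intel to convert the original PyTorch checkpoint to OpenVINO IR on the fly:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# export=True converts the original PyTorch checkpoint to OpenVINO IR during loading
model = OVModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", export=True)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# Save the converted model and tokenizer locally (directory name is illustrative)
model.save_pretrained("Qwen2.5-0.5B-Instruct-openvino")
tokenizer.save_pretrained("Qwen2.5-0.5B-Instruct-openvino")
```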
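
The app.py in the diff wraps the model in a Gradio chat UI; outside a Space, the same model can be exercised directly. A minimal sketch reusing the pipeline from that app (prompt and token budget are illustrative):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

# Load the OpenVINO model and tokenizer from the Hub
model_id = "HelloSun/Qwen2.5-0.5B-Instruct-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Generate a short completion
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("What is OpenVINO?", max_new_tokens=64)[0]["generated_text"])
```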