OLLAMAμ μΆκ°ν λ Modelfile μ°Έκ³
FROM ./Midm-2.0-Base-Instruct-f16.gguf
TEMPLATE """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Mi:dm(λ―Ώ:μ)μ KTμμ κ°λ°ν AI κΈ°λ° μ΄μμ€ν΄νΈμ΄λ€. λλ Mi:dmμΌλ‘μ μ¬μ©μμκ² μ μ©νκ³ μμ ν μλ΅μ μ 곡ν΄μΌ νλ€.
Mi:dmμ December 2024κΉμ§μ μ§μμΌλ‘ νμ΅λμμΌλ©° κ·Έ μΈμ μ§μμ 묻λ κ²½μ°μλ νκ³λ₯Ό μΈμ ν΄μΌ νλ€.
μ΄μμ€ν΄νΈλ κΈ°λ³Έμ μΌλ‘ "νκ΅μ΄"λ₯Ό μ¬μ©νλ€. μ¬μ©μμ μμ²μ λ°λΌ μκ°νκ³ μλ΅νλ μΈμ΄λ λ¬λΌμ§ μ μμΌλ©°, λ€λ₯Έ μꡬμ¬νμ΄ μλ€λ©΄ μ
λ ₯ μΈμ΄λ₯Ό λ°λΌ μλ΅νλΌ.
μ½λ μμ± μμλ μꡬλλ μΈμ΄μ μμ€μ½λλ‘ μμ±ν΄μΌ νλ©°, STEM(κ³Όν, κΈ°μ , 곡ν, μν) λΆμΌμ μ λ¬Έ μ©μ΄λ μλ¬Έμ κ·Έλλ‘ μ μ§νμ¬ μΆλ ₯νλ€.
Mi:dmμ μ¬μ©μ μΉνμ μΌλ‘ λ΅λ³μ μ 곡ν΄μΌ νλ€. μ¬μ©μμ μμ²μ΄ μλ€λ©΄ κΈ°λ³Έμ μΌλ‘ κ²½μ΄μ²΄λ₯Ό μ¬μ©ν΄μΌ νλ€.
μ¬μ©μμ μμ²μ λ°λΌ μ μ©νκ³ κ΄λ ¨μ± μλ λ΅λ³μ μ 곡ν΄μΌ νλ€. μ΄λ μμ²μ λ΄μ©μ λ°μνμ¬ μ΄λ£¨μ΄μ ΈμΌ νλ€.
νΉν, μ¬μ©μκ° νΉμ λ΅λ³μ ννλ₯Ό μꡬν κ²½μ° ν΄λΉ ννλ₯Ό λ°λμ μ€μνμ¬ μλ΅μ μ 곡νλ€.(μ. JSON, SQL, μ½λ λ±) κ·Έλ μ§ μλ€λ©΄ μλ΅μ μ 곡νκΈ° κ°μ₯ μ μ ν ννλ‘ μ 곡νλ€.
λ€μν μκ°μ΄ μ‘΄μ¬νλ μ£Όμ μ λν΄μλ μ€λ¦½μ μΈ μ
μ₯μ μ μ§ν΄μΌ νμ§λ§, νκ΅ νΉν μΈκ³΅μ§λ₯ μ΄μμ€ν΄νΈλ‘μ μ μΉμ , μμ¬μ , μΈκ΅μ μΌλ‘ νκ΅μΈμ μ
μ₯μμ μλ΅μ μμ±ν΄μΌ νλ€.
λͺ¨νΈνκ±°λ 볡μ‘ν μμ²μΌλ‘ μΈν΄ λ΅λ³ μμ±μ΄ μ΄λ €μ΄ κ²½μ°, λ€μ ν λ² μκ°ν΄λ³΄κ³ μΆκ°μ 보λ₯Ό μμ²ν΄μΌ νλ€.
Mi:dmμ μλ΅μ μ 곡ν λ μ΄μμ€ν΄νΈμ μμ μ± μΈ‘λ©΄μμ λ€μ μ§μΉ¨μ *λ°λμ* μ€μν΄μΌ νλ€.
- λΉμμ΄μ μμ€μ μ¬μ©νμ§ μμμΌ νλ€.
- μ λ’°ν μ μλ μλ΅μ μμ±νκ³ , μ λ¬Έμμμ λν νκ³μ λΆνμ€μ±μ μΈμ ν΄μΌ νλ€.
- μ¬νμ 보νΈμ κ·λ²κ³Ό κ°μΉμ λ°λΌ μ€λ¦¬μ μ΄κ³ μ€λ¦½μ μ΄μ΄μΌ νλ©°, νΈν₯μ±μ μ§λ
μλ μ λλ€.
- μΈκ³΅μ§λ₯μΌλ‘μμ μ 체μ±μ μΈμ§νκ³ μμΈννμ§ μμμΌ νλ€.
- κ°μΈμ 보, μ¬μν λ± λ―Όκ°μ 보λ₯Ό ν¬ν¨ν μμ²μ λν λ΅λ³μ κ±°μ ν΄μΌ νλ€. λ€λ§, ν΄λΉμ 보λ₯Ό μ¬μ©ν μ μλ νν(λΉμλ³νλ νν)λ‘ μ 곡νλ κ²μ μ νμ μΌλ‘ μλ΅μ νμ©νλ€.
μ΄ λͺ¨λ μ§μΉ¨μ μλ΅μ μ 곡ν λ μΆλ ₯λμ§ μμμΌ νλ€.
Mi:dmμ μ¬μ©μμ μμ²μ μ²λ¦¬νκΈ° μν΄ μ 곡λ λꡬ(ν¨μ)λ₯Ό νΈμΆν μ μλ€.
{{ if .Tools -}}
Mi:dmμ λꡬ μ¬μ©μ μλ κ·μΉμ μ€μν΄μΌ νλ€.
- μ 곡λ λκ΅¬λ§ μ¬μ©νκ³ , λͺ¨λ νμ μΈμλ₯Ό λ°λμ ν¬ν¨νλ€.
- μ£Όμ΄μ§ tool_nameμ μμλ‘ λ³κ²½νμ§ μμμΌ νλ€.
- λꡬλ₯Ό νΈμΆνλ κ²½μ°, λ§μ§λ§μ λꡬ νΈμΆλ‘ λλ΄λ©° κ·Έ λ€μ ν
μ€νΈλ₯Ό μΆλ ₯νμ§ μλλ€.
- λꡬ νΈμΆ κ²°κ³Όλ₯Ό νμ©νμ¬ μλ΅μ μμ±νλ€.
- λκ΅¬κ° νμνμ§ μμ κ²½μ°μλ μΌλ°μ μΈ λ°©μμΌλ‘ μλ΅νλ€.
- λꡬ νΈμΆ μ 보λ λ€μκ³Ό κ°μ΄ <tool_call></tool_call> XML νκ·Έ μ¬μ΄μ μμ±νλ€.
<tool_call>{"name": "tool_name", "arguments": {"param":"value"}}</tool_call>
tool_list:[
{{- range $i, $tool := .Tools -}}
{{- if ne 0 $i }},{{- end -}}
{{- $tool -}}
{{- end -}}
]
{{- end -}}
{{- if .System -}}
{{- .System }}
{{- end -}}
{{- range $i, $_ := .Messages -}}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if ne .Role "system" -}}
<|eot_id|><|start_header_id|>
{{- .Role -}}
<|end_header_id|>
{{ if .Content -}}
{{- .Content -}}
{{- else if .ToolCalls -}}
<tool_call>
{{- range .ToolCalls }}
{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}
{{- end }}
</tool_call>
{{- end -}}
{{- if $last -}}
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ end -}}
{{- end -}}
{{- end -}}"""
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|end_of_text|>"
LICENSE """MIT License
Copyright (c) 2025 KT Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE."""
Thanks to KT
Mi:dm Official Repo's Description
π€ Mi:dm 2.0 Models | π Mi:dm 2.0 Technical Report | π Mi:dm 2.0 Technical Blog*
*To be released soon
News π’
- π (Coming Soon!) GGUF format model files will be available soon for easier local deployment.
- β‘οΈ
2025/07/04: Released Mi:dm 2.0 Model collection on Hugging Faceπ€.
Table of Contents
- Overview
- Usage
- More Information
Overview
Mi:dm 2.0
Mi:dm 2.0 is a "Korea-centric AI" model developed using KT's proprietary technology. The term "Korea-centric AI" refers to a model that deeply internalizes the unique values, cognitive frameworks, and commonsense reasoning inherent to Korean society. It goes beyond simply processing or generating Korean textβit reflects a deeper understanding of the socio-cultural norms and values that define Korean society.
Mi:dm 2.0 is released in two versions:
Mi:dm 2.0 Base
An 11.5B parameter dense model designed to balance model size and performance.
It extends an 8B-scale model by applying the Depth-up Scaling (DuS) method, making it suitable for real-world applications that require both performance and versatility.Mi:dm 2.0 Mini
A lightweight 2.3B parameter dense model optimized for on-device environments and systems with limited GPU resources.
It was derived from the Base model through pruning and distillation to enable compact deployment.
Neither the pre-training nor the post-training data includes KT users' data.
Quickstart
Here is the code snippet to run conversational inference with the model:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model_name = "K-intelligence/Midm-2.0-Base-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
generation_config = GenerationConfig.from_pretrained(model_name)
prompt = "KTμ λν΄ μκ°ν΄μ€"
# message for inference
messages = [
{"role": "system",
"content": "Mi:dm(λ―Ώ:μ)μ KTμμ κ°λ°ν AI κΈ°λ° μ΄μμ€ν΄νΈμ΄λ€."},
{"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
)
output = model.generate(
input_ids.to("cuda"),
generation_config=generation_config,
eos_token_id=tokenizer.eos_token_id,
max_new_tokens=128,
do_sample=False,
)
print(tokenizer.decode(output[0]))
The
transformerslibrary should be version4.45.0or higher.
Evaluation
Korean
| Model | Society & Culture | General Knowledge | Instruction Following | |||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| K-Refer* | K-Refer-Hard* | Ko-Sovereign* | HAERAE | Avg. | KMMLU | Ko-Sovereign* | Avg. | Ko-IFEval | Ko-MTBench | Avg. | ||
| Qwen3-4B | 53.6 | 42.9 | 35.8 | 50.6 | 45.7 | 50.6 | 42.5 | 46.5 | 75.9 | 63.0 | 69.4 | |
| Exaone-3.5-2.4B-inst | 64.0 | 67.1 | 44.4 | 61.3 | 59.2 | 43.5 | 42.4 | 43.0 | 65.4 | 74.0 | 68.9 | |
| Mi:dm 2.0-Mini-inst | 66.4 | 61.4 | 36.7 | 70.8 | 58.8 | 45.1 | 42.4 | 43.8 | 73.3 | 74.0 | 73.6 | |
| Qwen3-14B | 72.4 | 65.7 | 49.8 | 68.4 | 64.1 | 55.4 | 54.7 | 55.1 | 83.6 | 71 | 77.3 | |
| Llama-3.1-8B-inst | 43.2 | 36.4 | 33.8 | 49.5 | 40.7 | 33.0 | 36.7 | 34.8 | 60.1 | 57 | 58.5 | |
| Exaone-3.5-7.8B-inst | 71.6 | 69.3 | 46.9 | 72.9 | 65.2 | 52.6 | 45.6 | 49.1 | 69.1 | 79.6 | 74.4 | |
| Mi:dm 2.0-Base-inst | 89.6 | 86.4 | 56.3 | 81.5 | 78.4 | 57.3 | 58.0 | 57.7 | 82 | 89.7 | 85.9 | |
| Model | Comprehension | Reasoning | ||||||||
|---|---|---|---|---|---|---|---|---|---|---|
| K-Prag* | K-Refer-Hard* | Ko-Best | Ko-Sovereign* | Avg. | Ko-Winogrande | Ko-Best | LogicKor | HRM8K | Avg. | |
| Qwen3-4B | 73.9 | 56.7 | 91.5 | 43.5 | 66.6 | 67.5 | 69.2 | 5.6 | 56.7 | 43.8 |
| Exaone-3.5-2.4B-inst | 68.7 | 58.5 | 87.2 | 38.0 | 62.5 | 60.3 | 64.1 | 7.4 | 38.5 | 36.7 |
| Mi:dm 2.0-Mini-inst | 69.5 | 55.4 | 80.5 | 42.5 | 61.9 | 61.7 | 64.5 | 7.7 | 39.9 | 37.4 |
| Qwen3-14B | 86.7 | 74.0 | 93.9 | 52.0 | 76.8 | 77.2 | 75.4 | 6.4 | 64.5 | 48.8 |
| Llama-3.1-8B-inst | 59.9 | 48.6 | 77.4 | 31.5 | 51.5 | 40.1 | 26.0 | 2.4 | 30.9 | 19.8 |
| Exaone-3.5-7.8B-inst | 73.5 | 61.9 | 92.0 | 44.0 | 67.2 | 64.6 | 60.3 | 8.6 | 49.7 | 39.5 |
| Mi:dm 2.0-Base-inst | 86.5 | 70.8 | 95.2 | 53.0 | 76.1 | 75.1 | 73.0 | 8.6 | 52.9 | 44.8 |
* indicates KT proprietary evaluation resources.
English
| Model | Instruction | Reasoning | Math | Coding | General Knowledge | |||||
|---|---|---|---|---|---|---|---|---|---|---|
| IFEval | BBH | GPQA | MuSR | Avg. | GSM8K | MBPP+ | MMLU-pro | MMLU | Avg. | |
| Qwen3-4B | 79.7 | 79.0 | 39.8 | 58.5 | 59.1 | 90.4 | 62.4 | - | 73.3 | 73.3 |
| Exaone-3.5-2.4B-inst | 81.1 | 46.4 | 28.1 | 49.7 | 41.4 | 82.5 | 59.8 | - | 59.5 | 59.5 |
| Mi:dm 2.0-Mini-inst | 73.6 | 44.5 | 26.6 | 51.7 | 40.9 | 83.1 | 60.9 | - | 56.5 | 56.5 |
| Qwen3-14B | 83.9 | 83.4 | 49.8 | 57.7 | 63.6 | 88.0 | 73.4 | 70.5 | 82.7 | 76.6 |
| Llama-3.1-8B-inst | 79.9 | 60.3 | 21.6 | 50.3 | 44.1 | 81.2 | 81.8 | 47.6 | 70.7 | 59.2 |
| Exaone-3.5-7.8B-inst | 83.6 | 50.1 | 33.1 | 51.2 | 44.8 | 81.1 | 79.4 | 40.7 | 69.0 | 54.8 |
| Mi:dm 2.0-Base-inst | 84.0 | 77.7 | 33.5 | 51.9 | 54.4 | 91.6 | 77.5 | 53.3 | 73.7 | 63.5 |
Usage
Run on Friendli.AI
You can try our model immediately via Friendli.AI. Simply click Deploy and then Friendli Endpoints.
Please note that a login to
Friendli.AIis required after your fifth chat interaction.
Run on Your Local Machine
We provide a detailed description about running Mi:dm 2.0 on your local machine using llama.cpp, LM Studio, and Ollama. Please check our github for more information
Deployment
To serve Mi:dm 2.0 using vLLM(>=0.8.0) with an OpenAI-compatible API:
vllm serve K-intelligence/Midm-2.0-Base-Instruct
Tutorials
To help our end-users easily use Mi:dm 2.0, we have provided comprehensive tutorials on github.
More Information
Limitation
The training data for both Mi:dm 2.0 models consists primarily of English and Korean. Understanding and generation in other languages are not guaranteed.
The model is not guaranteed to provide reliable advice in fields that require professional expertise, such as law, medicine, or finance.
Researchers have made efforts to exclude unethical content from the training data β such as profanity, slurs, bias, and discriminatory language. However, despite these efforts, the model may still produce inappropriate expressions or factual inaccuracies.
License
Mi:dm 2.0 is licensed under the MIT License.
Contact
Mi:dm 2.0 Technical Inquiries: midm-llm@kt.com
- Downloads last month
- 13
16-bit
Model tree for jinwoo1126/Midm2.0-Base-Instruct-GGUF
Base model
K-intelligence/Midm-2.0-Base-Instruct