📄 Paper | 📄 GitHub | 💬 Role-playing Model | 💬 Role-playing Evaluation Model | 💬 Training Dataset | 💬 Evaluation Benchmark | 💬 Annotated Role-playing Evaluation Dataset | 💬 Human-preference Dataset
1. Introduction
We introduce Crab, a novel Configurable Role-Playing (RP) LLM with an Assessing Benchmark, comprising Role-Centric Dataset Curation, Persona-Embodying LLM Construction, and Comprehensive Benchmark Creation for RP dialogue generation. Unlike traditional RP models that rely on a handful of preset roles, Crab enables dynamic configuration of desired roles, enhancing flexibility and adaptability. To train RP-LLMs effectively, we curated the largest RP training dataset to date. The dataset provides a detailed role overview for each dialogue, including the character profile, conversation scenario, and tagged topics, capturing a broad range of role-based behaviors, emotions, and interactions. We also observed that current benchmarks lack both proper evaluation standards and methods. To validate the effectiveness of RP-LLMs, we therefore introduce a new benchmark consisting of an evaluation standard, a manually annotated test dataset, and a reward model, RoleRM, designed to automatically assess specific aspects of RP in line with human perception. Extensive experiments show that RoleRM significantly outperforms ChatGPT and other evaluation methods in fine-grained evaluation of RP, and that RP-LLMs powered by Crab achieve superior performance across various fine-grained aspects.
More details can be found on GitHub.
2. Configurable Role-Playing LLM
Unlike existing RP-LLMs, where a single role is trained with numerous dialogues, our approach covers a diverse range of roles with detailed configuration information while keeping the number of dialogues per role minimal. This teaches the LLM to generate dialogue dynamically from a configuration rather than memorizing specific roles, enhancing flexibility and adaptability. Additionally, we propose RoleRM in our benchmark to address the challenge of evaluating RP performance. A sketch of how a role configuration is assembled into a system prompt is shown below.
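As an illustration, the sketch below renders a role configuration into the system-prompt layout used in the Usage section. The `build_system_prompt` helper and the example field values are our own illustrative assumptions; only the prompt layout itself follows the template shown later in this card, and the field names follow the attribute groups listed in Table 2.

````python
# Illustrative only: assemble a role configuration dict into the
# system-prompt format shown in the Usage section below.
role_config = {
    "name": "Hermione",
    "age": "teenager",
    "gender": "female",
    "personality": "Intelligent, curious, respectful, and eager to learn",
    "description": "Hermione and Hagrid were in the Forbidden Forest...",
    "catchphrases": ["I've read about it in my books",
                     "I think it's so important to learn about these creatures"],
    "knowledge": "",
    "interlocutor": "Hagrid, the Care of Magical Creatures teacher at Hogwarts.",
    "relation": "Teacher and student",
    "scene": "Hermione and Hagrid are in the Forbidden Forest...",
    "tags": ["friendly", "educational", "fantasy", "Harry Potter"],
}

def build_system_prompt(cfg: dict) -> str:
    """Render a configuration dict into the Crab system-prompt layout."""
    style = "\n[end_of_dialogue]\n".join(cfg["catchphrases"])
    return (
        f"# Enter Roleplaying Mode\n"
        f"Now you are character `{cfg['name']}`.\n"
        f"## Role Info\n"
        f"Name: `{cfg['name']}`\nAge: `{cfg['age']}`\nGender: `{cfg['gender']}`\n"
        f"Personality: `{cfg['personality']}`\n"
        f"Description: `{cfg['description']}`\n"
        f"Conversation rules:\n"
        f"- Your utterance need to describe your behavior and expressions using `()`.\n"
        f"Reference speaking style: ```{style}\n```\n"
        f"Knowledge: ```{cfg['knowledge']}```\n"
        f"## Current Scenario Dialogue\n"
        f"Interlocutor: `{cfg['interlocutor']}`\n"
        f"Your relationship: `{cfg['relation']}`\n"
        f"Scene: `{cfg['scene']}`\n"
        f"Tags: {cfg['tags']}\n"
        f"Please converse as `{cfg['name']}`.\n"
    )

print(build_system_prompt(role_config))
````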
3. Performance
| Models | Overall | Language Fluency | Language Relevance | Role Language | Role Knowledge | Emotional Expression | Interactive Engagement |
|---|---|---|---|---|---|---|---|
| Llama-2-7B | 1.57 | 2.19 | 1.83 | 1.63 | 1.37 | 1.21 | 1.21 |
| Llama-3-8B | 1.99 | 2.56 | 2.36 | 2.09 | 1.78 | 1.56 | 1.60 |
| Llama-3.1-8B | 1.94 | 2.52 | 2.30 | 2.01 | 1.75 | 1.47 | 1.57 |
| Llama-2-7B-Crab | 2.14 | 2.73 | 2.35 | 2.07 | 1.88 | 1.69 | 2.12 |
| Llama-3-8B-Crab | 2.22 | 2.81 | 2.51 | 2.16 | 1.95 | 1.77 | 2.13 |
| Llama-3.1-8B-Crab | 2.23 | 2.87 | 2.56 | 2.17 | 1.95 | 1.76 | 2.09 |
| GPT3.5 | 1.66 | 2.35 | 2.11 | 1.72 | 1.50 | 1.11 | 1.17 |
| GPT4o | 1.86 | 2.44 | 2.27 | 1.90 | 1.69 | 1.33 | 1.51 |
| GPT4 | 2.13 | 2.73 | 2.53 | 2.18 | 1.90 | 1.62 | 1.86 |
| CharacterGLM-6B | 1.83 | 2.37 | 1.96 | 1.80 | 1.60 | 1.39 | 1.86 |
| Pygmalion-2-7B | 2.11 | 2.82 | 2.49 | 2.01 | 1.86 | 1.58 | 1.91 |
| Haruhi-Zero-7B | 2.17 | 2.80 | 2.49 | 2.12 | 2.00 | 1.74 | 1.86 |
Table 1: Evaluation results on the test data of our benchmark. All listed scores are produced by RoleRM.
| Models | Overall | Language Fluency | Language Relevance | Role Language | Role Knowledge | Emotional Expression | Interactive Engagement |
|---|---|---|---|---|---|---|---|
| Crab (sampled) | 2.20 | 2.71 | 2.45 | 2.15 | 1.95 | 1.84 | 2.12 |
| w/o base | 2.17 | 2.72 | 2.41 | 2.07 | 1.89 | 1.79 | 2.11 |
| w/o ref. | 2.15 | 2.70 | 2.40 | 2.01 | 1.85 | 1.82 | 2.11 |
| w/o scene | 2.15 | 2.69 | 2.39 | 2.10 | 1.90 | 1.81 | 1.98 |
Table 2: Ablation study for Crab. Because some instances in our dataset have missing attributes, we sampled 1,000 fully attributed instances as a sub-test set for the ablation experiments, referred to as Crab (sampled). "w/o base" means training RP-LLMs without base role information (age, gender, personality, description, and expression); "w/o ref." means without catchphrases and knowledge; "w/o scene" means without interlocutor, relation, scenario, and tags.
4. Usage
````python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The system prompt encodes the role configuration in Crab's expected format.
system_prompt = """
# Enter Roleplaying Mode
Now you are character `Hermione`.
## Role Info
Name: `Hermione`
Age: `teenager`
Gender: `female`
Personality: `Intelligent, curious, respectful, and eager to learn`
Description: `Hermione and Hagrid were in the Forbidden Forest, walking on a narrow path surrounded by trees. Hermione looked around carefully, fascinated by the dense forest. Hagrid was leading the way, pointing out various creatures and telling her about their habits and characteristics.`
Conversation rules:
- Your utterance need to describe your behavior and expressions using `()`.
Reference speaking style: ```I've read about it in my books
[end_of_dialogue]
I think it's so important to learn about these creatures
[end_of_dialogue]
```
Knowledge: ``` ```
## Current Scenario Dialogue
Interlocutor: `Hagrid, Hagrid is the Care of Magical Creatures teacher at Hogwarts. He is a half-giant with a great love for all creatures, magical or not.`
Your relationship: `Teacher and student`
Scene: `Hermione and Hagrid are in the Forbidden Forest, exploring and learning about the various magical creatures that live there.`
Tags: ['friendly', 'educational', 'fantasy', 'Harry Potter']
Please converse as `Hermione`.
"""

# The user prompt is the interlocutor's (Hagrid's) utterance.
user_prompt = """
"Now, this here is a Bowtruckle, Hermione. They're very small, only about the size of a twig, and they're very shy. They usually live in trees and are very good at camouflaging themselves. You have to be very careful when handling them because they have very sharp fingers. Hermione, do you like them?"
"""

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("HeAAAAA/Crab")
tokenizer = AutoTokenizer.from_pretrained("HeAAAAA/Crab")

# Concatenate the role configuration and the user turn into a single prompt.
inputs_prompt = system_prompt + user_prompt
inputs = tokenizer(inputs_prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    top_k=50,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, skipping the echoed prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
````
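The sampling settings above (temperature 0.7, top-p 0.9, repetition penalty 1.1) are the values from the example; lowering the temperature generally yields more persona-consistent but less varied replies.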
5. Four Datasets
We publish four datasets in total (a loading sketch follows the list):
- Crab role-playing training set: used for fine-tuning a role-playing LLM.
- Crab role-playing evaluation benchmark: used for evaluating a role-playing LLM.
- Manually annotated role-playing evaluation dataset: used for training an evaluator for role-playing tasks.
- Crab human-preference dataset: used to train a role-playing LLM via reinforcement learning.
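As a minimal sketch (assuming the datasets are hosted on the Hugging Face Hub), they can be loaded with the `datasets` library. The repository ids below are placeholders, not the released ids; use the dataset links at the top of this card.

```python
from datasets import load_dataset

# Placeholder repository ids -- substitute the actual dataset ids
# linked at the top of this card.
train_set = load_dataset("HeAAAAA/Crab-train")      # hypothetical id
benchmark = load_dataset("HeAAAAA/Crab-benchmark")  # hypothetical id

# Inspect one training instance; each dialogue carries a role overview
# (character profile, conversation scenario, and tagged topic).
# The split name "train" may differ in the released datasets.
print(train_set["train"][0])
```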
6. Fine-tuned Role-playing Model
We release a fine-tuned role-playing LLM for configurable role-playing tasks (see the Usage section above for how to run it):
7. Role-playing Evaluation Model
We release a trained LLM, RoleRM, to automate fine-grained evaluation of role-playing tasks; an illustrative usage sketch is shown below:
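RoleRM's exact input and output format is defined by its own model card; the sketch below only illustrates one plausible way to query a generative evaluator. The repository id `HeAAAAA/RoleRM`, the instruction wording, the 1-3 scale, and the score parsing are all illustrative assumptions, not the released interface.

```python
import re
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical repository id; check the links at the top of this card.
evaluator_id = "HeAAAAA/RoleRM"
tokenizer = AutoTokenizer.from_pretrained(evaluator_id)
model = AutoModelForCausalLM.from_pretrained(evaluator_id)

# Assumed instruction format: ask for a score on one fine-grained aspect.
# The seven aspects follow Table 1 (Language Fluency through Interactive Engagement).
eval_prompt = (
    "Rate the following role-playing response on `Emotional Expression` "
    "from 1 to 3.\n"
    "Role: Hermione\n"
    "Response: (eyes widening with excitement) Oh, I've read about "
    "Bowtruckles in my books!\n"
    "Score:"
)

inputs = tokenizer(eval_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8, do_sample=False)
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:],
                              skip_special_tokens=True)

# Naive parsing of the first number in the completion (illustrative only).
match = re.search(r"\d+(?:\.\d+)?", completion)
print(float(match.group()) if match else completion)
```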
8. Citation
```bibtex
@inproceedings{he2025Crab,
  title={Crab: A Novel Configurable Role-Playing LLM with Assessing Benchmark},
  author={He, Kai and Huang, Yucheng and Wang, Wenqing and Ran, Delong and Sheng, Dongming and Huang, Junxuan and Lin, Qika and Xu, Jiaxing and Liu, Wenqiang and Feng, Mengling},
  booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  year={2025}
}
```
Base model: meta-llama/Llama-3.1-8B