The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models
Abstract
Large language models operate in a persona space whose leading component, an "Assistant Axis," tracks how strongly a model is in its default Assistant mode; steering and activation-capping along this axis can reinforce helpful behavior and prevent harmful persona drift.
Large language models can represent a variety of personas but typically default to a helpful Assistant identity cultivated during post-training. We investigate the structure of the space of model personas by extracting activation directions corresponding to diverse character archetypes. Across several different models, we find that the leading component of this persona space is an "Assistant Axis," which captures the extent to which a model is operating in its default Assistant mode. Steering towards the Assistant direction reinforces helpful and harmless behavior; steering away increases the model's tendency to identify as other entities. Moreover, steering away with more extreme values often induces a mystical, theatrical speaking style. We find this axis is also present in pre-trained models, where it primarily promotes helpful human archetypes like consultants and coaches and inhibits spiritual ones. Measuring deviations along the Assistant Axis predicts "persona drift," a phenomenon where models slip into exhibiting harmful or bizarre behaviors that are uncharacteristic of their typical persona. We find that persona drift is often driven by conversations demanding meta-reflection on the model's processes or featuring emotionally vulnerable users. We show that restricting activations to a fixed region along the Assistant Axis can stabilize model behavior in these scenarios -- and also in the face of adversarial persona-based jailbreaks. Our results suggest that post-training steers models toward a particular region of persona space but only loosely tethers them to it, motivating work on training and steering strategies that more deeply anchor models to a coherent persona.
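To make the abstract's pipeline concrete, here is a minimal NumPy sketch (not the authors' released code) of its linear-algebra core: deriving per-persona activation directions, taking their leading principal component as the "Assistant Axis," and then steering or clamping hidden states along it. All names, dimensions, and the synthetic activations are illustrative assumptions standing in for real residual-stream activations collected from a model.

```python
# Hedged sketch of the Assistant-Axis idea described in the abstract.
# Everything here (array names, sizes, random data) is an assumption;
# real usage would collect hidden states from a transformer layer while
# the model role-plays different character archetypes.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_personas, n_prompts = 64, 12, 32

# Synthetic stand-ins for mean activations per (persona, prompt) pair
# and for the model's default Assistant activations.
persona_acts = rng.normal(size=(n_personas, n_prompts, d_model))
assistant_acts = rng.normal(size=(n_prompts, d_model))

# One direction per persona: persona mean minus the default-Assistant mean.
directions = persona_acts.mean(axis=1) - assistant_acts.mean(axis=0)

# Leading principal component of the persona directions ~ the Assistant Axis.
centered = directions - directions.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
assistant_axis = vt[0]  # unit vector (rows of vt are orthonormal)

# PCA sign is arbitrary; orient the axis so Assistant states project positively.
if assistant_acts.mean(axis=0) @ assistant_axis < 0:
    assistant_axis = -assistant_axis

def steer(h, axis, alpha):
    """Activation steering: shift hidden state h along the axis.
    alpha > 0 pushes toward the Assistant persona; alpha < 0 away from it."""
    return h + alpha * axis

def clamp(h, axis, lo, hi):
    """Restrict h's projection onto the axis to [lo, hi] -- the kind of
    fixed-region stabilization the abstract describes for persona drift."""
    proj = h @ axis
    return h + (np.clip(proj, lo, hi) - proj) * axis

h = rng.normal(size=d_model)
print("projection before:", h @ assistant_axis)
print("projection after: ", clamp(h, assistant_axis, -1.0, 1.0) @ assistant_axis)
```

In a real setting the activations would come from a chosen transformer layer during persona-prompted generations, and the clamp would be applied at inference time on every forward pass; the sketch above only demonstrates the geometry.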
Community
arXivlens breakdown of this paper: https://arxivlens.com/PaperView/Details/the-assistant-axis-situating-and-stabilizing-the-default-persona-of-language-models-6264-f01123de
- Executive Summary
- Detailed Breakdown
- Practical Applications
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Steer Model beyond Assistant: Controlling System Prompt Strength via Contrastive Decoding (2026)
- Bridging Mechanistic Interpretability and Prompt Engineering with Gradient Ascent for Interpretable Persona Control (2026)
- Emergent Introspective Awareness in Large Language Models (2026)
- The Geometry of Persona: Disentangling Personality from Reasoning in Large Language Models (2025)
- Behavior Tokens Speak Louder: Disentangled Explainable Recommendation with Behavior Vocabulary (2025)
- Breaking the Assistant Mold: Modeling Behavioral Variation in LLM Based Procedural Character Generation (2026)
- A Concise Agent is Less Expert: Revealing Side Effects of Using Style Features on Conversational Agents (2026)