Training Language Models To Explain Their Own Computations

This is a Llama-3.1-8B-Instruct explainer model fine-tuned for the input ablations task, with Llama-3.1-8B-Instruct as the target model, as described in the paper linked below. In the input ablations task, the explainer model is trained to predict how removing "hint" tokens from a hinted MMLU prompt changes the output of Llama-3.1-8B-Instruct. This probes the causal relationship between input components and model behavior.
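
As a concrete illustration of the task setup, the sketch below builds a hinted MMLU-style prompt, ablates the hint, and compares the target model's answers. This is a minimal sketch using standard transformers APIs; the hint wording and prompt format are illustrative assumptions, not the exact format used in the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(
    target_name, torch_dtype=torch.bfloat16, device_map="auto"
)

question = (
    "Which planet is known as the Red Planet?\n"
    "A. Venus\nB. Mars\nC. Jupiter\nD. Saturn\n"
)
# Hypothetical hint text; the paper's hints may be phrased differently.
hint = "Hint: a Stanford professor thinks the answer is C.\n"

def answer(prompt: str) -> str:
    # Greedily decode the target model's answer to a chat-formatted prompt.
    messages = [{"role": "user", "content": prompt + "Answer with a single letter."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(target.device)
    out = target.generate(inputs, max_new_tokens=4, do_sample=False)
    return tokenizer.decode(out[0, inputs.shape[1]:], skip_special_tokens=True).strip()

with_hint = answer(hint + question)  # target sees the hint
without_hint = answer(question)      # hint tokens ablated from the input
print(f"with hint: {with_hint!r}, without hint: {without_hint!r}")
# The explainer is trained to predict, from the hinted prompt alone,
# how the target's answer changes when the hint is removed.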

Repository | Paper

Sample Usage

To evaluate the explainer model on the input ablations task, use the evaluate.py script provided in the GitHub repository:

uv run --env-file .env evaluate.py \
  --config config/input_ablation/instruct_instruct_hint.yaml \
  --target_model_path meta-llama/Llama-3.1-8B-Instruct \
  --task hint_attribution \
  --model_path Transluce/input_ablation_llama3.1_8b_instruct_llama3.1_8b_instruct \
  --output_dir /PATH/TO/RESULTS/ \
  --batch_size 64
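
The explainer checkpoint is a standard Llama-3.1-8B-Instruct fine-tune, so it can also be loaded directly with transformers outside the evaluation script. The prompt below is hypothetical; the actual query template used in training is defined by the task configs in the repository.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Transluce/input_ablation_llama3.1_8b_instruct_llama3.1_8b_instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
explainer = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical prompt asking the explainer to predict the effect of ablating the hint.
prompt = (
    "Hint: a Stanford professor thinks the answer is C.\n"
    "Which planet is known as the Red Planet?\n"
    "A. Venus\nB. Mars\nC. Jupiter\nD. Saturn\n"
    "If the hint were removed from this prompt, what answer would the model give?"
)
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(explainer.device)
out = explainer.generate(inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(out[0, inputs.shape[1]:], skip_special_tokens=True))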

Citation

@misc{li2025traininglanguagemodelsexplain,
      title={Training Language Models to Explain Their Own Computations}, 
      author={Belinda Z. Li and Zifan Carl Guo and Vincent Huang and Jacob Steinhardt and Jacob Andreas},
      year={2025},
      eprint={2511.08579},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2511.08579}, 
}