---
license: llama2
library_name: transformers
pipeline_tag: text-generation
base_model:
- unsloth/llama-2-13b
- layoric/llama-2-13b-code-alpaca
- vanillaOVO/WizardMath-13B-V1.0
- WizardLMTeam/WizardLM-13B-V1.2
tags:
- merge
---
# AIM Paper Checkpoints Uploaded For Replication
This repository includes one of the checkpoints used in the paper "Activation-Informed Merging of Large Language Models". The specifics of this model are as follows:
- Merging Method: task_arithmetic (see the sketch after this list)
- Models Used In Merging:
  - Base Model: unsloth/llama-2-13b
  - Code: layoric/llama-2-13b-code-alpaca
  - Math: vanillaOVO/WizardMath-13B-V1.0
  - Instruction Tuned: WizardLMTeam/WizardLM-13B-V1.2
- AIM: True
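
For reference, the snippet below is a minimal, hypothetical sketch of plain task arithmetic (merged weights = base weights + the sum of the experts' task vectors) using `transformers` state dicts. The helper name and the `scaling` coefficient are assumptions for illustration only, and the sketch does not include the activation-informed adjustments (AIM) described in the paper.

```python
import torch
from transformers import AutoModelForCausalLM

def task_arithmetic_merge(base_id, expert_ids, scaling=1.0):
    """Illustrative task arithmetic: merged = base + scaling * sum_i (expert_i - base)."""
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
    base_state = base.state_dict()
    merged_state = {name: param.clone() for name, param in base_state.items()}

    for expert_id in expert_ids:
        expert_state = AutoModelForCausalLM.from_pretrained(
            expert_id, torch_dtype=torch.float16
        ).state_dict()
        for name, base_param in base_state.items():
            if name in expert_state:
                # Task vector for this parameter: expert weights minus base weights.
                merged_state[name] += scaling * (expert_state[name] - base_param)

    base.load_state_dict(merged_state)
    return base

# merged_model = task_arithmetic_merge(
#     "unsloth/llama-2-13b",
#     [
#         "layoric/llama-2-13b-code-alpaca",
#         "vanillaOVO/WizardMath-13B-V1.0",
#         "WizardLMTeam/WizardLM-13B-V1.2",
#     ],
# )
```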
Benchmark results and paper details can be found in the official GitHub repository.
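
A hedged example of loading a checkpoint like this one with `transformers` is shown below; the repository id is a placeholder for this model's Hub id, and the prompt and generation settings are assumptions for illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "<this-repo-id>"  # placeholder: replace with this repository's Hub id

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(REPO_ID, torch_dtype="auto", device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```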