
dissimilar_FullFT

LLaMA 1B model fine-tuned on the QA_CODE_SUMMARIZATION dataset.

  • Method: Full Fine-Tuning (no LoRA adapters)
  • LoRA Rank: N/A
  • Tasks: QA_CODE_SUMMARIZATION
  • Base Model: LLaMA 1B
  • Optimizer: AdamW
  • Batch Size: 4

Trained using the 🤗 Transformers Trainer API.
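A minimal sketch of how this run could be reproduced with the 🤗 Transformers Trainer API. The card only specifies full fine-tuning of LLaMA 1B with AdamW and batch size 4; the checkpoint name, data file path, learning rate, and epoch count below are assumptions, not values from this card.

```python
# Hypothetical reproduction sketch -- checkpoint, data path, learning rate,
# and epochs are assumptions; the card states only full fine-tuning,
# AdamW, and batch size 4.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "meta-llama/Llama-3.2-1B"  # assumed 1B checkpoint; card says only "LLaMA 1B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)  # all weights trainable (no LoRA)

# Hypothetical local data file standing in for the QA_CODE_SUMMARIZATION split.
dataset = load_dataset("json", data_files="qa_code_summarization.json")["train"]

args = TrainingArguments(
    output_dir="dissimilar_FullFT",
    per_device_train_batch_size=4,  # batch size listed in the card
    optim="adamw_torch",            # AdamW, as listed in the card
    learning_rate=2e-5,             # assumption: not stated in the card
    num_train_epochs=3,             # assumption: not stated in the card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,  # assumes the split is already tokenized
)
trainer.train()
```

Because every parameter is updated (rather than a low-rank adapter), memory use is dominated by the AdamW optimizer states, which is consistent with the small per-device batch size of 4.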
