---
library_name: transformers
tags:
  - generated_from_trainer
model-index:
  - name: tiny-audio
    results: []
---

# tiny-audio

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.2566

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 936
- optimizer: adamw_torch_fused with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
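For intuition, the cosine schedule with 500 linear warmup steps can be sketched in plain Python. This is an illustrative reimplementation, not the trainer's own code: `total_steps=67000` is an assumption read off the training-results table below, whereas `transformers` derives the exact count from the dataset size and batch size.

```python
import math

def lr_at_step(step, base_lr=1e-4, warmup_steps=500, total_steps=67000):
    """Learning rate at a given step: linear warmup to base_lr,
    then cosine decay from base_lr down to zero."""
    if step < warmup_steps:
        # Linear ramp from 0 to base_lr over the warmup phase.
        return base_lr * step / warmup_steps
    # Fraction of the post-warmup phase completed, in [0, 1].
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    # Half-cosine from base_lr (progress=0) to 0 (progress=1).
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With these numbers the learning rate peaks at 0.0001 at step 500 and decays smoothly toward zero by the end of the single epoch, which matches the flattening of the validation loss near step 60,000.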

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.2888        | 0.0149 | 1000  | 0.2819          |
| 0.3565        | 0.0298 | 2000  | 0.2919          |
| 0.3189        | 0.0447 | 3000  | 0.2879          |
| 0.3274        | 0.0596 | 4000  | 0.2929          |
| 0.3231        | 0.0745 | 5000  | 0.2870          |
| 0.3270        | 0.0894 | 6000  | 0.2853          |
| 0.3486        | 0.1043 | 7000  | 0.2860          |
| 0.3066        | 0.1192 | 8000  | 0.2865          |
| 0.3487        | 0.1341 | 9000  | 0.2866          |
| 0.3307        | 0.1490 | 10000 | 0.2871          |
| 0.3419        | 0.1639 | 11000 | 0.2852          |
| 0.3601        | 0.1788 | 12000 | 0.2848          |
| 0.3156        | 0.1936 | 13000 | 0.2860          |
| 0.3098        | 0.2085 | 14000 | 0.2830          |
| 0.3133        | 0.2234 | 15000 | 0.2851          |
| 0.3269        | 0.2383 | 16000 | 0.2826          |
| 0.3257        | 0.2532 | 17000 | 0.2822          |
| 0.3281        | 0.2681 | 18000 | 0.2822          |
| 0.3941        | 0.2830 | 19000 | 0.2813          |
| 0.3875        | 0.2979 | 20000 | 0.2854          |
| 0.3214        | 0.3128 | 21000 | 0.2795          |
| 0.2914        | 0.3277 | 22000 | 0.2792          |
| 0.2951        | 0.3426 | 23000 | 0.2805          |
| 0.3343        | 0.3575 | 24000 | 0.2779          |
| 0.3252        | 0.3724 | 25000 | 0.2771          |
| 0.3027        | 0.3873 | 26000 | 0.2768          |
| 0.3287        | 0.4022 | 27000 | 0.2759          |
| 0.3208        | 0.4171 | 28000 | 0.2749          |
| 0.3402        | 0.4320 | 29000 | 0.2730          |
| 0.2928        | 0.4469 | 30000 | 0.2726          |
| 0.3085        | 0.4618 | 31000 | 0.2737          |
| 0.3073        | 0.4767 | 32000 | 0.2705          |
| 0.3471        | 0.4916 | 33000 | 0.2708          |
| 0.2945        | 0.5065 | 34000 | 0.2690          |
| 0.3294        | 0.5214 | 35000 | 0.2696          |
| 0.3095        | 0.5363 | 36000 | 0.2679          |
| 0.3152        | 0.5512 | 37000 | 0.2659          |
| 0.3035        | 0.5660 | 38000 | 0.2674          |
| 0.3342        | 0.5809 | 39000 | 0.2656          |
| 0.3242        | 0.5958 | 40000 | 0.2653          |
| 0.2789        | 0.6107 | 41000 | 0.2643          |
| 0.3082        | 0.6256 | 42000 | 0.2643          |
| 0.3174        | 0.6405 | 43000 | 0.2633          |
| 0.2730        | 0.6554 | 44000 | 0.2628          |
| 0.2934        | 0.6703 | 45000 | 0.2609          |
| 0.2944        | 0.6852 | 46000 | 0.2606          |
| 0.3111        | 0.7001 | 47000 | 0.2614          |
| 0.3431        | 0.7150 | 48000 | 0.2605          |
| 0.3226        | 0.7299 | 49000 | 0.2601          |
| 0.2735        | 0.7448 | 50000 | 0.2591          |
| 0.3208        | 0.7597 | 51000 | 0.2590          |
| 0.3208        | 0.7746 | 52000 | 0.2584          |
| 0.3021        | 0.7895 | 53000 | 0.2578          |
| 0.2730        | 0.8044 | 54000 | 0.2583          |
| 0.2938        | 0.8193 | 55000 | 0.2581          |
| 0.2894        | 0.8342 | 56000 | 0.2574          |
| 0.2781        | 0.8491 | 57000 | 0.2572          |
| 0.3003        | 0.8640 | 58000 | 0.2568          |
| 0.2719        | 0.8789 | 59000 | 0.2568          |
| 0.2878        | 0.8938 | 60000 | 0.2567          |
| 0.3058        | 0.9087 | 61000 | 0.2568          |
| 0.3036        | 0.9236 | 62000 | 0.2568          |
| 0.3050        | 0.9384 | 63000 | 0.2568          |
| 0.3244        | 0.9533 | 64000 | 0.2567          |
| 0.3187        | 0.9682 | 65000 | 0.2566          |
| 0.3016        | 0.9831 | 66000 | 0.2566          |
| 0.2697        | 0.9980 | 67000 | 0.2566          |

### Framework versions

- Transformers 5.0.0.dev0
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.22.1