
RobertaLr1e-8Wd0.02E30

This model is a fine-tuned version of deepset/roberta-base-squad2 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.9031

Model description

More information needed

Intended uses & limitations

More information needed
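
The base model, deepset/roberta-base-squad2, is an extractive question-answering model, so this fine-tune should load the same way. A minimal sketch, assuming the checkpoint retains the QA head; the question and context strings are illustrative only:

```python
# Minimal sketch: load the fine-tuned checkpoint for extractive QA.
# Assumes the checkpoint keeps the question-answering head of its base model.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="hsmith-morganhill/RobertaLr1e-8Wd0.02E30",
)

result = qa(
    question="What is this model fine-tuned from?",  # illustrative input
    context="RobertaLr1e-8Wd0.02E30 is a fine-tuned version of deepset/roberta-base-squad2.",
)
print(result["answer"], result["score"])
```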

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-08
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
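
These settings map directly onto a Hugging Face TrainingArguments/Trainer setup. A minimal sketch of that configuration follows; the train/eval datasets are hypothetical placeholders (the card does not document them), and weight_decay=0.02 is an assumption inferred from the "Wd0.02" in the model name rather than from the list above:

```python
# Minimal sketch of a Trainer configuration matching the hyperparameters above.
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

args = TrainingArguments(
    output_dir="RobertaLr1e-8Wd0.02E30",
    learning_rate=1e-8,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    weight_decay=0.02,  # assumption: inferred from "Wd0.02" in the model name
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # hypothetical placeholder: training data undocumented
    eval_dataset=eval_dataset,    # hypothetical placeholder: evaluation data undocumented
    tokenizer=tokenizer,
)
trainer.train()
```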

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2032        | 1.0   | 72   | 3.9788          |
| 2.5986        | 2.0   | 144  | 3.8826          |
| 4.2112        | 3.0   | 216  | 3.7919          |
| 4.317         | 4.0   | 288  | 3.7059          |
| 5.0045        | 5.0   | 360  | 3.6254          |
| 2.5192        | 6.0   | 432  | 3.5512          |
| 3.853         | 7.0   | 504  | 3.4843          |
| 4.114         | 8.0   | 576  | 3.4189          |
| 2.6909        | 9.0   | 648  | 3.3598          |
| 2.0035        | 10.0  | 720  | 3.3054          |
| 3.5397        | 11.0  | 792  | 3.2552          |
| 3.3885        | 12.0  | 864  | 3.2090          |
| 2.5693        | 13.0  | 936  | 3.1681          |
| 3.2048        | 14.0  | 1008 | 3.1314          |
| 2.9462        | 15.0  | 1080 | 3.0964          |
| 2.9265        | 16.0  | 1152 | 3.0652          |
| 3.3392        | 17.0  | 1224 | 3.0373          |
| 3.2634        | 18.0  | 1296 | 3.0125          |
| 3.7834        | 19.0  | 1368 | 2.9907          |
| 2.3272        | 20.0  | 1440 | 2.9720          |
| 3.0372        | 21.0  | 1512 | 2.9562          |
| 2.9964        | 22.0  | 1584 | 2.9421          |
| 3.1609        | 23.0  | 1656 | 2.9303          |
| 2.9584        | 24.0  | 1728 | 2.9214          |
| 2.51          | 25.0  | 1800 | 2.9144          |
| 3.3621        | 26.0  | 1872 | 2.9094          |
| 3.089         | 27.0  | 1944 | 2.9058          |
| 3.4029        | 28.0  | 2016 | 2.9040          |
| 3.7397        | 29.0  | 2088 | 2.9033          |
| 2.7515        | 30.0  | 2160 | 2.9031          |

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.4.0
  • Datasets 2.20.0
  • Tokenizers 0.19.1
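
A quick way to confirm the pinned versions in a local environment (a sketch; install commands vary by platform):

```python
# Print installed versions to compare against the pins above.
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # expected: 4.41.2
print(torch.__version__)         # expected: 2.4.0
print(datasets.__version__)      # expected: 2.20.0
print(tokenizers.__version__)    # expected: 0.19.1
```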
