---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: vit-base-patch16-224-Soybean_11-46
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9305555555555556
---

# vit-base-patch16-224-Soybean_11-46

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.2058
- Accuracy: 0.9306
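
A minimal inference sketch using the `transformers` image-classification pipeline; the repository id below is an assumption based on the model name and author, and the image path is a placeholder:

```python
from transformers import pipeline

# Assumed hub id; adjust if the checkpoint lives under a different repository.
classifier = pipeline(
    "image-classification",
    model="sbaner24/vit-base-patch16-224-Soybean_11-46",
)

# Accepts a local path, URL, or PIL.Image; returns a list of {"label", "score"} dicts.
predictions = classifier("path/to/soybean_leaf.jpg")
print(predictions)
```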

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
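
The metadata lists an `imagefolder`-type dataset, so the training data was presumably organized as class subdirectories of images. A hedged sketch of loading such a layout with the `datasets` library (the directory path is a placeholder):

```python
from datasets import load_dataset

# Placeholder path; class labels are inferred from the subdirectory names.
dataset = load_dataset("imagefolder", data_dir="path/to/soybean_images")
print(dataset["train"].features)  # expect an "image" column and a class-label "label" column
```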

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
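
For reference, a sketch of how these settings map onto `transformers` `TrainingArguments`; the output directory is a placeholder, the Adam betas/epsilon above match the library defaults, and per-epoch evaluation is an assumption based on the results table below:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-Soybean_11-46",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=60,
    per_device_eval_batch_size=60,
    seed=42,
    gradient_accumulation_steps=4,   # 60 x 4 = 240 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",     # assumption: the table reports per-epoch validation metrics
)
```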

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3661 | 1.0 | 11 | 1.3698 | 0.5069 |
| 0.9979 | 2.0 | 22 | 0.9817 | 0.6632 |
| 0.6746 | 3.0 | 33 | 0.7423 | 0.7396 |
| 0.6364 | 4.0 | 44 | 0.6075 | 0.7569 |
| 0.5425 | 5.0 | 55 | 0.5500 | 0.7951 |
| 0.5001 | 6.0 | 66 | 0.4883 | 0.8160 |
| 0.3519 | 7.0 | 77 | 0.4539 | 0.8264 |
| 0.4421 | 8.0 | 88 | 0.4483 | 0.8194 |
| 0.3207 | 9.0 | 99 | 0.3785 | 0.8438 |
| 0.3682 | 10.0 | 110 | 0.3385 | 0.8646 |
| 0.2642 | 11.0 | 121 | 0.3827 | 0.8403 |
| 0.3444 | 12.0 | 132 | 0.3462 | 0.8507 |
| 0.2423 | 13.0 | 143 | 0.3170 | 0.8681 |
| 0.3168 | 14.0 | 154 | 0.3168 | 0.8715 |
| 0.2781 | 15.0 | 165 | 0.3323 | 0.8333 |
| 0.2411 | 16.0 | 176 | 0.3200 | 0.8715 |
| 0.2276 | 17.0 | 187 | 0.3296 | 0.875 |
| 0.192 | 18.0 | 198 | 0.3119 | 0.8854 |
| 0.1612 | 19.0 | 209 | 0.3647 | 0.875 |
| 0.1084 | 20.0 | 220 | 0.2641 | 0.8993 |
| 0.2099 | 21.0 | 231 | 0.2807 | 0.8958 |
| 0.1666 | 22.0 | 242 | 0.2595 | 0.9097 |
| 0.1355 | 23.0 | 253 | 0.2735 | 0.8924 |
| 0.1165 | 24.0 | 264 | 0.3238 | 0.8785 |
| 0.112 | 25.0 | 275 | 0.3066 | 0.8889 |
| 0.1191 | 26.0 | 286 | 0.2427 | 0.9062 |
| 0.1293 | 27.0 | 297 | 0.2536 | 0.9201 |
| 0.2932 | 28.0 | 308 | 0.2707 | 0.8924 |
| 0.0918 | 29.0 | 319 | 0.2688 | 0.8924 |
| 0.1529 | 30.0 | 330 | 0.2715 | 0.8889 |
| 0.227 | 31.0 | 341 | 0.2664 | 0.9028 |
| 0.1044 | 32.0 | 352 | 0.2809 | 0.8993 |
| 0.0894 | 33.0 | 363 | 0.2863 | 0.8924 |
| 0.0566 | 34.0 | 374 | 0.2474 | 0.9201 |
| 0.0915 | 35.0 | 385 | 0.2428 | 0.9097 |
| 0.1136 | 36.0 | 396 | 0.2545 | 0.9097 |
| 0.0947 | 37.0 | 407 | 0.2599 | 0.9097 |
| 0.1012 | 38.0 | 418 | 0.2454 | 0.9167 |
| 0.0465 | 39.0 | 429 | 0.2435 | 0.9201 |
| 0.0299 | 40.0 | 440 | 0.2532 | 0.9062 |
| 0.0311 | 41.0 | 451 | 0.2298 | 0.9271 |
| 0.0796 | 42.0 | 462 | 0.2422 | 0.9167 |
| 0.058 | 43.0 | 473 | 0.2058 | 0.9306 |
| 0.0853 | 44.0 | 484 | 0.2266 | 0.9306 |
| 0.0868 | 45.0 | 495 | 0.2266 | 0.9236 |
| 0.0554 | 46.0 | 506 | 0.2163 | 0.9271 |
| 0.0508 | 47.0 | 517 | 0.2104 | 0.9306 |
| 0.0589 | 48.0 | 528 | 0.2172 | 0.9271 |
| 0.0369 | 49.0 | 539 | 0.2214 | 0.9271 |
| 0.0852 | 50.0 | 550 | 0.2241 | 0.9271 |

### Framework versions

- Transformers 4.30.0.dev0
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.1