lewtun (HF staff) committed

Commit 2347028
1 Parent(s): 71e4b04

Add HuggingFaceH4/mistral-7b-ift-v12.1 checkpoint
README.md ADDED
@@ -0,0 +1,63 @@
+ ---
+ license: apache-2.0
+ base_model: mistralai/Mistral-7B-v0.1
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: mistral-7b-ift-v12.1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # mistral-7b-ift-v12.1
+
+ This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_uncensored_truecase_i_dont and the HuggingFaceH4/arithmo datasets.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.8892
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 16
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 16
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 512
+ - total_eval_batch_size: 256
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 0.8776        | 0.47  | 308  | 0.8892          |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.0.dev0
+ - Pytorch 2.0.1+cu118
+ - Datasets 2.12.0
+ - Tokenizers 0.14.0
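
As a quick orientation (not part of the commit): a minimal sketch of loading this checkpoint with `transformers`, assuming the `HuggingFaceH4/mistral-7b-ift-v12.1` repo id from the commit title is accessible. The chat markup is inferred from `dialogue_template.json` further down in this diff.

```python
# Minimal usage sketch (assumes access to the repo named in the commit title).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceH4/mistral-7b-ift-v12.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Prompt markup per dialogue_template.json below: <|user|>\n...</s>\n<|assistant|>\n
prompt = "<|user|>\nWhat is defensive driving?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```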
added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "</s>": 2,
+   "<s>": 1,
+   "<unk>": 0
+ }
all_results.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "epoch": 0.47,
+   "eval_loss": 0.8892470598220825,
+   "eval_runtime": 206.1741,
+   "eval_samples": 36829,
+   "eval_samples_per_second": 178.631,
+   "eval_steps_per_second": 0.698,
+   "perplexity": 2.4332968344242967,
+   "train_loss": 0.9461887153712186,
+   "train_runtime": 7026.0194,
+   "train_samples": 338195,
+   "train_samples_per_second": 48.135,
+   "train_steps_per_second": 0.094
+ }
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "mistralai/Mistral-7B-v0.1",
+   "architectures": [
+     "MistralForCausalLM"
+   ],
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 32768,
+   "model_type": "mistral",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 10000.0,
+   "sliding_window": 4096,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.35.0.dev0",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
dialogue_template.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "system": "",
+   "name": "default_v3",
+   "messages": [
+     {
+       "content": "When Texans think about defensive driving, they think DefensiveDriving.com. We have been a market leader providing quality defensive driving course online products since 2000. Texans rely on us to provide an online defensive driving course. Whether in the big cities of Houston, Dallas, San Antonio, the triangle of East Texas or the West Texas canyons, Texans across the state have come to trust us to provide high quality service for defensive driving.\nWith its 24 hour, 365 days a year service and its professional staff, our defensive driving course is educational, convenient and contemporary with DefensiveDriving.com. It is really the only way to take defensive driving course online for ticket dismissal or insurance reduction. Here, it is literally Texans serving Texans. Our staff is in Texas, not a foreign country. They are from Texas and all of our servers providing defensive driving course online are in Texas. Texas was one of the first states to approve defensive driving for ticket dismissal and we were one of the first to offer a defensive driving course online in a non-classroom setting.\nDefensiveDriving.com thinks the convenience, the service and the quality of our online defensive driving course should make the decision to choose us for your defensive driving course. It is easy, so sign up today, Got a Ticket, Just Click It!\nCan you provide information about DefensiveDriving.com's online defensive driving course and why Texans prefer it?",
+       "role": "user"
+     },
+     {
+       "content": "DefensiveDriving.com has been a market leader in providing quality online defensive driving course products to Texans since the year 2000. The company offers a convenient and contemporary course that can be accessed 24 hours a day, 365 days a year. The course is recognized by the state of Texas and can be taken for ticket dismissal or insurance reduction purposes. Texans prefer DefensiveDriving.com because of its professional staff, high-quality service, and educational content. The company is dedicated to serving Texans and all of its servers are located within the state. DefensiveDriving.com was one of the first companies to offer a defensive driving course online in a non-classroom setting, making it highly accessible and convenient. Overall, DefensiveDriving.com's online defensive driving course is the ideal option for anyone seeking a hassle-free, efficient, and effective way to complete a defensive driving course.",
+       "role": "assistant"
+     },
+     {
+       "content": "Can you tell me more about the benefits of taking DefensiveDriving.com's online defensive driving course compared to taking it in a classroom setting?",
+       "role": "user"
+     },
+     {
+       "content": "I do not have any personal experience with taking driving courses; however, I can provide information on the benefits of taking defensivedriving.com's online defensive driving course compared to taking it in a classroom.\n\none of the primary benefits of taking the course online with defensivedriving.com is convenience. The course is entirely online and can be completed at any time and from any location, enabling you to work around a busy schedule or fulfill court-mandated requirements from the comfort of your own home.\n\nanother benefit is the flexibility offered by an online course. Students can learn at their own pace rather than being tied to a set schedule in a classroom setting.\n\nonline courses also offer a more engaging and interactive learning experience. The course content is delivered through a variety of mediums, including videos, animations, and interactive exercises, which can make for a more engaging and enjoyable learning experience.\n\nfinally, defensivedriving.com's online course is cost-effective. Students can avoid the costs associated with travel, parking and time off from work by completing the course from home. \n\noverall, taking defensivedriving.com's online defensive driving course has many benefits for those seeking a convenient, flexible, and affordable way to complete the course.",
+       "role": "assistant"
+     }
+   ],
+   "system_token": "<|system|>",
+   "system_format": "standard",
+   "user_token": "<|user|>",
+   "assistant_token": "<|assistant|>",
+   "end_token": "</s>",
+   "mid_str": "\n",
+   "end_str": "\n",
+   "extra_end_text": "\n"
+ }
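
A sketch of how these fields compose a prompt: my reading of the template, not code shipped in this repo. Each turn appears to render as role token + `mid_str` + content + `end_token` + `end_str`, with an (empty) system turn prepended.

```python
# Hypothetical renderer for the dialogue template above; the exact logic lives
# in the H4 training code, so treat the field usage here as an informed guess.
template = {
    "system": "",
    "system_token": "<|system|>",
    "user_token": "<|user|>",
    "assistant_token": "<|assistant|>",
    "end_token": "</s>",
    "mid_str": "\n",
    "end_str": "\n",
}

def render(messages: list[dict], t: dict) -> str:
    role_tokens = {"user": t["user_token"], "assistant": t["assistant_token"]}
    # Empty system turn first (system_format == "standard" is assumed to mean this).
    text = t["system_token"] + t["mid_str"] + t["system"] + t["end_token"] + t["end_str"]
    for m in messages:
        text += role_tokens[m["role"]] + t["mid_str"] + m["content"] + t["end_token"] + t["end_str"]
    return text

print(render([{"role": "user", "content": "Hi!"}], template))
# <|system|>
# </s>
# <|user|>
# Hi!</s>
```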
eval_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "epoch": 0.47,
+   "eval_loss": 0.8892470598220825,
+   "eval_runtime": 206.1741,
+   "eval_samples": 36829,
+   "eval_samples_per_second": 178.631,
+   "eval_steps_per_second": 0.698,
+   "perplexity": 2.4332968344242967
+ }
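
One observation on these metrics (my check, not from the repo): the reported perplexity is simply exp(eval_loss), the standard conversion for a causal-LM cross-entropy loss.

```python
import math

# perplexity = exp(cross-entropy loss); matches the two values above.
assert math.isclose(math.exp(0.8892470598220825), 2.4332968344242967, rel_tol=1e-6)
```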
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.35.0.dev0"
+ }
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:577b33e5737358050c157843da6ecb63181b9b6305458546fa2a0f4dd37ec822
+ size 9942981696
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3e8521aeea32d5fceb9bf4cb4438891af10008229bd85c600e2767b34b6d0afd
+ size 4540516344
pytorch_model-00001-of-00002.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e18f0dc110fe7c224ddfe4f42c9fffa2933fb34d00b90c24312f212411bfb9ca
+ size 9943028044
pytorch_model-00002-of-00002.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e262f1ee641166622ac8a59a8846ea8ae182b4c572b752bfc5ceb9842a622a74
+ size 4540535647
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,298 @@
+ {
+   "metadata": {
+     "total_size": 14483464192
+   },
+   "weight_map": {
+     "lm_head.weight": "pytorch_model-00002-of-00002.bin",
+     "model.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.20.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.20.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.20.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.21.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.21.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.21.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.22.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.23.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.28.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.28.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.28.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.29.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.29.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.29.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.30.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.30.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.30.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.31.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.31.mlp.down_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.31.mlp.up_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00002-of-00002.bin",
+     "model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.7.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.8.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.8.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.9.mlp.down_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.9.mlp.up_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00002.bin",
+     "model.norm.weight": "pytorch_model-00002-of-00002.bin"
+   }
+ }
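
A consistency check worth noting (my arithmetic, not part of the commit): the `total_size` in the index metadata is exactly two bytes per parameter in bfloat16, and the parameter count falls out of `config.json` above.

```python
# Parameter count implied by config.json (grouped-query attention: 8 KV heads).
hidden, inter, vocab, layers = 4096, 14336, 32000, 32
heads, kv_heads = 32, 8
head_dim = hidden // heads                   # 128
attn = 2 * hidden * hidden                   # q_proj + o_proj
attn += 2 * hidden * kv_heads * head_dim     # k_proj + v_proj
mlp = 3 * hidden * inter                     # gate_proj, up_proj, down_proj
norms = 2 * hidden                           # the two RMSNorms per layer
total = layers * (attn + mlp + norms) + 2 * vocab * hidden + hidden
assert total == 7_241_732_096                # ~7.24B parameters (embeddings untied)
assert 2 * total == 14_483_464_192           # bf16 bytes == "total_size" above
```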
runs/Oct20_13-55-30_ip-26-0-153-251/events.out.tfevents.1697810325.ip-26-0-153-251.2229525.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:742c4a17eb6c77a3f09b2f3548a3745061e6659ba92aec28c4ce4c9bd6a63f79
+ size 14747
runs/Oct20_13-55-30_ip-26-0-153-251/events.out.tfevents.1697817557.ip-26-0-153-251.2229525.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b271868e692c2bc47921d32cd474e8bca73b5d203ad660af33610190acc97f0
+ size 359
special_tokens_map.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "additional_special_tokens": [
+     "<unk>",
+     "<s>",
+     "</s>"
+   ],
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "pad_token": "</s>",
+   "unk_token": "<unk>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<unk>",
+     "<s>",
+     "</s>"
+   ],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "</s>",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": true
+ }
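
Note the padding setup above: no dedicated pad token is added, so `</s>` (the EOS token, id 2) doubles as the pad token, as is common for Llama-family tokenizers. A quick sketch (assuming the repo id from the commit title is accessible):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("HuggingFaceH4/mistral-7b-ift-v12.1")
assert tok.pad_token == tok.eos_token == "</s>"
assert tok.convert_tokens_to_ids("</s>") == 2  # matches added_tokens.json above
```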
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "epoch": 0.47,
+   "train_loss": 0.9461887153712186,
+   "train_runtime": 7026.0194,
+   "train_samples": 338195,
+   "train_samples_per_second": 48.135,
+   "train_steps_per_second": 0.094
+ }
trainer_state.json ADDED
@@ -0,0 +1,408 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 0.4661369655694287,
+   "eval_steps": 500,
+   "global_step": 308,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.0,
+       "learning_rate": 3.0303030303030305e-07,
+       "loss": 1.7199,
+       "step": 1
+     },
+     {
+       "epoch": 0.01,
+       "learning_rate": 1.5151515151515152e-06,
+       "loss": 1.6612,
+       "step": 5
+     },
+     {
+       "epoch": 0.02,
+       "learning_rate": 3.0303030303030305e-06,
+       "loss": 1.3333,
+       "step": 10
+     },
+     {
+       "epoch": 0.02,
+       "learning_rate": 4.5454545454545455e-06,
+       "loss": 1.1785,
+       "step": 15
+     },
+     {
+       "epoch": 0.03,
+       "learning_rate": 6.060606060606061e-06,
+       "loss": 1.0964,
+       "step": 20
+     },
+     {
+       "epoch": 0.04,
+       "learning_rate": 7.5757575757575764e-06,
+       "loss": 1.0473,
+       "step": 25
+     },
+     {
+       "epoch": 0.05,
+       "learning_rate": 9.090909090909091e-06,
+       "loss": 1.0209,
+       "step": 30
+     },
+     {
+       "epoch": 0.05,
+       "learning_rate": 1.0606060606060606e-05,
+       "loss": 0.9926,
+       "step": 35
+     },
+     {
+       "epoch": 0.06,
+       "learning_rate": 1.2121212121212122e-05,
+       "loss": 0.9776,
+       "step": 40
+     },
+     {
+       "epoch": 0.07,
+       "learning_rate": 1.3636363636363637e-05,
+       "loss": 0.9621,
+       "step": 45
+     },
+     {
+       "epoch": 0.08,
+       "learning_rate": 1.5151515151515153e-05,
+       "loss": 0.97,
+       "step": 50
+     },
+     {
+       "epoch": 0.08,
+       "learning_rate": 1.6666666666666667e-05,
+       "loss": 0.9582,
+       "step": 55
+     },
+     {
+       "epoch": 0.09,
+       "learning_rate": 1.8181818181818182e-05,
+       "loss": 0.9425,
+       "step": 60
+     },
+     {
+       "epoch": 0.1,
+       "learning_rate": 1.96969696969697e-05,
+       "loss": 0.9514,
+       "step": 65
+     },
+     {
+       "epoch": 0.11,
+       "learning_rate": 1.999776230627102e-05,
+       "loss": 0.948,
+       "step": 70
+     },
+     {
+       "epoch": 0.11,
+       "learning_rate": 1.9988673391830082e-05,
+       "loss": 0.9323,
+       "step": 75
+     },
+     {
+       "epoch": 0.12,
+       "learning_rate": 1.9972599751485225e-05,
+       "loss": 0.9316,
+       "step": 80
+     },
+     {
+       "epoch": 0.13,
+       "learning_rate": 1.994955262496446e-05,
+       "loss": 0.9449,
+       "step": 85
+     },
+     {
+       "epoch": 0.14,
+       "learning_rate": 1.9919548128307954e-05,
+       "loss": 0.9339,
+       "step": 90
+     },
+     {
+       "epoch": 0.14,
+       "learning_rate": 1.9882607242598663e-05,
+       "loss": 0.9338,
+       "step": 95
+     },
+     {
+       "epoch": 0.15,
+       "learning_rate": 1.9838755799290993e-05,
+       "loss": 0.9266,
+       "step": 100
+     },
+     {
+       "epoch": 0.16,
+       "learning_rate": 1.978802446214779e-05,
+       "loss": 0.9271,
+       "step": 105
+     },
+     {
+       "epoch": 0.17,
+       "learning_rate": 1.973044870579824e-05,
+       "loss": 0.9191,
+       "step": 110
+     },
+     {
+       "epoch": 0.17,
+       "learning_rate": 1.9666068790931733e-05,
+       "loss": 0.9168,
+       "step": 115
+     },
+     {
+       "epoch": 0.18,
+       "learning_rate": 1.9594929736144978e-05,
+       "loss": 0.9268,
+       "step": 120
+     },
+     {
+       "epoch": 0.19,
+       "learning_rate": 1.9517081286462082e-05,
+       "loss": 0.9216,
+       "step": 125
+     },
+     {
+       "epoch": 0.2,
+       "learning_rate": 1.9432577878549635e-05,
+       "loss": 0.9196,
+       "step": 130
+     },
+     {
+       "epoch": 0.2,
+       "learning_rate": 1.9341478602651068e-05,
+       "loss": 0.912,
+       "step": 135
+     },
+     {
+       "epoch": 0.21,
+       "learning_rate": 1.9243847161266924e-05,
+       "loss": 0.9076,
+       "step": 140
+     },
+     {
+       "epoch": 0.22,
+       "learning_rate": 1.913975182460996e-05,
+       "loss": 0.9104,
+       "step": 145
+     },
+     {
+       "epoch": 0.23,
+       "learning_rate": 1.9029265382866216e-05,
+       "loss": 0.9097,
+       "step": 150
+     },
+     {
+       "epoch": 0.23,
+       "learning_rate": 1.891246509529539e-05,
+       "loss": 0.9129,
+       "step": 155
+     },
+     {
+       "epoch": 0.24,
+       "learning_rate": 1.8789432636206197e-05,
+       "loss": 0.9069,
+       "step": 160
+     },
+     {
+       "epoch": 0.25,
+       "learning_rate": 1.866025403784439e-05,
+       "loss": 0.9116,
+       "step": 165
+     },
+     {
+       "epoch": 0.26,
+       "learning_rate": 1.8525019630233463e-05,
+       "loss": 0.899,
+       "step": 170
+     },
+     {
+       "epoch": 0.26,
+       "learning_rate": 1.8383823978010077e-05,
+       "loss": 0.9107,
+       "step": 175
+     },
+     {
+       "epoch": 0.27,
+       "learning_rate": 1.8236765814298328e-05,
+       "loss": 0.9037,
+       "step": 180
+     },
+     {
+       "epoch": 0.28,
+       "learning_rate": 1.808394797166919e-05,
+       "loss": 0.9085,
+       "step": 185
+     },
+     {
+       "epoch": 0.29,
+       "learning_rate": 1.792547731023332e-05,
+       "loss": 0.9084,
+       "step": 190
+     },
+     {
+       "epoch": 0.3,
+       "learning_rate": 1.776146464291757e-05,
+       "loss": 0.8987,
+       "step": 195
+     },
+     {
+       "epoch": 0.3,
+       "learning_rate": 1.7592024657977432e-05,
+       "loss": 0.9144,
+       "step": 200
+     },
+     {
+       "epoch": 0.31,
+       "learning_rate": 1.7417275838799596e-05,
+       "loss": 0.913,
+       "step": 205
+     },
+     {
+       "epoch": 0.32,
+       "learning_rate": 1.72373403810507e-05,
+       "loss": 0.9063,
+       "step": 210
+     },
+     {
+       "epoch": 0.33,
+       "learning_rate": 1.7052344107230244e-05,
+       "loss": 0.905,
+       "step": 215
+     },
+     {
+       "epoch": 0.33,
+       "learning_rate": 1.686241637868734e-05,
+       "loss": 0.901,
+       "step": 220
+     },
+     {
+       "epoch": 0.34,
+       "learning_rate": 1.666769000516292e-05,
+       "loss": 0.8992,
+       "step": 225
+     },
+     {
+       "epoch": 0.35,
+       "learning_rate": 1.6468301151920576e-05,
+       "loss": 0.8965,
+       "step": 230
+     },
+     {
+       "epoch": 0.36,
+       "learning_rate": 1.6264389244531015e-05,
+       "loss": 0.9033,
+       "step": 235
+     },
+     {
+       "epoch": 0.36,
+       "learning_rate": 1.6056096871376667e-05,
+       "loss": 0.9095,
+       "step": 240
+     },
+     {
+       "epoch": 0.37,
+       "learning_rate": 1.584356968394471e-05,
+       "loss": 0.8989,
+       "step": 245
+     },
+     {
+       "epoch": 0.38,
+       "learning_rate": 1.5626956294978103e-05,
+       "loss": 0.8845,
+       "step": 250
+     },
+     {
+       "epoch": 0.39,
+       "learning_rate": 1.5406408174555978e-05,
+       "loss": 0.8935,
+       "step": 255
+     },
+     {
+       "epoch": 0.39,
+       "learning_rate": 1.5182079544175957e-05,
+       "loss": 0.8885,
+       "step": 260
+     },
+     {
+       "epoch": 0.4,
+       "learning_rate": 1.4954127268912525e-05,
+       "loss": 0.8837,
+       "step": 265
+     },
+     {
+       "epoch": 0.41,
+       "learning_rate": 1.472271074772683e-05,
+       "loss": 0.8817,
+       "step": 270
+     },
+     {
+       "epoch": 0.42,
+       "learning_rate": 1.4487991802004625e-05,
+       "loss": 0.8915,
+       "step": 275
+     },
+     {
+       "epoch": 0.42,
+       "learning_rate": 1.4250134562400301e-05,
+       "loss": 0.8914,
+       "step": 280
+     },
+     {
+       "epoch": 0.43,
+       "learning_rate": 1.4009305354066138e-05,
+       "loss": 0.8957,
+       "step": 285
+     },
+     {
+       "epoch": 0.44,
+       "learning_rate": 1.3765672580346986e-05,
+       "loss": 0.8861,
+       "step": 290
+     },
+     {
+       "epoch": 0.45,
+       "learning_rate": 1.3519406605021797e-05,
+       "loss": 0.8835,
+       "step": 295
+     },
+     {
+       "epoch": 0.45,
+       "learning_rate": 1.3270679633174219e-05,
+       "loss": 0.8872,
+       "step": 300
+     },
+     {
+       "epoch": 0.46,
+       "learning_rate": 1.3019665590775717e-05,
+       "loss": 0.8776,
+       "step": 305
+     },
+     {
+       "epoch": 0.47,
+       "eval_loss": 0.8892367482185364,
+       "eval_runtime": 215.6654,
+       "eval_samples_per_second": 170.769,
+       "eval_steps_per_second": 0.668,
+       "step": 308
+     },
+     {
+       "epoch": 0.47,
+       "step": 308,
+       "total_flos": 516068506337280.0,
+       "train_loss": 0.9461887153712186,
+       "train_runtime": 7026.0194,
+       "train_samples_per_second": 48.135,
+       "train_steps_per_second": 0.094
+     }
+   ],
+   "logging_steps": 5,
+   "max_steps": 660,
+   "num_train_epochs": 1,
+   "save_steps": 500,
+   "total_flos": 516068506337280.0,
+   "trial_name": null,
+   "trial_params": null
+ }
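
The trainer state lines up with the README hyperparameters (my arithmetic, not from the repo): an effective batch of 8 per device × 16 GPUs × 4 accumulation steps = 512 over 338,195 samples gives 660 steps per epoch, matching `max_steps`, so ending at `global_step` 308 corresponds to epoch ≈ 0.466, in line with the recorded 0.4661 (small differences come from batch rounding). Note that training covered only ~0.47 of the single configured epoch.

```python
# Consistency check against the README hyperparameters.
samples, per_device, gpus, accum = 338_195, 8, 16, 4
effective_batch = per_device * gpus * accum     # 512, the total_train_batch_size
steps_per_epoch = samples // effective_batch    # 660, matching "max_steps"
print(effective_batch, steps_per_epoch, 308 / steps_per_epoch)  # 512 660 0.466...
```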
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93321e386aadbd3f696c4c00d53869d786a008fbc741ff1839d80799e8e1ffa3
+ size 5435