rombodawg committed on
Commit 7009b5c
1 Parent(s): f6a6e62

Update README.md

Files changed (1)
  1. README.md +327 -10
README.md CHANGED
@@ -1,22 +1,339 @@
  ---
- base_model: unsloth/qwen2-1.5b-instruct-bnb-4bit
- language:
- - en
  license: apache-2.0
  tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen2
- - trl
  ---

- # Uploaded model

- - **Developed by:** Replete-AI
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/qwen2-1.5b-instruct-bnb-4bit

- This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
  ---
  license: apache-2.0
+ base_model: Qwen/Qwen2-1.5B
  tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen2
+ datasets:
+ - Replete-AI/code_bagel_hermes-2.5
+ - Replete-AI/code_bagel
+ - Replete-AI/OpenHermes-2.5-Uncensored
+ - teknium/OpenHermes-2.5
+ - layoric/tiny-codes-alpaca
+ - glaiveai/glaive-code-assistant-v3
+ - ajibawa-2023/Code-290k-ShareGPT
+ - TIGER-Lab/MathInstruct
+ - chargoddard/commitpack-ft-instruct-rated
+ - iamturun/code_instructions_120k_alpaca
+ - ise-uiuc/Magicoder-Evol-Instruct-110K
+ - cognitivecomputations/dolphin-coder
+ - nickrosh/Evol-Instruct-Code-80k-v1
+ - coseal/CodeUltraFeedback_binarized
+ - glaiveai/glaive-function-calling-v2
+ - CyberNative/Code_Vulnerability_Security_DPO
+ - jondurbin/airoboros-2.2
+ - camel-ai
+ - lmsys/lmsys-chat-1m
+ - CollectiveCognition/chats-data-2023-09-22
+ - CoT-Alpaca-GPT4
+ - WizardLM/WizardLM_evol_instruct_70k
+ - WizardLM/WizardLM_evol_instruct_V2_196k
+ - teknium/GPT4-LLM-Cleaned
+ - GPTeacher
+ - OpenGPT
+ - meta-math/MetaMathQA
+ - Open-Orca/SlimOrca
+ - garage-bAInd/Open-Platypus
+ - anon8231489123/ShareGPT_Vicuna_unfiltered
+ - Unnatural-Instructions-GPT4
+ model-index:
+ - name: Replete-Coder-llama3-8b
+   results:
+   - task:
+       name: HumanEval
+       type: text-generation
+     dataset:
+       type: openai_humaneval
+       name: HumanEval
+     metrics:
+     - name: pass@1
+       type: pass@1
+       value:
+       verified: false
+   - task:
+       name: AI2 Reasoning Challenge
+       type: text-generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: accuracy
+       value:
+       name: normalized accuracy
+     source:
+       url: https://www.placeholderurl.com
+       name: Open LLM Leaderboard
+   - task:
+       name: Text Generation
+       type: text-generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: accuracy
+       value:
+       name: normalized accuracy
+     source:
+       url: https://www.placeholderurl.com
+       name: Open LLM Leaderboard
+   - task:
+       name: Text Generation
+       type: text-generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: accuracy
+       value:
+       name: accuracy
+     source:
+       url: https://www.placeholderurl.com
+       name: Open LLM Leaderboard
+   - task:
+       name: Text Generation
+       type: text-generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: multiple_choice_accuracy
+       value:
+     source:
+       url: https://www.placeholderurl.com
+       name: Open LLM Leaderboard
+   - task:
+       name: Text Generation
+       type: text-generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: accuracy
+       value:
+       name: accuracy
+     source:
+       url: https://www.placeholderurl.com
+       name: Open LLM Leaderboard
+   - task:
+       name: Text Generation
+       type: text-generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: accuracy
+       value:
+       name: accuracy
+     source:
+       url: https://www.placeholderurl.com
+       name: Open LLM Leaderboard
+
  ---
+ This is the Qwen/Qwen2-1.5B-Instruct model with the Replete-AI/Adapter_For_Replete-Coder-Qwen2-1.5b adapter applied on top of it.
+
+ This is mostly an experiment to see how the model performs.
+
+ Links to the original model and adapter are below:
+
+ Original model:
+
+ - https://huggingface.co/Qwen/Qwen2-1.5B-Instruct
+
+ Adapter:
+
+ - https://huggingface.co/Replete-AI/Adapter_For_Replete-Coder-Qwen2-1.5b
+
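+ For illustration only (not from the original card): if you want to reproduce this merge yourself, applying a standard PEFT/LoRA adapter to the base model generally looks like the sketch below. It assumes the adapter repo uses the usual PEFT layout and that the `peft` package is installed.
+
+ ```python
+ # Hypothetical sketch: load the base model, then apply the published adapter with PEFT.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base = AutoModelForCausalLM.from_pretrained(
+     "Qwen/Qwen2-1.5B-Instruct", torch_dtype="auto", device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
+
+ # Attach the LoRA adapter on top of the base model.
+ model = PeftModel.from_pretrained(base, "Replete-AI/Adapter_For_Replete-Coder-Qwen2-1.5b")
+
+ # Optionally merge the adapter weights into the base model for standalone use.
+ model = model.merge_and_unload()
+ ```
+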
+ _________________________________________________________________________________________________________
+
+ # Original model card for Replete-Coder-Qwen2-1.5b below
+ _________________________________________________________________________________________________________
+
+ # Replete-Coder-Qwen2-1.5b
+ Finetuned by: Rombodawg
+ ### More than just a coding model!
+ Although Replete-Coder has amazing coding capabilities, it is trained on a vast amount of non-coding data, fully cleaned and uncensored. Don't just use it for coding, use it for all your needs! We are truly trying to make the GPT killer!
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/-0dERC793D9XeFsJ9uHbx.png)
+
+ Thank you to TensorDock for sponsoring Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b;
+ you can check out their website for cloud compute rental below.
+ - https://tensordock.com
+ __________________________________________________________________________________________________
+ Replete-Coder-Qwen2-1.5b is a general-purpose model that is specially trained for coding in over 100 coding languages. The data used to train the model contains 25% non-code instruction data and 75% coding instruction data, totaling 3.9 million lines, roughly 1 billion tokens, or 7.27 GB of instruct data. The data used to train this model was 100% uncensored, then fully deduplicated, before training.
+
+ The Replete-Coder models (including Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b) feature the following:
+
+ - Advanced coding capabilities in over 100 coding languages
+ - Advanced code translation (between languages)
+ - Security and vulnerability prevention related coding capabilities
+ - General purpose use
+ - Uncensored use
+ - Function calling
+ - Advanced math use
+ - Use on low-end (8b) and mobile (1.5b) platforms
+
+ Notice: The Replete-Coder series of models is fine-tuned on a context window of 8192 tokens. Performance past this context window is not guaranteed.
+ __________________________________________________________________________________________________
+
+ You can find the 25% non-coding instruction data below:
+
+ - https://huggingface.co/datasets/Replete-AI/OpenHermes-2.5-Uncensored
+
+ And the 75% coding-specific instruction data below:
+
+ - https://huggingface.co/datasets/Replete-AI/code_bagel
+
+ These two datasets were combined to create the final dataset for training, which is linked below:
+
+ - https://huggingface.co/datasets/Replete-AI/code_bagel_hermes-2.5
+ __________________________________________________________________________________________________
+ ## Prompt Template: ChatML
+ ```
+ <|im_start|>system
+ {}<|im_end|>
+
+ <|im_start|>user
+ {}<|im_end|>
+
+ <|im_start|>assistant
+ {}
+ ```
+ Note: The system prompt varies in training data, but the most commonly used one is:
+ ```
+ Below is an instruction that describes a task, Write a response that appropriately completes the request.
+ ```
+ End token:
+ ```
+ <|endoftext|>
+ ```
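+
+ As an illustrative sketch (not part of the original card), a prompt in this template can be assembled with plain string formatting; the system and user strings below are placeholder examples:
+
+ ```python
+ # Hypothetical sketch of building a ChatML-style prompt for this model.
+ system = "Below is an instruction that describes a task, Write a response that appropriately completes the request."
+ user = "Write a Python function that reverses a string."
+
+ prompt = (
+     f"<|im_start|>system\n{system}<|im_end|>\n\n"
+     f"<|im_start|>user\n{user}<|im_end|>\n\n"
+     "<|im_start|>assistant\n"
+ )
+ # Generation should stop at the <|endoftext|> end token.
+ ```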
+ __________________________________________________________________________________________________
+ Thank you to the community for your contributions to the Replete-AI/code_bagel_hermes-2.5 dataset. Without the participation of so many members making their datasets free and open source for anyone to use, this amazing AI model wouldn't be possible.
+
+ Extra special thanks to Teknium for the Open-Hermes-2.5 dataset and jondurbin for the bagel dataset and the naming idea for the code_bagel series of datasets. You can find both of their Hugging Face accounts linked below:
+
+ - https://huggingface.co/teknium
+ - https://huggingface.co/jondurbin
+
+ Another special thanks to Unsloth for being the main training method for Replete-Coder. Below you can find their GitHub, as well as the special Replete-AI secret sauce (Unsloth + QLoRA + GaLore) Colab notebook that was used to train this model; a rough sketch of what that setup looks like follows the links.
+
+ - https://github.com/unslothai/unsloth
+ - https://colab.research.google.com/drive/1eXGqy5M--0yW4u0uRnmNgBka-tDk2Li0?usp=sharing
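+
+ As a very rough, hypothetical sketch (not the actual training script), an Unsloth QLoRA setup generally looks like the following; the sequence length matches the 8192-token context window mentioned above, but the model name, LoRA hyperparameters, and target modules are illustrative assumptions, and the GaLore optimizer would be configured separately in the training arguments.
+
+ ```python
+ # Hypothetical sketch of an Unsloth QLoRA setup; values are illustrative, not the real configuration.
+ from unsloth import FastLanguageModel
+
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name="Qwen/Qwen2-1.5B-Instruct",  # assumed base model
+     max_seq_length=8192,                    # matches the context window noted above
+     load_in_4bit=True,                      # QLoRA-style 4-bit base weights
+ )
+
+ # Attach LoRA adapters (rank, alpha, and target modules are placeholders).
+ model = FastLanguageModel.get_peft_model(
+     model,
+     r=16,
+     lora_alpha=16,
+     lora_dropout=0,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
+     use_gradient_checkpointing=True,
+ )
+ ```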
+ __________________________________________________________________________________________________
+
+ ## Join the Replete-AI Discord! We are a great and loving community!
+
+ - https://discord.gg/ZZbnsmVnjD
+
+ ______________________________________________________________________________
+ # Original model card for Qwen/Qwen2-1.5B-Instruct below
+ ______________________________________________________________________________
+
+ # Qwen2-1.5B-Instruct
+
+ ## Introduction
+
+ Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 1.5B Qwen2 model.
+
+ Compared with the state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
+
+ For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
+ <br>
+
+ ## Model Details
+ Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
+
+ ## Training details
+ We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
+
+ ## Requirements
+ The code for Qwen2 has been in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
+ ```
+ KeyError: 'qwen2'
+ ```
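+
+ For example, the required version can be installed or upgraded with pip (an illustrative command, not from the original card):
+
+ ```
+ pip install "transformers>=4.37.0"
+ ```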
+
+ ## Quickstart
+
+ Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate content.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ device = "cuda"  # the device to load the model onto
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "Qwen/Qwen2-1.5B-Instruct",
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
+
+ prompt = "Give me a short introduction to large language model."
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(device)
+
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=512
+ )
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ ```
+
+ ## Evaluation
+
+ We briefly compare Qwen2-1.5B-Instruct with Qwen1.5-1.8B-Chat. The results are as follows:
+
+ | Datasets | Qwen1.5-0.5B-Chat | **Qwen2-0.5B-Instruct** | Qwen1.5-1.8B-Chat | **Qwen2-1.5B-Instruct** |
+ | :--- | :---: | :---: | :---: | :---: |
+ | MMLU | 35.0 | **37.9** | 43.7 | **52.4** |
+ | HumanEval | 9.1 | **17.1** | 25.0 | **37.8** |
+ | GSM8K | 11.3 | **40.1** | 35.3 | **61.6** |
+ | C-Eval | 37.2 | **45.2** | 55.3 | **63.8** |
+ | IFEval (Prompt Strict-Acc.) | 14.6 | **20.0** | 16.8 | **29.0** |
+
+ ## Citation
+
+ If you find our work helpful, feel free to cite us.
+
+ ```
+ @article{qwen2,
+   title={Qwen2 Technical Report},
+   year={2024}
+ }
+ ```