duyntnet committed on
Commit 9cd4692
1 Parent(s): 13c6d61

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +95 -0
README.md ADDED
@@ -0,0 +1,95 @@
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- mpt-7b-8k-chat
---
Quantizations of https://huggingface.co/mosaicml/mpt-7b-8k-chat

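These GGUF files can be run with any llama.cpp-based tool. As a minimal sketch (not from the original card), here is how a quant might be loaded with the `llama-cpp-python` bindings; the filename is illustrative, so substitute whichever `.gguf` file you downloaded:

```python
from llama_cpp import Llama

# Illustrative filename: point this at the .gguf quant you downloaded from this repo.
llm = Llama(model_path="mpt-7b-8k-chat-Q4_K_M.gguf", n_ctx=8192)

out = llm("Here is a recipe for vegan banana bread:\n", max_tokens=100)
print(out["choices"][0]["text"])
```
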
# From original readme

## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.

```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
  'mosaicml/mpt-7b-8k-chat',
  trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers

name = 'mosaicml/mpt-7b-8k-chat'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'  # change this to use triton-based FlashAttention
config.init_device = 'cuda:0'  # For fast initialization directly on GPU!

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  torch_dtype=torch.bfloat16,  # Load model weights in bfloat16
  trust_remote_code=True
)
```

The model was trained initially with a sequence length of 2048, with an additional pretraining stage for sequence-length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:

```python
import transformers

name = 'mosaicml/mpt-7b-8k-chat'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384  # (input + output) tokens can now be up to 16384

model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  trust_remote_code=True
)
```

This model was trained with the MPT-7B-chat tokenizer, which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional ChatML tokens.

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-7b-8k-chat')
```

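As a hedged illustration (not part of the original card), a prompt in the ChatML format that those extra tokens support could be assembled by hand; the system message here is just an example:

```python
# ChatML-style prompt; assumes <|im_start|> / <|im_end|> are registered as special tokens.
chat_prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a haiku about banana bread.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(tokenizer(chat_prompt)["input_ids"][:10])  # quick sanity check of the encoding
```
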
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).

```python
import torch
from transformers import pipeline

with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```
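
Finally, a hedged end-to-end sketch (not from the original card) that feeds a ChatML-formatted prompt through `model.generate` and stops at the end-of-turn token; it assumes the `model` and `tokenizer` loaded above and that `<|im_end|>` is present in the tokenizer's vocabulary:

```python
import torch

chat_prompt = (
    "<|im_start|>user\n"
    "Give me three tips for baking vegan banana bread.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(chat_prompt, return_tensors="pt").to('cuda')
im_end_id = tokenizer.convert_tokens_to_ids('<|im_end|>')  # assumed ChatML end-of-turn token

with torch.autocast('cuda', dtype=torch.bfloat16):
    outputs = model.generate(**inputs, max_new_tokens=200, eos_token_id=im_end_id)

# Decode only the newly generated reply, skipping the prompt tokens.
reply = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(reply)
```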