xwinxu committed on
Commit 631d493
1 Parent(s): 289b777

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +4 -6
README.md CHANGED

@@ -22,8 +22,8 @@ tags:
  ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06)

  This repo contains the model checkpoints for:
- - model family <b>huggyllama/llama-30b</b>
- - optimized with the loss <b>CSFT</b>
+ - model family <b>llama30b</b>
+ - optimized with the loss <b>SFT+CSFT</b>
  - aligned using the SHP, Anthropic HH and Open Assistant datasets.

  To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
@@ -40,10 +40,8 @@ Chocolate cake.
  ```
  Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.

-
- For models trained with our conditional SFT loss, the tokenizers have additional tokens `<|good|>` and `<|bad|>` included in the embeddings.
- To generate with these control tokens in the context, append either one to the prompt.
-
+ For models trained with our conditional SFT loss, the tokenizers have additional tokens `<|good|>` and `<|bad|>` included in the embeddings.
+ To generate with these control tokens in the context, append either one to the prompt.

  Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) which contains instructions for training your own HALOs and links to our model cards.
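
For reference, a minimal sketch of the prompting flow the README describes (not part of the commit). The checkpoint id below is illustrative; substitute the actual hub id for this model. It assumes the standard `transformers` loading path, the TuluV2 chat format, and the cake example visible in the hunk header above.

```python
# Sketch: prompting an Archangel model in the TuluV2 chat format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ContextualAI/archangel_sft-csft_llama30b"  # illustrative id; check the hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# TuluV2 format: a user turn followed by an open assistant turn for the model to complete.
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"

# The tokenizer adds the BOS token itself, so the raw prompt is passed as-is,
# and no EOS token is appended to it.
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```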
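
Similarly, a sketch of generating with the CSFT control tokens, reusing the `tokenizer` and `model` loaded above. Per the README, a control token is appended to the end of the formatted prompt; the exact placement shown here is an assumption.

```python
# Sketch: steering a conditional-SFT (CSFT) checkpoint with its control tokens.
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"

# Append a control token to the prompt: <|good|> requests a desirable
# completion, <|bad|> an undesirable one. Both are in the CSFT tokenizer's vocab.
inputs = tokenizer(prompt + "<|good|>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```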