Text Generation · PEFT · Safetensors · llama-2 · Eval Results
dfurman committed
Commit 3edeca7
1 Parent(s): 2e43fdd

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -19,10 +19,9 @@ base_model: meta-llama/Llama-2-70b-hf
 
 # Llama-2-70B-Instruct-v0.1
 
-*Note*: This model was ranked **6th** on 🤗's Open LLM Leaderboard in Aug 2023
-
 This instruction model was built via parameter-efficient QLoRA finetuning of [llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) on the first 25k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) (an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)). Finetuning was executed on a single H100 (80 GB PCIe) for roughly 17 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
 
+*Note*: This model was ranked **6th** on 🤗's Open LLM Leaderboard in Aug 2023
 
 ## Helpful links
 
@@ -32,7 +31,6 @@ This instruction model was built via parameter-efficient QLoRA finetuning of [ll
 * Loss curves: [plot](https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1-peft#finetuning-description)
 * Runtime stats: [table](https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1-peft#runtime-tests)
 
-
 ## Open LLM Leaderboard Evaluation Results
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__llama-2-70b-dolphin-peft)
 
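
The README excerpt above describes a parameter-efficient QLoRA finetune of llama-2-70b. As a point of reference only, below is a minimal sketch of what such a setup can look like with the `transformers`, `peft`, and `bitsandbytes` libraries; the quantization settings, LoRA rank/alpha, and target modules are illustrative assumptions and are not taken from this commit or the model's actual training script.

```python
# Minimal QLoRA setup sketch (illustrative; not the configuration used for this model)
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "meta-llama/Llama-2-70b-hf"

# 4-bit NF4 quantization of the frozen base weights -- the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Trainable low-rank adapters on the attention projections (assumed hyperparameters)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)

# Only the adapter parameters are trainable; the 4-bit base model stays frozen
model.print_trainable_parameters()
```

The resulting adapter weights, rather than a full copy of the 70B base model, are what a PEFT repository such as this one distributes.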