munish0838 committed on
Commit
c096585
1 Parent(s): 9b6f4e0

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +47 -0
README.md ADDED
@@ -0,0 +1,47 @@
---
license: llama3
datasets:
- arcee-ai/EvolKit-20k
language:
- en
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Llama-3.1-SuperNova-Lite-GGUF
This is a quantized version of [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) created using llama.cpp.
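
As a usage sketch (not part of the original card), the GGUF files can be loaded with `llama-cpp-python`, the Python bindings for llama.cpp; the quant filename pattern and context size below are assumptions, so pick whichever `.gguf` file in this repo fits your hardware:

```python
# Hedged example: requires `pip install llama-cpp-python huggingface_hub`.
from llama_cpp import Llama

# Downloads a matching GGUF from this repo; "*Q4_K_M.gguf" is an assumed quant level.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Llama-3.1-SuperNova-Lite-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,  # context window to allocate; adjust to your RAM/VRAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain model distillation in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```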

# Original Model Card

<div align="center">
<img src="https://i.ibb.co/r072p7j/eopi-ZVu-SQ0-G-Cav78-Byq-Tg.png" alt="Llama-3.1-SuperNova-Lite" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>

## Overview

Llama-3.1-SuperNova-Lite is an 8B parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of the larger Llama-3.1-405B-Instruct model, leveraging offline logits extracted from the 405B parameter variant. This 8B variation of Llama-3.1-SuperNova maintains high performance while offering exceptional instruction-following capabilities and domain-specific adaptability.
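
Arcee has not published the training code in this card, but as an illustration of what logit-level distillation typically looks like, here is a minimal PyTorch sketch of a distillation loss; the temperature and tensor shapes are assumptions, and `teacher_logits` stands in for the offline 405B logits mentioned above:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student token distributions.

    Illustrative sketch only: both tensors are assumed to be shaped
    (batch, seq_len, vocab_size), and the teacher logits are precomputed offline.
    """
    t = temperature
    student_logprobs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # batchmean KL, scaled by t**2 as is standard for distillation losses
    return F.kl_div(student_logprobs, teacher_probs, reduction="batchmean") * (t ** 2)
```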

The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with [EvolKit](https://github.com/arcee-ai/EvolKit), ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.
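
The `arcee-ai/EvolKit-20k` dataset listed in the card metadata is available on the Hub; a quick way to inspect it (the split name is an assumption — use whatever `load_dataset` reports):

```python
from datasets import load_dataset

# Inspect the instruction data referenced in the card metadata.
ds = load_dataset("arcee-ai/EvolKit-20k", split="train")  # "train" split assumed
print(ds)      # features and row count
print(ds[0])   # first example
```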

Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form that is ideal for organizations seeking high performance with reduced resource requirements.

# Evaluations
We will be submitting this model to the OpenLLM Leaderboard for a more conclusive benchmark, but here are our internal benchmarks using the main branch of the lm-evaluation-harness:

| Benchmark  | SuperNova-Lite | Llama-3.1-8B-Instruct |
|------------|----------------|-----------------------|
| IF_Eval    | 81.1           | 77.4                  |
| MMLU Pro   | 38.7           | 37.7                  |
| TruthfulQA | 64.4           | 55.0                  |
| BBH        | 51.1           | 50.6                  |
| GPQA       | 31.2           | 29.02                 |

The script used for evaluation can be found in this repository under /eval.sh, or click [here](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite/blob/main/eval.sh).
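
The `eval.sh` script linked above has the exact commands; as a rough, hedged equivalent using the harness's Python API (task names, precision, and batch size here are assumptions, not the authors' configuration):

```python
import lm_eval

# Approximate reproduction of the table above with lm-evaluation-harness (main branch).
# Task names and settings are assumptions; see eval.sh in the original repo for the real setup.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=arcee-ai/Llama-3.1-SuperNova-Lite,dtype=bfloat16",
    tasks=["ifeval", "mmlu_pro", "truthfulqa_mc2", "bbh", "gpqa_main_zeroshot"],
    batch_size=8,
)
print(results["results"])
```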

# Note
This README will be edited regularly on September 10, 2024 (the day of release). Once the final README is in place, we will remove this note.