Tags: Text Generation, Transformers, PyTorch, llama, Eval Results, text-generation-inference, Inference Endpoints
Committed by Declare: Update README.md
Commit 89de5d4 (1 parent: 8d9bf15)

Files changed (1): README.md (+8 -8)
README.md CHANGED
@@ -3,7 +3,7 @@ license: apache-2.0
 ---
 [**Paper**](https://openreview.net/pdf?id=jkcHYEfPv3) | [**Github**](https://github.com/declare-lab/red-instruct) | [**Dataset**](https://huggingface.co/datasets/declare-lab/HarmfulQA)
 
-We created **Starling** by fine-tuning Vicuna-7B on [**HarmfulQA**](https://huggingface.co/datasets/declare-lab/HarmfulQA), a ChatGPT-distilled dataset that we collected using the Chain of Utterances (CoU) prompt. More details are on our paper [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://openreview.net/pdf?id=jkcHYEfPv3)
+As part of our research efforts to make LLMs safer, we created **Starling**. It is obtained by fine-tuning Vicuna-7B on [**HarmfulQA**](https://huggingface.co/datasets/declare-lab/HarmfulQA), a ChatGPT-distilled dataset that we collected using the Chain of Utterances (CoU) prompt. More details are in our paper [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://openreview.net/pdf?id=jkcHYEfPv3).
 
 <img src="https://declare-lab.net/assets/images/logos/starling-final.png" alt="Image" width="100" height="100">
 
@@ -13,21 +13,21 @@ Experimental results on several safety benchmark datasets indicate that **Starli
 
 <h2>Experimental Results</h2>
 
-**Compared to Vicuna, Avg. 5.2% reduction in Attack Success Rate (ASR) on DangerousQA and HarmfulQA using three different prompts.**
+Compared to Vicuna, **Avg. 5.2% reduction in Attack Success Rate (ASR)** on DangerousQA and HarmfulQA using three different prompts.
 
-**Compared to Vicuna, Avg. 3-7% improvement in HHH score measured on BBH-HHH benchmark.**
+Compared to Vicuna, **Avg. 3-7% improvement in HHH score** measured on the BBH-HHH benchmark.
 
 <img src="https://declare-lab.net/assets/images/logos/starling-results.png" alt="Image" width="1000" height="335">
 
-**TruthfulQA (MC2): 48.90 vs Vicuna's 47.00**
+TruthfulQA (MC2): **48.90 vs Vicuna's 47.00**
 
-**MMLU (5-shot): 46.69 vs Vicuna's 47.18**
+MMLU (5-shot): **46.69 vs Vicuna's 47.18**
 
-**BBH (3-shot): 33.47 vs Vicuna's 33.05**
+BBH (3-shot): **33.47 vs Vicuna's 33.05**
 
 <h2>Jailbreak Prompt for harmfulness eval using Red Eval as reported in the paper</h2>
 
-**This jailbreak prompt (termed as Chain of Utterances (CoU) prompt in the paper) shows a 65% Attack Success Rate (ASR) on GPT-4 and 72% on ChatGPT.**
+This jailbreak prompt (termed the Chain of Utterances (CoU) prompt in the paper) shows a 65% Attack Success Rate (ASR) on GPT-4 and 72% on ChatGPT.
 
 <img src="https://declare-lab.net/assets/images/logos/jailbreakprompt_main_paper.png" alt="Image" width="1000" height="1000">
 
@@ -37,4 +37,4 @@ We also release our **HarmfulQA** dataset with 1,960 harmful questions (converti
 
 <img src="https://declare-lab.net/assets/images/logos/data_gen.png" alt="Image" width="1000" height="1000">
 
-Note: This model is referred to as Starling (Blue) in the paper. We shall soon release Starling (Blue-Red) which was trained on harmful data using an objective function that helps the model learn from the red (harmful) response data.
+_Note: This model is referred to as Starling (Blue) in the paper. We shall soon release Starling (Blue-Red), which was trained on harmful data using an objective function that helps the model learn from the red (harmful) response data._
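For reference, a minimal usage sketch with the `transformers` library follows. The repository id (`declare-lab/starling-7B`) and the Vicuna-style prompt template are assumptions inferred from the base model, not something this commit specifies; check the model card for the exact id and chat format.

```python
# Minimal sketch: load the Starling model with Hugging Face transformers and run one prompt.
# Assumptions: the repository id and the Vicuna-style prompt template below are illustrative;
# adjust both to match the actual model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "declare-lab/starling-7B"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so a 7B model fits on a single ~16 GB GPU
    device_map="auto",          # requires `accelerate`; remove and call .to("cuda") otherwise
)

# Vicuna-style single-turn prompt (assumed; verify the template on the model card).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What are safe ways to dispose of household chemicals? ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```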