Dromedary-2 (verbose, v1) Model Card

Model details

Model type: Dromedary-2 is an open-source self-aligned language model trained with minimal human supervision using the SALMON (Self-Alignment with Principle-Following Reward Models) technique. The base language model is LLaMA-2-70b, which is based on the transformer architecture.

NOTE: Dromedary-2 is trained with QLoRA and the bfloat16 data type. While it is possible to merge the QLoRA weights into the quantized model and thus enable inference with libraries such as TGI and vLLM, we found that the merged weights can lead to degraded performance. Therefore, we recommend loading the QLoRA weights directly with the PEFT-LoRA framework.

Please check the inference section of our repo for the complete inference code.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Dialogue template used during training: a system preamble followed by
# alternating "### User" / "### Dromedary" turns, separated by blank lines.
system_prompt = (
    "# Dromedary\n\n## System Overview\n\n"
    "Consider an AI assistant whose codename is Dromedary, developed by the Self-Align team. "
    "Dromedary is trained on data up until Sept-2022, and it endeavors to be a helpful, ethical and reliable assistant.\n\n"
    "## User Conversation\n\n"
)
user_prompt = "### User\n"
assistant_prompt = "### Dromedary\n"
separator = "\n\n"

dtype = torch.bfloat16

model_path = "path/to/llama-2-70b-hf"
qlora_path = "path/to/dromedary-2-70b-qlora-delta-v0"  # i.e., this model hub

# 4-bit NF4 quantization with double quantization, computing in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=dtype,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

# Load the quantized base model; 4-bit loading is already specified by
# bnb_config, so no separate load_in_4bit flag is needed.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": "cuda:0"},
    quantization_config=bnb_config,
    torch_dtype=dtype,
)

# Attach the QLoRA adapter weights on top of the quantized base model.
model = PeftModel.from_pretrained(
    model,
    qlora_path,
    is_trainable=False,  # inference only
)

Model date: Dromedary-2 was trained between July and August 2023, but its knowledge only goes up until Sept-2022.

License: LLaMA-2's bespoke license

More Information

Paper or resources for more information: https://arxiv.org/abs/2310.05910

Where to send questions or comments about the model: https://github.com/IBM/SALMON/issues

Organizations developing the model: The Self-Align team is a joint effort between CMU and IBM.

Intended use

Primary intended uses: The primary use of Dromedary is research on the alignment of large language models.

Primary intended users: The primary intended users of the model are researchers in artificial intelligence.

Training dataset

6 In-Context Learning (ICL) exemplars

90K unlabeled prompts from ShareGPT

10K unlabeled prompts from databricks-dolly-15k

10K unlabeled prompts from OpenAssistant Conversations

40K unlabeled prompts from OpenOrca

7.5K unlabeled prompts from MATH

Evaluation dataset

We evaluate Dromedary-2 on:

  1. Chatbot benchmarks: Vicuna-Bench, MT-Bench, AlpacaEval
  2. Capability benchmarks: Big-Bench Hard (reasoning), HumanEval (coding), TyDiQA (multilingualism)
  3. Truthfulness benchmarks: TruthfulQA