# InvestLM
This is the repo for InvestLM, a financial-domain large language model tuned from Mixtral-8x7B-v0.1 on a carefully curated instruction dataset covering financial investment. Below we provide guidance on how to use the AWQ-quantized InvestLM for inference.
GitHub Link: InvestLM
## About AWQ
AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
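For reference, this is roughly how a model is quantized with the autoawq library. It is a minimal sketch: the model path, output path, and quantization settings below are illustrative assumptions, not the exact recipe used to produce InvestLM-AWQ.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Illustrative paths and settings only (assumed, not InvestLM's actual recipe)
model_path = "mistralai/Mixtral-8x7B-v0.1"
quant_path = "mixtral-8x7b-awq"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run AWQ calibration and quantize the weights to 4 bits
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized model and tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```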
## Inference
First, log in to Hugging Face with the following command:
```shell
huggingface-cli login
```
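Alternatively, you can log in from Python via the huggingface_hub library; the token below is a placeholder:

```python
from huggingface_hub import login

# Paste your access token from https://huggingface.co/settings/tokens
login(token="hf_...")  # placeholder token
```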
### Prompt template
```
[INST] {prompt} [/INST]
```
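For example, the question used in the code below expands to:

```
[INST] What is finance? [/INST]
```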
### How to use this AWQ model from Python code
Install the required packages:

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
```python
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

quant_path = "yixuantt/InvestLM-mistral-8x7B-v2-AWQ"

# Load the AWQ-quantized model onto the first GPU
model = AutoModelForCausalLM.from_pretrained(
    quant_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0",
)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

prompt_template = "[INST] {prompt} [/INST]"
prompt = "What is finance?"

def chat_processor(chat, max_new_tokens=100, do_sample=True):
    """Generate a streamed response for a single-turn instruction."""
    tokenizer.use_default_system_prompt = False
    streamer = TextIteratorStreamer(
        tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True
    )
    # Convert the templated prompt to tokens and assemble generation arguments
    generate_params = dict(
        tokenizer(prompt_template.format(prompt=chat), return_tensors="pt").to("cuda"),
        streamer=streamer,
        max_new_tokens=max_new_tokens,
        do_sample=do_sample,
        temperature=0.5,
        repetition_penalty=1.2,
    )
    # Run generation in a background thread so the stream can be consumed here
    t = Thread(target=model.generate, kwargs=generate_params)
    t.start()
    # Print tokens as they arrive and collect them for the return value
    outputs = []
    for text in streamer:
        outputs.append(text)
        print(text, end="", flush=True)
    return outputs

# Generation
outputs = chat_processor(prompt, max_new_tokens=1000, do_sample=True)
```
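If you do not need token-by-token streaming, a plain `generate` call works as well. This is a minimal sketch reusing the `model`, `tokenizer`, and `prompt_template` defined above:

```python
# Non-streaming sketch, assuming model/tokenizer/prompt_template from above
inputs = tokenizer(prompt_template.format(prompt="What is finance?"), return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.5)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```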