bartowski/Meta-Llama-3.1-8B-Instruct-GGUF does not appear to have a file named config.json.

#11
by apapoutsis - opened

Hi,
Thank you for all this work.

Can you help a little here, please?

Here is the code:

import torch
from transformers import pipeline

# Initialize the pipeline
pipe = pipeline(
    "text-generation",
    model="bartowski/Meta-Llama-3.1-8B-Instruct-GGUF",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

You need to use llama.cpp or llama-cpp-python

If you click "Use this model" on the Hugging Face model page, you can see example code by selecting "llama-cpp-python".
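
For reference, a minimal sketch of what that llama-cpp-python example looks like, assuming a recent llama-cpp-python (one that has Llama.from_pretrained) and huggingface_hub installed; the filename is the IQ2_M quant from this repo, but any of the GGUF files should work:

from llama_cpp import Llama

# Downloads the GGUF file from the Hub (needs huggingface_hub) and loads it
llm = Llama.from_pretrained(
    repo_id="bartowski/Meta-Llama-3.1-8B-Instruct-GGUF",
    filename="Meta-Llama-3.1-8B-Instruct-IQ2_M.gguf",
)

# Chat-style generation through the model's chat template
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(out["choices"][0]["message"]["content"])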


---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[34], line 3
      1 from llama_cpp import Llama
----> 3 llm = Llama.from_pretrained(
      4     repo_id="bartowski/Meta-Llama-3.1-8B-Instruct-GGUF",
      5     filename="Meta-Llama-3.1-8B-Instruct-IQ2_M.gguf",
      6 )
      8 llm.create_chat_completion(
      9     messages = [
     10         {
   (...)
     14     ]
     15 )

AttributeError: type object 'Llama' has no attribute 'from_pretrained'

I am still getting errors.
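
That AttributeError suggests the installed llama-cpp-python predates Llama.from_pretrained, so upgrading it (pip install -U llama-cpp-python huggingface_hub) would likely fix the call above. As a workaround on an older version, here is a sketch that downloads the GGUF file manually with hf_hub_download from huggingface_hub and loads it by local path:

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the GGUF file into the local Hugging Face cache and get its path
model_path = hf_hub_download(
    repo_id="bartowski/Meta-Llama-3.1-8B-Instruct-GGUF",
    filename="Meta-Llama-3.1-8B-Instruct-IQ2_M.gguf",
)

# Load directly from the local path (works on older llama-cpp-python too)
llm = Llama(model_path=model_path)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello"}]
)
print(out["choices"][0]["message"]["content"])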
