The model 'PhiMoEForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', .......

#34
by xxbadarxx - opened

(screenshot attached)

I have installed transformers 4.43.0 according to your requirements.

Hi, thanks for your interest. Have you added 'trust_remote_code=True' to your loading code?

e.g.

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-MoE-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
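
If it helps, here is a minimal sketch of how generation could continue from that snippet, following the usual AutoTokenizer / generate flow; the prompt and the max_new_tokens value below are only illustrative:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/Phi-3.5-MoE-instruct",
    trust_remote_code=True,
)

# Build a chat-formatted prompt and run a plain generate() call.
messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))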


Yes, I have added it. The difference is that I loaded the model from a local path (does this have any impact?)

(screenshot attached)

Anyway, it works, but there is the warning output "The model 'PhiMoEForCausalLM' is not supported for text-generation.". I'm not sure whether it has any impact.
(screenshot attached)

Microsoft org

@xxbadarxx Loading the model from a local path is perfectly fine and has no impact. As for the text-generation warning, I couldn’t repro it. It doesn’t affect the quality, so you can disregard the warning.
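
For reference, a minimal sketch of loading from a local directory and running the text-generation pipeline (the local path below is only a placeholder); if the "not supported for text-generation" warning shows up, it is printed when the pipeline is built, and generation still works:

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

local_path = "/path/to/Phi-3.5-MoE-instruct"  # placeholder for a local checkout

model = AutoModelForCausalLM.from_pretrained(
    local_path,
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(local_path, trust_remote_code=True)

# The warning, if any, is emitted by the pipeline's architecture check here;
# the call below still generates text.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Write a haiku about mixtures of experts.", max_new_tokens=48))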

Can you convert it to GGUF?

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-MoE-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-MoE-instruct")

config.json: 100% 4.53k/4.53k [00:00<00:00, 160kB/s]
configuration_phimoe.py: 100% 12.3k/12.3k [00:00<00:00, 575kB/s]
A new version of the following files was downloaded from https://huggingface.co/microsoft/Phi-3.5-MoE-instruct:
- configuration_phimoe.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
modeling_phimoe.py: 100% 80.5k/80.5k [00:00<00:00, 3.19MB/s]

ImportError Traceback (most recent call last)
in <cell line: 6>()
4 torch.random.manual_seed(0)
5
----> 6 model = AutoModelForCausalLM.from_pretrained(
7 "microsoft/Phi-3.5-MoE-instruct",
8 device_map="cuda",

3 frames
/usr/local/lib/python3.10/dist-packages/transformers/dynamic_module_utils.py in check_imports(filename)
180
181 if len(missing_packages) > 0:
--> 182 raise ImportError(
183 "This modeling file requires the following packages that were not found in your environment: "
184 f"{', '.join(missing_packages)}. Run pip install {' '.join(missing_packages)}"

ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run pip install flash_attn



I could never download flash_attn and could not install it on a Colab TPU. The model card lists these requirements:

Software
PyTorch
Transformers
Flash-Attention

Hardware
Note that by default, the Phi-3.5-MoE-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:

NVIDIA A100
NVIDIA A6000
NVIDIA H100
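
If flash_attn cannot be installed (it needs a CUDA GPU and a matching build toolchain, so it will not build on a Colab TPU), one thing to try on GPUs outside the tested list is requesting the eager attention path. This is only a sketch: attn_implementation="eager" is a standard transformers argument, but whether the remote modeling_phimoe.py code then stops requiring the flash_attn package is not guaranteed:

from transformers import AutoModelForCausalLM

# attn_implementation="eager" asks for the non-flash attention implementation;
# it may or may not avoid the flash_attn import error raised by the remote code.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-MoE-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",
)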

OK, thank you for your reply! 😊

I have the same issue. Has any solution been found? I tested with:

NVIDIA A40
NVIDIA RTX A5000
