How do I load t5-v1_1-xxl-encoder-gguf?

#1
by YuFeiLiu - opened

Is there a node for it?
(screenshot attached)

Yes, just merged it a few minutes ago so you'll want to update. Should be this one:

(screenshot of the node)
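
For anyone who can't see the screenshot: in an exported API-format workflow, the loader entry looks roughly like this. This is a minimal sketch only; the node id, the class name `DualCLIPLoaderGGUF`, and the exact filenames are assumptions based on this thread, so match them to your own install.

```python
# Minimal sketch of the DualCLIPLoader (GGUF) entry in a ComfyUI
# API-format workflow. Node id, class name, and filenames are
# placeholders/assumptions -- adjust to your own install.
workflow_fragment = {
    "10": {
        "class_type": "DualCLIPLoaderGGUF",
        "inputs": {
            "clip_name1": "t5-v1_1-xxl-encoder-Q8_0.gguf",  # GGUF T5 encoder
            "clip_name2": "clip_l.safetensors",             # CLIP-L text encoder
            "type": "flux",  # as noted further down, type must be set to flux
        },
    }
}
```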

And what is clip-vit-large-patch14.bin?

I thought Flux uses clip_l.safetensors from here? (just ~200 MB)
https://huggingface.co/lllyasviel/flux_text_encoders/tree/main
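
For reference, that file can also be fetched programmatically with huggingface_hub. A minimal sketch, assuming the repo linked above and that `ComfyUI/models/clip` is your clip model folder:

```python
from huggingface_hub import hf_hub_download

# Fetch clip_l.safetensors from the repo linked above.
# local_dir is an assumption -- point it at your ComfyUI clip folder.
path = hf_hub_download(
    repo_id="lllyasviel/flux_text_encoders",
    filename="clip_l.safetensors",
    local_dir="ComfyUI/models/clip",
)
print(path)  # prints where the file landed
```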

I can't use clip_l.safetensors with this DualCLIPLoader (GGUF),
(screenshot attached)

so where can I get this clip-vit-large-patch14.bin?
here?
https://huggingface.co/openai/clip-vit-large-patch14/tree/main
I can't find the exact file.

EDIT: sorry, it seems to be a ComfyUI problem. The portable version has the issue, but my manual-install version works fine with clip_l.safetensors.

Thanks, it works! Painfully slow on my Mac, but no swap is triggered now and it finally gets there!

Does it work with Forge UI?

I can't use clip_l.safetensors with this DualCLIPLoader (GGUF) - same problem... I'm still using the portable version, but I hope you have a solution for this. In the meantime, thank you for your efforts.

I face the same issue too: on the portable version, clip_l.safetensors isn't working.

There were recent updates to both the node and ComfyUI; it may be worth updating both and retrying.

My bad.
Remember to set the DualCLIPLoader (GGUF) type to flux.

Already set the DualCLIPLoader (GGUF) type to flux... but same here.. :(

And a different error today:

```
E:\ComfyUI_TEST\ComfyUI\custom_nodes\ComfyUI-GGUF\dequant.py:7: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  data = torch.tensor(tensor.data)
E:\ComfyUI_TEST\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
```
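
Both of those are warnings, not errors. The first is torch complaining that dequant.py copy-constructs a tensor with torch.tensor(); the message itself names the preferred pattern. A minimal sketch of the two patterns (illustrative only, not the actual dequant.py code):

```python
import torch

src = torch.randn(4)  # stand-in for the GGUF tensor payload

# Pattern that triggers the UserWarning above:
data = torch.tensor(src)

# Pattern the warning recommends instead:
data = src.clone().detach()
```

The second warning is just transformers announcing that the default for clean_up_tokenization_spaces will change in v4.45; neither warning should stop generation by itself.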
