
Running Flux.1-dev in under 12 GB of VRAM

This repository contains the NF4-quantized parameters for the T5 text encoder and the transformer of Flux.1-Dev. Check out this Colab Notebook for details on how they were obtained.

Check out this notebook, which shows how to use the checkpoints and run them on a free-tier Colab instance.

Respective diffusers PR: https://github.com/huggingface/diffusers/pull/9213/.

The checkpoints in this repository were optimized to run on a T4 (free-tier Colab) GPU. More specifically, the compute dtype of the quantized checkpoints was kept at FP16, since the T4 does not support BF16. If your GPU does support BF16 (Ampere or newer), you should change the compute dtype to BF16 via `bnb_4bit_compute_dtype`.
