Potential zipfile corruption in the pytorch `.bin` files when using Python 3.12 (everything still seems to work just fine)

#3
by Noeda - opened

Howdy.

I'm someone who made a Q6_K .gguf quant of this model, here: https://huggingface.co/Noeda/Liberated-Qwen1.5-72B-GGUF

When I originally ran convert-hf-to-gguf.py from the llama.cpp repository to make the .gguf, I got a Python exception from the zipfile module claiming that the first pytorch 00001 .bin file is corrupted due to overlapping entries. This didn't happen with the original Qwen1.5-72B-Chat files.

With a bit of digging, I learned that in Python 3.12, which I was using, zipfile gained more sophisticated detection of malformed zip files to guard against zip-bomb DoS attacks.
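In case it helps anyone reproduce this without running the whole conversion: pytorch `.bin` shards are zip archives, so a minimal sketch like the one below (mine, not code from llama.cpp) that just opens and reads every member with the stock zipfile module should, if I understand the change right, surface the same BadZipFile on Python 3.12. The shard filename is a placeholder, substitute a real one.

```python
import zipfile

SHARD = "pytorch_model-00001-of-000XX.bin"  # placeholder path, adjust to a real shard

try:
    with zipfile.ZipFile(SHARD) as zf:
        for name in zf.namelist():
            # Opening and reading each member is what exercises the stricter checks.
            with zf.open(name) as member:
                member.read(1)
    print("no zipfile complaints")
except zipfile.BadZipFile as exc:
    print(f"zipfile rejected {SHARD}: {exc}")
```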

I worked around this by downgrading to Python 3.10, which had an older version of zipfile.

I am not convinced there is actually any corruption at all, because I was unable to reproduce it with anything else: not with regular unzip on modern macOS, not with Python 3.12's zipfile when I used it to decompress the .bin files directly, and not with zip -T (which is supposed to test archive integrity). And the model empirically works just fine after I cooked the .ggufs.
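For concreteness, the zipfile-level decompression test I mean is roughly the sketch below (placeholder filename again); `testzip()` re-reads every member and verifies its CRC, which is the same kind of check zip -T does.

```python
import sys
import zipfile

SHARD = "pytorch_model-00001-of-000XX.bin"  # placeholder path

with zipfile.ZipFile(SHARD) as zf:
    # testzip() returns the name of the first bad member, or None if all CRCs check out.
    bad = zf.testzip()
    if bad is None:
        print(f"{SHARD}: all CRCs OK on Python {sys.version.split()[0]}")
    else:
        print(f"{SHARD}: first bad member is {bad}")
```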

The regular Qwen1.5-72B-Chat files convert into .ggufs with no complaints. To be precise, these files: https://huggingface.co/Qwen/Qwen1.5-72B-Chat-GGUF

I don't need any action from you, but I thought you might want to be aware that there may be some kind of Python bug, or something off with the .bin files here, in case you hear anyone else complain about it. An easy workaround for anyone doing .gguf quants: use an older Python; I used 3.10. I suspect the bug might actually be in Python's zipfile itself, because only the scripts from llama.cpp seem to complain about corruption. I am not sure, and I'm not investing more time to dig deeper.

Big thanks for the model. I can't tell if it's smarter or dumber than the original Qwen, but the responses have a noticeably different style from the original, so I'm having fun playing around with it.

Er, I gave the wrong link for the Qwen. I meant these files: https://huggingface.co/Qwen/Qwen1.5-72B-Chat (which I then .ggufified). I'll also add that I verified checksums after downloading to make sure none of the files were corrupted in transit, so my local files should be exactly identical to the ones in this repo.
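For anyone who wants to do the same check, here's a rough sketch with hashlib; the filename and expected digest are placeholders you'd fill in yourself (the Hub shows a SHA-256 for each LFS file on its file page).

```python
import hashlib
from pathlib import Path

# Placeholder filename and digest -- replace with the real shard names and the
# SHA-256 values shown on the corresponding file pages in this repo.
EXPECTED = {
    "pytorch_model-00001-of-000XX.bin": "replace-with-the-sha256-from-the-file-page",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so huge shards don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in EXPECTED.items():
    actual = sha256_of(Path(name))
    status = "OK" if actual == expected else "MISMATCH"
    print(f"{name}: {status} ({actual})")
```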

