New to this . . .

#17
by PaintedDreams - opened

Not too sure why this is happening. I'm launching on CPU (AMD Ryzen 5 3600, 32 GB RAM) and I get this error:


CUDA SETUP: Loading binary C:\Users\Jacob\Desktop\oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll...
C:\Users\Jacob\Desktop\oobabooga-windows\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
Loading vicuna-13b-GPTQ-4bit-128g...
Traceback (most recent call last):
File "C:\Users\Jacob\Desktop\oobabooga-windows\text-generation-webui\server.py", line 302, in
shared.model, shared.tokenizer = load_model(shared.model_name)
File "C:\Users\Jacob\Desktop\oobabooga-windows\text-generation-webui\modules\models.py", line 100, in load_model
from modules.GPTQ_loader import load_quantized
File "C:\Users\Jacob\Desktop\oobabooga-windows\text-generation-webui\modules\GPTQ_loader.py", line 14, in
import llama_inference_offload
ModuleNotFoundError: No module named 'llama_inference_offload'
Press any key to continue . . .
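As far as I can tell from the traceback, llama_inference_offload is supposed to come from a GPTQ-for-LLaMa checkout, and GPTQ_loader.py appears to put that folder on sys.path before importing it. Roughly something like this (my reconstruction from the traceback, not a copy of the actual file):

# Rough sketch of what modules/GPTQ_loader.py seems to do (assumption,
# not the real source): prepend the GPTQ-for-LLaMa clone to sys.path,
# then import llama_inference_offload from it.
import sys
from pathlib import Path

# Assumed location of the GPTQ-for-LLaMa clone inside text-generation-webui
sys.path.insert(0, str(Path("repositories/GPTQ-for-LLaMa")))

import llama_inference_offload  # ModuleNotFoundError if that folder is missing

So if that folder (or the file inside it) isn't there, the import would fail exactly like this.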

More info: this is the batch script I'm launching with:


@echo off

@echo Starting the web UI...

cd /D "%~dp0"

set MAMBA_ROOT_PREFIX=%cd%\installer_files\mamba
set INSTALL_ENV_DIR=%cd%\installer_files\env

if not exist "%MAMBA_ROOT_PREFIX%\condabin\micromamba.bat" (
call "%MAMBA_ROOT_PREFIX%\micromamba.exe" shell hook >nul 2>&1
)
call "%MAMBA_ROOT_PREFIX%\condabin\micromamba.bat" activate "%INSTALL_ENV_DIR%" || ( echo MicroMamba hook not found. && goto end )
cd text-generation-webui

call python server.py --auto-devices --chat --wbits 4 --groupsize 128 --cpu --cpu-memory 3500MiB --pre_layer 30

:end
pause
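If it helps, my understanding is that the 4-bit loader expects a GPTQ-for-LLaMa checkout under text-generation-webui\repositories. A quick check like this (the path is my guess based on the traceback, not something I've confirmed) shows whether that folder even exists:

# Quick diagnostic (run from the oobabooga-windows folder) for the assumed
# GPTQ-for-LLaMa location that llama_inference_offload should come from.
from pathlib import Path

repo = Path("text-generation-webui") / "repositories" / "GPTQ-for-LLaMa"
print("GPTQ-for-LLaMa folder present:", repo.is_dir())
print("llama_inference_offload.py present:",
      (repo / "llama_inference_offload.py").is_file())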

Does anyone know why this happens?

I have the same problem. Have you managed to fix it?
