runtime error

Exit code: 1. Reason:

  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1673, in get_hf_file_metadata
    r = _request_wrapper(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 376, in _request_wrapper
    response = _request_wrapper(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 400, in _request_wrapper
    hf_raise_for_status(response)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 352, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-66d2274a-468577977059ad4b3dd6b42b;7eea05b7-4b15-4869-b27a-3abc3091ea93)

Repository Not Found for url: https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer_config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 14, in <module>
    tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2103, in from_pretrained
    resolved_config_file = cached_file(
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/hub.py", line 426, in cached_file
    raise EnvironmentError(
OSError: decapoda-research/llama-7b-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
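The traceback comes down to the final OSError: the hard-coded BASE_MODEL id decapoda-research/llama-7b-hf no longer resolves on the Hugging Face Hub, so the very first file lookup (tokenizer_config.json) fails with a 401 / RepositoryNotFoundError before app.py can build its tokenizer. Per the error message itself, the fix is to point at a repo id you can actually reach and, for a private or gated repo, to authenticate. Below is a minimal sketch of the failing app.py line with those two fixes applied; the replacement repo id (huggyllama/llama-7b) and the HF_TOKEN environment variable name are assumptions for illustration, not part of the original app.

import os
from transformers import LlamaTokenizer

# Hypothetical replacement repo id -- substitute any LLaMA weights repo you
# actually have access to; decapoda-research/llama-7b-hf is what the failing
# app was pointing at.
BASE_MODEL = "huggyllama/llama-7b"

# Optional access token for private or gated repos, read here from an
# environment variable / Space secret assumed to be named HF_TOKEN.
# It stays None for public repos, in which case no authentication is sent.
hf_token = os.environ.get("HF_TOKEN")

# Passing token= mirrors the "passing `token=<your_token>`" hint in the error;
# older transformers releases use use_auth_token= instead.
tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL, token=hf_token)

The error message also mentions huggingface-cli login as an equivalent way to authenticate without changing the code, provided the repo id itself is valid.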
