runtime error

Exit code: 1. Reason:

late.jinja: 100%|██████████| 1.38k/1.38k [00:00<00:00, 3.15MB/s]
config.json: 100%|██████████| 1.31k/1.31k [00:00<00:00, 3.27MB/s]
`torch_dtype` is deprecated! Use `dtype` instead!
model.safetensors: 100%|██████████| 740M/740M [00:02<00:00, 270MB/s]

Traceback (most recent call last):
  File "/app/app.py", line 53, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 373, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3951, in from_pretrained
    model, missing_keys, unexpected_keys, mismatched_keys, offload_index, error_msgs = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4058, in _load_pretrained_model
    caching_allocator_warmup(model, expanded_device_map, hf_quantizer)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4591, in caching_allocator_warmup
    device_memory = torch_accelerator_module.mem_get_info(index)[0]
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/memory.py", line 838, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 489, in cudart
    _lazy_init()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 412, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

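The traceback shows that the weight downloads finish, but the load fails inside from_pretrained when transformers tries to query CUDA memory on hardware that has no NVIDIA driver, i.e. the model placement ends up pointing at a CUDA device on CPU-only hardware. Below is a minimal sketch of a device-aware load, assuming the app uses transformers' AutoModelForCausalLM as in the traceback; the repo id and dtype are placeholders, not values taken from the log.

import torch
from transformers import AutoModelForCausalLM

# Placeholder repo id -- the actual model loaded by /app/app.py line 53 is not shown in the log.
MODEL_ID = "your-org/your-model"

# On hardware without an NVIDIA driver, torch.cuda.is_available() returns False,
# so we never ask torch/transformers to place weights on a CUDA device.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    dtype=torch.float32,  # the log notes `torch_dtype` is deprecated in favor of `dtype`
).to(device)

If the app instead passes a hard-coded device_map or device argument such as "cuda", replacing it with the availability check above (or running the Space on GPU hardware) should avoid the torch._C._cuda_init() failure.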