Problem deploying a LoRA adapter on vLLM: [AssertionError] [core.py:708] assert param_data.shape == loaded_weight.shape
Hello, thank you for your great work.
I ran into a problem when deploying a LoRA adapter on vLLM after fine-tuning with Unsloth.
I followed the Colab notebook Qwen3_VL_(8B)-Vision.ipynb to fine-tune the Qwen3-VL series models. The fine-tuning itself succeeds, and I can run inference with the LoRA weights through Unsloth. But when I load the LoRA adapter with vLLM, it raises an AssertionError coming from [core.py:708]: assert param_data.shape == loaded_weight.shape. Has anyone hit the same problem?
This problem does not occur when fine-tuning Qwen2.5-VL series models.
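To narrow down which layer trips the assertion, the shapes stored in the checkpoint can be compared against the shapes the model allocates. Below is a minimal, self-contained sketch of that comparison; the parameter name and shapes in the example are hypothetical, and in practice the two dicts could be filled from the checkpoint's safetensors files and from the model's named parameters.

```python
def find_shape_mismatches(expected, loaded):
    """Compare two {param_name: shape} dicts and report mismatches.

    `expected` holds the shapes the model's parameters were allocated
    with; `loaded` holds the shapes found in the checkpoint. Returns a
    dict mapping each common name whose shapes differ to the pair
    (expected_shape, loaded_shape).
    """
    common = expected.keys() & loaded.keys()
    return {
        name: (tuple(expected[name]), tuple(loaded[name]))
        for name in sorted(common)
        if tuple(expected[name]) != tuple(loaded[name])
    }


# Hypothetical example: a fused projection whose checkpoint shape disagrees
# with the allocated parameter (the name and numbers are made up).
expected = {"model.layers.0.self_attn.qkv_proj.weight": (4608, 2560)}
loaded = {"model.layers.0.self_attn.qkv_proj.weight": (5120, 2560)}
print(find_shape_mismatches(expected, loaded))
```

Printing the offending names this way would at least show whether the mismatch is confined to the quantized linear layers or affects the whole model.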
Here is my vLLM deploy command:
vllm serve unsloth/Qwen3-VL-4B-Instruct-unsloth-bnb-4bit \
--host 0.0.0.0 \
--port 8888 \
--max-model-len 18000 \
--gpu-memory-utilization 0.65 \
--trust_remote_code \
--enable-lora --lora-modules \
my_lora=Qwen3-VL-4B-Instruct_finetune_lora_model01
The same problem happens even when I do not use the LoRA adapter at all and only load the Unsloth weights:
vllm serve unsloth/Qwen3-VL-4B-Instruct-unsloth-bnb-4bit \
--host 0.0.0.0 \
--port 8888 \
--max-model-len 18000 \
--gpu-memory-utilization 0.65 \
--trust_remote_code
Below are the printed error messages:
INFO 10-28 09:43:50 [__init__.py:216] Automatically detected platform cuda.
(APIServer pid=3054658) INFO 10-28 09:43:52 [api_server.py:1839] vLLM API server version 0.11.0
(APIServer pid=3054658) INFO 10-28 09:43:52 [utils.py:233] non-default args: {'model_tag': 'unsloth/Qwen3-VL-4B-Instruct-unsloth-bnb-4bit', 'host': '0.0.0.0', 'port': 8888, 'model': 'unsloth/Qwen3-VL-4B-Instruct-unsloth-bnb-4bit', 'trust_remote_code': True, 'max_model_len': 18000, 'gpu_memory_utilization': 0.65}
(APIServer pid=3054658) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=3054658) INFO 10-28 09:43:54 [model.py:547] Resolved architecture: Qwen3VLForConditionalGeneration
(APIServer pid=3054658) `torch_dtype` is deprecated! Use `dtype` instead!
(APIServer pid=3054658) WARNING 10-28 09:43:54 [model.py:1682] Your device 'NVIDIA TITAN RTX' (with compute capability 7.5) doesn't support torch.bfloat16. Falling back to torch.float16 for compatibility.
(APIServer pid=3054658) WARNING 10-28 09:43:54 [model.py:1733] Casting torch.bfloat16 to torch.float16.
(APIServer pid=3054658) INFO 10-28 09:43:54 [model.py:1510] Using max model len 18000
(APIServer pid=3054658) INFO 10-28 09:43:54 [scheduler.py:205] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 10-28 09:43:57 [__init__.py:216] Automatically detected platform cuda.
(EngineCore_DP0 pid=3054747) INFO 10-28 09:43:59 [core.py:644] Waiting for init message from front-end.
(EngineCore_DP0 pid=3054747) INFO 10-28 09:43:59 [core.py:77] Initializing a V1 LLM engine (v0.11.0) with config: model='unsloth/Qwen3-VL-4B-Instruct-unsloth-bnb-4bit', speculative_config=None, tokenizer='unsloth/Qwen3-VL-4B-Instruct-unsloth-bnb-4bit', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=18000, download_dir=None, load_format=bitsandbytes, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=bitsandbytes, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=unsloth/Qwen3-VL-4B-Instruct-unsloth-bnb-4bit, enable_prefix_caching=True, chunked_prefill_enabled=True, pooler_config=None, 
compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2","vllm.mamba_mixer","vllm.short_conv","vllm.linear_attention","vllm.plamo2_mamba_mixer","vllm.gdn_attention","vllm.sparse_attn_indexer"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":[2,1],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"use_inductor_graph_partition":false,"pass_config":{},"max_capture_size":512,"local_cache_dir":null}
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:01 [fa_utils.py:57] Cannot use FA version 2 is not supported due to FA2 is only supported on devices with compute capability >= 8
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
(EngineCore_DP0 pid=3054747) INFO 10-28 09:44:01 [parallel_state.py:1208] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
(EngineCore_DP0 pid=3054747) WARNING 10-28 09:44:01 [topk_topp_sampler.py:66] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
(EngineCore_DP0 pid=3054747) INFO 10-28 09:44:06 [gpu_model_runner.py:2602] Starting to load model unsloth/Qwen3-VL-4B-Instruct-unsloth-bnb-4bit...
(EngineCore_DP0 pid=3054747) INFO 10-28 09:44:06 [gpu_model_runner.py:2634] Loading model from scratch...
(EngineCore_DP0 pid=3054747) INFO 10-28 09:44:06 [cuda.py:372] Using FlexAttention backend on V1 engine.
(EngineCore_DP0 pid=3054747) INFO 10-28 09:44:06 [bitsandbytes_loader.py:759] Loading weights with BitsAndBytes quantization. May take a while ...
(EngineCore_DP0 pid=3054747) INFO 10-28 09:44:07 [weight_utils.py:392] Using model weights format ['*.safetensors']
(EngineCore_DP0 pid=3054747) INFO 10-28 09:44:07 [weight_utils.py:450] No model.safetensors.index.json found in remote.
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 54.65it/s]
(EngineCore_DP0 pid=3054747)
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] EngineCore failed to start.
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] Traceback (most recent call last):
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 83, in __init__
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] self.model_executor = executor_class(vllm_config)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 54, in __init__
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] self._init_executor()
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/executor/uniproc_executor.py", line 55, in _init_executor
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] self.collective_rpc("load_model")
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/executor/uniproc_executor.py", line 83, in collective_rpc
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] return [run_method(self.driver_worker, method, args, kwargs)]
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/utils/__init__.py", line 3122, in run_method
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] return func(*args, **kwargs)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 213, in load_model
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] self.model_runner.load_model(eep_scale_up=eep_scale_up)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 2635, in load_model
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] self.model = model_loader.load_model(
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/base_loader.py", line 50, in load_model
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] self.load_weights(model, model_config)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/bitsandbytes_loader.py", line 767, in load_weights
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] loaded_weights = model.load_weights(qweight_iterator)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/qwen3_vl.py", line 1603, in load_weights
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] return loader.load_weights(weights, mapper=self.hf_to_vllm_mapper)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 294, in load_weights
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] autoloaded_weights = set(self._load_module("", self.module, weights))
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 252, in _load_module
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] yield from self._load_module(prefix,
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 225, in _load_module
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] loaded_params = module_load_weights(weights)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 341, in load_weights
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] return loader.load_weights(weights)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 294, in load_weights
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] autoloaded_weights = set(self._load_module("", self.module, weights))
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 252, in _load_module
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] yield from self._load_module(prefix,
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 225, in _load_module
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] loaded_params = module_load_weights(weights)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 440, in load_weights
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] weight_loader(param, loaded_weight)
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 1326, in weight_loader
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] assert param_data.shape == loaded_weight.shape
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) ERROR 10-28 09:44:08 [core.py:708] AssertionError
(EngineCore_DP0 pid=3054747) Process EngineCore_DP0:
(EngineCore_DP0 pid=3054747) Traceback (most recent call last):
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=3054747) self.run()
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=3054747) self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 712, in run_engine_core
(EngineCore_DP0 pid=3054747) raise e
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=3054747) engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=3054747) super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 83, in __init__
(EngineCore_DP0 pid=3054747) self.model_executor = executor_class(vllm_config)
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 54, in __init__
(EngineCore_DP0 pid=3054747) self._init_executor()
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/executor/uniproc_executor.py", line 55, in _init_executor
(EngineCore_DP0 pid=3054747) self.collective_rpc("load_model")
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/executor/uniproc_executor.py", line 83, in collective_rpc
(EngineCore_DP0 pid=3054747) return [run_method(self.driver_worker, method, args, kwargs)]
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/utils/__init__.py", line 3122, in run_method
(EngineCore_DP0 pid=3054747) return func(*args, **kwargs)
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 213, in load_model
(EngineCore_DP0 pid=3054747) self.model_runner.load_model(eep_scale_up=eep_scale_up)
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 2635, in load_model
(EngineCore_DP0 pid=3054747) self.model = model_loader.load_model(
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/base_loader.py", line 50, in load_model
(EngineCore_DP0 pid=3054747) self.load_weights(model, model_config)
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/bitsandbytes_loader.py", line 767, in load_weights
(EngineCore_DP0 pid=3054747) loaded_weights = model.load_weights(qweight_iterator)
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/qwen3_vl.py", line 1603, in load_weights
(EngineCore_DP0 pid=3054747) return loader.load_weights(weights, mapper=self.hf_to_vllm_mapper)
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 294, in load_weights
(EngineCore_DP0 pid=3054747) autoloaded_weights = set(self._load_module("", self.module, weights))
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 252, in _load_module
(EngineCore_DP0 pid=3054747) yield from self._load_module(prefix,
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 225, in _load_module
(EngineCore_DP0 pid=3054747) loaded_params = module_load_weights(weights)
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 341, in load_weights
(EngineCore_DP0 pid=3054747) return loader.load_weights(weights)
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 294, in load_weights
(EngineCore_DP0 pid=3054747) autoloaded_weights = set(self._load_module("", self.module, weights))
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 252, in _load_module
(EngineCore_DP0 pid=3054747) yield from self._load_module(prefix,
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 225, in _load_module
(EngineCore_DP0 pid=3054747) loaded_params = module_load_weights(weights)
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 440, in load_weights
(EngineCore_DP0 pid=3054747) weight_loader(param, loaded_weight)
(EngineCore_DP0 pid=3054747) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 1326, in weight_loader
(EngineCore_DP0 pid=3054747) assert param_data.shape == loaded_weight.shape
(EngineCore_DP0 pid=3054747) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3054747) AssertionError
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
(EngineCore_DP0 pid=3054747)
[rank0]:[W1028 09:44:08.065548524 ProcessGroupNCCL.cpp:1538] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
(APIServer pid=3054658) Traceback (most recent call last):
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/bin/vllm", line 7, in <module>
(APIServer pid=3054658) sys.exit(main())
(APIServer pid=3054658) ^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/cli/main.py", line 54, in main
(APIServer pid=3054658) args.dispatch_function(args)
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/cli/serve.py", line 57, in cmd
(APIServer pid=3054658) uvloop.run(run_server(args))
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
(APIServer pid=3054658) return __asyncio.run(
(APIServer pid=3054658) ^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=3054658) return runner.run(main)
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=3054658) return self._loop.run_until_complete(task)
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
(APIServer pid=3054658) return await main
(APIServer pid=3054658) ^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1884, in run_server
(APIServer pid=3054658) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1902, in run_server_worker
(APIServer pid=3054658) async with build_async_engine_client(
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=3054658) return await anext(self.gen)
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 180, in build_async_engine_client
(APIServer pid=3054658) async with build_async_engine_client_from_engine_args(
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=3054658) return await anext(self.gen)
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 225, in build_async_engine_client_from_engine_args
(APIServer pid=3054658) async_llm = AsyncLLM.from_vllm_config(
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/utils/__init__.py", line 1572, in inner
(APIServer pid=3054658) return fn(*args, **kwargs)
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 207, in from_vllm_config
(APIServer pid=3054658) return cls(
(APIServer pid=3054658) ^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 134, in __init__
(APIServer pid=3054658) self.engine_core = EngineCoreClient.make_async_mp_client(
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 102, in make_async_mp_client
(APIServer pid=3054658) return AsyncMPClient(*client_args)
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 769, in __init__
(APIServer pid=3054658) super().__init__(
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 448, in __init__
(APIServer pid=3054658) with launch_core_engines(vllm_config, executor_class,
(APIServer pid=3054658) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/contextlib.py", line 144, in __exit__
(APIServer pid=3054658) next(self.gen)
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 732, in launch_core_engines
(APIServer pid=3054658) wait_for_engine_startup(
(APIServer pid=3054658) File "/home/os-jenghuo.tseng/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 785, in wait_for_engine_startup
(APIServer pid=3054658) raise RuntimeError("Engine core initialization failed. "
(APIServer pid=3054658) RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}
My vLLM environment (pip list):
Package Version
--------------------------------- -------------
accelerate 1.10.1
aiohappyeyeballs 2.6.1
aiohttp 3.13.0
aiosignal 1.4.0
annotated-types 0.7.0
anyio 4.11.0
astor 0.8.1
attrs 25.4.0
bitsandbytes 0.48.1
blake3 1.0.7
cachetools 6.2.1
cbor2 5.7.0
certifi 2025.10.5
cffi 2.0.0
charset-normalizer 3.4.3
click 8.2.1
cloudpickle 3.1.1
compressed-tensors 0.11.0
cupy-cuda12x 13.6.0
cut-cross-entropy 25.1.1
datasets 4.2.0
depyf 0.19.0
dill 0.4.0
diskcache 5.6.3
distro 1.9.0
dnspython 2.8.0
einops 0.8.1
email-validator 2.3.0
fastapi 0.119.0
fastapi-cli 0.0.13
fastapi-cloud-cli 0.3.1
fastrlock 0.8.3
filelock 3.20.0
frozendict 2.4.6
frozenlist 1.8.0
fsspec 2025.9.0
gguf 0.17.1
h11 0.16.0
hf-xet 1.1.10
httpcore 1.0.9
httptools 0.7.1
httpx 0.28.1
huggingface-hub 0.35.3
idna 3.11
interegular 0.3.3
Jinja2 3.1.6
jiter 0.11.0
jsonschema 4.25.1
jsonschema-specifications 2025.9.1
lark 1.2.2
llguidance 0.7.30
llvmlite 0.44.0
lm-format-enforcer 0.11.3
markdown-it-py 4.0.0
MarkupSafe 3.0.3
mdurl 0.1.2
mistral_common 1.8.5
mpmath 1.3.0
msgpack 1.1.2
msgspec 0.19.0
multidict 6.7.0
multiprocess 0.70.16
networkx 3.5
ninja 1.13.0
numba 0.61.2
numpy 2.2.6
nvidia-cublas-cu12 12.8.4.1
nvidia-cuda-cupti-cu12 12.8.90
nvidia-cuda-nvrtc-cu12 12.8.93
nvidia-cuda-runtime-cu12 12.8.90
nvidia-cudnn-cu12 9.10.2.21
nvidia-cufft-cu12 11.3.3.83
nvidia-cufile-cu12 1.13.1.3
nvidia-curand-cu12 10.3.9.90
nvidia-cusolver-cu12 11.7.3.90
nvidia-cusparse-cu12 12.5.8.93
nvidia-cusparselt-cu12 0.7.1
nvidia-nccl-cu12 2.27.3
nvidia-nvjitlink-cu12 12.8.93
nvidia-nvtx-cu12 12.8.90
openai 2.3.0
openai-harmony 0.0.4
opencv-python-headless 4.12.0.88
outlines_core 0.2.11
packaging 25.0
pandas 2.3.3
partial-json-parser 0.2.1.1.post6
peft 0.17.1
pillow 11.3.0
pip 25.2
prometheus_client 0.23.1
prometheus-fastapi-instrumentator 7.1.0
propcache 0.4.1
protobuf 6.32.1
psutil 7.1.0
py-cpuinfo 9.0.0
pyarrow 21.0.0
pybase64 1.4.2
pycountry 24.6.1
pycparser 2.23
pydantic 2.12.1
pydantic_core 2.41.3
pydantic-extra-types 2.10.6
Pygments 2.19.2
python-dateutil 2.9.0.post0
python-dotenv 1.1.1
python-json-logger 4.0.0
python-multipart 0.0.20
pytz 2025.2
PyYAML 6.0.3
pyzmq 27.1.0
ray 2.50.0
referencing 0.37.0
regex 2025.9.18
requests 2.32.5
rich 14.2.0
rich-toolkit 0.15.1
rignore 0.7.0
rpds-py 0.27.1
safetensors 0.6.2
scipy 1.16.2
sentencepiece 0.2.1
sentry-sdk 2.41.0
setproctitle 1.3.7
setuptools 79.0.1
shellingham 1.5.4
six 1.17.0
sniffio 1.3.1
soundfile 0.13.1
soxr 1.0.0
starlette 0.48.0
sympy 1.14.0
tiktoken 0.12.0
tokenizers 0.22.1
torch 2.8.0
torchaudio 2.8.0
torchvision 0.23.0
tqdm 4.67.1
transformers 4.57.0
triton 3.4.0
trl 0.22.2
typer 0.19.2
typer-slim 0.19.2
typing_extensions 4.15.0
typing-inspection 0.4.2
tzdata 2025.2
unsloth 2025.10.7
unsloth_zoo 2025.10.8
urllib3 2.5.0
uvicorn 0.37.0
uvloop 0.21.0
vllm 0.11.0
watchfiles 1.1.0
websockets 15.0.1
wheel 0.45.1
xformers 0.0.32.post1
xgrammar 0.1.25
xxhash 3.6.0
yarl 1.22.0
Same problem here when using vLLM.