Your current environment
python collect_env.py
Collecting environment information...
uv is set
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.28.3
Libc version : glibc-2.39
==============================
PyTorch Info
PyTorch version : 2.9.0+cu130
Is debug build : False
CUDA used to build PyTorch : 13.0
ROCM used to build PyTorch : N/A
==============================
Python Environment
Python version : 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.14.0-37-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
GPU 1: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
Nvidia driver version : 580.95.05
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 9960X 24-Cores
CPU family: 26
Model: 8
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 36%
CPU max MHz: 5489.0000
CPU min MHz: 400.0000
BogoMIPS: 8387.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d debug_swap amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 24 MiB (24 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; IBPB on VMEXIT only
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsa: Not affected
Vulnerability Tsx async abort: Not affected
Vulnerability Vmscape: Mitigation; IBPB on VMEXIT
==============================
Versions of relevant libraries
[pip3] flashinfer-python==0.5.3
[pip3] numpy==2.2.6
[pip3] nvidia-cublas==13.0.0.19
[pip3] nvidia-cublas-cu12==12.9.1.4
[pip3] nvidia-cuda-cupti==13.0.48
[pip3] nvidia-cuda-nvrtc==13.0.48
[pip3] nvidia-cuda-runtime==13.0.48
[pip3] nvidia-cuda-runtime-cu12==12.9.79
[pip3] nvidia-cudnn-cu12==9.17.0.29
[pip3] nvidia-cudnn-cu13==9.13.0.50
[pip3] nvidia-cudnn-frontend==1.16.0
[pip3] nvidia-cufft==12.0.0.15
[pip3] nvidia-cufile==1.15.0.42
[pip3] nvidia-curand==10.4.0.35
[pip3] nvidia-cusolver==12.0.3.29
[pip3] nvidia-cusparse==12.6.2.49
[pip3] nvidia-cusparselt-cu13==0.8.0
[pip3] nvidia-cutlass-dsl==4.3.3
[pip3] nvidia-ml-py==13.590.44
[pip3] nvidia-nccl-cu13==2.27.7
[pip3] nvidia-nvjitlink==13.0.39
[pip3] nvidia-nvshmem-cu13==3.3.24
[pip3] nvidia-nvtx==13.0.39
[pip3] pyzmq==27.1.0
[pip3] torch==2.9.0+cu130
[pip3] torchaudio==2.9.0+cu130
[pip3] torchvision==0.24.0+cu130
[pip3] transformers==4.57.3
[pip3] triton==3.5.0
[conda] Could not collect
==============================
vLLM Info
ROCM Version : Could not collect
vLLM Version : 0.12.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
        GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    0-47            0               N/A
GPU1    NODE     X      0-47            0               N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
==============================
Environment Variables
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
🐛 Describe the bug
Hello,
When I run the following command using vLLM with tensor parallelism across two NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition GPUs:
export NCCL_P2P_DISABLE=1
vllm serve Qwen/Qwen3-4B --gpu-memory-utilization 0.8
the server repeatedly logs:

No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation, weight/kv cache quantization).

and the engine eventually dies with "TimeoutError: RPC call to execute_model timed out."

If I do not set NCCL_P2P_DISABLE=1, the server instead hangs indefinitely.
Could you please help me resolve this issue?
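To help narrow this down, below is a minimal, vLLM-independent sketch that checks whether the hang is in the NCCL/driver peer-to-peer path itself: it reports whether the driver exposes P2P access between the two GPUs and then runs a bare NCCL all-reduce. (This is an illustrative repro, not taken from vLLM; the port and tensor size are arbitrary.)

```python
# Minimal two-GPU NCCL check, independent of vLLM (illustrative sketch).
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int) -> None:
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"  # arbitrary free port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    if rank == 0:
        # Whether the CUDA driver reports peer-to-peer access between the GPUs.
        print("P2P 0->1:", torch.cuda.can_device_access_peer(0, 1))
        print("P2P 1->0:", torch.cuda.can_device_access_peer(1, 0))
    t = torch.ones(1 << 20, device=f"cuda:{rank}")
    dist.all_reduce(t)  # hangs here if cross-GPU communication is broken
    torch.cuda.synchronize()
    print(f"rank {rank}: all_reduce ok, value = {t[0].item()}")
    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)
```

If this also hangs unless NCCL_P2P_DISABLE=1 is set, the problem is presumably below vLLM (NCCL or the driver); running it with NCCL_DEBUG=INFO should show which transport NCCL selects.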
Full error log:
(EngineCore_DP0 pid=26910) INFO 12-15 13:23:28 [shm_broadcast.py:501] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation, weight/kv cache quantization).
(EngineCore_DP0 pid=26910) INFO 12-15 13:24:28 [shm_broadcast.py:501] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation, weight/kv cache quantization).
(EngineCore_DP0 pid=26910) INFO 12-15 13:25:28 [shm_broadcast.py:501] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation, weight/kv cache quantization).
(EngineCore_DP0 pid=26910) INFO 12-15 13:26:28 [shm_broadcast.py:501] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation, weight/kv cache quantization).
(APIServer pid=26805) INFO: 127.0.0.1:58434 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [dump_input.py:72] Dumping input data for V1 LLM engine (v0.12.0) with config: model='Qwen/Qwen3-4B', speculative_config=None, tokenizer='Qwen/Qwen3-4B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=auto, tensor_parallel_size=2, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01), seed=0, served_model_name=Qwen/Qwen3-4B, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': <CompilationMode.VLLM_COMPILE: 3>, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.FULL_AND_PIECEWISE: (2, 1)>, 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>}, 'local_cache_dir': None},
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [dump_input.py:79] Dumping scheduler output for model execution: SchedulerOutput(scheduled_new_reqs=[NewRequestData(req_id=chatcmpl-b72ee5261b518af3,prompt_token_ids_len=33,mm_features=[],sampling_params=SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, seed=None, stop=[], stop_token_ids=[151643], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=500, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, structured_outputs=None, extra_args=None),block_ids=([1, 2, 3],),num_computed_tokens=0,lora_request=None,prompt_embeds_shape=None)], scheduled_cached_reqs=CachedRequestData(req_ids=[], resumed_req_ids=[], new_token_ids=[], all_token_ids={}, new_block_ids=[], num_computed_tokens=[], num_output_tokens=[]), num_scheduled_tokens={chatcmpl-b72ee5261b518af3: 33}, total_num_scheduled_tokens=33, scheduled_spec_decode_tokens={}, scheduled_encoder_inputs={}, num_common_prefix_blocks=[3], finished_req_ids=[], free_encoder_mm_hashes=[], preempted_req_ids=[], pending_structured_output_tokens=false, kv_connector_metadata=null, ec_connector_metadata=null)
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [dump_input.py:81] Dumping scheduler stats: SchedulerStats(num_running_reqs=1, num_waiting_reqs=0, step_counter=0, current_wave=0, kv_cache_usage=4.959497437595495e-05, prefix_cache_stats=PrefixCacheStats(reset=False, requests=1, queries=33, hits=0, preempted_requests=0, preempted_queries=0, preempted_hits=0), connector_prefix_cache_stats=None, kv_cache_eviction_events=[], spec_decoding_stats=None, kv_connector_stats=None, waiting_lora_adapters={}, running_lora_adapters={})
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] EngineCore encountered a fatal error.
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] Traceback (most recent call last):
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 338, in get_response
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] status, result = mq.dequeue(
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] ^^^^^^^^^^^
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/distributed/device_communicators/shm_broadcast.py", line 571, in dequeue
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] with self.acquire_read(timeout, cancel, indefinite) as buf:
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/usr/lib/python3.12/contextlib.py", line 137, in __enter__
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] return next(self.gen)
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/distributed/device_communicators/shm_broadcast.py", line 495, in acquire_read
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] raise TimeoutError
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] TimeoutError
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845]
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] The above exception was the direct cause of the following exception:
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845]
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] Traceback (most recent call last):
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 836, in run_engine_core
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] engine_core.run_busy_loop()
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 863, in run_busy_loop
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] self._process_engine_step()
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 892, in _process_engine_step
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] outputs, model_executed = self.step_fn()
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 346, in step
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] model_output = future.result()
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 80, in result
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] return super().result()
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] ^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/usr/lib/python3.12/concurrent/futures/_base.py", line 449, in result
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] return self.__get_result()
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] ^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] raise self._exception
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 84, in wait_for_response
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] response = self.aggregate(get_response())
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 342, in get_response
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] raise TimeoutError(f"RPC call to {method} timed out.") from e
(EngineCore_DP0 pid=26910) ERROR 12-15 13:27:28 [core.py:845] TimeoutError: RPC call to execute_model timed out.
(Worker_TP0 pid=27012) INFO 12-15 13:27:28 [multiproc_executor.py:709] Parent process exited, terminating worker
(Worker_TP1 pid=27013) INFO 12-15 13:27:28 [multiproc_executor.py:709] Parent process exited, terminating worker
(APIServer pid=26805) ERROR 12-15 13:27:28 [async_llm.py:546] AsyncLLM output_handler failed.
(APIServer pid=26805) ERROR 12-15 13:27:28 [async_llm.py:546] Traceback (most recent call last):
(APIServer pid=26805) ERROR 12-15 13:27:28 [async_llm.py:546] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 498, in output_handler
(APIServer pid=26805) ERROR 12-15 13:27:28 [async_llm.py:546] outputs = await engine_core.get_output_async()
(APIServer pid=26805) ERROR 12-15 13:27:28 [async_llm.py:546] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=26805) ERROR 12-15 13:27:28 [async_llm.py:546] File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 885, in get_output_async
(APIServer pid=26805) ERROR 12-15 13:27:28 [async_llm.py:546] raise self._format_exception(outputs) from None
(APIServer pid=26805) ERROR 12-15 13:27:28 [async_llm.py:546] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
(EngineCore_DP0 pid=26910) Process EngineCore_DP0:
(EngineCore_DP0 pid=26910) Traceback (most recent call last):
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 338, in get_response
(EngineCore_DP0 pid=26910) status, result = mq.dequeue(
(EngineCore_DP0 pid=26910) ^^^^^^^^^^^
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/distributed/device_communicators/shm_broadcast.py", line 571, in dequeue
(EngineCore_DP0 pid=26910) with self.acquire_read(timeout, cancel, indefinite) as buf:
(EngineCore_DP0 pid=26910) File "/usr/lib/python3.12/contextlib.py", line 137, in __enter__
(EngineCore_DP0 pid=26910) return next(self.gen)
(EngineCore_DP0 pid=26910) ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/distributed/device_communicators/shm_broadcast.py", line 495, in acquire_read
(EngineCore_DP0 pid=26910) raise TimeoutError
(EngineCore_DP0 pid=26910) TimeoutError
(EngineCore_DP0 pid=26910)
(EngineCore_DP0 pid=26910) The above exception was the direct cause of the following exception:
(EngineCore_DP0 pid=26910)
(EngineCore_DP0 pid=26910) Traceback (most recent call last):
(EngineCore_DP0 pid=26910) File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=26910) self.run()
(EngineCore_DP0 pid=26910) File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=26910) self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 847, in run_engine_core
(EngineCore_DP0 pid=26910) raise e
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 836, in run_engine_core
(EngineCore_DP0 pid=26910) engine_core.run_busy_loop()
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 863, in run_busy_loop
(EngineCore_DP0 pid=26910) self._process_engine_step()
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 892, in _process_engine_step
(EngineCore_DP0 pid=26910) outputs, model_executed = self.step_fn()
(EngineCore_DP0 pid=26910) ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 346, in step
(EngineCore_DP0 pid=26910) model_output = future.result()
(EngineCore_DP0 pid=26910) ^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 80, in result
(EngineCore_DP0 pid=26910) return super().result()
(EngineCore_DP0 pid=26910) ^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) File "/usr/lib/python3.12/concurrent/futures/_base.py", line 449, in result
(EngineCore_DP0 pid=26910) return self.__get_result()
(EngineCore_DP0 pid=26910) ^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
(EngineCore_DP0 pid=26910) raise self._exception
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 84, in wait_for_response
(EngineCore_DP0 pid=26910) response = self.aggregate(get_response())
(EngineCore_DP0 pid=26910) ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=26910) File "/home/admin2/.vllm/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 342, in get_response
(EngineCore_DP0 pid=26910) raise TimeoutError(f"RPC call to {method} timed out.") from e
(EngineCore_DP0 pid=26910) TimeoutError: RPC call to execute_model timed out.
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.