
other(test): improve worker test coverage #517

Open
rebel-jinhwan wants to merge 13 commits into dev from jinhwan/pytest-worker-improve

Conversation

@rebel-jinhwan rebel-jinhwan (Contributor) commented Apr 9, 2026

🚀 Summary of Changes

What does this PR do? What feature, fix, or improvement does it bring?


📌 Related Issues / Tickets


✅ Type of Change

  • 🚀 Release (release)
  • ✨ Feature (feature)
  • 🧠 Model support (model)
  • 🧬 Core engine changes (core)
  • 🛠 Bug fix (fix)
  • ⚙️ Performance improvement (perf)
  • 🔁 Refactor or code cleanup (refactor)
  • 📄 Documentation (docs)
  • ❓ Other (other): please describe

🧪 How to Test

  1. Run ...
  2. Verify output: ...
  3. Edge case tested: ...

📸 Screenshots / Logs (if applicable)


📋 Checklist

  • PR title follows Conventional Commits format
  • This PR is linked to an existing issue
  • The test method is described, and the expected result is clearly stated
  • Relevant documentation has been updated (if applicable)

💬 Notes

Test Coverage Improvement Summary

Overall Metrics

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Total Coverage | 22% | 30% | +8%p |
| Missing Lines | 6,176 | 5,573 | -603 |

Major File Improvements

This update substantially raises coverage for the core v1 worker modules, taking them from nearly untested to near-complete coverage.

| File Path | Before | After | Increase |
| --- | --- | --- | --- |
| v1/worker/metrics.py | 21% | 100% | +79%p |
| v1/worker/rbln_worker.py | 0% | 98% | +98%p |
| v1/worker/utils.py | 0% | 99% | +99%p |
| v1/worker/rbln_model_runner.py | 10% | 17% | +7%p |
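In the tables above, "+8%p" denotes percentage points (absolute difference between the two coverage percentages), not a relative increase. The figures are internally consistent, as a quick check with the numbers from this summary shows:

```python
# Totals taken from the before/after coverage summaries in this PR.
cover_before, cover_after = 22, 30    # total coverage, in percent
miss_before, miss_after = 6176, 5573  # missing statement counts

delta_points = cover_after - cover_before    # percentage points, not percent
lines_recovered = miss_before - miss_after   # statements newly covered

print(f"+{delta_points}%p, -{lines_recovered} missing lines")
```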

Test coverage (Before)

Name                                                                               Stmts   Miss Branch BrPart  Cover   Missing
------------------------------------------------------------------------------------------------------------------------------
vllm_rbln/__init__.py                                                                 38      5      4      2    83%   25-39, 46->exit
vllm_rbln/_version.py                                                                 11     11      0      0     0%   3-24
vllm_rbln/forward_context.py                                                          90     44     22      4    45%   52-60, 66-87, 111-121, 148, 153->163, 178-204
vllm_rbln/logger.py                                                                   92     43     28      5    48%   71, 77, 83, 100, 107, 114, 121, 130->133, 134-147, 152, 154->exit, 186-225, 239-248
vllm_rbln/lora/inputs.py                                                              10      0      0      0   100%
vllm_rbln/lora/layer.py                                                               33     25      8      0    20%   25-39, 46-79
vllm_rbln/lora/mask.py                                                                10      0      0      0   100%
vllm_rbln/model_executor/__init__.py                                                   0      0      0      0   100%
vllm_rbln/model_executor/layers/__init__.py                                            0      0      0      0   100%
vllm_rbln/model_executor/layers/attention/__init__.py                                  2      0      0      0   100%
vllm_rbln/model_executor/layers/attention/attention.py                                44     25      4      0    40%   39-51, 60-72, 88-99, 114-129
vllm_rbln/model_executor/layers/fused_moe/__init__.py                                  0      0      0      0   100%
vllm_rbln/model_executor/layers/fused_moe/layer.py                                   203    175     42      2    12%   34-36, 77-85, 102-159, 169-241, 245-256, 263-307, 318-356, 367-412, 418-459, 469-494, 503-508
vllm_rbln/model_executor/layers/fused_moe/shared_fused_moe.py                         26     15      8      0    32%   35-41, 47-81
vllm_rbln/model_executor/layers/logits_processor.py                                   18     10      6      1    38%   34-35, 40-49, 53-54
vllm_rbln/model_executor/layers/quantization/__init__.py                               0      0      0      0   100%
vllm_rbln/model_executor/layers/quantization/fp8.py                                  311    270     86      0    10%   73-82, 96, 114-126, 148-181, 198-271, 275-325, 335-367, 435-530, 552, 557-562, 573-705, 708-807, 817, 823, 833-894
vllm_rbln/model_executor/layers/quantization/kernels/mixed_precision/__init__.py      21     13      8      0    28%   40-61
vllm_rbln/model_executor/layers/quantization/kernels/mixed_precision/unpacked.py      51     38     18      0    19%   36, 40-48, 51-77, 82-95
vllm_rbln/model_executor/layers/quantization/mxfp4.py                                147    123     18      0    15%   46-79, 85-88, 140-224, 247, 252-261, 272-357, 360-375, 383, 389, 398-441
vllm_rbln/model_executor/layers/rotary_embedding/base.py                              45     37      8      0    15%   32-51, 72-116
vllm_rbln/model_executor/layers/rotary_embedding/deepseek_scaling_rope.py             30     25      8      0    13%   31-63
vllm_rbln/model_executor/layers/vocab_parallel_embedding.py                           57     47     18      0    13%   47-112, 124-145, 151-156
vllm_rbln/model_executor/model_loader/__init__.py                                      0      0      0      0   100%
vllm_rbln/model_executor/model_loader/rbln_model_loader.py                             5      5      0      0     0%   14-23
vllm_rbln/model_executor/model_loader/weight_loader.py                               387    363    220      0     4%   49-119, 123-185, 191-337, 343-447, 453-544, 548-625, 631-719
vllm_rbln/models/__init__.py                                                           0      0      0      0   100%
vllm_rbln/models/deepseek_v2.py                                                       50     43     20      0    10%   21-46, 54-95
vllm_rbln/models/gpt_oss.py                                                          117    104     46      0     8%   41-224, 227-233
vllm_rbln/models/minimax_m2.py                                                        10      5      2      0    42%   24-31
vllm_rbln/models/qwen2.py                                                             16      8      4      0    40%   29-44
vllm_rbln/models/qwen2_moe.py                                                         24     17     12      0    19%   22-36, 48-59
vllm_rbln/models/qwen3.py                                                             16      8      4      0    40%   29-44
vllm_rbln/models/qwen3_moe.py                                                         10      5      2      0    42%   23-33
vllm_rbln/models/utils.py                                                             66     59     34      0     7%   29-126
vllm_rbln/platform.py                                                                183     85     76     13    45%   49, 77-78, 82, 89-90, 99, 105-114, 120, 134, 146-147, 166-181, 190-193, 196->200, 206-220, 224-257, 275->exit, 278->287, 285, 288-291, 300-310, 315-320, 324-349, 357, 361
vllm_rbln/rbln_envs.py                                                                48     33     24      1    25%   58-64, 68-88, 92-109
vllm_rbln/triton_kernels/attention.py                                                143    125      4      0    12%   43-211, 234-402, 406-409, 424-468, 483-528, 543, 558
vllm_rbln/triton_kernels/causal_attention.py                                         137    119      4      0    13%   42-194, 216-368, 372-375, 389-431, 445-488, 502, 516
vllm_rbln/triton_kernels/flash_attention.py                                          173    155     20      0     9%   45-183, 208-345, 349-352, 367-414, 429-478, 493, 508
vllm_rbln/triton_kernels/flash_causal_attention.py                                   171    153     20      0     9%   44-262, 286-506, 510-513, 527-574, 588, 602-649, 663
vllm_rbln/triton_kernels/sliding_window_attention.py                                 149    131      4      0    12%   44-213, 237-421, 425-428, 443-488, 503, 518-563, 578
vllm_rbln/utils/__init__.py                                                           27     17     10      1    30%   28, 60-95
vllm_rbln/utils/optimum/__init__.py                                                    0      0      0      0   100%
vllm_rbln/utils/optimum/cache_blocks.py                                               54     44     22      0    13%   31-42, 46-52, 58-98, 110-123, 128-142
vllm_rbln/utils/optimum/common.py                                                      6      6      0      0     0%   15-22
vllm_rbln/utils/optimum/configuration.py                                              57     45     22      0    15%   44-53, 59-60, 74-105, 114-141, 154-191
vllm_rbln/utils/optimum/multimodal/__init__.py                                        22     10      4      0    46%   30-33, 57-66
vllm_rbln/utils/optimum/multimodal/blip2.py                                            5      3      0      0    40%   22-26
vllm_rbln/utils/optimum/multimodal/common.py                                           7      6      2      0    11%   20-30
vllm_rbln/utils/optimum/multimodal/gemma3.py                                           4      2      0      0    50%   22-25
vllm_rbln/utils/optimum/multimodal/idefics3.py                                         3      1      0      0    67%   22
vllm_rbln/utils/optimum/multimodal/llava.py                                            6      3      0      0    50%   22-28, 34
vllm_rbln/utils/optimum/multimodal/paligemma.py                                        6      4      0      0    33%   22-27
vllm_rbln/utils/optimum/multimodal/qwen.py                                             8      6      2      0    20%   25-37
vllm_rbln/utils/optimum/rbln_params.py                                                71     58     24      0    14%   41-43, 49-54, 58-70, 77-133
vllm_rbln/utils/optimum/registry.py                                                   64     47     18      0    21%   112, 116, 120, 124, 130-131, 137-143, 160-225
vllm_rbln/v1/__init__.py                                                               0      0      0      0   100%
vllm_rbln/v1/attention/__init__.py                                                     0      0      0      0   100%
vllm_rbln/v1/attention/backends/__init__.py                                            0      0      0      0   100%
vllm_rbln/v1/attention/backends/flash_attention.py                                   589    486    136      0    14%   59, 75, 93, 109, 126, 141, 158, 173, 214-242, 258, 276-302, 318, 357-497, 512, 529-658, 673, 714-785, 801, 820-900, 917, 927, 934, 941, 947, 951, 955, 976, 984, 991, 1038-1068, 1073, 1084-1235, 1238, 1256-1318, 1362-1717
vllm_rbln/v1/core/rbln_kv_cache_manager.py                                           259      3     88      6    97%   173->171, 408->420, 511->exit, 611, 638, 645
vllm_rbln/v1/core/rbln_scheduler.py                                                  344     90    168     40    69%   154, 201-202, 210, 224, 238, 276-298, 306, 310, 328->336, 331, 355-361, 363-366, 375, 412->386, 422-427, 446, 458-465, 478-480, 507-509, 542-544, 552-553, 562, 572, 586-600, 603-610, 628, 655-657, 687->697, 701-717, 720->724, 726-729, 732, 741->744, 745-751, 754-757, 790-791, 795, 814->821, 822-824, 898-901
vllm_rbln/v1/kv_cache.py                                                              33     12      2      0    60%   32, 35, 56, 64-68, 83, 86, 89, 92
vllm_rbln/v1/sample/__init__.py                                                        3      0      0      0   100%
vllm_rbln/v1/sample/ops/__init__.py                                                    1      0      0      0   100%
vllm_rbln/v1/sample/ops/penalties.py                                                  12      6      0      0    50%   24-34, 52-60
vllm_rbln/v1/sample/rbln_rejection_sampler.py                                        183     12     54     10    90%   136, 210->215, 378->375, 409, 419, 458, 470, 475-476, 533-537, 563
vllm_rbln/v1/sample/rbln_sampler.py                                                  117     81     38      0    23%   22, 51-63, 83-107, 114, 121, 134-136, 148-181, 187, 197-203, 213-219, 230-240, 246-247, 256-312, 323-325
vllm_rbln/v1/spec_decode/__init__.py                                                   0      0      0      0   100%
vllm_rbln/v1/spec_decode/eagle.py                                                    196     50     48     11    73%   43-47, 115-124, 148, 182, 189, 210->216, 225-227, 240, 242, 245, 253, 287-290, 437-475, 478-504
vllm_rbln/v1/spec_decode/medusa.py                                                    48      0      4      0   100%
vllm_rbln/v1/spec_decode/utils.py                                                     19      0      0      0   100%
vllm_rbln/v1/worker/__init__.py                                                        0      0      0      0   100%
vllm_rbln/v1/worker/bucketing/__init__.py                                             15      9      8      0    26%   43-65
vllm_rbln/v1/worker/bucketing/bucketing_manager.py                                    34     20     12      0    30%   25-32, 37, 42, 47, 52, 56-59, 73-84
vllm_rbln/v1/worker/bucketing/exponential_bucketing_manager.py                        18     14      6      0    17%   28-37, 41-53
vllm_rbln/v1/worker/bucketing/linear_bucketing_manager.py                             16     12      4      0    20%   28-37, 41-47
vllm_rbln/v1/worker/bucketing/manual_bucketing_manager.py                             11      7      2      0    31%   26-28, 34-42
vllm_rbln/v1/worker/metrics.py                                                       120     88     36      0    21%   42-49, 53-58, 62-67, 72-77, 82-96, 101-106, 111-116, 121-126, 130, 133-151, 158, 170, 176, 183, 190-194, 197-201, 213-226, 241-244, 247-264
vllm_rbln/v1/worker/optimum_input_batch.py                                            47     47     22      0     0%   15-118
vllm_rbln/v1/worker/optimum_model_runner.py                                          635    635    214      0     0%   14-1516
vllm_rbln/v1/worker/optimum_worker.py                                                120    120     24      0     0%   15-267
vllm_rbln/v1/worker/rbln_model_runner.py                                            1719   1494    606      8    10%   33, 155-163, 178-189, 222-550, 556-559, 562-573, 578, 587-618, 635-657, 661, 665, 678-913, 926-951, 964-985, 998-1005, 1019-1121, 1133-1143, 1167-1425, 1438-1481, 1500-1552, 1563, 1566-1578, 1581-1603, 1606-1613, 1623-1651, 1660-1701, 1710-1733, 1747-1805, 1822-1864, 1870, 1874-1996, 2008-2027, 2040-2064, 2073-2085, 2090-2120, 2127-2300, 2308-2369, 2375-2404, 2410-2450, 2476-2578, 2612-2660, 2666-2675, 2684-3036, 3047, 3053-3058, 3090-3092, 3110, 3116, 3124-3130, 3162, 3180-3181, 3193, 3201-3209, 3214-3227, 3231-3239, 3253-3369, 3372-3517, 3530-3541, 3544-3556, 3567-3579, 3593-3594, 3605-3699, 3709-3728, 3734-3801, 3809-3832, 3843, 3867-3918, 3924-3941, 3955-3968, 3998-4018, 4021, 4024-4027, 4043-4072, 4092-4176, 4194-4235, 4244-4263, 4272-4335, 4341-4359, 4371-4391, 4402-4406, 4409, 4415, 4431-4448
vllm_rbln/v1/worker/rbln_worker.py                                                   264    264     72      0     0%   16-582
vllm_rbln/v1/worker/utils.py                                                         150    150     50      0     0%   16-419
------------------------------------------------------------------------------------------------------------------------------
TOTAL                                                                               8237   6176   2480    104    22%
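The Missing column in the report above uses coverage.py's compact notation: `25-39` is an uncovered statement range, a bare number is a single uncovered line, and entries like `46->exit` or `153->163` are untaken branch arcs. An illustrative parser (not part of this PR) makes the format explicit:

```python
def parse_missing(spec: str):
    """Split a coverage.py 'Missing' entry into line ranges and branch arcs.

    Returns (lines, arcs) where lines is a list of (start, end) tuples and
    arcs is a list of (source_line, destination) tuples; destination is an
    int or the string "exit".
    """
    lines, arcs = [], []
    for part in spec.split(", "):
        if "->" in part:  # untaken branch, e.g. "46->exit" or "153->163"
            src, dst = part.split("->")
            arcs.append((int(src), dst if dst == "exit" else int(dst)))
        elif "-" in part:  # uncovered range, e.g. "25-39"
            lo, hi = part.split("-")
            lines.append((int(lo), int(hi)))
        else:  # single uncovered line
            lines.append((int(part), int(part)))
    return lines, arcs
```

For example, `parse_missing("25-39, 46->exit")` yields one uncovered range and one untaken branch arc.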

Test Coverage (After)

coverage: platform linux, python 3.12.9-final-0

Name                                                                               Stmts   Miss Branch BrPart  Cover   Missing
------------------------------------------------------------------------------------------------------------------------------
vllm_rbln/__init__.py                                                                 38      5      4      2    83%   25-39, 46->exit
vllm_rbln/_version.py                                                                 11     11      0      0     0%   3-24
vllm_rbln/forward_context.py                                                          90     44     22      4    45%   52-60, 66-87, 111-121, 148, 153->163, 178-204
vllm_rbln/logger.py                                                                   92     43     28      5    48%   71, 77, 83, 100, 107, 114, 121, 130->133, 134-147, 152, 154->exit, 186-225, 239-248
vllm_rbln/lora/inputs.py                                                              10      0      0      0   100%
vllm_rbln/lora/layer.py                                                               33     25      8      0    20%   25-39, 46-79
vllm_rbln/lora/mask.py                                                                10      0      0      0   100%
vllm_rbln/model_executor/__init__.py                                                   0      0      0      0   100%
vllm_rbln/model_executor/layers/__init__.py                                            0      0      0      0   100%
vllm_rbln/model_executor/layers/attention/__init__.py                                  2      0      0      0   100%
vllm_rbln/model_executor/layers/attention/attention.py                                44     25      4      0    40%   39-51, 60-72, 88-99, 114-129
vllm_rbln/model_executor/layers/fused_moe/__init__.py                                  0      0      0      0   100%
vllm_rbln/model_executor/layers/fused_moe/layer.py                                   203    175     42      2    12%   34-36, 77-85, 102-159, 169-241, 245-256, 263-307, 318-356, 367-412, 418-459, 469-494, 503-508
vllm_rbln/model_executor/layers/fused_moe/shared_fused_moe.py                         26     15      8      0    32%   35-41, 47-81
vllm_rbln/model_executor/layers/logits_processor.py                                   18     10      6      1    38%   34-35, 40-49, 53-54
vllm_rbln/model_executor/layers/quantization/__init__.py                               0      0      0      0   100%
vllm_rbln/model_executor/layers/quantization/fp8.py                                  311    270     86      0    10%   73-82, 96, 114-126, 148-181, 198-271, 275-325, 335-367, 435-530, 552, 557-562, 573-705, 708-807, 817, 823, 833-894
vllm_rbln/model_executor/layers/quantization/kernels/mixed_precision/__init__.py      21     13      8      0    28%   40-61
vllm_rbln/model_executor/layers/quantization/kernels/mixed_precision/unpacked.py      51     38     18      0    19%   36, 40-48, 51-77, 82-95
vllm_rbln/model_executor/layers/quantization/mxfp4.py                                147    123     18      0    15%   46-79, 85-88, 140-224, 247, 252-261, 272-357, 360-375, 383, 389, 398-441
vllm_rbln/model_executor/layers/rotary_embedding/base.py                              45     37      8      0    15%   32-51, 72-116
vllm_rbln/model_executor/layers/rotary_embedding/deepseek_scaling_rope.py             30     25      8      0    13%   31-63
vllm_rbln/model_executor/layers/vocab_parallel_embedding.py                           57     47     18      0    13%   47-112, 124-145, 151-156
vllm_rbln/model_executor/model_loader/__init__.py                                      0      0      0      0   100%
vllm_rbln/model_executor/model_loader/rbln_model_loader.py                             5      5      0      0     0%   14-23
vllm_rbln/model_executor/model_loader/weight_loader.py                               387    363    220      0     4%   49-119, 123-185, 191-337, 343-447, 453-544, 548-625, 631-719
vllm_rbln/models/__init__.py                                                           0      0      0      0   100%
vllm_rbln/models/deepseek_v2.py                                                       50     43     20      0    10%   21-46, 54-95
vllm_rbln/models/gpt_oss.py                                                          117    104     46      0     8%   41-224, 227-233
vllm_rbln/models/minimax_m2.py                                                        10      5      2      0    42%   24-31
vllm_rbln/models/qwen2.py                                                             16      8      4      0    40%   29-44
vllm_rbln/models/qwen2_moe.py                                                         24     17     12      0    19%   22-36, 48-59
vllm_rbln/models/qwen3.py                                                             16      8      4      0    40%   29-44
vllm_rbln/models/qwen3_moe.py                                                         10      5      2      0    42%   23-33
vllm_rbln/models/utils.py                                                             66     59     34      0     7%   29-126
vllm_rbln/platform.py                                                                183     85     76     13    45%   49, 77-78, 82, 89-90, 99, 105-114, 120, 134, 146-147, 166-181, 190-193, 196->200, 206-220, 224-257, 275->exit, 278->287, 285, 288-291, 300-310, 315-320, 324-349, 357, 361
vllm_rbln/rbln_envs.py                                                                48     33     24      1    25%   58-64, 68-88, 92-109
vllm_rbln/triton_kernels/attention.py                                                143    125      4      0    12%   43-211, 234-402, 406-409, 424-468, 483-528, 543, 558
vllm_rbln/triton_kernels/causal_attention.py                                         137    119      4      0    13%   42-194, 216-368, 372-375, 389-431, 445-488, 502, 516
vllm_rbln/triton_kernels/flash_attention.py                                          173    155     20      0     9%   45-183, 208-345, 349-352, 367-414, 429-478, 493, 508
vllm_rbln/triton_kernels/flash_causal_attention.py                                   171    153     20      0     9%   44-262, 286-506, 510-513, 527-574, 588, 602-649, 663
vllm_rbln/triton_kernels/sliding_window_attention.py                                 149    131      4      0    12%   44-213, 237-421, 425-428, 443-488, 503, 518-563, 578
vllm_rbln/utils/__init__.py                                                           27     17     10      1    30%   28, 60-95
vllm_rbln/utils/optimum/__init__.py                                                    0      0      0      0   100%
vllm_rbln/utils/optimum/cache_blocks.py                                               54     44     22      0    13%   31-42, 46-52, 58-98, 110-123, 128-142
vllm_rbln/utils/optimum/common.py                                                      6      6      0      0     0%   15-22
vllm_rbln/utils/optimum/configuration.py                                              57     45     22      0    15%   44-53, 59-60, 74-105, 114-141, 154-191
vllm_rbln/utils/optimum/multimodal/__init__.py                                        22     10      4      0    46%   30-33, 57-66
vllm_rbln/utils/optimum/multimodal/blip2.py                                            5      3      0      0    40%   22-26
vllm_rbln/utils/optimum/multimodal/common.py                                           7      6      2      0    11%   20-30
vllm_rbln/utils/optimum/multimodal/gemma3.py                                           4      2      0      0    50%   22-25
vllm_rbln/utils/optimum/multimodal/idefics3.py                                         3      1      0      0    67%   22
vllm_rbln/utils/optimum/multimodal/llava.py                                            6      3      0      0    50%   22-28, 34
vllm_rbln/utils/optimum/multimodal/paligemma.py                                        6      4      0      0    33%   22-27
vllm_rbln/utils/optimum/multimodal/qwen.py                                             8      6      2      0    20%   25-37
vllm_rbln/utils/optimum/rbln_params.py                                                71     58     24      0    14%   41-43, 49-54, 58-70, 77-133
vllm_rbln/utils/optimum/registry.py                                                   64     47     18      0    21%   112, 116, 120, 124, 130-131, 137-143, 160-225
vllm_rbln/v1/__init__.py                                                               0      0      0      0   100%
vllm_rbln/v1/attention/__init__.py                                                     0      0      0      0   100%
vllm_rbln/v1/attention/backends/__init__.py                                            0      0      0      0   100%
vllm_rbln/v1/attention/backends/flash_attention.py                                   589    486    136      0    14%   59, 75, 93, 109, 126, 141, 158, 173, 214-242, 258, 276-302, 318, 357-497, 512, 529-658, 673, 714-785, 801, 820-900, 917, 927, 934, 941, 947, 951, 955, 976, 984, 991, 1038-1068, 1073, 1084-1235, 1238, 1256-1318, 1362-1717
vllm_rbln/v1/core/rbln_kv_cache_manager.py                                           259      3     88      6    97%   173->171, 408->420, 511->exit, 611, 638, 645
vllm_rbln/v1/core/rbln_scheduler.py                                                  344     90    168     40    69%   154, 201-202, 210, 224, 238, 276-298, 306, 310, 328->336, 331, 355-361, 363-366, 375, 412->386, 422-427, 446, 458-465, 478-480, 507-509, 542-544, 552-553, 562, 572, 586-600, 603-610, 628, 655-657, 687->697, 701-717, 720->724, 726-729, 732, 741->744, 745-751, 754-757, 790-791, 795, 814->821, 822-824, 898-901
vllm_rbln/v1/kv_cache.py                                                              33     12      2      0    60%   32, 35, 56, 64-68, 83, 86, 89, 92
vllm_rbln/v1/sample/__init__.py                                                        3      0      0      0   100%
vllm_rbln/v1/sample/ops/__init__.py                                                    1      0      0      0   100%
vllm_rbln/v1/sample/ops/penalties.py                                                  12      6      0      0    50%   24-34, 52-60
vllm_rbln/v1/sample/rbln_rejection_sampler.py                                        183     12     54     10    90%   136, 210->215, 378->375, 409, 419, 458, 470, 475-476, 533-537, 563
vllm_rbln/v1/sample/rbln_sampler.py                                                  117     81     38      0    23%   22, 51-63, 83-107, 114, 121, 134-136, 148-181, 187, 197-203, 213-219, 230-240, 246-247, 256-312, 323-325
vllm_rbln/v1/spec_decode/__init__.py                                                   0      0      0      0   100%
vllm_rbln/v1/spec_decode/eagle.py                                                    196     50     48     11    73%   43-47, 115-124, 148, 182, 189, 210->216, 225-227, 240, 242, 245, 253, 287-290, 437-475, 478-504
vllm_rbln/v1/spec_decode/medusa.py                                                    48      0      4      0   100%
vllm_rbln/v1/spec_decode/utils.py                                                     19      0      0      0   100%
vllm_rbln/v1/worker/__init__.py                                                        0      0      0      0   100%
vllm_rbln/v1/worker/bucketing/__init__.py                                             15      9      8      0    26%   43-65
vllm_rbln/v1/worker/bucketing/bucketing_manager.py                                    34     20     12      0    30%   25-32, 37, 42, 47, 52, 56-59, 73-84
vllm_rbln/v1/worker/bucketing/exponential_bucketing_manager.py                        18     14      6      0    17%   28-37, 41-53
vllm_rbln/v1/worker/bucketing/linear_bucketing_manager.py                             16     12      4      0    20%   28-37, 41-47
vllm_rbln/v1/worker/bucketing/manual_bucketing_manager.py                             11      7      2      0    31%   26-28, 34-42
vllm_rbln/v1/worker/metrics.py                                                       120      0     36      0   100%
vllm_rbln/v1/worker/optimum_input_batch.py                                            47     47     22      0     0%   15-118
vllm_rbln/v1/worker/optimum_model_runner.py                                          635    635    214      0     0%   14-1516
vllm_rbln/v1/worker/optimum_worker.py                                                120    120     24      0     0%   15-267
vllm_rbln/v1/worker/rbln_model_runner.py                                            1719   1388    606     17    17%   33, 222-550, 556-559, 562-573, 578, 587-618, 661, 665, 678-913, 964-985, 1019-1121, 1133-1143, 1167-1425, 1438-1481, 1500-1552, 1569->1572, 1573-1576, 1583, 1588-1590, 1598-1601, 1623-1651, 1660-1701, 1710-1733, 1747-1805, 1822-1864, 1874-1996, 2073-2085, 2090-2120, 2127-2300, 2308-2369, 2375-2404, 2410-2450, 2476-2578, 2612-2660, 2666-2675, 2684-3036, 3047, 3053-3058, 3090-3092, 3110, 3116, 3124-3130, 3162, 3180-3181, 3193, 3201-3209, 3214-3227, 3231-3239, 3253-3369, 3372-3517, 3530-3541, 3544-3556, 3567-3579, 3593-3594, 3605-3699, 3709-3728, 3734-3801, 3809-3832, 3843, 3880->3875, 3883, 3916->3913, 3924-3941, 3955-3968, 4013, 4021, 4024-4027, 4043-4072, 4092-4176, 4194-4235, 4244-4263, 4272-4335, 4341-4359, 4371-4391, 4431-4448
vllm_rbln/v1/worker/rbln_worker.py                                                   264      4     72      4    98%   28, 269, 322, 356, 375->380
vllm_rbln/v1/worker/utils.py                                                         150      1     50      1    99%   331
------------------------------------------------------------------------------------------------------------------------------
TOTAL                                                                               8237   5573   2480    118    30%


@rebel-jinhwan rebel-jinhwan self-assigned this Apr 9, 2026
@rebel-jinhwan rebel-jinhwan added the torch.compile (torch.compile based implementation) label Apr 9, 2026
@rebel-jinhwan rebel-jinhwan marked this pull request as ready for review April 9, 2026 04:54
@rebel-jaehwang rebel-jaehwang requested a review from Copilot April 9, 2026 05:08

Copilot AI left a comment


Pull request overview

This PR adds a comprehensive set of unit tests under tests/torch_compile/unit/v1/worker/ to substantially increase coverage for the v1 RBLN worker stack (worker, model runner, utils, and metrics), and consolidates shared pytest monkeypatch fixtures into the top-level torch-compile conftest.

Changes:

  • Add new unit test suites for RBLNWorker, RBLNModelRunner (incl. KV-cache helpers), v1.worker.utils, and v1.worker.metrics.
  • Move monkeypatch_class / monkeypatch_module fixtures from tests/torch_compile/e2e/conftest.py into tests/torch_compile/conftest.py.
  • Update .gitignore to stop ignoring .python-version.

Reviewed changes

Copilot reviewed 8 out of 10 changed files in this pull request and generated 1 comment.

Summary per file:

| File | Description |
| --- | --- |
| tests/torch_compile/unit/v1/worker/test_worker.py | Interface/contract tests around WorkerBase compliance for RBLNWorker. |
| tests/torch_compile/unit/v1/worker/test_rbln_worker.py | Broad unit coverage for RBLNWorker behavior and distributed init helpers. |
| tests/torch_compile/unit/v1/worker/test_utils.py | Unit tests for memory estimation and CPU affinity/threading helpers. |
| tests/torch_compile/unit/v1/worker/test_rbln_model_runner.py | Unit tests for RBLNModelRunner helpers, outputs, and edge cases. |
| tests/torch_compile/unit/v1/worker/test_rbln_model_runner_kv_cache.py | Unit tests focused on KV-cache-related helper logic. |
| tests/torch_compile/unit/v1/worker/test_metrics.py | Unit tests for metrics collection and reporting classes. |
| tests/torch_compile/e2e/conftest.py | Removes duplicated monkeypatch fixtures (moved to shared conftest). |
| tests/torch_compile/conftest.py | Adds shared monkeypatch_class / monkeypatch_module fixtures. |
| .gitignore | Un-ignores .python-version (potentially unrelated to the test-coverage goal). |


@rebel-jinhwan rebel-jinhwan changed the base branch from dev-0.18 to dev April 9, 2026 22:58
@rebel-jinhwan rebel-jinhwan force-pushed the jinhwan/pytest-worker-improve branch from facd915 to 63fcf6a on April 9, 2026 23:00
@rebel-jinhwan rebel-jinhwan force-pushed the jinhwan/pytest-worker-improve branch from 63fcf6a to dd29ec6 on April 10, 2026 09:08


3 participants