
fix: harden import-only pickle global detection #691

Open
mldangelo wants to merge 2 commits into main from feat/pickle-import-only-findings

Conversation

@mldangelo
Member

@mldangelo mldangelo commented Mar 13, 2026

Summary

  • add dedicated import-only GLOBAL/STACK_GLOBAL classification for non-allowlisted refs
  • track which imports are later consumed by REDUCE/NEWOBJ/OBJ so only standalone refs are labeled import_only=true
  • preserve benign constructor/data-label cases with narrow safe-import and plausibility guardrails, plus regression coverage
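The consumption-tracking idea above can be sketched with the standard-library pickletools module. This is a crude approximation, not the scanner's real symbolic simulation: it pairs each consuming opcode with the most recent unconsumed reference, and the function name is hypothetical.

```python
import pickletools

def import_only_refs(data: bytes) -> list[tuple[str, str]]:
    """Sketch: GLOBAL/INST refs never consumed by a later REDUCE/NEWOBJ/OBJ
    are treated as import-only. STACK_GLOBAL (whose operands live on the
    value stack rather than in the opcode argument) is omitted for brevity.
    """
    pending: list[tuple[str, str]] = []
    for op, arg, _pos in pickletools.genops(data):
        if op.name in ("GLOBAL", "INST"):
            # genops joins the module/name pair with a single space
            mod, name = arg.split(" ", 1)
            pending.append((mod, name))
        elif op.name in ("REDUCE", "NEWOBJ", "NEWOBJ_EX", "OBJ") and pending:
            pending.pop()  # part of an executed call chain, not import-only
    return pending

# A standalone GLOBAL os.system is import-only; one followed by REDUCE is not.
print(import_only_refs(b"cos\nsystem\n."))            # [('os', 'system')]
print(import_only_refs(b"cos\nsystem\n(S'id'\ntR."))  # []
```

The real implementation tracks stack positions rather than a last-in heuristic, which is what makes the origin-accurate duplicate-ref cases in the regression tests possible.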

Validation

  • uv run ruff format modelaudit/ tests/
  • uv run ruff check --fix modelaudit/ tests/
  • uv run ruff check modelaudit/ tests/
  • uv run ruff format --check modelaudit/ tests/
  • uv run mypy modelaudit/
  • uv run pytest tests/scanners/test_pickle_scanner.py -q
  • uv run pytest -n auto -m "not slow and not integration" --maxfail=1

Notes

  • executed call chains still surface module-reference checks, but they no longer claim to be import-only
  • added regressions for mixed-case malicious imports, benign datetime constructor refs, data-label-like module names, and origin-accurate duplicate refs

Summary by CodeRabbit

  • Bug Fixes

    • Improved detection of malicious import-only references in pickle files while reducing false positives for benign constructors and executed call chains.
    • More granular classification and severity messaging for suspicious global/import references, including clearer import-only vs truly dangerous distinctions.
  • Tests

    • Added extensive tests covering import-only/global detection, multi-stream edge cases, and safety helper behavior.

@coderabbitai
Contributor

coderabbitai bot commented Mar 13, 2026

Walkthrough

Adds import-only GLOBAL/STACK_GLOBAL detection to the pickle scanner: simulates symbolic reference maps, tracks import origins, classifies import targets (safe/unknown/dangerous), and surfaces import-only metadata in findings to reduce false positives.

Changes

Cohort / File(s) — Summary

  • Security Changelog (CHANGELOG.md): Adds a security fixed note documenting detection of import-only pickle GLOBAL/STACK_GLOBAL references and refined reporting.
  • Core Scanner Implementation (modelaudit/scanners/pickle_scanner.py): Introduces the _ResolvedImportRef dataclass and _simulate_symbolic_reference_maps; adds import classification helpers (_classify_import_reference, _is_safe_import_only_global, _is_resolved_import_target, _is_plausible_import_only_module, _is_actually_dangerous_string); refactors symbolic map construction to return stack_global_refs, callable_refs, callable_origin_refs; propagates import-origin info and adds import_only, classification, ml_context_confidence metadata to findings.
  • Test Coverage (tests/scanners/test_pickle_scanner.py): Exposes internal helpers for tests and adds TestPickleImportOnlyGlobalFindings with multiple tests covering malicious import-only/global detection, safe stdlib/constructor handling, multi-stream and reduce interactions, and edge cases.

Sequence Diagram(s)

sequenceDiagram
    participant Input as Pickle Bytecode
    participant Scanner as PickleScanner
    participant Simulator as SymbolicRefSimulator
    participant Classifier as ImportClassifier
    participant Reporter as FindingReporter

    Input->>Scanner: supply pickle stream
    Scanner->>Simulator: parse opcodes & simulate refs
    Simulator-->>Scanner: stack_global_refs, callable_refs, callable_origin_refs
    Scanner->>Classifier: resolve & classify import targets
    Classifier-->>Scanner: classification (safe/unknown/dangerous) + confidence
    Scanner->>Reporter: produce findings enriched with import_only, classification, ml_context_confidence
    Reporter-->>Scanner: suppress or emit final findings

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 I nibbled bytes and traced each name,

origins found, no two the same,
import-only whispers now revealed,
safe seeds kept, bad kernels unsealed,
the scanner hops — the audit healed.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 44.44%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: The title accurately summarizes the main change: hardening detection of import-only pickle global references. It is concise, specific, and directly relates to the primary improvement across all modified files.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@modelaudit/scanners/pickle_scanner.py`:
- Around line 1804-1808: The current _is_safe_import_only_global() lets
import-only allowlist entries like torch.load pass before checking
actually-dangerous status; change its logic to first call
_is_actually_dangerous_global(mod, func) and if that returns True immediately
treat it as unsafe (return False), otherwise proceed to the existing allowlist
check against IMPORT_ONLY_SAFE_GLOBALS or _is_safe_ml_global; update references
in the same module to use this revised behavior. Also add regression tests
covering import-only payload "GLOBAL torch\nload\n" and stack/global payload
"STACK_GLOBAL torch.load" (both benign and malicious variants) to ensure
dangerous detections are preserved or strengthened. Ensure tests reference the
scanner functions (_is_safe_import_only_global and
_is_actually_dangerous_global) so failures point to the updated logic.

In `@tests/scanners/test_pickle_scanner.py`:
- Around line 1214-1252: Update the three tests
(test_import_only_global_malicious_is_flagged,
test_import_only_global_mixed_case_module_is_flagged,
test_import_only_stack_global_is_flagged) to assert that the positive detections
include import-only metadata: after you compute failed_checks and pick the
matching check(s) (variable c), add assertions that c.details["import_only"] is
True and that c.details.get("import_reference") is present (or equals the
expected value like "evilpkg.thing"/"EvilPkg.thing" for the GLOBAL cases);
ensure you check these fields for both the "Global Module Reference Check" and
the "STACK_GLOBAL Module Check" positive cases so the tests fail if a detection
is no longer classified as import-only.
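A minimal sketch of the hardened assertion shape described above. The finding object is stubbed with SimpleNamespace; real tests would pick `c` from the scanner's actual failed checks, and the detail values are illustrative.

```python
from types import SimpleNamespace

# Stubbed finding mirroring the scanner's details payload (values hypothetical).
failed_checks = [
    SimpleNamespace(
        name="Global Module Reference Check",
        details={"import_only": True, "import_reference": "evilpkg.thing"},
    )
]

# Pick the matching check, then assert the import-only metadata explicitly
# so the test fails if a detection is no longer classified as import-only.
c = next(ch for ch in failed_checks if ch.name == "Global Module Reference Check")
assert c.details["import_only"] is True
assert c.details.get("import_reference") == "evilpkg.thing"
```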

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: c4f1c4e2-cd2b-4b34-805b-e9d0e51f696c

📥 Commits

Reviewing files that changed from the base of the PR and between 9431fae and 4742262.

📒 Files selected for processing (3)
  • CHANGELOG.md
  • modelaudit/scanners/pickle_scanner.py
  • tests/scanners/test_pickle_scanner.py

@mldangelo mldangelo force-pushed the feat/pickle-import-only-findings branch from 4742262 to fd05e62 on March 14, 2026 at 00:19
@mldangelo
Member Author

Rebased onto current main, addressed the two CodeRabbit findings, and reran validation locally.

Changes in this update:

  • keep dangerous import-only refs like torch.load out of the import-only safe path
  • add import_reference to failed STACK_GLOBAL import-only findings
  • harden the positive import-only tests to assert details.import_only and exact import_reference values
  • add explicit regressions for import-only GLOBAL torch.load and STACK_GLOBAL torch.load

Validation passed:

  • uv run ruff format modelaudit/ tests/
  • uv run ruff check --fix modelaudit/ tests/
  • uv run ruff check modelaudit/ tests/
  • uv run ruff format --check modelaudit/ tests/
  • uv run mypy modelaudit/
  • uv run pytest -n auto -m "not slow and not integration" --maxfail=1 -> 2190 passed, 57 skipped

@mldangelo
Member Author

Rebased onto current main, addressed the two CodeRabbit findings, and reran validation locally.

Changes in this update:

  • keep dangerous import-only refs like torch.load out of the import-only safe path
  • add import_reference to failed STACK_GLOBAL import-only findings
  • harden the positive import-only tests to assert details.import_only and exact import_reference values
  • add explicit regressions for import-only GLOBAL torch.load and STACK_GLOBAL torch.load

Validation output:

  • uv run ruff format modelaudit/ tests/ -> 331 files left unchanged
  • uv run ruff check --fix modelaudit/ tests/ -> All checks passed!
  • uv run ruff check modelaudit/ tests/ -> All checks passed!
  • uv run ruff format --check modelaudit/ tests/ -> 331 files already formatted
  • uv run mypy modelaudit/ -> Success: no issues found in 191 source files
  • uv run pytest -n auto -m "not slow and not integration" --maxfail=1 ->

============================= test session starts ==============================
platform darwin -- Python 3.11.14, pytest-9.0.2, pluggy-1.6.0
rootdir: /Users/mdangelo/projects/ma3
configfile: pyproject.toml
testpaths: tests
plugins: anyio-4.12.1, xdist-3.8.0, asyncio-1.3.0, cov-7.0.0
asyncio: mode=Mode.STRICT, debug=False, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
created: 14/14 workers
14 workers [2165 items]

=========================== short test summary info ============================
SKIPPED [1] tests/scanners/test_joblib_scanner.py:5: could not import 'joblib': No module named 'joblib'
SKIPPED [1] tests/scanners/test_keras_h5_scanner.py:8: could not import 'h5py': No module named 'h5py'
SKIPPED [1] tests/scanners/test_onnx_scanner.py:7: could not import 'onnx': No module named 'onnx'
SKIPPED [1] tests/scanners/test_safetensors_scanner.py:9: could not import 'safetensors': No module named 'safetensors'
SKIPPED [1] tests/test_asset_inventory_integration.py:24: could not import 'safetensors': No module named 'safetensors'
SKIPPED [1] tests/test_asset_list.py:10: could not import 'safetensors': No module named 'safetensors'
SKIPPED [1] tests/test_pytorch_zip_detection.py:10: could not import 'torch': No module named 'torch'
SKIPPED [1] tests/utils/helpers/test_py_compile_improvements.py:10: could not import 'h5py': No module named 'h5py'
SKIPPED [1] tests/scanners/test_pytorch_binary_scanner.py:79: ML context filtering now ignores executable signatures in weight-like data to reduce false positives
SKIPPED [1] tests/scanners/test_pytorch_binary_scanner.py:134: ML context filtering now ignores executable signatures in weight-like data to reduce false positives
SKIPPED [1] tests/scanners/test_sevenzip_scanner.py:107: py7zr not available
SKIPPED [1] tests/scanners/test_sevenzip_scanner.py:126: py7zr not available
SKIPPED [1] tests/scanners/test_sevenzip_scanner.py:145: py7zr not available
SKIPPED [1] tests/scanners/test_sevenzip_scanner.py:182: py7zr not available
SKIPPED [1] tests/scanners/test_sevenzip_scanner.py:248: py7zr not available
SKIPPED [1] tests/scanners/test_sevenzip_scanner.py:324: py7zr not available
SKIPPED [1] tests/scanners/test_sevenzip_scanner.py:493: py7zr not available for integration tests
SKIPPED [1] tests/scanners/test_xgboost_scanner.py:244: ubjson not installed
SKIPPED [1] tests/test_false_positive_fixes.py:188: h5py not installed
SKIPPED [1] tests/test_metadata_extractor.py:188: could not import 'joblib': No module named 'joblib'
SKIPPED [1] tests/test_metadata_extractor.py:320: could not import 'xgboost': No module named 'xgboost'
SKIPPED [1] tests/test_real_world_dill_joblib.py:116: joblib not available
SKIPPED [1] tests/test_real_world_dill_joblib.py:136: joblib not available
SKIPPED [1] tests/test_real_world_dill_joblib.py:176: joblib not available
SKIPPED [1] tests/test_sklearn_joblib_false_positive.py:33: sklearn not installed
SKIPPED [1] tests/test_sklearn_joblib_false_positive.py:66: sklearn not installed
SKIPPED [1] tests/test_sklearn_joblib_false_positive.py:103: sklearn not installed
SKIPPED [1] tests/test_sklearn_joblib_false_positive.py:137: sklearn not installed
SKIPPED [1] tests/test_sklearn_joblib_false_positive.py:171: sklearn not installed
SKIPPED [1] tests/test_sklearn_joblib_false_positive.py:201: sklearn not installed
SKIPPED [1] tests/test_sklearn_joblib_false_positive.py:224: sklearn not installed
SKIPPED [8] tests/conftest.py:135: tensorflow is not installed
SKIPPED [1] tests/test_tensorflow_lambda_detection.py:218: TensorFlow not installed
SKIPPED [1] tests/utils/helpers/test_ml_context_false_positives.py:149: Skipping due to test environment differences - core functionality verified with real models
SKIPPED [1] tests/utils/helpers/test_ml_context_false_positives.py:199: Skipping due to test environment differences - core functionality verified with real models
SKIPPED [1] tests/scanners/test_tflite_scanner.py:58: tflite not installed
SKIPPED [1] tests/scanners/test_tflite_scanner.py:74: tflite not installed
SKIPPED [1] tests/scanners/test_tflite_scanner.py:111: tflite not installed
SKIPPED [1] tests/test_performance_benchmarks.py:152: Not enough asset files for scaling test
SKIPPED [1] tests/test_performance_benchmarks.py:195: psutil not available for memory testing
SKIPPED [1] tests/test_performance_benchmarks.py:312: Skipping timeout performance test due to enhanced security scanning. The improved security detection now performs more thorough analysis, which introduces legitimate performance variance that makes timeout overhead measurements unreliable. The core security functionality has been verified to work correctly.
SKIPPED [1] tests/scanners/test_weight_distribution_scanner.py:251: PyTorch not installed
SKIPPED [1] tests/scanners/test_weight_distribution_scanner.py:299: h5py not installed
SKIPPED [1] tests/scanners/test_weight_distribution_scanner.py:333: TensorFlow not installed
SKIPPED [1] tests/scanners/test_weight_distribution_scanner.py:372: PyTorch not installed
SKIPPED [1] tests/scanners/test_weight_distribution_scanner.py:398: PyTorch not installed
SKIPPED [1] tests/test_performance_benchmarks.py:255: Skipping concurrency overhead check in local environment (overhead=3.15x)
SKIPPED [1] tests/test_false_positive_fixes.py:292: torch not installed
================= 2118 passed, 55 skipped in 71.19s (0:01:11) ==================

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
modelaudit/scanners/pickle_scanner.py (1)

2048-2053: ⚠️ Potential issue | 🟠 Major

Add _pop_to_mark() call to INST handler to consume the marked frame.

The INST opcode consumes a marked frame of constructor arguments (per pickletools spec: stack_before=[mark, stackslice], stack_after=[any]), but the current handler leaves it on the stack. This breaks subsequent stack simulation for all later opcodes, causing _pop_to_mark() calls in TUPLE, LIST, OBJ, and other handlers to consume stale frames instead, corrupting callable_origin_refs tracking.

Align INST with the pattern used for TUPLE, LIST, and OBJ (which all call _pop_to_mark() for opcodes with the same stack contract).

         if name == "INST" and isinstance(arg, str):
             parsed = _parse_module_function(arg)
             if parsed:
                 callable_refs[i] = parsed
                 callable_origin_refs[i] = i
+            _pop_to_mark()
             stack.append(unknown)
             continue
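The stack contract cited above can be checked directly against the pickletools opcode table, which records each opcode's declared stack effect:

```python
import pickletools

# Look up the INST opcode's entry in the opcode table.
inst = next(op for op in pickletools.opcodes if op.name == "INST")

# INST consumes everything from the mark object up as constructor
# arguments and pushes a single new object, so the handler must pop
# the marked frame before pushing its result.
print([obj.name for obj in inst.stack_before])
print([obj.name for obj in inst.stack_after])
```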
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelaudit/scanners/pickle_scanner.py` around lines 2048 - 2053, The INST
opcode handler currently pushes unknown without consuming the marked frame;
modify the INST handler in pickle_scanner.py to call _pop_to_mark() (like the
TUPLE, LIST, and OBJ handlers) before appending unknown so the marked frame is
removed from stack and callable_origin_refs/callable_refs tracking remains
correct; ensure the call occurs before stack.append(unknown) and that any
returned slice/value from _pop_to_mark() is discarded if unused.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@modelaudit/scanners/pickle_scanner.py`:
- Around line 1842-1863: The classifier _classify_import_reference currently
only matches denylisted refs with exact case; update it to perform a
case-normalized denylist check (e.g., normalize mod and func to a canonical
case) before falling back to "unknown_third_party" so mixed-case variants of
denylisted modules/functions are classified as dangerous; do this by checking
the normalized pair against the existing denylist/blocked set (or via a new
helper used by _is_actually_dangerous_global) while preserving the existing
calls to _is_resolved_import_target, _is_safe_import_only_global, and
_is_plausible_import_only_module and reusing WARNING_SEVERITY_MODULES to
determine severity.
- Around line 1805-1813: The helper _is_safe_import_only_global currently treats
entries from _is_safe_ml_global (ML_SAFE_GLOBALS) as import-only-safe, which
lets deserializers like dill.load, dill.loads, and joblib._pickle_load bypass
checks; remove that widening by not calling _is_safe_ml_global here and only
consulting IMPORT_ONLY_SAFE_GLOBALS (or explicit constructor/data symbols). Fix
options: (A) change _is_safe_import_only_global to return False if
_is_actually_dangerous_global(...) else return func in
IMPORT_ONLY_SAFE_GLOBALS.get(mod, frozenset()) (remove the _is_safe_ml_global
branch), and move any true constructor/data symbols from ML_SAFE_GLOBALS into
IMPORT_ONLY_SAFE_GLOBALS; or (B) add deserializer APIs (e.g., dill.load,
dill.loads, joblib._pickle_load) to ALWAYS_DANGEROUS_FUNCTIONS so they are never
treated as import-only-safe. Ensure you update ML_SAFE_GLOBALS and tests
accordingly.

---

Outside diff comments:
In `@modelaudit/scanners/pickle_scanner.py`:
- Around line 2048-2053: The INST opcode handler currently pushes unknown
without consuming the marked frame; modify the INST handler in pickle_scanner.py
to call _pop_to_mark() (like the TUPLE, LIST, and OBJ handlers) before appending
unknown so the marked frame is removed from stack and
callable_origin_refs/callable_refs tracking remains correct; ensure the call
occurs before stack.append(unknown) and that any returned slice/value from
_pop_to_mark() is discarded if unused.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 36309ccd-9218-4324-a4b7-2f96c627cf3b

📥 Commits

Reviewing files that changed from the base of the PR and between 4742262 and fd05e62.

📒 Files selected for processing (3)
  • CHANGELOG.md
  • modelaudit/scanners/pickle_scanner.py
  • tests/scanners/test_pickle_scanner.py

Comment on lines +1805 to +1813
def _is_safe_import_only_global(mod: str, func: str, ml_context: dict[str, Any] | None = None) -> bool:
    """Return True when an import-only target is explicitly safe to treat as benign."""
    if _is_actually_dangerous_global(mod, func, ml_context or {}):
        return False

    if _is_safe_ml_global(mod, func):
        return True

    return func in IMPORT_ONLY_SAFE_GLOBALS.get(mod, frozenset())
Contributor


⚠️ Potential issue | 🔴 Critical

Don't let deserializers inherit the import-only safe list.

_is_safe_import_only_global() now treats every ML_SAFE_GLOBALS entry as benign once _is_actually_dangerous_global() returns false. That currently whitelists loader gadgets such as dill.load, dill.loads, and joblib._pickle_load, so a standalone reference to them becomes a passing safe_allowlisted check. Keep this helper limited to constructor/data symbols, or move these APIs into ALWAYS_DANGEROUS_FUNCTIONS. Based on learnings, "Preserve or strengthen security detections; test both benign and malicious samples when adding scanner/feature changes."
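A sketch of fix option (A) — dangerous check first, then only the narrow import-only allowlist, with no widening to the general ML safe list. The set contents below are illustrative stand-ins, not the scanner's real tables.

```python
# Illustrative stand-ins for ALWAYS_DANGEROUS_FUNCTIONS and
# IMPORT_ONLY_SAFE_GLOBALS; real entries live in the scanner.
ALWAYS_DANGEROUS = {("dill", "load"), ("dill", "loads"), ("torch", "load")}
IMPORT_ONLY_SAFE_GLOBALS = {"collections": frozenset({"OrderedDict"})}

def is_safe_import_only(mod: str, func: str) -> bool:
    # Dangerous wins before any allowlist is consulted, so loader
    # gadgets can never pass as import-only-safe.
    if (mod, func) in ALWAYS_DANGEROUS:
        return False
    return func in IMPORT_ONLY_SAFE_GLOBALS.get(mod, frozenset())

print(is_safe_import_only("torch", "load"))               # False
print(is_safe_import_only("collections", "OrderedDict"))  # True
```

Anything not explicitly allowlisted (e.g. an unknown third-party ref) also returns False, which matches the "unknown_third_party" path in the classifier.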

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelaudit/scanners/pickle_scanner.py` around lines 1805 - 1813, The helper
_is_safe_import_only_global currently treats entries from _is_safe_ml_global
(ML_SAFE_GLOBALS) as import-only-safe, which lets deserializers like dill.load,
dill.loads, and joblib._pickle_load bypass checks; remove that widening by not
calling _is_safe_ml_global here and only consulting IMPORT_ONLY_SAFE_GLOBALS (or
explicit constructor/data symbols). Fix options: (A) change
_is_safe_import_only_global to return False if
_is_actually_dangerous_global(...) else return func in
IMPORT_ONLY_SAFE_GLOBALS.get(mod, frozenset()) (remove the _is_safe_ml_global
branch), and move any true constructor/data symbols from ML_SAFE_GLOBALS into
IMPORT_ONLY_SAFE_GLOBALS; or (B) add deserializer APIs (e.g., dill.load,
dill.loads, joblib._pickle_load) to ALWAYS_DANGEROUS_FUNCTIONS so they are never
treated as import-only-safe. Ensure you update ML_SAFE_GLOBALS and tests
accordingly.

Comment on lines +1842 to +1863
def _classify_import_reference(
    mod: str, func: str, ml_context: dict[str, Any]
) -> tuple[bool, IssueSeverity | None, str]:
    """Classify a resolved GLOBAL/STACK_GLOBAL import target.

    Returns (is_failure, severity, classification) where classification is one of
    safe_allowlisted, dangerous, unknown_third_party, or unresolved.
    """
    if not _is_resolved_import_target(mod, func):
        return False, None, "unresolved"

    if _is_actually_dangerous_global(mod, func, ml_context):
        base_sev = IssueSeverity.WARNING if mod in WARNING_SEVERITY_MODULES else IssueSeverity.CRITICAL
        return True, base_sev, "dangerous"

    if _is_safe_import_only_global(mod, func, ml_context):
        return False, None, "safe_allowlisted"

    if not _is_plausible_import_only_module(mod):
        return False, None, "implausible"

    return True, IssueSeverity.WARNING, "unknown_third_party"
Contributor


⚠️ Potential issue | 🟠 Major

Case-normalize denylist matching before falling back to unknown_third_party.

This classifier still relies on exact-case blocklist hits. Mixed-case variants of denylisted refs therefore fall through to unknown_third_party, which weakens the new import-only path for malicious payloads that only change casing. Based on learnings, "Preserve or strengthen security detections; test both benign and malicious samples when adding scanner/feature changes."
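The fix can be as small as normalizing case at lookup time. The denylist contents here are hypothetical; the real entries live in the scanner's denylist tables.

```python
# Hypothetical denylist entries, stored in canonical lowercase.
DENYLIST = {("os", "system"), ("subprocess", "popen"), ("builtins", "eval")}

def is_denylisted(mod: str, func: str) -> bool:
    # Case-normalized lookup so "EvilPkg"-style casing games still match.
    return (mod.lower(), func.lower()) in DENYLIST

print(is_denylisted("OS", "System"))  # True
print(is_denylisted("os", "getcwd"))  # False
```

This requires the stored entries themselves to be lowercase; normalizing only the probe against a mixed-case table would still miss.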

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modelaudit/scanners/pickle_scanner.py` around lines 1842 - 1863, The
classifier _classify_import_reference currently only matches denylisted refs
with exact case; update it to perform a case-normalized denylist check (e.g.,
normalize mod and func to a canonical case) before falling back to
"unknown_third_party" so mixed-case variants of denylisted modules/functions are
classified as dangerous; do this by checking the normalized pair against the
existing denylist/blocked set (or via a new helper used by
_is_actually_dangerous_global) while preserving the existing calls to
_is_resolved_import_target, _is_safe_import_only_global, and
_is_plausible_import_only_module and reusing WARNING_SEVERITY_MODULES to
determine severity.
