Ingest CVSS 4.0 metrics #857
Conversation
fricklerhandwerk left a comment
Thanks a lot for the stab at it. I think some things can be simplified, and some of your decisions are not entirely obvious to me. Would appreciate clarification here (if I'm just missing the point) or as in-code comments (if you're doing something clever).
```python
if not k.startswith("M")  # Don't display modified metrics
},
raw_cvss = m.get("raw_cvss_json", {})
if "vectorString" in raw_cvss:
```
Why not select the right CVSS parser based on m["format"]?
Yes, that makes sense. I have stored the format earlier, so I can follow it with `if m["format"] == "cvssV4_0"` and so on.
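A minimal sketch of that dispatch, with hypothetical `parse_cvss3`/`parse_cvss4` helpers (these names are assumptions, not the actual code):

```python
from typing import Any, Callable

def parse_cvss3(raw: dict[str, Any]) -> dict[str, Any]:
    # Illustrative: extract the fields the view needs from a 3.x payload
    return {"vector_string": raw.get("vectorString"),
            "base_score": float(raw["baseScore"])}

def parse_cvss4(raw: dict[str, Any]) -> dict[str, Any]:
    # CVSS 4.0 keeps the same field names for these basics
    return {"vector_string": raw.get("vectorString"),
            "base_score": float(raw["baseScore"])}

# One parser per supported CVSS format tag
PARSERS: dict[str, Callable[[dict[str, Any]], dict[str, Any]]] = {
    "cvssV4_0": parse_cvss4,
    "cvssV3_1": parse_cvss3,
    "cvssV3_0": parse_cvss3,
}

def parse_metric(m: dict[str, Any]) -> dict[str, Any]:
    # Dispatch on the format stored during ingestion
    return PARSERS[m["format"]](m.get("raw_cvss_json", {}))
```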
```python
ctx["vector_string"] = raw_cvss.get("vectorString")
ctx["base_score"] = float(raw_cvss.get("baseScore"))
base_score = raw_cvss.get("baseScore")
if base_score is not None:
```
Why are we checking for None here? The spec doesn't allow that anyway. Since we're consuming data from the authoritative source, we can reasonably expect it to follow the spec (and so far it has).
That was mostly a defensive check out of habit. Since the data follows the spec, it makes sense to rely on it here. I'll remove the check and update the patch.
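Relying on the spec, the guarded lookup can become a direct conversion. A sketch with field names from the CVSS JSON payload (the helper name is hypothetical):

```python
from typing import Any

def base_fields(raw_cvss: dict[str, Any]) -> dict[str, Any]:
    # baseScore and baseSeverity are required by the CVSS schema,
    # so index directly instead of checking for None first
    return {
        "base_score": float(raw_cvss["baseScore"]),
        "base_severity": raw_cvss["baseSeverity"],
    }
```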
```python
def make_metric(data: dict[str, Any]) -> models.Metric:
    cvss_preference = ("cvssV4_0", "cvssV3_1", "cvssV3_0")

    selected_format = "cvssV3_1"
```
If the first item in the preference list is v4, why do we default to 3.1?
We could use the opportunity to retain more information here. Before any change, we're storing cvssV3_1 unconditionally, even if there's no data there, so there's no way to identify a gap after the fact.
For example, how about something along these lines:
```python
ctx["format"] = data.get("format", "")
if ctx["format"] in supported_cvss_versions:
    raw_cvss = data.get(ctx["format"])
elif not ctx["format"]:
    # loop over `supported_cvss_versions`
else:
    raw_cvss = data.get("other", {"content": {}})["content"]
```
My intention was to keep the existing default behavior while adding v4 support, but I agree that preserving the actual format is better and avoids masking missing data. I'll update it to store the format explicitly and select the metric based on the provided format, as you suggested.
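Fleshing out that suggestion into a runnable sketch (the function and constant names here are assumptions, not the project's actual code):

```python
from typing import Any

SUPPORTED_CVSS_VERSIONS = ("cvssV4_0", "cvssV3_1", "cvssV3_0")

def select_raw_cvss(data: dict[str, Any]) -> tuple[str, dict[str, Any]]:
    """Return (format, payload), preserving the format actually found."""
    fmt = data.get("format", "")
    if fmt in SUPPORTED_CVSS_VERSIONS:
        return fmt, data.get(fmt, {})
    if not fmt:
        # No explicit format tag: take the first supported version present
        for candidate in SUPPORTED_CVSS_VERSIONS:
            if candidate in data:
                return candidate, data[candidate]
        return "", {}
    # Unsupported format: keep the tag and fall back to the "other" payload
    return fmt, data.get("other", {"content": {}})["content"]
```

This keeps the stored format honest: a record with no usable CVSS data ends up with an empty format instead of a hard-coded `cvssV3_1`, so gaps remain identifiable after the fact.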
```python
raw_cvss: dict[str, Any] = {}
for cvss_format in cvss_preference:
    candidate = data.get(cvss_format)
    if isinstance(candidate, dict) and candidate:
```
Isn't this always a dict by spec?
This needs more work; converting to draft to make it stand out as not ready for the next review round.
Resolves #839
Summary:
Implemented CVSS ingestion and display compatibility for newer CVE records that no longer include cvssV3_1.
In fetchers.py, metric parsing now falls back across versions in order (cvssV4_0 -> cvssV3_1 -> cvssV3_0) instead of reading only cvssV3_1.
In the same path, base score parsing is now guarded (no crash on missing or invalid values), and base_severity is populated from the raw CVSS data.
In viewutils.py, severity badge generation no longer assumes every vector is CVSS 3; CVSS 4 vectors are handled safely without breaking overview rendering.
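The badge change can be sketched as dispatching on the vector string prefix rather than assuming CVSS 3 (a hypothetical helper, not the actual viewutils code):

```python
def severity_badge(vector_string: str, base_score: float) -> str:
    """Label a severity badge without assuming a CVSS 3 vector."""
    if vector_string.startswith("CVSS:4.0/"):
        version = "4.0"
    elif vector_string.startswith("CVSS:3."):
        version = vector_string[5:8]  # "3.0" or "3.1"
    else:
        version = "?"  # unknown scheme: degrade gracefully, don't crash
    return f"CVSS {version}: {base_score:.1f}"
```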
Added tests in test_fetchers.py and test_viewutils.py to cover CVSS fallback ingestion and badge behavior for both CVSS4 and CVSS3 vectors.
Manually verified.
New tests created.