The CNA Scorecard promotes transparency and accountability in vulnerability reporting by measuring how completely
CVE Numbering Authorities (CNAs) populate essential data fields. The scoring philosophy is simple: complete, actionable
vulnerability data leads to better security outcomes for everyone.
Scoring Window
All CNA scores are based on CVE records published during a rolling 6-month window. This ensures
that scores reflect recent performance and current data-quality practices.
How the Scoring Window Works
Rolling 6-Month Window: The analysis period advances automatically so that it always covers the most recent 6 months of CVE data (see the sketch after this list).
Fresh Data: Scores are recalculated regularly using only CVEs published within the current window, ensuring relevance and accuracy.
Trend Analysis: Performance trends compare the current 6-month period against the previous 6-month period to show improvement or decline.
Fair Comparison: All CNAs are evaluated using the same time window, creating consistent and comparable scoring across organizations.
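For illustration, here is a minimal sketch of how such a rolling window can be computed. The function names and the use of the third-party dateutil library are assumptions for this sketch, not the project's actual implementation.
# Sketch of a rolling 6-month scoring window (illustrative, not the
# project's actual code).
from datetime import date
from dateutil.relativedelta import relativedelta

def current_scoring_window(today=None):
    """Return (start, end) dates covering the most recent 6 months."""
    end = today or date.today()
    start = end - relativedelta(months=6)
    return start, end

def in_scoring_window(published, today=None):
    """True if a CVE publication date falls inside the current window."""
    start, end = current_scoring_window(today)
    return start <= published <= end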
Scoring Categories
The scoring system evaluates CVE records across five essential categories, with points allocated based on
the relative importance of each type of information for security decision-making.
50 points Foundational Completeness
What it is: The essential building blocks of every CVE record, including problem types, affected products, references, and descriptions.
Why it matters: These are the minimum fields required for CVE publication and provide the basic context needed to understand any vulnerability. Without complete foundational data, security teams cannot properly identify or assess threats.
How points are awarded: Points are earned for including comprehensive problem type classifications, detailed affected product information, relevant reference links, and clear vulnerability descriptions.
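The exact point breakdown within this 50-point category is not specified above. Purely as an illustration, the sketch below assumes an even 12.5-point split across the four foundational fields; the function name, weights, and presence checks are all hypothetical.
# Hypothetical sketch of foundational completeness scoring. The even
# 12.5-point split across the four fields is an assumption for
# illustration only; the actual weighting is not documented here.
def _calculate_foundational_completeness(cve, cna):
    score = 0.0
    if cna.get('problemTypes'):
        score += 12.5  # problem type classifications present
    if cna.get('affected'):
        score += 12.5  # affected product information present
    if cna.get('references'):
        score += 12.5  # reference links present
    if any(d.get('value') for d in cna.get('descriptions', []) if isinstance(d, dict)):
        score += 12.5  # non-empty vulnerability description present
    return score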
15 points Root Cause Analysis
What it is: Information about the underlying weakness that caused the vulnerability, typically expressed as a CWE (Common Weakness Enumeration) identifier.
Why it matters: CWE data helps development teams understand patterns in security flaws and implement preventive measures. It's like knowing that a house fire was caused by faulty wiring versus a gas leak—the root cause determines the best prevention strategy.
How points are awarded: Points are earned for including valid CWE identifiers that accurately describe the specific type of weakness, rather than generic classifications.
15 points Severity & Impact Context
What it is: Standardized severity ratings using CVSS (Common Vulnerability Scoring System) metrics, which provide numerical scores from 0 to 10 indicating how serious a vulnerability is.
Why it matters: CVSS scores help organizations prioritize which vulnerabilities to fix first based on potential impact. A CVSS score of 9.8 demands immediate attention, while a 3.1 might be scheduled for routine patching.
How points are awarded: Points are earned for including complete CVSS v3 or v4 metrics with valid vector strings that accurately reflect the vulnerability's exploitability and impact.
10 points Software Identification
What it is: Precise identification of affected software using CPE (Common Platform Enumeration) format, which provides standardized names for software products and versions. CPE data can be provided in either the traditional affected[].cpes field or the advanced CVE 5.1 cpeApplicability field.
Why it matters: CPE data enables automated vulnerability scanners to accurately match vulnerabilities to specific software in an organization's environment. Without it, security tools can't reliably identify what's at risk.
How points are awarded: Points are earned for including valid CPE identifiers that precisely specify the affected software products, versions, and configurations. Both traditional CPE lists and advanced applicability statements with version ranges are recognized.
10 points Patch Information
What it is: Direct links to patches, fixes, or official vendor advisories that provide remediation guidance.
Why it matters: Patch information bridges the gap between identifying a vulnerability and actually fixing it. Security teams need clear paths to remediation, not just problem descriptions.
How points are awarded: Points are earned for including direct links to official patches or vendor advisories, rather than generic references or third-party discussions.
Technical Documentation for CNAs
This section provides the exact technical implementation details for CNAs who want to verify
the scoring methodology. All field paths reference the official CVE JSON record schema, version 5.0, plus the cpeApplicability field introduced in CVE JSON 5.1.
15 points Root Cause Analysis - Technical Implementation
Schema Field Path Checked:
containers.cna.problemTypes[].descriptions[].cweId (string)
Validation Logic:
Binary Scoring: Either 15 points or 0 points
CWE ID Format: Must match pattern CWE-[0-9]+ (case-insensitive)
Valid CWE List: Must exist in official MITRE CWE database (loaded from cwe_ids.json)
Search Process: Iterates through all problemTypes → descriptions → cweId fields
Code Implementation:
import re

def _calculate_root_cause_analysis(cve, cna):
    problem_types = cna.get('problemTypes', [])
    if not isinstance(problem_types, list):
        return 0
    has_valid_cwe = _find_valid_cwe(problem_types)
    return 15 if has_valid_cwe else 0

def _find_valid_cwe(problem_types):
    for problem_type in problem_types:
        descriptions = problem_type.get('descriptions', [])
        for desc in descriptions:
            cwe_id = desc.get('cweId', '')
            if _is_valid_cwe_id(cwe_id):
                return True
    return False

def _is_valid_cwe_id(cwe_raw):
    if not isinstance(cwe_raw, str):
        return False
    # Check format: CWE-[digits]
    if not re.match(r'^CWE-\d+$', cwe_raw, re.IGNORECASE):
        return False
    # Check against official CWE database
    return cwe_raw.upper() in scoring_config.valid_cwe_ids
15 points Severity & Impact Context - Technical Implementation
Schema Field Path Checked:
containers.cna.metrics[].cvssV3_0 | cvssV3_1 | cvssV4_0 (objects containing vectorString)
Validation Logic:
Binary Scoring: Either 15 points or 0 points
CVSS Version Support: Accepts CVSS v3.0, v3.1, or v4.0
Vector String Required: Must contain valid vectorString field
Search Process: Iterates through metrics array looking for any valid CVSS object
Code Implementation:
def _calculate_severity_context(cve, cna):
    metrics = cna.get('metrics', [])
    if not isinstance(metrics, list):
        return 0
    has_cvss = _has_cvss_metrics(metrics)
    has_vector = _has_valid_cvss_vector(metrics)
    return 15 if (has_cvss and has_vector) else 0

def _has_cvss_metrics(metrics):
    for metric in metrics:
        if any(version in metric for version in ['cvssV3_0', 'cvssV3_1', 'cvssV4_0']):
            return True
    return False

def _has_valid_cvss_vector(metrics):
    for metric in metrics:
        for version in ['cvssV3_0', 'cvssV3_1', 'cvssV4_0']:
            if version in metric:
                cvss_data = metric[version]
                if isinstance(cvss_data, dict) and cvss_data.get('vectorString'):
                    return True
    return False
10 points Software Identification - Technical Implementation
Schema Field Paths Checked:
Traditional Field: containers.cna.affected[].cpes (array of CPE 2.3 strings)
CVE 5.1 Advanced Field: containers.cna.cpeApplicability[].nodes[].cpeMatch[].criteria (CPE with operators)
Note: Both field types are checked. Points are awarded if CPE data is found in either location.
Validation Logic:
Binary Scoring: Either 10 points or 0 points
Dual Field Support: Checks traditional affected[].cpes first, then CVE 5.1 cpeApplicability
Traditional CPE: At least one affected product must contain non-empty cpes array
CPE Applicability: Must contain valid nodes with cpeMatch entries and criteria strings
Code Implementation:
def _calculate_software_identification(cve, cna):
    # Check traditional affected[].cpes field
    affected = cna.get('affected', [])
    if isinstance(affected, list):
        has_cpe_in_affected = _has_cpe_identifiers(affected)
        if has_cpe_in_affected:
            return 10
    # Check CVE 5.1 cpeApplicability field
    has_cpe_applicability = _has_cpe_applicability(cna)
    if has_cpe_applicability:
        return 10
    return 0

def _has_cpe_identifiers(affected):
    """Check for traditional CPE lists in affected products"""
    for product in affected:
        if not isinstance(product, dict):
            continue
        cpes = product.get('cpes')
        if isinstance(cpes, list) and len(cpes) > 0:
            return True
    return False

def _has_cpe_applicability(cna):
    """Check for CVE 5.1 cpeApplicability with advanced matching"""
    cpe_applicability = cna.get('cpeApplicability', [])
    for item in cpe_applicability:
        nodes = item.get('nodes', [])
        for node in nodes:
            cpe_match = node.get('cpeMatch', [])
            for match in cpe_match:
                if isinstance(match, dict) and match.get('criteria'):
                    return True
    return False
The cpeApplicability field provides NVD-style CPE matching with operators and version ranges,
enabling more precise vulnerability applicability statements than simple CPE lists.
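To illustrate the structure being validated, here is a hypothetical cpeApplicability fragment in Python dict form. The field names follow the NVD-style applicability schema referenced above; the vendor, product, and version values are invented.
# Hypothetical cpeApplicability entry (invented vendor, product, and
# versions) expressing a vulnerable version range.
cna_fragment = {
    'cpeApplicability': [{
        'nodes': [{
            'operator': 'OR',
            'negate': False,
            'cpeMatch': [{
                'vulnerable': True,
                'criteria': 'cpe:2.3:a:examplevendor:exampleapp:*:*:*:*:*:*:*:*',
                'versionStartIncluding': '2.0.0',
                'versionEndExcluding': '2.4.1',
            }],
        }],
    }],
}

# Running the check above: _has_cpe_applicability(cna_fragment) returns True.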
10 points Patch Information - Technical Implementation
Schema Field Paths Checked:
containers.cna.references[].tags (array of strings)
Validation Logic:
Binary Scoring: Either 10 points or 0 points
Tag Matching: Case-insensitive search for "patch" in any tag string
Search Process: Iterates through all references → tags arrays
Code Implementation:
def _calculate_actionable_intelligence(cve, cna):
    references = cna.get('references', [])
    if not isinstance(references, list):
        return 0
    has_patch_ref = _has_patch_references(references)
    return 10 if has_patch_ref else 0

def _has_patch_references(references):
    for ref in references:
        if not isinstance(ref, dict):
            continue
        tags = ref.get('tags', [])
        if not isinstance(tags, list):
            continue
        # Check for patch-related tags (case-insensitive)
        for tag in tags:
            if isinstance(tag, str) and 'patch' in tag.lower():
                return True
    return False
Troubleshooting
Issue: Missing software identification points. Check: Verify that at least one affected product has a non-empty cpes array, or that the cpeApplicability field contains valid CPE match data with criteria strings.
Issue: Missing patch information points. Check: Ensure that at least one reference includes "patch" in its tags array.
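To make these checks concrete, here is a minimal record (in Python dict form) that would earn both sets of points, assuming the scoring functions from the implementation sections above are in scope. All identifiers, versions, and URLs are invented.
# Minimal CNA container that passes both checks above (invented values).
cna = {
    'affected': [{
        'vendor': 'examplevendor',
        'product': 'exampleapp',
        'cpes': ['cpe:2.3:a:examplevendor:exampleapp:2.4.0:*:*:*:*:*:*:*'],
    }],
    'references': [{
        'url': 'https://example.com/advisories/2024-001',
        'tags': ['patch', 'vendor-advisory'],
    }],
}

print(_calculate_software_identification(None, cna))   # 10
print(_calculate_actionable_intelligence(None, cna))   # 10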
Grading Tiers
Scores are calculated as a percentage of total possible points (100), then mapped to letter grades for easy interpretation:
Perfect (100%): All essential data fields are complete and accurate.
Great (>75%): Most critical information is provided, with minor gaps.
Good (>50%): Basic requirements met, with room for improvement.
Needs Work (<50%): Significant gaps in essential vulnerability data.
Missing Data (0%): No scoring data available for this CNA.
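A minimal sketch of this percentage-to-grade mapping, following the strict inequalities in the tiers above; the has_data flag used to distinguish the Missing Data tier is an assumption.
# Sketch of the grade mapping described by the tiers above.
def letter_grade(score, has_data=True):
    if not has_data:
        return 'Missing Data'
    if score == 100:
        return 'Perfect'
    if score > 75:
        return 'Great'
    if score > 50:
        return 'Good'
    return 'Needs Work'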
Performance Trends Analysis
Beyond point-in-time scoring, the CNA Scorecard tracks performance trends over time using rolling 7-day averages
to identify patterns and improvements in vulnerability data quality.
Rolling 7-Day Averages
What it tracks: Daily average scores for each scoring category calculated over sliding 7-day windows, updated daily for the past 6 months.
Why it matters: Smooths out daily fluctuations to reveal genuine trends in data quality improvement or decline. A single bad day doesn't define overall performance.
How it works: Each day's score represents the average of the previous 7 days, creating a smooth trend line that highlights meaningful patterns over noise.
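A minimal sketch of the rolling average, assuming a chronologically ordered list of daily scores; averaging partial windows during the first six days is an assumption for illustration.
# Sketch of a rolling 7-day average over chronologically ordered daily scores.
def rolling_7day_average(daily_scores):
    averages = []
    for i in range(len(daily_scores)):
        # Average the current day and up to six preceding days
        window = daily_scores[max(0, i - 6):i + 1]
        averages.append(sum(window) / len(window))
    return averages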
Top Improving CNAs
What it identifies: CNAs showing the most significant improvement by comparing their earliest versus most recent 7-day performance averages over a 6-month analysis period.
Why it's valuable: Recognizes CNAs making genuine efforts to improve their vulnerability reporting practices and helps identify best practices worth emulating.
How it's calculated: Improvement score equals the difference between the most recent 7-day average and the earliest 7-day average for each CNA with sufficient data.
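A sketch of the improvement calculation, reusing the rolling_7day_average helper from the previous sketch; the minimum-data threshold is an assumption.
# Sketch of the improvement score: most recent 7-day average minus the
# earliest full 7-day average. The 14-day minimum is an assumed
# sufficiency threshold.
def improvement_score(daily_scores, min_days=14):
    if len(daily_scores) < min_days:
        return None  # insufficient data for a meaningful comparison
    averages = rolling_7day_average(daily_scores)
    earliest_full = averages[6]   # first window covering a full 7 days
    latest = averages[-1]
    return latest - earliest_full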
Trend Visualization
What it shows: Interactive line charts displaying rolling averages for Root Cause Analysis (CWE), Severity & Impact (CVSS), Software Identification (CPE), and Patch Information categories.
Why separate charts: Each scoring category has different typical ranges and improvement patterns, requiring dedicated visualization for meaningful analysis.
How it adapts: Charts use auto-scaling Y-axes that adjust to actual data ranges, ensuring trend visibility whether scores are high-performing or need improvement.
Badges
CNAs can display their CNA Scorecard rating on their website, README, or documentation using automatically generated SVG badges.
Badges are regenerated every time the pipeline runs, so they always reflect the latest score and rank.
Each badge is color-coded by score (green for 90-100%, blue for 70-79%, yellow for 60-69%, red below 60%)
to provide quick visual feedback on performance.
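A sketch of the score-to-color mapping described above. Note that the description does not assign a color to the 80-89% range, so the fallback in this sketch is an assumption.
# Sketch of the badge color mapping described above.
def badge_color(score):
    if score >= 90:
        return 'green'
    if 70 <= score < 80:
        return 'blue'
    if 60 <= score < 70:
        return 'yellow'
    if score < 60:
        return 'red'
    return 'green'  # assumption: 80-89% is unspecified in the description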
Transparency & Open Standards
The CNA Scorecard is committed to complete transparency in methodology. Every aspect of the scoring system is based on
established industry standards and open specifications. The scoring logic is fully documented and publicly
available, ensuring that CNAs and security professionals can understand exactly how scores are calculated.
All scoring algorithms, data processing methods, and evaluation criteria are open source and available for
review in the project repository.
Feedback and contributions from the security community are welcome to continuously improve the methodology.