EAS Scoring Methodology

Enhanced Aggregate Scoring (EAS) is a comprehensive framework designed to assess the quality and completeness of CVE records. By evaluating each record across five critical dimensions, EAS produces a standardized score (up to 100 points) that reflects how actionable, precise, and useful a CVE record is for security teams and automated tools. Every CVE is published by a CVE Numbering Authority (CNA), and this methodology measures and compares CNA performance.

🔧 Technical Implementation

The EAS scoring algorithm is implemented in Python and processes CVE data from the official CVEProject/cvelistV5 repository. The core scoring logic is contained in:

📄 cnascorecard/eas_scorer.py
📄 cnascorecard/main.py
📄 cnascorecard/data_ingestor.py

1. Foundational Completeness (32 points)

Measures the presence of basic, essential information needed to understand and act on a vulnerability.

  • Description Quality (15 points): Advanced content analysis evaluating technical depth, specificity, and clarity
  • Affected Products (10 points): Clear identification of affected products
  • Version Information (5 points): Specific version ranges or status information
  • Language Tag & Structured Data (2 points): Proper language tags and structured product data

๐Ÿ” Description Quality Algorithm

The description quality scoring uses a multi-dimensional analysis based on 9,435 CVE descriptions:

  • Length & Structure: Progressive scoring for descriptions ≥50, ≥100, ≥200 characters
  • Technical Vulnerability Types: Detection of 47 specific vulnerability patterns (SQL injection, XSS, buffer overflow, etc.)
  • Impact/Exploitation Context: 36 exploitation indicators ("leads to", "execute arbitrary", "allows", "bypass")
  • Technical Specificity: 52 technical depth indicators ("function", "parameter", "API", "authentication mechanism")
  • Generic Content Penalty: -2 points for 12 generic phrases in short descriptions
📄 View Description Scoring Implementation
Example: A CVE that specifies "Apache HTTP Server versions 2.4.0 through 2.4.52" with a detailed description like "A buffer overflow vulnerability in the mod_rewrite module allows remote attackers to execute arbitrary code via crafted HTTP requests when processing malformed URL patterns" would earn 30 of the category's 32 points (15 + 10 + 5); the remaining 2 require language tags and structured product data.
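The sketch below illustrates how a heuristic of this shape can be assembled. The pattern lists and point weights are deliberately tiny placeholders, not the values used in cnascorecard/eas_scorer.py (which draws on the far larger lists enumerated above):

```python
# Illustrative sketch of a multi-dimensional description-quality heuristic.
# All pattern lists and weights below are hypothetical stand-ins for the much
# larger production lists (47 vulnerability types, 36 impact indicators, ...).
VULN_PATTERNS = ("sql injection", "cross-site scripting", "buffer overflow")
IMPACT_PATTERNS = ("leads to", "execute arbitrary", "allows", "bypass")
SPECIFICITY_PATTERNS = ("function", "parameter", "api", "authentication mechanism")
GENERIC_PHRASES = ("security issue", "unspecified vulnerability")

def score_description(text: str) -> int:
    """Return a 0-15 description-quality score (illustrative weights)."""
    lowered = text.lower()
    score = 0
    # Length & structure: progressive credit at >=50, >=100, >=200 characters.
    score += sum(2 for threshold in (50, 100, 200) if len(text) >= threshold)
    # One fixed credit per dimension that matches at least once.
    for patterns in (VULN_PATTERNS, IMPACT_PATTERNS, SPECIFICITY_PATTERNS):
        if any(p in lowered for p in patterns):
            score += 3
    # Generic-content penalty applies only to short, boilerplate descriptions.
    if len(text) < 100 and any(p in lowered for p in GENERIC_PHRASES):
        score -= 2
    return max(0, min(score, 15))
```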

2. Root Cause Analysis (12 points)

Evaluates whether the CVE provides insight into the underlying weakness type.

  • CWE ID Provided & Valid (11 points): Valid CWE identifier (e.g., CWE-79, CWE-120)
  • CWE Format Precision (1 point): Correct CWE-ID format (e.g., CWE-79, not CWE: 79)
Implementation Note: CWE validation uses the official MITRE CWE catalog. The system checks for proper CWE-XXX format and validates against the current CWE database.
Example: A CVE that includes "CWE-787: Out-of-bounds Write" provides developers with the specific weakness pattern to look for.
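In outline, that check can be expressed as below. The known-ID set is a tiny hypothetical stand-in for the MITRE catalog, and the handling of loose forms like "CWE: 79" is an assumption about how imprecise entries are treated:

```python
import re

# Hypothetical stand-in for the full MITRE CWE catalog used in production.
KNOWN_CWE_IDS = {"CWE-79", "CWE-120", "CWE-787"}
CWE_FORMAT = re.compile(r"^CWE-\d{1,4}$")

def score_cwe(raw: str) -> int:
    """Return 0-12 points for the Root Cause Analysis category (sketch)."""
    # Accept loose forms such as "CWE: 79" when testing validity...
    match = re.search(r"CWE[-:\s]*(\d{1,4})", raw)
    if not match or f"CWE-{match.group(1)}" not in KNOWN_CWE_IDS:
        return 0
    points = 11
    # ...but award the precision point only for the exact CWE-XXX format.
    if CWE_FORMAT.match(raw.strip()):
        points += 1
    return points

print(score_cwe("CWE-787"))   # 12
print(score_cwe("CWE: 787"))  # 11 (valid weakness, imprecise format)
```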

3. Software Identification (12 points)

Assesses whether the CVE record includes a valid CPE identifier for affected products, enabling precise software identification and automation.

  • CPE Present & Valid (11 points): Valid CPE identifier (e.g., cpe:2.3:a:apache:http_server:2.4.52:*)
  • CPE Format Precision (1 point): Correct CPE 2.3 formatting

๐Ÿ—๏ธ CPE Validation

CPE validation uses the python-cpe library to ensure compliance with the NIST IR 7695 specification.

The system validates:

  • CPE 2.3 format structure
  • Proper URI encoding
  • Valid component values
Example: Including "cpe:2.3:a:apache:http_server:2.4.52:*:*:*:*:*:*:*" enables automated vulnerability scanning tools to identify affected systems.
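A simplified structural check conveys the idea; the production path defers to the python-cpe library for full NIST IR 7695 validation, and this sketch deliberately ignores escaped colons inside components:

```python
def is_wellformed_cpe23(cpe_string: str) -> bool:
    """Rough shape check for a CPE 2.3 formatted string (illustrative only)."""
    if not cpe_string.startswith("cpe:2.3:"):
        return False
    # A CPE 2.3 formatted string has 13 colon-separated components:
    # cpe:2.3:part:vendor:product:version:update:edition:language:
    #         sw_edition:target_sw:target_hw:other
    components = cpe_string.split(":")  # naive: ignores escaped "\:" sequences
    if len(components) != 13:
        return False
    # The "part" component must be a (application), o (OS), or h (hardware).
    return components[2] in {"a", "o", "h"}

print(is_wellformed_cpe23("cpe:2.3:a:apache:http_server:2.4.52:*:*:*:*:*:*:*"))  # True
```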

4. Severity & Impact Context (27 points)

Assesses the quality and completeness of severity scoring information.

  • CVSS Base Score (15 points): CVSS v4.0/v3.1/v3.0 base score provided
  • CVSS Vector String & Valid (6 points): Complete and valid CVSS vector string for reproducibility
  • Impact Description (5 points): Description includes impact indicators
  • CVSS Format Precision (1 point): Correct CVSS vector format and values

📊 CVSS Validation

CVSS scoring validation supports multiple versions and uses the python-cvss library:

  • CVSS v4.0: Latest specification with enhanced metrics
  • CVSS v3.1: Current industry standard
  • CVSS v3.0: Previous generation support
  • CVSS v2.0: Legacy support for older CVEs

The system validates both base scores (0.0-10.0) and vector strings for mathematical consistency.

Example: A CVE with a CVSS v3.1 base score of 9.8, the complete vector string "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", and a description mentioning "remote code execution" would score the full 27 points (15 for the base score, 6 for the complete and valid vector, 5 for the impact indicator, and 1 for format precision).
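A rough consistency check along these lines can be built on the python-cvss library; the function below and its rounding tolerance are a sketch under that assumption, not the production code:

```python
from cvss import CVSS3  # python-cvss; its CVSS2/CVSS4 classes cover other versions

def is_consistent_cvss3(vector: str, claimed_base: float) -> bool:
    """Check that a CVSS v3.x vector parses and reproduces the claimed base score."""
    try:
        parsed = CVSS3(vector)
    except Exception:  # the library raises malformed-vector errors on bad input
        return False
    recomputed = float(parsed.scores()[0])  # scores() -> (base, temporal, environmental)
    return 0.0 <= claimed_base <= 10.0 and abs(recomputed - claimed_base) < 0.05

# The example above passes: recomputing the vector yields a 9.8 base score.
print(is_consistent_cvss3("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", 9.8))
```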

5. Actionable Intelligence (20 points)

Measures the availability of information that enables immediate action by security teams.

  • Solution Information (8 points): Available fixes, patches, or mitigations
  • Actionable References (6 points): Links to patches, advisories, or security guidance
  • Workarounds (2 points): Temporary mitigation steps
  • Detailed Solution (4 points): Solution or fix description is detailed (>100 characters)

🎯 Reference Classification

The system automatically classifies references by analyzing URLs and content:

  • Vendor Advisories: Official security bulletins
  • Patch References: Direct links to fixes or updates
  • Technical Analysis: Security researcher findings
  • Exploit References: Excluded from scoring to avoid incentivizing exploit disclosure
📄 View Reference Analysis Code
Example: A CVE with a vendor advisory, a security researcher blog post, a patch commit, and a detailed solution description would score 18 of the 20 points (8 + 6 + 4); documenting a workaround would add the remaining 2. No points are given for published exploits.
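A skeletal version of such a classifier might look like the following; the keyword lists and the two-points-per-category weighting are illustrative placeholders for the linked reference-analysis code:

```python
# Hypothetical URL keyword lists; the production lists are more extensive.
REFERENCE_PATTERNS = {
    "vendor_advisory": ("security/advisories", "bulletin", "msrc.microsoft.com"),
    "patch": ("commit", "/pull/", "patch"),
    "technical_analysis": ("blog", "research", "writeup"),
    "exploit": ("exploit-db.com", "metasploit", "poc"),
}

def classify_reference(url: str) -> str:
    lowered = url.lower()
    for category, needles in REFERENCE_PATTERNS.items():
        if any(needle in lowered for needle in needles):
            return category
    return "other"

def score_references(urls: list[str]) -> int:
    """Award up to 6 points for actionable reference types; exploits earn nothing."""
    found = {classify_reference(u) for u in urls}
    actionable = found & {"vendor_advisory", "patch", "technical_analysis"}
    return min(2 * len(actionable), 6)
```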

Data Processing Pipeline

🔄 Automated Processing

The CNA ScoreCard system operates through a fully automated pipeline:

  1. Data Ingestion: Fetches latest CVE data from CVEProject/cvelistV5 every 6 hours
  2. CVE Processing: Parses and validates CVE records using the CVE 5.0 schema
  3. Scoring: Applies EAS methodology to each CVE record
  4. CNA Aggregation: Calculates CNA-level statistics and rankings
  5. Static Generation: Produces JSON data files and HTML pages
  6. Deployment: Updates GitHub Pages site automatically

Key Components:

📄 scripts/build.py
📄 scripts/generate_dashboard.py
📄 cnascorecard/generate_static_data.py
📄 .github/workflows/main.yml

Aggregation and Ranking

CNA scores are calculated by:

  1. Individual CVE Scoring: Each CVE receives a score from 0-100 based on the metrics above
  2. CNA Average: The arithmetic mean of the EAS scores of all CVEs published by the CNA in the last 6 months
  3. Minimum Threshold: CNAs must have published at least 1 CVE to receive a score
  4. Inactive CNAs: CNAs with no recent publications are marked as "No CVEs published in the last 6 months"
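In code, this aggregation and the "Rank: N of M" display described below reduce to a few lines. The input shape (a CNA name mapped to the EAS scores of its CVEs from the last 6 months) is a hypothetical simplification of the real data files:

```python
from statistics import mean

def cna_averages(scores_by_cna: dict[str, list[float]]) -> dict[str, float | None]:
    """Mean EAS per CNA; a CNA with no CVEs in the window is marked inactive (None)."""
    return {cna: (mean(s) if s else None) for cna, s in scores_by_cna.items()}

def rank_label(cna: str, averages: dict[str, float | None]) -> str:
    """Render the 'Rank: N of M' string among active CNAs (assumes cna is active)."""
    active = sorted((v for v in averages.values() if v is not None), reverse=True)
    return f"Rank: {active.index(averages[cna]) + 1} of {len(active)}"

averages = cna_averages({"alpha": [80.0, 90.0], "beta": [70.0], "gamma": []})
print(rank_label("beta", averages))  # Rank: 2 of 2
```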

📈 Statistical Analysis

The system maintains comprehensive statistics for each CNA:

  • Score Distribution: Percentile rankings among all active CNAs
  • Temporal Trends: Score changes over time
  • Component Breakdowns: Performance across individual scoring dimensions
  • Volume Metrics: CVE publication frequency and patterns

Data is stored in JSON format for easy consumption by visualization tools and APIs.

Data Freshness: Scores are updated every 6 hours using the latest CVE data from the official CVEProject repository.
Note: This project and scoring methodology were inspired by the CNA Enrichment Recognition program.
Related Work: For additional research on CVE performance measurement, see "Measuring CVE Performance" by Ben Edwards (BitSight), which provides complementary analysis of CNA effectiveness and vulnerability disclosure quality.

Ranking

Ranking shows a CNA's position among all active CNAs based on their average EAS score. For example, "Rank: 12 of 150" means this CNA is 12th out of 150 active CNAs in the last 6 months.

Why This Matters

Higher EAS scores indicate CVE records that are:

  • Actionable: security teams can triage and remediate without hunting down missing details
  • Precise: affected products, versions, and weakness types are identified unambiguously
  • Automation-ready: valid CPE and CVSS data can be consumed directly by scanning and risk tools

🧪 Testing and Validation

The scoring system includes comprehensive testing:

📄 tests/test_data_structure.py
📄 tests/test_integration.py
📄 tests/test_quick.py

For detailed analysis of the description quality algorithm, see the testing framework in the tests/ directory.