tianguaduizhang CVE Vulnerabilities & Metrics

Focus on tianguaduizhang vulnerabilities and metrics.

Last updated: 16 Apr 2026, 22:25 UTC

About tianguaduizhang Security Exposure

This page consolidates all known Common Vulnerabilities and Exposures (CVEs) associated with tianguaduizhang. We track both calendar-based metrics (using fixed periods) and rolling metrics (using gliding windows) to give you a comprehensive view of security trends and risk evolution. Use these insights to assess risk and plan your patching strategy.

For a broader perspective on cybersecurity threats, explore the comprehensive list of CVEs by vendor and product. Stay updated on critical vulnerabilities affecting major software and hardware providers.

Global CVE Overview

Total tianguaduizhang CVEs: 1
Earliest CVE date: 27 Mar 2026, 15:16 UTC
Latest CVE date: 27 Mar 2026, 15:16 UTC

Latest CVE reference: CVE-2026-30304

Rolling Stats

30-day Count (Rolling): 1
365-day Count (Rolling): 1

Calendar-based Variation

Calendar-based Variation compares a fixed calendar period (e.g., this month versus the same month last year), while Rolling Growth Rate uses a continuous window (e.g., last 30 days versus the previous 30 days) to capture trends independent of calendar boundaries.

Variations & Growth

Month Variation (Calendar): 0%
Year Variation (Calendar): 0%

Month Growth Rate (30-day Rolling): 0.0%
Year Growth Rate (365-day Rolling): 0.0%
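As an illustrative sketch of how the two metric families above differ (function names and the convention of reporting 0% when the baseline period is empty are our assumptions, not this site's actual code), the helpers below compute a calendar-based month variation and a 30-day rolling growth rate from a list of CVE publication dates:

```python
from datetime import date, timedelta

def month_variation_calendar(cve_dates, today):
    """Calendar metric: this month's count vs. the same month last year."""
    this_month = sum(1 for d in cve_dates
                     if d.year == today.year and d.month == today.month)
    baseline = sum(1 for d in cve_dates
                   if d.year == today.year - 1 and d.month == today.month)
    if baseline == 0:
        return 0.0  # no baseline: report 0% rather than divide by zero
    return (this_month - baseline) / baseline * 100

def growth_rate_rolling(cve_dates, today, window_days=30):
    """Rolling metric: last `window_days` vs. the `window_days` before that."""
    current = sum(1 for d in cve_dates
                  if today - timedelta(days=window_days) < d <= today)
    previous = sum(1 for d in cve_dates
                   if today - timedelta(days=2 * window_days) < d
                   <= today - timedelta(days=window_days))
    if previous == 0:
        return 0.0
    return (current - previous) / previous * 100

# The single CVE on this page, evaluated at the page's update date:
dates = [date(2026, 3, 27)]
print(month_variation_calendar(dates, date(2026, 4, 16)))  # 0.0 (empty baseline)
print(growth_rate_rolling(dates, date(2026, 4, 16)))       # 0.0 (empty baseline)
```

Note how the rolling window slides with the evaluation date, while the calendar comparison is pinned to month boundaries, so the two can diverge when CVEs cluster near the start or end of a month.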

Monthly CVE Trends (current vs previous Year)

Annual CVE Trends (Last 20 Years)

Critical tianguaduizhang CVEs (CVSS ≥ 9) Over 20 Years

CVSS Stats

Average CVSS: 0.0

Max CVSS: 0

Critical CVEs (≥9): 0

CVSS Range vs. Count

Range Count
0.0-3.9 1
4.0-6.9 0
7.0-8.9 0
9.0-10.0 0
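The range buckets above use inclusive bounds on both ends. A minimal sketch of the bucketing (our own illustration, not the site's implementation):

```python
# CVSS severity buckets as used in the table above (bounds inclusive).
RANGES = [(0.0, 3.9), (4.0, 6.9), (7.0, 8.9), (9.0, 10.0)]

def bucket_counts(scores):
    """Count how many CVSS scores fall into each range."""
    counts = {f"{lo}-{hi}": 0 for lo, hi in RANGES}
    for s in scores:
        for lo, hi in RANGES:
            if lo <= s <= hi:
                counts[f"{lo}-{hi}"] += 1
                break
    return counts

# This page's single CVE has CVSS 0, which lands in 0.0-3.9:
print(bucket_counts([0.0]))
```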

CVSS Distribution Chart

Top 5 Highest CVSS tianguaduizhang CVEs

These are the five CVEs with the highest CVSS scores for tianguaduizhang, sorted first by severity, then by recency.

All CVEs for tianguaduizhang

CVE-2026-30304 tianguaduizhang vulnerability CVSS: 0 27 Mar 2026, 15:16 UTC

In its design for automatic terminal command execution, AI Code offers two options: 'Execute safe commands' and 'Execute all commands'. The description of the former states that commands the model determines to be safe are executed automatically, while commands the model judges potentially destructive still require user approval. This design, however, is highly susceptible to prompt injection: an attacker can wrap an arbitrary malicious command in a generic template that misleads the model into classifying it as 'safe', bypassing the user-approval requirement and resulting in arbitrary command execution.
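One common hardening direction, sketched below purely for illustration (the function, allowlist, and metacharacter set are our assumptions, not part of AI Code), is to gate auto-execution on a deterministic allowlist rather than a model's safety judgment, since a deterministic check cannot be talked out of its decision by injected prompt text:

```python
import shlex

# Hypothetical allowlist of read-only programs eligible for auto-execution.
READ_ONLY = {"ls", "cat", "pwd", "grep", "head", "tail", "wc"}
# Shell metacharacters that could chain or redirect extra commands.
SHELL_META = set(";|&`$><")

def requires_approval(command: str) -> bool:
    """Return True unless the command is a single allowlisted program
    with no shell metacharacters. Fails closed on parse errors."""
    if any(ch in SHELL_META for ch in command):
        return True
    try:
        argv = shlex.split(command)
    except ValueError:
        return True  # unbalanced quotes etc.: require approval
    return not argv or argv[0] not in READ_ONLY

print(requires_approval("ls -la"))                   # False: auto-execute
print(requires_approval("rm -rf /"))                 # True: needs approval
print(requires_approval("cat notes.txt; rm -rf /"))  # True: ';' chains a command
```

Unlike a model-based classifier, this check treats the command string as untrusted input, so a "generic template" wrapped around a malicious command cannot change the verdict.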