An at-a-glance view of keras vulnerabilities and security metrics.
Last updated: 25 Nov 2025, 23:25 UTC
This page consolidates all known Common Vulnerabilities and Exposures (CVEs) associated with keras. We track both calendar-based metrics (using fixed periods) and rolling metrics (using sliding windows) to give you a comprehensive view of security trends and risk evolution. Use these insights to assess risk and plan your patching strategy.
Total keras CVEs: 6
Earliest CVE date: 16 Apr 2024, 21:15 UTC
Latest CVE date: 19 Sep 2025, 09:15 UTC
Latest CVE reference: CVE-2025-9906
30-day Count (Rolling): 0
365-day Count (Rolling): 5
Calendar-based Variation
Calendar-based Variation compares a fixed calendar period (e.g., this month versus the same month last year), while Rolling Growth Rate uses a continuous window (e.g., last 30 days versus the previous 30 days) to capture trends independent of calendar boundaries.
Month Variation (Calendar): 0%
Year Variation (Calendar): 400.0%
Month Growth Rate (30-day Rolling): 0.0%
Year Growth Rate (365-day Rolling): 400.0%
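To make the distinction above concrete, here is a minimal Python sketch (illustrative only, not the code behind this page) that counts CVE publication dates in a rolling 30-day window versus a fixed calendar month:

```python
from datetime import datetime, timedelta, timezone

# Illustrative subset of publication timestamps, taken from the figures above.
cve_dates = [
    datetime(2025, 9, 19, 9, 15, tzinfo=timezone.utc),   # latest keras CVE
    datetime(2024, 4, 16, 21, 15, tzinfo=timezone.utc),  # earliest keras CVE
]

now = datetime.now(timezone.utc)

# Rolling 30-day count: a continuous window ending right now.
rolling_30d = sum(1 for d in cve_dates if now - timedelta(days=30) <= d <= now)

# Calendar-based counts: fixed month boundaries, this month vs. the same month last year.
this_month = sum(1 for d in cve_dates if (d.year, d.month) == (now.year, now.month))
same_month_last_year = sum(1 for d in cve_dates if (d.year, d.month) == (now.year - 1, now.month))

print(f"30-day rolling count: {rolling_30d}")
print(f"Calendar month count: {this_month} (vs. {same_month_last_year} a year earlier)")
```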
Average CVSS: 0.0
Max CVSS: 0.0
Critical CVEs (≥9): 0
| CVSS Range | CVE Count |
|---|---|
| 0.0-3.9 | 6 |
| 4.0-6.9 | 0 |
| 7.0-8.9 | 0 |
| 9.0-10.0 | 0 |
Below are the keras CVEs with the highest CVSS scores, sorted by severity first, then by recency.
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. One can create a specially crafted .keras model archive that, when loaded via Model.load_model, will trigger arbitrary code to be executed. This is achieved by crafting a special config.json (a file within the .keras archive) that invokes keras.config.enable_unsafe_deserialization() to disable safe mode. Once safe mode is disabled, one can use the Lambda layer feature of keras, which allows arbitrary Python code in the form of pickled code. Both can appear in the same archive: the call to keras.config.enable_unsafe_deserialization() simply needs to appear first, and the Lambda layer carrying the arbitrary code second.
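Because a .keras archive is a standard ZIP file containing a config.json, applications can inspect that config before passing an untrusted file to Model.load_model. The following is a minimal defensive sketch, assuming a simple string-based heuristic (the marker list is an assumption drawn from the description above, not an exhaustive detector):

```python
import zipfile

# Heuristic markers drawn from the vulnerability description above; not exhaustive.
SUSPICIOUS_MARKERS = ("enable_unsafe_deserialization", "Lambda")

def inspect_keras_archive(path: str) -> None:
    """Raise if the archive's config.json mentions a suspicious marker."""
    with zipfile.ZipFile(path) as archive:
        config_text = archive.read("config.json").decode("utf-8")
    for marker in SUSPICIOUS_MARKERS:
        if marker in config_text:
            raise ValueError(f"Refusing to load {path!r}: config.json mentions {marker!r}")

# Usage sketch: inspect first, then load with safe mode left enabled.
# inspect_keras_archive("model.keras")
# model = keras.models.load_model("model.keras", safe_mode=True)
```

Rejecting such archives outright is cruder than parsing the config, but it avoids running any deserialization logic on untrusted input.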
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. One can create a specially crafted .h5/.hdf5 model archive that, when loaded via Model.load_model, will trigger arbitrary code to be executed. This is achieved by crafting a special .h5 archive that uses the Lambda layer feature of keras, which allows arbitrary Python code in the form of pickled code. The vulnerability stems from the fact that the safe_mode=True option is not honored when reading .h5 archives. Note that the .h5/.hdf5 format is a legacy format supported by Keras 3 for backwards compatibility.
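For the legacy HDF5 format, Keras stores the serialized architecture as a JSON string in the file's model_config attribute, so a similar pre-load check is possible. Below is a sketch assuming h5py is available and that refusing Lambda layers is an acceptable policy for your models (the exact nesting of the config can vary across Keras versions):

```python
import json
import h5py

def h5_contains_lambda_layer(path: str) -> bool:
    """Return True if a legacy .h5/.hdf5 model config declares a Lambda layer."""
    with h5py.File(path, "r") as f:
        raw = f.attrs.get("model_config")
    if raw is None:
        return False  # no embedded Keras config found
    if isinstance(raw, bytes):
        raw = raw.decode("utf-8")
    config = json.loads(raw)
    # The layer list typically sits under config["config"]["layers"];
    # this nesting can differ in older Keras releases.
    layers = config.get("config", {}).get("layers", [])
    return any(layer.get("class_name") == "Lambda" for layer in layers)

# Usage sketch:
# if h5_contains_lambda_layer("legacy_model.h5"):
#     raise ValueError("Refusing to load: model embeds a Lambda layer")
```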
A safe mode bypass vulnerability in the `Model.load_model` method in Keras versions 3.0.0 through 3.10.0 allows an attacker to achieve arbitrary code execution by convincing a user to load a specially crafted `.keras` model archive.
The Keras Model.load_model function permits arbitrary code execution, even with safe_mode=True, through a manually constructed, malicious .keras archive. By altering the config.json file within the archive, an attacker can specify arbitrary Python modules and functions, along with their arguments, to be loaded and executed during model loading.
An issue in keras 3.7.0 allows attackers to write arbitrary files to the user's machine when a crafted tar file is downloaded through the get_file function.
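One way to reduce exposure when fetching remote archives with get_file is to pin the expected file hash and skip automatic extraction of untrusted tars. A minimal sketch, assuming a hypothetical URL and checksum:

```python
from keras.utils import get_file

# Hypothetical URL and SHA-256 value; replace with artifacts you control and trust.
WEIGHTS_URL = "https://example.com/weights.tar.gz"
WEIGHTS_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

# file_hash makes get_file reject a download whose contents do not match,
# and extract=False avoids automatically unpacking an untrusted tar archive.
path = get_file(
    fname="weights.tar.gz",
    origin=WEIGHTS_URL,
    file_hash=WEIGHTS_SHA256,
    extract=False,
)
```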
An arbitrary code injection vulnerability in TensorFlow's Keras framework (versions before 2.13) allows attackers to execute arbitrary code with the same permissions as the application, via a crafted model that permits arbitrary code execution regardless of how the application uses it.