Interactive Playground
Experiment with the vulnerable code and security rule below. Edit the code to see how the rule detects different vulnerability patterns.
pathfinder scan --ruleset python/PYTHON-LANG-SEC-001 --project .

About This Rule
Understanding the vulnerability and how it is detected
Python's built-in eval() function evaluates an arbitrary string as a Python expression and returns its result. When the string argument is derived from untrusted user input such as query parameters, request bodies, environment variables, or file contents, an attacker can craft an expression that executes system commands, reads sensitive files, spawns reverse shells, or accesses any installed module via __import__().
Unlike exec(), eval() is restricted to expressions, but this provides almost no security boundary since an attacker can call __import__('os').system('cmd') or traverse __builtins__ to reach any callable. Safe alternatives include ast.literal_eval() for Python literals, json.loads() for structured data, and explicit allowlist dispatch for dynamic function calls.
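To make the contrast concrete, here is a minimal sketch of the safe alternatives mentioned above; the input string is illustrative:

```python
import ast
import json

untrusted = '{"role": "admin"}'  # imagine this arrived in a request body

# Safe: ast.literal_eval() only accepts Python literals (numbers, strings,
# lists, dicts, tuples, booleans, None) and raises ValueError on anything
# else, including function calls.
parsed_literal = ast.literal_eval("[1, 2, 3]")

# Safe: json.loads() for structured data from external sources.
parsed_json = json.loads(untrusted)

# The attack pattern from the text is rejected rather than executed:
try:
    ast.literal_eval("__import__('os').system('id')")
    blocked = False
except ValueError:
    blocked = True
```

With plain eval(), the last string would have executed a shell command instead of raising.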
Security Implications
Potential attack scenarios if this vulnerability is exploited
Remote Code Execution
An attacker controlling the eval() argument can execute arbitrary Python code in the context of the running process, including spawning OS commands, opening network connections, reading credentials from disk, and importing any installed module.
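The expression-only restriction is no barrier: a single expression can reach any installed module through __import__(). A harmless illustration, substituting math for os:

```python
# An attacker-controlled string that is syntactically just an expression,
# yet imports and runs code from an arbitrary module. math is used here
# instead of os to keep the example harmless.
payload = "__import__('math').factorial(5)"
result = eval(payload)
# With "__import__('os').system('cmd')" the same pattern spawns a shell.
```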
Data Exfiltration
eval() can access environment variables, configuration files, and in-memory secrets. Attackers can encode exfiltrated data in the return value or send it over a network connection opened within the evaluated expression.
Privilege Escalation
If the Python process runs with elevated privileges such as root, broad IAM permissions, or a privileged Kubernetes service account, eval() injection immediately grants the attacker those same privileges with no additional steps.
Persistent Backdoor Surface
eval() in configuration loaders, plugin systems, or template engines creates a persistent attack surface. An attacker influencing config files or environment variables can achieve persistent code execution without touching application source code.
How to Fix
Recommended remediation steps
1. Replace eval() with ast.literal_eval() when the input is expected to be a Python literal such as a number, string, list, or dictionary.
2. Use json.loads() or a schema validation library such as pydantic or marshmallow for structured data from external sources.
3. If dynamic dispatch is needed, use an explicit allowlist mapping names to callables rather than evaluating the name as code.
4. Audit all call sites of eval() and document why each one cannot use a safer alternative; treat any use on untrusted data as a critical finding.
5. Apply the principle of least privilege so that even if eval() is exploited, the process cannot access credentials or execute privileged operations.
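The allowlist-dispatch step can be sketched as follows; the operation names and helper are hypothetical:

```python
import operator

# Explicit allowlist: map permitted operation names to callables instead
# of eval()-ing a name supplied by the caller. Unknown names fail closed.
ALLOWED_OPS = {
    "add": operator.add,
    "mul": operator.mul,
}

def dispatch(op_name: str, a: float, b: float) -> float:
    try:
        fn = ALLOWED_OPS[op_name]
    except KeyError:
        raise ValueError(f"operation not permitted: {op_name!r}")
    return fn(a, b)
```

dispatch("add", 2, 3) returns 5, while dispatch("__import__", 1, 2) raises ValueError instead of reaching arbitrary code.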
Detection Scope
How Code Pathfinder analyzes your code for this vulnerability
This rule detects all direct calls to the built-in eval() function anywhere in Python source code, matching both unqualified eval() and builtins.eval() forms. Every call site is flagged regardless of whether the argument is a literal or a variable, because static analysis cannot always determine the runtime origin of a value at the call site.
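For illustration, both call forms described above; assuming the matching behavior stated in the text, each line below would be reported even though the arguments are literals:

```python
import builtins

# Flagged: unqualified form
x = eval("1 + 1")

# Flagged: qualified form via the builtins module
y = builtins.eval("2 * 3")
```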
Compliance & Standards
Industry frameworks and regulations that require detection of this vulnerability
References
External resources and documentation
Similar Rules
Explore related security rules for Python
Dangerous exec() Usage Detected
exec() executes arbitrary Python statements from strings or code objects, enabling remote code execution when called with untrusted input.
Dangerous code.InteractiveConsole Usage
code.InteractiveConsole and code.interact() enable arbitrary Python code execution and should not be exposed to untrusted users.
Non-literal Dynamic Import Detected
__import__() or importlib.import_module() with a non-literal argument can import arbitrary modules when called with untrusted input.
Frequently Asked Questions
Common questions about Dangerous eval() Usage Detected
New feature
Get these findings posted directly on your GitHub pull requests
The Dangerous eval() Usage Detected rule runs in CI and posts inline review comments on the exact lines — no dashboard, no SARIF viewer.