Dangerous eval() Usage Detected

HIGH

eval() executes arbitrary Python expressions from strings, enabling remote code execution when called with untrusted input.

Rule Information

Language
Python
Category
Python Core
Author
Shivasurya
Last Updated
2026-03-22
Tags
python, eval, code-injection, dynamic-execution, builtins, CWE-95, OWASP-A03
CWE References
CWE-95 (Eval Injection)

Interactive Playground

Experiment with the vulnerable code and security rule below. Edit the code to see how the rule detects different vulnerability patterns.

pathfinder scan --ruleset python/PYTHON-LANG-SEC-001 --project .
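The playground's code sample did not survive extraction. A minimal sketch of the kind of vulnerable pattern this rule flags (the `calculate` helper is a hypothetical example, not the playground's actual code):

```python
def calculate(expression: str):
    """Evaluate a user-supplied arithmetic expression."""
    # BAD: if `expression` comes from an HTTP parameter, CLI argument,
    # or file, an attacker controls the code that runs here.
    return eval(expression)  # flagged by PYTHON-LANG-SEC-001

# Benign use:
print(calculate("1 + 2"))  # prints 3

# A payload an attacker could send instead (do not run):
# calculate("__import__('os').system('id')")
```

The benign and malicious calls are syntactically identical from `eval()`'s point of view, which is why the rule treats every call site as suspect.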

About This Rule

Understanding the vulnerability and how it is detected

Python's built-in eval() function evaluates an arbitrary string as a Python expression and returns its result. When the string argument is derived from untrusted user input such as query parameters, request bodies, environment variables, or file contents, an attacker can craft an expression that executes system commands, reads sensitive files, spawns reverse shells, or accesses any installed module via __import__().

Unlike exec(), eval() is restricted to expressions, but this provides almost no security boundary since an attacker can call __import__('os').system('cmd') or traverse __builtins__ to reach any callable. Safe alternatives include ast.literal_eval() for Python literals, json.loads() for structured data, and explicit allowlist dispatch for dynamic function calls.
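A sketch of the contrast, assuming simple config-style inputs: the safe alternatives parse data without executing it, while namespace restrictions on eval() should not be relied on at all.

```python
import ast
import json

# NOT a sandbox: eval(payload, {"__builtins__": {}}) has been bypassed
# repeatedly via dunder traversal, so it is shown here only as a warning.

# ast.literal_eval() parses Python literals without executing any code:
config = ast.literal_eval("{'retries': 3, 'hosts': ['a', 'b']}")

# json.loads() handles structured data from external sources:
settings = json.loads('{"retries": 3}')

print(config["retries"], settings["retries"])  # prints: 3 3
```

Both parsers raise an exception on anything that is not plain data, which turns an injection attempt into an ordinary error.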

Security Implications

Potential attack scenarios if this vulnerability is exploited

1. Remote Code Execution

An attacker controlling the eval() argument can execute arbitrary Python code in the context of the running process, including spawning OS commands, opening network connections, reading credentials from disk, and importing any installed module.

2. Data Exfiltration

eval() can access environment variables, configuration files, and in-memory secrets. Attackers can encode exfiltrated data in the return value or send it over a network connection opened within the evaluated expression.

3. Privilege Escalation

If the Python process runs with elevated privileges such as root, broad IAM permissions, or a privileged Kubernetes service account, eval() injection immediately grants the attacker those same privileges with no additional steps.

4. Persistent Backdoor Surface

eval() in configuration loaders, plugin systems, or template engines creates a persistent attack surface. An attacker influencing config files or environment variables can achieve persistent code execution without touching application source code.

How to Fix

Recommended remediation steps

1. Replace eval() with ast.literal_eval() when the input is expected to be a Python literal such as a number, string, list, or dictionary.
2. Use json.loads() or a schema validation library such as pydantic or marshmallow for structured data from external sources.
3. If dynamic dispatch is needed, use an explicit allowlist mapping names to callables rather than evaluating the name as code.
4. Audit all call sites of eval() and document why each one cannot use a safer alternative; treat any use on untrusted data as a critical finding.
5. Apply the principle of least privilege so that even if eval() is exploited, the process cannot access credentials or execute privileged operations.
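The allowlist dispatch from step 3 can be sketched like this; the action names and handlers are made up for illustration:

```python
def reload_config() -> str:
    return "config reloaded"

def flush_cache() -> str:
    return "cache flushed"

# Explicit mapping: only these names can ever be invoked.
ACTIONS = {
    "reload_config": reload_config,
    "flush_cache": flush_cache,
}

def dispatch(action_name: str) -> str:
    handler = ACTIONS.get(action_name)
    if handler is None:
        # Unknown names fail loudly instead of being evaluated as code.
        raise ValueError(f"unknown action: {action_name!r}")
    return handler()

print(dispatch("flush_cache"))  # prints: cache flushed
```

An injection payload like `"__import__('os').system('id')"` simply misses the dictionary lookup and raises ValueError, rather than executing.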

Detection Scope

How Code Pathfinder analyzes your code for this vulnerability

This rule detects all direct calls to the built-in eval() function anywhere in Python source code, matching both unqualified eval() and builtins.eval() forms. Every call site is flagged regardless of whether the argument is a literal or a variable, because static analysis cannot always determine the runtime origin of a value at the call site.
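For example, both spellings below would be reported, even though the argument here happens to be harmless:

```python
import builtins

expression = "1 + 2"  # static analysis cannot always prove this stays safe

print(eval(expression))           # flagged: unqualified built-in call
print(builtins.eval(expression))  # flagged: builtins-qualified call
```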

Compliance & Standards

Industry frameworks and regulations that require detection of this vulnerability

CWE Top 25
CWE-95 - Eval Injection in the MITRE CWE Top 25 Most Dangerous Software Weaknesses
OWASP Top 10
A03:2021 - Injection
NIST SP 800-53
SI-10: Information Input Validation
PCI DSS v4.0
Requirement 6.2.4 - Protect web-facing applications against injection attacks


Frequently Asked Questions

Common questions about Dangerous eval() Usage Detected

When is an eval() call actually dangerous?

eval() is dangerous whenever its argument can be influenced by an attacker. Trace the variable backward to its origin: if it can be set from HTTP parameters, file contents, environment variables, CLI arguments, or database values, it is untrusted. Literals hardcoded in source code are safer but should still use ast.literal_eval() to prevent future refactoring from accidentally introducing a taint path.
What can ast.literal_eval() safely parse?

ast.literal_eval() is safe for parsing Python literals: strings, bytes, numbers, tuples, lists, dicts, sets, booleans, and None. It raises ValueError or SyntaxError on anything else and does not execute function calls or attribute access. However, deeply nested structures can cause recursion errors, so validate input length before calling it in high-availability services.
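A quick demonstration of the accept/reject behaviour described above:

```python
import ast

# Literals parse fine:
value = ast.literal_eval("[1, 2, {'a': 3}]")
print(value)  # prints: [1, 2, {'a': 3}]

# Function calls and attribute access are rejected, not executed:
try:
    ast.literal_eval("__import__('os').system('id')")
except ValueError as exc:
    print("rejected:", exc)
```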
What if I need to evaluate user-supplied math or formula expressions?

Use a dedicated safe expression evaluator such as the simpleeval or asteval library. These parse the expression AST and allow only allowlisted operations. Never use eval() for this purpose even with restricted namespace arguments, as Python namespace sandboxes have been bypassed repeatedly.
Does the rule flag eval() even when the argument is a hardcoded literal?

Yes. The rule flags all eval() call sites to ensure no taint path is missed. If the argument is a hardcoded literal, the finding is low-risk but should still be refactored to ast.literal_eval() or a constant to make the intent explicit and eliminate future risk.
Can I suppress findings in test code or REPL utilities?

Test code and REPL utilities are still subject to injection if they process any external input such as test fixtures from files or user input in a debug console. Suppress findings only after confirming the eval() argument is never derived from external sources.
How does Code Pathfinder trace untrusted input to eval()?

Code Pathfinder tracks data flow from source functions such as HTTP request parameters, os.environ, file reads, and socket receives through variable assignments and function calls to the eval() sink. Inter-procedural analysis follows tainted values across function boundaries and file boundaries, catching patterns where input handling and eval() execution are separated across the codebase.
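A sketch of the cross-function pattern such taint tracking is designed to catch; the function and variable names are illustrative, not from the tool:

```python
import os

def read_threshold() -> str:
    # Source: an environment variable, attacker-influenced in many deployments.
    return os.environ.get("THRESHOLD_EXPR", "10")

def apply_threshold() -> int:
    expr = read_threshold()
    # Sink: the tainted value crosses a function boundary before reaching eval().
    return eval(expr)  # flagged even though the source lives in another function

print(apply_threshold())  # prints 10 when THRESHOLD_EXPR is unset
```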

Get these findings posted directly on your GitHub pull requests

The Dangerous eval() Usage Detected rule runs in CI and posts inline review comments on the exact lines, with no dashboard and no SARIF viewer.
