# PYTHON-LANG-SEC-001: Dangerous eval() Usage Detected

> **Severity:** HIGH | **CWE:** CWE-95 | **OWASP:** A03:2021

- **Language:** Python
- **Category:** Python Core
- **URL:** https://codepathfinder.dev/registry/python/lang/PYTHON-LANG-SEC-001
- **Detection:** `pathfinder scan --ruleset python/PYTHON-LANG-SEC-001 --project .`

## Description

Python's built-in eval() function evaluates an arbitrary string as a Python expression
and returns its result. When the string argument is derived from untrusted user input
such as query parameters, request bodies, environment variables, or file contents, an
attacker can craft an expression that executes system commands, reads sensitive files,
spawns reverse shells, or accesses any installed module via __import__().

Unlike exec(), eval() is restricted to expressions, but this provides almost no security
boundary since an attacker can call __import__('os').system('cmd') or traverse __builtins__
to reach any callable. Safe alternatives include ast.literal_eval() for Python literals,
json.loads() for structured data, and explicit allowlist dispatch for dynamic function calls.


## Vulnerable Code

```python
# SEC-001: eval (untrusted input is evaluated directly as Python code)
user_input = input("Enter expression: ")  # attacker-controlled
result = eval(user_input)  # e.g. "__import__('os').system('id')" spawns a shell
```

## Secure Code

```python
import ast
import json

# SECURE: Use ast.literal_eval() for parsing Python literals
def parse_config_value(raw_value: str):
    try:
        return ast.literal_eval(raw_value)
    except (ValueError, SyntaxError):
        return raw_value

# SECURE: Use json.loads() for structured data from external sources
def parse_user_data(raw_json: str):
    return json.loads(raw_json)

# SECURE: Use an allowlist for dynamic dispatch instead of eval()
ALLOWED_OPERATIONS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

def compute(op_name: str, a: float, b: float):
    if op_name not in ALLOWED_OPERATIONS:
        raise ValueError(f"Unknown operation: {op_name}")
    return ALLOWED_OPERATIONS[op_name](a, b)
```
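To illustrate the boundary ast.literal_eval() enforces, here is a small stdlib-only sketch showing it accepting a literal and refusing a call expression (the config keys are invented for the example):

```python
import ast

# ast.literal_eval() parses Python literals but refuses calls, names,
# and attribute access, raising ValueError instead of executing them.
config = ast.literal_eval("{'retries': 3, 'hosts': ['a', 'b']}")
print(config["retries"])  # → 3

try:
    ast.literal_eval("__import__('os').system('id')")
except ValueError:
    print("rejected non-literal input")
```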

## Detection Rule (Python SDK)

```python
from rules.python_decorators import python_rule
from codepathfinder import QueryType

class Builtins(QueryType):
    fqns = ["builtins"]


@python_rule(
    id="PYTHON-LANG-SEC-001",
    name="Dangerous eval() Detected",
    severity="HIGH",
    category="lang",
    cwe="CWE-95",
    tags="python,eval,code-injection,OWASP-A03,CWE-95",
    message="eval() detected. Avoid eval() on untrusted input. Use ast.literal_eval() for safe parsing.",
    owasp="A03:2021",
)
def detect_eval():
    """Detects eval() calls."""
    return Builtins.method("eval")
```

## How to Fix

- Replace eval() with ast.literal_eval() when the input is expected to be a Python literal such as a number, string, list, or dictionary.
- Use json.loads() or a schema validation library such as pydantic or marshmallow for structured data from external sources.
- If dynamic dispatch is needed, use an explicit allowlist mapping names to callables rather than evaluating the name as code.
- Audit all call sites of eval() and document why each one cannot use a safer alternative; treat any use on untrusted data as a critical finding.
- Apply the principle of least privilege so that even if eval() is exploited, the process cannot access credentials or execute privileged operations.
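As a minimal stand-in for a schema library such as pydantic, the sketch below parses external JSON with json.loads() and enforces the expected shape by hand (the `item`/`quantity` field names are hypothetical):

```python
import json

def parse_order(raw_json: str) -> dict:
    """Parse untrusted JSON and enforce a minimal schema by hand."""
    data = json.loads(raw_json)  # never eval() external data
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    if not isinstance(data.get("item"), str):
        raise ValueError("'item' must be a string")
    # type() check deliberately excludes bool, which is a subclass of int
    if type(data.get("quantity")) is not int or data["quantity"] < 1:
        raise ValueError("'quantity' must be a positive integer")
    return {"item": data["item"], "quantity": data["quantity"]}
```

A real schema library adds declarative field definitions and better error reporting, but the principle is the same: parse with a data-only format, then validate before use.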

## Security Implications

- **Remote Code Execution:** An attacker controlling the eval() argument can execute arbitrary Python code in the
context of the running process, including spawning OS commands, opening network
connections, reading credentials from disk, and importing any installed module.

- **Data Exfiltration:** eval() can access environment variables, configuration files, and in-memory secrets.
Attackers can encode exfiltrated data in the return value or send it over a network
connection opened within the evaluated expression.

- **Privilege Escalation:** If the Python process runs with elevated privileges such as root, broad IAM permissions,
or a privileged Kubernetes service account, eval() injection immediately grants the
attacker those same privileges with no additional steps.

- **Persistent Backdoor Surface:** eval() in configuration loaders, plugin systems, or template engines creates a persistent
attack surface. An attacker influencing config files or environment variables can achieve
persistent code execution without touching application source code.


## FAQ

**Q: Is eval() always dangerous or only when used with untrusted input?**

eval() is dangerous whenever its argument can be influenced by an attacker. Trace the
variable backward to its origin — if it can be set from HTTP parameters, file contents,
environment variables, CLI arguments, or database values, it is untrusted. Literals
hardcoded in source code are safer but should still use ast.literal_eval() to prevent
future refactoring from accidentally introducing a taint path.


**Q: Does ast.literal_eval() really provide complete safety?**

ast.literal_eval() is safe for parsing Python literals: strings, bytes, numbers, tuples,
lists, dicts, sets, booleans, and None. It raises ValueError or SyntaxError on anything
else and does not execute function calls or attribute access. However, deeply nested
structures can cause recursion errors, so validate input length before calling it in
high-availability services.
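One way to bound that recursion risk is a size check before parsing; a minimal sketch (the 4096-byte cap is an arbitrary example, not a recommended value):

```python
import ast

MAX_INPUT_LEN = 4096  # arbitrary cap; tune to your expected payload sizes

def bounded_literal_eval(raw: str):
    """Reject oversized input before handing it to ast.literal_eval()."""
    if len(raw) > MAX_INPUT_LEN:
        raise ValueError("input too large to parse safely")
    return ast.literal_eval(raw)
```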


**Q: What if I need to evaluate mathematical expressions safely?**

Use a dedicated safe expression evaluator such as simpleeval or asteval. These libraries
parse the expression's AST and allow only explicitly allowlisted operations. Never use
eval() for this purpose, even with restricted globals/locals namespace arguments, as
Python namespace sandboxes have been bypassed repeatedly.
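To show the allowlist approach in miniature, here is a stdlib-only sketch of an AST-walking arithmetic evaluator (an illustration of the technique those libraries use, not their actual APIs; the function name is invented):

```python
import ast
import operator

# Only these binary operators are permitted; everything else is rejected.
_ALLOWED_BINOPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_math_eval(expression: str) -> float:
    """Evaluate arithmetic over numeric literals; reject all other syntax."""
    tree = ast.parse(expression, mode="eval")

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _ALLOWED_BINOPS:
            return _ALLOWED_BINOPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        raise ValueError(f"disallowed syntax: {type(node).__name__}")

    return _eval(tree)

# safe_math_eval("2 + 3 * 4")        → 14
# safe_math_eval("__import__('os')") → raises ValueError (Call is not allowlisted)
```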


**Q: Can this rule produce false positives on eval() with hardcoded strings?**

Yes. The rule flags all eval() call sites to ensure no taint path is missed. If the
argument is a hardcoded literal, the finding is low-risk but should still be refactored
to ast.literal_eval() or a constant to make the intent explicit and eliminate future risk.


**Q: What about eval() inside test code or REPL utilities?**

Test code and REPL utilities are still subject to injection if they process any external
input such as test fixtures from files or user input in a debug console. Suppress
findings only after confirming the eval() argument is never derived from external sources.


**Q: How does this rule interact with inter-procedural taint flows?**

Code Pathfinder tracks data flow from source functions such as HTTP request parameters,
os.environ, file reads, and socket receives through variable assignments and function
calls to the eval() sink. Inter-procedural analysis follows tainted values across
function boundaries and file boundaries, catching patterns where input handling and
eval() execution are separated across the codebase.
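A hypothetical sketch of the kind of cross-function pattern such analysis is designed to catch (function and variable names invented for illustration; this is intentionally vulnerable code):

```python
import os

def read_filter_expr() -> str:
    # Source: environment variables are attacker-influenced in many deployments
    return os.environ.get("FILTER_EXPR", "True")

def apply_filter(record: dict) -> bool:
    # Sink: the tainted string reaches eval() only after crossing a function
    # boundary, so a check local to this function alone would miss the flow
    return bool(eval(read_filter_expr()))  # intentionally vulnerable example
```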


## References

- [CWE-95: Eval Injection](https://cwe.mitre.org/data/definitions/95.html)
- [Python docs: built-in eval()](https://docs.python.org/3/library/functions.html#eval)
- [Python docs: ast.literal_eval()](https://docs.python.org/3/library/ast.html#ast.literal_eval)
- [OWASP Code Injection](https://owasp.org/www-community/attacks/Code_Injection)
- [OWASP Top 10 A03:2021 Injection](https://owasp.org/Top10/A03_2021-Injection/)

---

Source: https://codepathfinder.dev/registry/python/lang/PYTHON-LANG-SEC-001
Code Pathfinder — Open source, type-aware SAST with cross-file dataflow analysis
