Django Code Injection via eval()

CRITICAL

User input flows to eval(), enabling arbitrary Python code execution on the server.

Rule Information

Language
Python
Category
Django
Author
Shivasurya
Last Updated
2026-03-22
Tags
python, django, code-injection, eval, rce, taint-analysis, inter-procedural, CWE-95, OWASP-A03
CWE References
CWE-95: Eval Injection

Interactive Playground

Experiment with the vulnerable code and security rule below. Edit the code to see how the rule detects different vulnerability patterns.

pathfinder scan --ruleset python/PYTHON-DJANGO-SEC-020 --project .

About This Rule

Understanding the vulnerability and how it is detected

This rule detects code injection vulnerabilities in Django applications where untrusted user input from HTTP request parameters flows into Python's eval() function.

Python's eval() interprets its string argument as a Python expression and evaluates it in the current scope. When user-controlled data reaches eval(), an attacker can inject arbitrary Python code that executes with the full privileges and scope of the application process. Unlike SQL injection or command injection, eval() injection gives attackers direct access to the Python runtime, all imported modules, the filesystem, and network resources without any shell intermediary.

Even supposedly "safe" uses of eval() with restricted builtins or custom namespaces have repeatedly been bypassed through creative use of Python's object model and dunder attributes. There is no safe way to call eval() on untrusted input; the function must be replaced with purpose-specific parsers (ast.literal_eval() for data structures, or custom validators for specific expression types).
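A minimal sketch of the vulnerable flow and a literal-only alternative. The `FakeRequest` class and view functions below are illustrative stand-ins for a real Django request and views, not code from this rule:

```python
import ast

class FakeRequest:
    """Stand-in for django.http.HttpRequest; GET behaves like request.GET."""
    def __init__(self, params):
        self.GET = params

def calculate_vulnerable(request):
    expr = request.GET.get("expr", "")
    return eval(expr)            # VULNERABLE: user input reaches eval()

def calculate_safe(request):
    raw = request.GET.get("value", "")
    try:
        return ast.literal_eval(raw)   # literals only; no calls, no imports
    except (ValueError, SyntaxError):
        return None

# An attacker-controlled query string executes arbitrary code:
evil = FakeRequest({"expr": "__import__('os').getcwd()"})
print(calculate_vulnerable(evil))      # os.getcwd() just ran on the "server"

print(calculate_safe(FakeRequest({"value": "[1, 2, 3]"})))        # [1, 2, 3]
print(calculate_safe(FakeRequest({"value": "__import__('os')"})))  # None
```

The safe variant rejects anything that is not a plain literal, so the same `__import__` payload is refused instead of executed.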

Security Implications

Potential attack scenarios if this vulnerability is exploited

1. Direct Remote Code Execution

eval() with user input is direct Remote Code Execution. An attacker can import os, read filesystem contents, spawn shells, exfiltrate secrets, and install persistence backdoors through a single request. No privilege escalation step is needed -- the code runs immediately in the application process context.

2. Secret and Credential Theft

Django applications store database passwords, API keys, and secret keys in settings or environment variables. An injected expression like __import__('os').environ can exfiltrate all of these in a single request.

3. Complete Application Compromise

Beyond reading secrets, an attacker can modify application state, alter database records, delete files, corrupt the application's module cache, or replace functions with malicious versions that persist for the lifetime of the process.

4. Sandbox Escape Patterns

Restricted namespaces and custom builtins passed to eval() do not provide meaningful protection. Attackers can access the full Python object hierarchy through patterns like ().__class__.__base__.__subclasses__() to obtain references to arbitrary classes and modules without needing direct imports.
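This escape can be demonstrated in a few lines of plain Python; the payload below is the classic traversal pattern and works even when builtins are stripped from the eval() namespace:

```python
# Stripping builtins does not sandbox eval(): attribute access needs no
# builtins, so the full class hierarchy remains reachable from any object.
payload = "().__class__.__base__.__subclasses__()"
subclasses = eval(payload, {"__builtins__": {}}, {})
print(len(subclasses) > 0)   # True: every loaded class is reachable
```

From that list of `object` subclasses an attacker can reach classes whose methods or globals expose `os`, file handles, or `__import__`, which is why no namespace restriction is considered a sanitizer for this sink.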

How to Fix

Recommended remediation steps

  1. Remove all uses of eval() with user-controlled input; there is no safe sanitizer for this.
  2. Use ast.literal_eval() for safely parsing Python literals (dicts, lists, strings, numbers) without executing arbitrary code.
  3. For mathematical expressions, implement a custom recursive-descent evaluator over the AST produced by ast.parse(), with strict node-type validation.
  4. For function dispatch, use an explicit allowlist dictionary mapping string names to callable objects rather than using eval() or globals().
  5. For JSON-like data structures, use json.loads(), which is safe and standardized.
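The literal-parsing, allowlist-dispatch, and JSON remediations above can be sketched as follows; the `ALLOWED_ACTIONS` table and its action names are hypothetical examples, not part of the rule:

```python
import json

# Hypothetical dispatch table: map user-supplied names to vetted callables
# instead of eval()-ing the name itself.
ALLOWED_ACTIONS = {
    "upper": str.upper,
    "lower": str.lower,
}

def dispatch(action_name, value):
    func = ALLOWED_ACTIONS.get(action_name)
    if func is None:
        raise ValueError(f"unknown action: {action_name}")
    return func(value)

print(dispatch("upper", "hello"))        # HELLO
print(json.loads('{"a": [1, 2]}'))       # {'a': [1, 2]}
```

An unknown name raises immediately instead of being interpreted as code, and json.loads() covers structured data without any expression evaluation at all.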

Detection Scope

How Code Pathfinder analyzes your code for this vulnerability

This rule performs inter-procedural taint analysis with global scope. Sources include calls("request.GET.get"), calls("request.POST.get"), calls("request.GET.__getitem__"), calls("request.POST.__getitem__"), calls("request.body"), and calls("request.read"). The sink is calls("eval") with tainted input tracked via .tracks(0). There are no recognized sanitizers for eval() -- any user-controlled input reaching eval() is a confirmed vulnerability. The rule follows taint across file and module boundaries.

Compliance & Standards

Industry frameworks and regulations that require detection of this vulnerability

CWE Top 25
CWE-95 - Eval Injection in Most Dangerous Software Weaknesses list
OWASP Top 10
A03:2021 - Injection
PCI DSS v4.0
Requirement 6.2.4 - protect against injection attacks
NIST SP 800-53
SI-10: Information Input Validation; SI-3: Malicious Code Protection
SANS Top 25
Improper Control of Generation of Code (Code Injection)

Frequently Asked Questions

Common questions about Django Code Injection via eval()

Q: Does passing restricted globals or builtins make eval() safe?
No. Passing eval() a restricted globals or locals dict does not prevent sandbox escapes. Python's rich object model allows attackers to traverse the class hierarchy using __class__, __bases__, __subclasses__, and __init__ to access built-in functions and modules without direct name access. Multiple CVEs exist for products that attempted restricted eval() sandboxes and were bypassed. The only safe use of eval() is with fully static, hardcoded strings.
Q: How does eval() differ from exec()?
eval() evaluates a single expression and returns its value. exec() executes statements and does not return a meaningful value. Both interpret user input as Python code when given untrusted strings and are equally dangerous. SEC-020 covers eval(); SEC-021 covers exec(). Both should be flagged and remediated.
Q: How can I safely evaluate user-supplied math expressions?
Implement a safe math evaluator using ast.parse() and AST node validation. Parse the input to an AST, walk all nodes, raise an error if any node type is not in an allowlist of safe arithmetic nodes (BinOp, UnaryOp, Constant, safe operator types), then recursively evaluate only the validated AST nodes. The secure_example in this rule demonstrates this pattern.
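One possible sketch of that allowlist-AST evaluator, assuming a minimal arithmetic-only node set (an illustration, not the rule's actual secure_example):

```python
import ast
import operator

# Allowlisted operator node types mapped to their safe implementations.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_math_eval(text):
    """Evaluate a pure-arithmetic expression; reject every other node type."""
    tree = ast.parse(text, mode="eval")

    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"disallowed node: {type(node).__name__}")

    return walk(tree)

print(safe_math_eval("2 * (3 + 4)"))   # 14
```

A payload like `__import__('os').system('ls')` parses to a Call node, which is not in the allowlist, so it raises ValueError instead of executing.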
Q: What can ast.literal_eval() handle?
ast.literal_eval() safely evaluates Python literals: strings, bytes, numbers, tuples, lists, dicts, sets, booleans, and None. It raises ValueError or SyntaxError for anything that is not a literal. It is safe for parsing configuration values, coordinate pairs, or other structured data. It cannot evaluate expressions, function calls, or variable references.
Q: Does wrapping eval() in try/except make it safe?
No. Wrapping eval() in try/except catches exceptions but does not prevent code execution. The attacker's injected code runs, and any non-exception side effects (file reads, network calls, process spawning) still occur before any exception is raised or caught.
Q: Is eval() injection worse than SQL injection?
eval() injection is generally more severe. SQL injection is limited to database operations (unless the database has OS-level features). eval() injection runs arbitrary Python code directly in the application process, with access to all imported modules, all environment variables, the filesystem, and the network. It is effectively Remote Code Execution with no intermediate step.

The Django Code Injection via eval() rule also runs in CI and posts findings as inline review comments on the exact lines of your GitHub pull requests, with no dashboard or SARIF viewer required.
