Interactive Playground
Experiment with the vulnerable code and security rule below. Edit the code to see how the rule detects different vulnerability patterns.
pathfinder scan --ruleset python/PYTHON-FLASK-SEC-004 --project .

About This Rule
Understanding the vulnerability and how it is detected
This rule detects eval injection in Flask applications where user-controlled input from HTTP request parameters reaches Python's built-in eval() function. eval() evaluates its argument as a Python expression and returns the result. When the argument contains attacker-controlled data, the attacker can inject arbitrary Python expressions -- function calls, attribute access, and module imports via __import__() -- that lead to remote code execution.
Unlike OS command injection, eval injection executes within the Python interpreter itself. The attacker has direct access to Python builtins, imported modules, and the application's own namespaces. Common payloads access __import__('os').system() or use __builtins__ to invoke any Python functionality available in the process.
The taint analysis traces data from Flask request sources through variable assignments and function calls to the eval() sink at argument position 0. The .tracks(0) parameter means only the expression string argument is tracked -- the optional globals and locals dictionaries at positions 1 and 2 are not analyzed by this rule. Flows through ast.literal_eval() or json.loads() are recognized as sanitizers because these functions evaluate only safe literal types (strings, numbers, lists, dicts, tuples, booleans, None) without executing arbitrary code.
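A minimal sketch of the tainted flow the rule tracks. To keep the snippet self-contained, a plain dict stands in for Flask's request.args; in a real application the tainted value would come from request.args.get(), and the handler name and parameter name here are hypothetical:

```python
def handle_calc(query_params):
    """Hypothetical handler mimicking a Flask route that reads ?expr=..."""
    expr = query_params.get("expr", "0")  # source: attacker-controlled string
    return str(eval(expr))                # sink: eval() at argument position 0

# A benign request evaluates harmlessly...
print(handle_calc({"expr": "1 + 1"}))  # → 2

# ...but the same code path executes attacker-supplied expressions.
print(handle_calc({"expr": "__import__('os').getpid()"}))
```

The rule would flag the eval() call because the string at argument position 0 derives from the request parameter without passing through a recognized sanitizer.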
Security Implications
Potential attack scenarios if this vulnerability is exploited
Direct Remote Code Execution in the Python Interpreter
eval() has access to Python builtins by default. An attacker can call __import__('subprocess').check_output(['id']) to execute OS commands, __import__('socket') to establish network connections, or access the application's database session objects via the interpreter's global namespace. There is no sandbox -- the attacker operates with the full privileges of the Flask process.
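This default access to builtins is easy to reproduce in a plain interpreter, no Flask required. Note that passing an empty globals dict does not help, because Python re-inserts __builtins__ into it:

```python
# eval() resolves __import__ from the default builtins, so a string
# payload can load any module and call its functions.
payload = "__import__('os').getcwd()"
result = eval(payload)
print(result)  # the attacker reads the server's working directory

# An empty globals dict is NOT a sandbox: __builtins__ is re-inserted,
# so the same payload still runs.
result2 = eval(payload, {})
```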
Secret and Credential Extraction
The Python runtime holds application secrets in memory: database passwords, API keys, JWT secrets, and OAuth tokens. An injected expression like globals()['app'].config['SECRET_KEY'] or os.environ['DB_PASSWORD'] extracts these values and returns them in the HTTP response body.
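The environment-variable variant can be reproduced in isolation. The variable name below is made up for the demonstration; in a real attack the expression arrives in a request parameter and the leaked value is returned in the HTTP response:

```python
import os

# Simulate a secret held in the process environment (hypothetical name).
os.environ["DEMO_DB_PASSWORD"] = "s3cret"

# An injected expression reads it straight back out of the process.
leaked = eval("__import__('os').environ['DEMO_DB_PASSWORD']")
print(leaked)  # → s3cret
```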
Bypassing Authentication and Authorization
eval() can call application functions directly. An attacker who knows the codebase (from source code exposure or error messages) can invoke admin functions, reset passwords, or elevate privileges by calling internal Python functions through the eval() context.
Supply Chain Attacks via Injected Imports
__import__() inside eval() can load any installed Python package. In environments with broad pip dependencies, attackers use eval injection to instantiate classes from installed libraries in unexpected ways, reaching attack surface that is not directly accessible from the Flask route handlers.
How to Fix
Recommended remediation steps
1. Replace eval() with ast.literal_eval() when you need to parse Python literals (numbers, strings, lists, dicts) -- it rejects any expression that is not a safe literal type.
2. Use json.loads() for structured data exchange instead of eval() -- JSON is a strict data format with no callable constructs.
3. If you need to evaluate mathematical expressions, use a dedicated safe math library such as simpleeval or numexpr instead of Python's eval().
4. Never pass eval() a globals or locals dict that contains application internals -- attackers can walk the object graph to reach sensitive state.
5. Audit all uses of eval() in the codebase regardless of whether they currently receive user input -- refactor them out before new code paths introduce tainted data.
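Steps 1 and 2 can be sketched as follows: both ast.literal_eval() and json.loads() parse the benign input, and both reject the code-execution payload by raising instead of evaluating it:

```python
import ast
import json

benign = "[1, 2, 3]"
malicious = "__import__('os').system('id')"

# Step 1: ast.literal_eval() accepts literal syntax only.
assert ast.literal_eval(benign) == [1, 2, 3]
try:
    ast.literal_eval(malicious)
except ValueError:
    print("literal_eval rejected the payload")

# Step 2: json.loads() for structured data -- JSON has no callables.
assert json.loads(benign) == [1, 2, 3]
try:
    json.loads(malicious)
except json.JSONDecodeError:
    print("json.loads rejected the payload")
```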
Detection Scope
How Code Pathfinder analyzes your code for this vulnerability
Scope: global (cross-file taint tracking across the entire project).

Sources: Flask HTTP input methods -- request.args.get(), request.form.get(), request.values.get(), request.get_json() -- all of which return attacker-controlled strings that can contain Python expression syntax.

Sinks: eval(). The .tracks(0) parameter focuses on argument position 0, the expression string. The optional globals and locals dictionaries at positions 1 and 2 are separate concerns and are not tracked by this rule.

Sanitizers: ast.literal_eval() and json.loads() are recognized as sanitizing transformations. A value that passes through either function before reaching eval() is treated as safe because those functions constrain the input to non-executable data types.

The rule follows tainted values through assignments, return values, and cross-file function calls, catching eval() injections in utility modules and helper functions that receive data from Flask route handlers.
Compliance & Standards
Industry frameworks and regulations that require detection of this vulnerability
References
External resources and documentation
Similar Rules
Explore related security rules for Python
Flask Code Injection via exec()
User input from Flask request parameters flows to exec() or compile(). exec() cannot be safely sanitized -- redesign the feature to avoid dynamic code execution.
Raw SQL Usage Audit via RawSQL Expression
RawSQL() expression detected. Audit this usage to confirm parameterized queries are used for all user-controlled values.
Lambda Command Injection via os.spawn*()
Lambda event data flows to os.spawn*() functions, enabling process execution with attacker-controlled arguments in the Lambda execution environment.
Frequently Asked Questions
Common questions about Flask Code Injection via eval()