Interactive Playground
Experiment with the vulnerable code and security rule below. Edit the code to see how the rule detects different vulnerability patterns.
pathfinder scan --ruleset python/PYTHON-FLASK-SEC-005 --project .

About This Rule
Understanding the vulnerability and how it is detected
This rule detects code injection in Flask applications where user-controlled input from HTTP request parameters reaches exec() or compile(). exec() is more dangerous than eval() because it executes full Python statements, not just expressions. It can define classes, import modules, modify global state, spawn threads, and run arbitrary multi-line code blocks. compile() generates a code object that is typically passed to exec() or eval() -- tainted input to compile() is a precursor to the same attack.
Unlike eval injection (PYTHON-FLASK-SEC-004), exec() injection has no safe sanitizer. ast.literal_eval() does not help because exec() accepts statements, not literals. Sandboxing exec() with restricted globals/locals dictionaries is notoriously difficult to get right -- numerous Python sandbox escapes exist that allow attackers to reach the unrestricted interpreter from a sandboxed exec() call.
The only correct fix is to eliminate exec() from code paths that process user input. This rule uses taint analysis to find those paths: it traces data from Flask request sources through assignments and function calls to exec() and compile() at argument position 0, flagging every reachable path regardless of how many intermediate steps are involved.
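A minimal sketch of the two tainted paths described above, using hypothetical route names. Both routes read attacker-controlled input from a Flask request source and pass it to a sink at argument position 0 -- directly to exec(), or to compile() as a precursor:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/run")
def run_snippet():
    snippet = request.args.get("code", "")   # source: tainted query parameter
    exec(snippet)                            # sink: executes arbitrary statements
    return "done"

@app.route("/compile", methods=["POST"])
def compile_snippet():
    snippet = request.form.get("code", "")           # source: tainted form field
    code_obj = compile(snippet, "<user>", "exec")    # sink at argument position 0
    exec(code_obj)                                   # tainted code object executed
    return "done"
```

Both findings are reported regardless of how many intermediate assignments sit between the source and the sink.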
Security Implications
Potential attack scenarios if this vulnerability is exploited
Unrestricted Python Statement Execution
exec() executes any valid Python statement. Unlike eval(), which is limited to expressions, exec() can run import statements, define and call functions, modify global and local variables, and execute multi-line code blocks. An attacker who controls exec() input has the same capabilities as a developer with write access to the codebase.
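The expression/statement distinction can be shown in a few lines: eval() rejects statement syntax outright, while exec() runs imports, assignments, and multi-line blocks:

```python
# eval() accepts only expressions; statement syntax raises SyntaxError.
try:
    eval("import os")
    eval_rejects_statements = False
except SyntaxError:
    eval_rejects_statements = True

# exec() runs full statements: imports, assignments, multi-line code.
namespace = {}
exec("import json\nresult = json.dumps({'a': 1})", namespace)
# namespace["result"] now holds '{"a": 1}'
```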
Persistent Backdoor Installation
exec() can write files, install packages (subprocess + pip), modify Python module caches, and alter loaded module objects in memory. An attacker can inject code that installs a backdoor into the application's module namespace, persisting across requests without touching the filesystem.
Complete Secret Extraction
exec() has access to the application's global namespace. Injected code can iterate globals(), find database connections, configuration objects, and in-memory caches containing session tokens and API keys, and exfiltrate them via an HTTP request made from within the exec() call.
Sandbox Escape
Attempts to restrict exec() by passing a limited globals dict are broken by well-known Python sandbox escapes: ().__class__.__mro__[1].__subclasses__() gives access to all loaded classes, from which file objects, socket objects, and subprocess handles can be obtained. There is no reliable way to sandbox exec() against a determined attacker.
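The escape mentioned above can be demonstrated directly: even with an emptied __builtins__, injected code can climb from a tuple literal to object and enumerate every class loaded in the interpreter:

```python
# A naive "sandbox": no builtins available to the executed code.
scope = {"__builtins__": {}}

# The payload needs no builtins -- only literals and attribute access.
payload = "leaked = ().__class__.__mro__[1].__subclasses__()"
exec(payload, scope)

# scope["leaked"] now lists every loaded class, from which file,
# socket, and subprocess-related types can be recovered.
```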
How to Fix
Recommended remediation steps
1. Eliminate exec() entirely from any code path that can be reached with user-controlled input -- there is no sanitizer that makes exec(user_input) safe.
2. If you need user-defined computation, use a restricted expression language (simpleeval for math, jsonata for JSON transformations) rather than Python's full exec() surface.
3. If exec() is used to load configuration, replace it with a structured configuration format (YAML with yaml.safe_load(), TOML, JSON) that cannot execute code.
4. If exec() is used for plugin loading, switch to a proper plugin architecture (importlib.import_module() with a controlled plugin directory and signature verification).
5. Audit all exec() and compile() calls in the codebase and document the reason each exists -- any that process external input must be removed or redesigned.
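As a sketch of the configuration-loading fix (step 3), a structured parser accepts data only and cannot execute code; json is shown here because it is in the standard library, and yaml.safe_load() behaves equivalently for YAML:

```python
import json

def load_config(text: str) -> dict:
    """Parse user-supplied configuration as data, never as code."""
    cfg = json.loads(text)   # raises ValueError on anything that is not JSON
    if not isinstance(cfg, dict):
        raise ValueError("config must be a JSON object")
    return cfg
```

An injected payload such as `__import__('os').system('id')` is simply rejected as invalid JSON instead of being executed.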
Detection Scope
How Code Pathfinder analyzes your code for this vulnerability
Scope: global (cross-file taint tracking across the entire project).

Sources: Flask HTTP input methods -- request.args.get(), request.form.get(), request.values.get(), request.get_json() -- all of which deliver attacker-controlled strings that may contain valid Python statement syntax.

Sinks: exec() and compile(). The .tracks(0) parameter focuses on argument position 0, the code string. compile() at position 0 is included because it is the standard way to create a code object for subsequent exec() or eval() calls -- a tainted compile() call is a precursor to tainted execution.

Sanitizers: None. There is no recognized sanitizer for exec() because no transformation of user input makes it safe to execute as Python code. Flows that pass through any intermediate function still trigger a finding unless the intermediate function completely replaces the value with a non-tainted result. The rule follows tainted values through assignments, return values, and cross-file function calls.
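A sketch of the intermediate-function behavior, with a hypothetical helper: the value passes through normalize() unchanged in substance, so the taint survives and the flow is still reported at the sink:

```python
from flask import Flask, request

app = Flask(__name__)

def normalize(code: str) -> str:
    # Trimming whitespace does NOT sanitize: the value stays tainted.
    return code.strip()

@app.route("/eval", methods=["POST"])
def handler():
    user_code = request.get_json()["code"]   # source: tainted JSON body
    exec(normalize(user_code))               # still flagged at argument 0
    return "ok"
```

Only a function that completely replaces the value with a non-tainted result (e.g. a lookup into a fixed allowlist) breaks the flow.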
Compliance & Standards
Industry frameworks and regulations that require detection of this vulnerability
References
External resources and documentation
Similar Rules
Explore related security rules for Python
Flask Code Injection via eval()
User input from Flask request parameters flows to eval(). Replace with ast.literal_eval() for data parsing or json.loads() for structured input.
Raw SQL Usage Audit via RawSQL Expression
RawSQL() expression detected. Audit this usage to confirm parameterized queries are used for all user-controlled values.
Lambda Command Injection via os.spawn*()
Lambda event data flows to os.spawn*() functions, enabling process execution with attacker-controlled arguments in the Lambda execution environment.
Frequently Asked Questions
Common questions about Flask Code Injection via exec()