# PYTHON-FLASK-SEC-005: Flask Code Injection via exec()

> **Severity:** CRITICAL | **CWE:** CWE-95 | **OWASP:** A03:2021

- **Language:** Python
- **Category:** Flask
- **URL:** https://codepathfinder.dev/registry/python/flask/PYTHON-FLASK-SEC-005
- **Detection:** `pathfinder scan --ruleset python/PYTHON-FLASK-SEC-005 --project .`

## Description

This rule detects code injection in Flask applications where user-controlled input
from HTTP request parameters reaches exec() or compile(). exec() is more dangerous
than eval() because it executes full Python statements, not just expressions. It can
define classes, import modules, modify global state, spawn threads, and run arbitrary
multi-line code blocks. compile() generates a code object that is typically passed
to exec() or eval() -- tainted input to compile() is a precursor to the same attack.
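
The expression/statement split is easy to demonstrate with a minimal stdlib sketch: eval() rejects a statement outright, while exec() runs a multi-line block of them.

```python
# eval() is limited to expressions: feeding it a statement raises SyntaxError.
try:
    eval("import os")            # 'import' is a statement, not an expression
    eval_accepted_statement = True
except SyntaxError:
    eval_accepted_statement = False

# exec() runs full statements -- imports, assignments, definitions.
namespace = {}
exec("import os\nsep_length = len(os.sep)", namespace)
```
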

Unlike eval injection (PYTHON-FLASK-SEC-004), exec() injection has no safe sanitizer.
ast.literal_eval() does not help because exec() accepts statements, not literals.
Sandboxing exec() with restricted globals/locals dictionaries is notoriously difficult
to get right -- numerous Python sandbox escapes exist that allow attackers to
reach the unrestricted interpreter from a sandboxed exec() call.
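
A minimal illustration of why restricted globals are insufficient: even with builtins emptied out, attribute traversal from a bare tuple reaches every class loaded in the interpreter, which is the starting point of most published escapes.

```python
# A 'restricted' exec() with empty builtins still allows attribute access.
restricted = {"__builtins__": {}}
exec("found = ().__class__.__mro__[1].__subclasses__()", restricted)
# restricted["found"] now holds every loaded class, including ones that
# wrap file handles and process-spawning machinery.
```
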

The only correct fix is to eliminate exec() from code paths that process user input.
This rule uses taint analysis to find those paths: it traces data from Flask request
sources through assignments and function calls to exec() and compile() at argument
position 0, flagging every reachable path regardless of how many intermediate
steps are involved.


## Vulnerable Code

```python
from flask import Flask, request

app = Flask(__name__)

@app.route('/run_code')
def run_code():
    code = request.form.get('code')
    exec(code)
    return "executed"
```
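
To illustrate the impact, here is a sketch of what an attacker could submit as the `code` parameter; the `SECRET_KEY` environment variable is a stand-in for real application configuration, and the exec() call mirrors what the vulnerable handler does.

```python
import os

os.environ["SECRET_KEY"] = "s3cr3t"          # stand-in for real app config

# A single request parameter can carry a multi-line payload. This one reads
# an environment variable; it could equally spawn a shell or write files.
attacker_code = "import os\nleaked = os.environ.get('SECRET_KEY')"

namespace = {}
exec(attacker_code, namespace)               # what the vulnerable handler does
```
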

## Secure Code

```python
from flask import Flask, request
import ast

app = Flask(__name__)

# UNSAFE pattern replaced below:
# @app.route('/run')
# def run_code():
#     code = request.args.get('code')
#     exec(code)  # NEVER do this

# SAFE alternatives for common use cases:

@app.route('/calculate')
def calculate():
    # ast.literal_eval() accepts only Python literals (numbers, strings,
    # tuples, lists, dicts); it will not evaluate arithmetic like '2+3'.
    # For real expression evaluation, use a restricted evaluator instead.
    expression = request.args.get('expr', '')
    try:
        result = ast.literal_eval(expression)
        if not isinstance(result, (int, float)):
            return {'error': 'Numbers only'}, 400
    except (ValueError, SyntaxError):
        return {'error': 'Invalid expression'}, 400
    return {'result': result}

@app.route('/template', methods=['POST'])
def render_template_safe():
    # For dynamic rendering, use a template engine with autoescaping.
    # The template string is static; user input enters only as a context
    # variable, which Jinja2 autoescapes. Never pass user input as the
    # template string itself -- that is server-side template injection.
    # Prefer render_template() with static template files where possible.
    from flask import render_template_string
    return render_template_string('Hello {{ name }}!', name=request.form.get('name'))

```

## Detection Rule (Python SDK)

```python
from rules.python_decorators import python_rule
from codepathfinder import calls, flows, QueryType
from codepathfinder.presets import PropagationPresets

class Builtins(QueryType):
    fqns = ["builtins"]


@python_rule(
    id="PYTHON-FLASK-SEC-005",
    name="Flask Code Injection via exec()",
    severity="CRITICAL",
    category="flask",
    cwe="CWE-95",
    tags="python,flask,code-injection,exec,rce,OWASP-A03,CWE-95",
    message="User input flows to exec(). Never execute user-supplied code.",
    owasp="A03:2021",
)
def detect_flask_exec_injection():
    """Detects Flask request data flowing to exec()."""
    return flows(
        from_sources=[
            calls("request.args.get"),
            calls("request.form.get"),
            calls("request.values.get"),
            calls("request.get_json"),
        ],
        to_sinks=[
            Builtins.method("exec", "compile").tracks(0),
            calls("exec"),
            calls("compile"),
        ],
        sanitized_by=[],
        propagates_through=PropagationPresets.standard(),
        scope="global",
    )
```

## How to Fix

- Eliminate exec() entirely from any code path that can be reached with user-controlled input -- there is no sanitizer that makes exec(user_input) safe.
- If you need user-defined computation, use a restricted expression language (simpleeval for math, jsonata for JSON transformations) rather than Python's full exec() surface.
- If exec() is used to load configuration, replace it with a structured configuration format (YAML with yaml.safe_load(), TOML, JSON) that cannot execute code.
- If exec() is used for plugin loading, switch to a proper plugin architecture (importlib.import_module() with a controlled plugin directory and signature verification).
- Audit all exec() and compile() calls in the codebase and document the reason each exists -- any that process external input must be removed or redesigned.
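
Where a third-party library such as simpleeval is not an option, a whitelist evaluator built on the ast module can cover simple arithmetic. This is an illustrative stdlib-only sketch, not an exhaustive implementation: it permits numeric literals and the four basic operators, and rejects every other node type.

```python
import ast
import operator

# Only these AST operator types are allowed; everything else is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_math(expr: str) -> float:
    """Evaluate +, -, *, / over numeric literals; reject everything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))
```

Because the walker whitelists node types rather than blacklisting dangerous ones, anything unanticipated (calls, attribute access, subscripts) fails closed.
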

## Security Implications

- **Unrestricted Python Statement Execution:** exec() executes any valid Python statement. Unlike eval(), which is limited to
expressions, exec() can run import statements, define and call functions, modify
global and local variables, and execute multi-line code blocks. An attacker who
controls exec() input has the same capabilities as a developer with write access
to the codebase.

- **Persistent Backdoor Installation:** exec() can write files, install packages (subprocess + pip), modify Python module
caches, and alter loaded module objects in memory. An attacker can inject code
that installs a backdoor into the application's module namespace, persisting
across requests without touching the filesystem.

- **Complete Secret Extraction:** exec() has access to the application's global namespace. Injected code can
iterate globals(), find database connections, configuration objects, and
in-memory caches containing session tokens and API keys, and exfiltrate them
via an HTTP request made from within the exec() call.

- **Sandbox Escape:** Attempts to restrict exec() by passing a limited globals dict are broken by
well-known Python sandbox escapes: ().__class__.__mro__[1].__subclasses__()
gives access to all loaded classes, from which file objects, socket objects,
and subprocess handles can be obtained. There is no reliable way to sandbox
exec() against a determined attacker.


## FAQ

**Q: Why is there no sanitizer for exec() when there is one for eval()?**

eval() is limited to expressions. ast.literal_eval() can filter its input to
safe literal types because literals are a well-defined subset of expressions.
exec() accepts full Python statements -- imports, function definitions, loops,
assignments -- and there is no meaningful subset of statements that is both
useful to users and safe to execute. Python sandbox escapes allow attackers
to reach unrestricted Python from any exec() call with limited globals.


**Q: I use exec() to dynamically load plugin code from the database. What should I do?**

Move plugins to the filesystem and load them with importlib.import_module().
Sign plugin files and verify signatures before loading. This gives you dynamic
extensibility without passing user-HTTP-controlled strings to exec(). If plugins
must come from the database, treat the plugin directory as trusted infrastructure
and never allow HTTP requests to influence which plugin code gets stored.
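
One possible shape for such a loader, sketched with an illustrative `ALLOWED_PLUGINS` allowlist (here stdlib module names stand in for real plugin modules):

```python
import importlib

# Explicit allowlist of importable plugin names -- never user-controlled paths.
ALLOWED_PLUGINS = {"json", "csv"}

def load_plugin(name: str):
    """Import a plugin module only if its name is on the allowlist."""
    if name not in ALLOWED_PLUGINS:
        raise ValueError(f"unknown plugin: {name}")
    return importlib.import_module(name)
```
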


**Q: My code uses compile() then exec() for template caching. Is that flagged?**

Yes, if the source string passed to compile() is tainted. The rule tracks taint
to compile() at argument position 0. If the template source comes from a static
file or a trusted configuration store (not from HTTP input), the flow is not
tainted and will not be flagged.
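
For example, a flow like the following is untainted because the compiled source is a static string defined in the codebase, not HTTP input:

```python
# Compiling a static, trusted snippet -- no request data is involved,
# so the taint analysis does not flag this compile()/exec() pair.
TRUSTED_SNIPPET = "greeting = 'hello ' + name"
code_obj = compile(TRUSTED_SNIPPET, "<trusted-config>", "exec")

namespace = {"name": "world"}
exec(code_obj, namespace)
```
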


**Q: How does this differ from PYTHON-FLASK-SEC-004 (eval injection)?**

SEC-004 covers eval(), which executes Python expressions. SEC-005 covers exec()
and compile(), which execute Python statements. exec() is strictly more powerful
than eval() -- every expression is a valid statement but not vice versa. Run
both rules to cover the complete dynamic code execution surface.


**Q: We use exec() in admin-only endpoints behind authentication. Is that acceptable?**

Authenticated endpoints are still reachable via stolen credentials, session
fixation, or CSRF. exec() behind authentication reduces the attack surface
but does not eliminate it. The rule will still flag authenticated endpoints
because the authentication gate is not a sanitizer for the tainted value.
Remove exec() from all HTTP-reachable code paths.


**Q: Does this rule fire on exec() in unit tests?**

Only if the test creates a tainted flow from a Flask request source (e.g., via
the test client) to exec(). Hardcoded exec('code_string') in test files does not
create a tainted flow and is not flagged.


**Q: Can I allowlist specific patterns to prevent false positives?**

If exec() receives only values from a trusted, non-HTTP-controlled source that
the analysis cannot distinguish from request data (e.g., a constant loaded from
a config file), add a `# pathfinder: ignore PYTHON-FLASK-SEC-005` comment at
the exec() call site, together with a written justification for the exemption.


## References

- [CWE-95: Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection')](https://cwe.mitre.org/data/definitions/95.html)
- [OWASP Code Injection](https://owasp.org/www-community/attacks/Code_Injection)
- [Python exec() built-in documentation](https://docs.python.org/3/library/functions.html#exec)
- [Python compile() built-in documentation](https://docs.python.org/3/library/functions.html#compile)
- [Python sandbox escape research](https://book.hacktricks.xyz/generic-methodologies-and-resources/python/bypass-python-sandboxes)
- [OWASP Injection Prevention Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Injection_Prevention_Cheat_Sheet.html)

---

Source: https://codepathfinder.dev/registry/python/flask/PYTHON-FLASK-SEC-005
Code Pathfinder — Open source, type-aware SAST with cross-file dataflow analysis
