pathfinder scan --ruleset python/PYTHON-FLASK-XSS-002 --project .

About This Rule
Understanding the vulnerability and how it is detected
This rule detects calls to Markup() or markupsafe.Markup() in Flask applications. Markup is a string subclass from the markupsafe library (which Flask and Jinja2 depend on) that marks its content as safe HTML. When Jinja2 renders a template variable, it checks whether the value is a Markup instance. If it is, autoescaping is skipped and the raw string is inserted into the HTML output. If it is a plain string, HTML metacharacters are escaped.
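This check can be observed with markupsafe directly; a minimal sketch (Jinja2 performs the equivalent test via the `__html__` protocol that `Markup` implements):

```python
from markupsafe import Markup, escape

# A plain string gets its HTML metacharacters escaped.
plain = escape("<script>alert(1)</script>")
print(plain)  # &lt;script&gt;alert(1)&lt;/script&gt;

# Markup is a str subclass whose __html__ method returns itself,
# so escape() -- and Jinja2 autoescaping -- pass it through untouched.
wrapped = Markup("<b>bold</b>")
assert escape(wrapped) == wrapped
assert isinstance(wrapped, str)
```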
Markup() is a legitimate API for generating trusted HTML programmatically (e.g., building HTML tags in Python helpers). It becomes a vulnerability when applied to user-controlled content. Markup(user_input) tells Jinja2 "this string is safe HTML" -- but if user_input contains <script> tags or event handlers, those will be inserted verbatim into the page and executed in the browser.
This is an audit-grade rule. Not every Markup() call is vulnerable -- wrapping a hardcoded HTML string like Markup("<br>") is safe. The vulnerability arises when user-controlled data flows into Markup() without prior sanitization. Every use of Markup() warrants review to confirm the string being wrapped is developer-controlled, sanitized, or already escaped.
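A minimal sketch of the unsafe pattern next to its fix (the route names and template are illustrative):

```python
from flask import Flask, render_template_string, request
from markupsafe import Markup

app = Flask(__name__)

# The template itself is a hardcoded constant; user data enters
# only as a context variable.
TEMPLATE = "<p>Hello {{ name }}</p>"

@app.route("/unsafe")
def unsafe():
    # VULNERABLE: Markup() marks user input as safe HTML, so Jinja2
    # skips autoescaping and renders any <script> payload verbatim.
    name = Markup(request.args.get("name", ""))
    return render_template_string(TEMPLATE, name=name)

@app.route("/safe")
def safe():
    # SAFE: a plain string is autoescaped by Jinja2 at render time.
    name = request.args.get("name", "")
    return render_template_string(TEMPLATE, name=name)
```

With `app.test_client()`, requesting `/unsafe` with `name=<script>alert(1)</script>` echoes the payload verbatim, while `/safe` returns it HTML-encoded.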
The detection uses Or(calls("Markup"), calls("markupsafe.Markup")) to catch both the directly imported form (from markupsafe import Markup; Markup(...)) and the module-qualified form (markupsafe.Markup(...)). Flask re-exports Markup via flask.Markup, but that form is deprecated; this rule covers the two primary import paths.
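The two import spellings resolve to the same callable, which is why the rule needs both patterns; a quick check:

```python
import markupsafe
from markupsafe import Markup

# Both forms construct the same class; a rule matching only one
# spelling would miss half the call sites.
assert markupsafe.Markup is Markup
assert markupsafe.Markup("<br>") == Markup("<br>")
```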
Security Implications
Potential attack scenarios if this vulnerability is exploited
Reflected XSS via Markup-Wrapped User Input
If user input is passed to Markup() and the result is rendered in a Jinja2 template, the user's HTML/JavaScript is inserted verbatim into the page. An attacker can craft a request with a payload like <img src=x onerror=alert(1)> that executes in the victim's browser immediately on page load.
Stored XSS via Markup-Wrapped Database Content
Applications that retrieve content from a database, wrap it in Markup(), and render it in templates are vulnerable to stored XSS if an attacker can write HTML content to the database through any input path. The Markup() call silently suppresses the escaping that would otherwise protect against stored XSS.
Confused Developer Intent Propagation
Markup instances propagate through string operations, and although Markup's own operators escape plain-string operands (Markup("safe") + user_input returns a Markup instance with the user input escaped), strings assembled before the Markup() call receive no such protection: Markup(f"&lt;b&gt;{user_input}&lt;/b&gt;") or Markup("&lt;b&gt;" + user_input) marks the raw payload as safe. Developers who build HTML by mixing Markup with plain strings, especially across function boundaries or after refactoring, can easily end up on the unsafe side of this distinction.
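The distinction can be demonstrated with markupsafe alone (the payload string is illustrative):

```python
from markupsafe import Markup

user_input = "<img src=x onerror=alert(1)>"

# Markup's own operators escape plain-string operands, so this
# result is actually safe...
concatenated = Markup("<b>safe</b>") + user_input
assert "&lt;img" in concatenated

# ...but a string assembled BEFORE the Markup() call gets no such
# protection: the raw payload is marked safe verbatim.
wrapped = Markup("<b>" + user_input + "</b>")
assert "<img src=x" in wrapped
```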
Bypass of Defense-in-Depth Escaping
Even if other layers (input validation, CSP) partially mitigate XSS, Markup() removes the last line of defense at the template rendering layer. An attacker who finds any way to get malicious content into a Markup()-wrapped variable can bypass all other controls at the output stage.
How to Fix
Recommended remediation steps
1. Prefer passing user input as plain context variables to render_template() rather than wrapping it in Markup(). Jinja2 autoescaping handles HTML encoding automatically.
2. When building HTML in Python code (e.g., in template helper functions), always escape user-supplied strings with markupsafe.escape() before concatenating them into a Markup instance.
3. Use Markup() only for hardcoded HTML strings that are entirely developer-controlled and contain no user input, even indirectly.
4. Review every Markup() call and trace the source of its argument. If any part of the argument can be influenced by user input, apply escape() first.
5. For user-provided rich text, use a dedicated HTML sanitization library (bleach) rather than wrapping unsanitized HTML in Markup().
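Steps 2 and 4 can be sketched as a helper that escapes before wrapping; the badge() helper is hypothetical, and for rich text a sanitizer such as bleach.clean() is the better fit:

```python
from markupsafe import Markup, escape

def badge(label: str) -> Markup:
    # Escape the user-supplied label before interpolating it into
    # trusted, hardcoded HTML (Markup's own % and format() operators
    # would escape their arguments the same way).
    return Markup('<span class="badge">%s</span>' % escape(label))

rendered = badge("<script>alert(1)</script>")
assert "&lt;script&gt;" in rendered
assert rendered.startswith('<span class="badge">')
```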
Detection Scope
How Code Pathfinder analyzes your code for this vulnerability
This rule uses Or(calls("Markup"), calls("markupsafe.Markup")) to match both the directly imported constructor (from markupsafe import Markup; Markup(...)) and the module-qualified form (markupsafe.Markup(...)). This is a broad audit pattern -- every Markup() call is flagged for review regardless of whether its argument is demonstrably user-controlled. The rule surfaces all uses for manual inspection. For a dataflow rule that specifically traces user input from Flask request parameters into Markup(), a taint-analysis companion rule would provide confirmed-vulnerable findings as a complement to this audit coverage.
Similar Rules
Explore related security rules for Python
Flask Direct Use of Jinja2
Detects direct use of jinja2.Environment or jinja2.Template, which bypasses Flask's automatic HTML autoescaping and can lead to XSS vulnerabilities.
Flask render_template_string Usage
Detects any use of render_template_string(), which renders Jinja2 templates from Python strings and is inherently adjacent to Server-Side Template Injection (SSTI) vulnerabilities.
Flask Server-Side Template Injection (SSTI)
Detects user input from Flask request parameters flowing into render_template_string() as part of the template source. Pass user data as template variables, never in the template string itself.