Run this rule with:

pathfinder scan --ruleset python/PYTHON-DJANGO-SEC-051 --project .

About This Rule
Understanding the vulnerability and how it is detected
This audit rule flags all usages of django.utils.html.mark_safe() in Django applications regardless of whether user-controlled data is detected flowing into it. It is a visibility rule designed to surface all mark_safe() call sites for manual security review.
Django's template engine automatically escapes variables ({{ variable }}) to prevent XSS. mark_safe() is a signal to the template engine that a string is already safe HTML and should not be escaped. When mark_safe() is called on a string that contains unescaped user input, the protection is bypassed and XSS becomes possible.
mark_safe() is legitimately used in custom template tags and filters that generate controlled HTML, but it is frequently misused by developers who apply it to strings containing user data without prior escaping. This audit rule identifies all call sites so security reviewers can verify each one is used correctly.
Security Implications
Potential attack scenarios if this vulnerability is exploited
Auto-escaping Bypass Leading to XSS
Django templates auto-escape {{ variable }} to prevent XSS. mark_safe() tells the template engine to skip escaping. If a mark_safe() call is applied to a string that contains unescaped user input, that input is rendered raw in the browser and can contain malicious script tags or event handler attributes.
Latent Risk from Refactoring
A mark_safe() call that is currently safe (applied to a hardcoded HTML string) becomes unsafe if a developer later adds user input to the string before the mark_safe() call. This rule flags all mark_safe() usages so they are reviewed whenever the surrounding code changes.
Template Tag and Filter Vulnerabilities
Custom template tags and filters that use mark_safe() to return HTML are a common location for XSS vulnerabilities. If the tag or filter incorporates arguments passed from template context (which may originate from user input) without escaping them, the result is XSS.
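The contrast between the vulnerable filter pattern and its format_html() replacement can be sketched as follows (in a real tag library these functions would be decorated with @register.filter; the names here are illustrative):

```python
from django.utils.html import format_html
from django.utils.safestring import mark_safe

# Vulnerable pattern: the filter trusts its argument, but the argument
# comes from template context and may originate from user input.
def badge_unsafe(label):
    return mark_safe('<span class="badge">%s</span>' % label)

# Fixed: format_html() escapes every interpolated argument before
# marking the assembled string safe.
def badge_safe(label):
    return format_html('<span class="badge">{}</span>', label)

payload = '<img src=x onerror=alert(1)>'
print(badge_unsafe(payload))  # payload intact -> XSS
print(badge_safe(payload))    # payload escaped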
Chained mark_safe() and String Concatenation
Calling mark_safe() on a string assembled by concatenating trusted HTML with an untrusted value marks the entire result safe, so the untrusted portion is rendered unescaped in templates. Note that in current Django, concatenating an existing SafeString with an ordinary str via + drops the safety flag (the result would still be auto-escaped); the hazard lies in building the combined string first and then marking the whole thing safe.
How to Fix
Recommended remediation steps
1. Use format_html() instead of mark_safe() with f-strings; format_html() automatically escapes all interpolated arguments.
2. When mark_safe() must be used, ensure all user-controlled values are passed through escape() first, or use bleach.clean() for rich text that allows a controlled subset of HTML.
3. Never apply mark_safe() directly to request.GET or request.POST values without prior escaping.
4. Review all custom template tags and filters that return mark_safe() values to verify they escape any context-provided data.
5. Prefer Django's |escape template filter for in-template escaping and format_html() for Python-side HTML construction over manual mark_safe() patterns.
Detection Scope
How Code Pathfinder analyzes your code for this vulnerability
This rule uses QueryType pattern matching rather than taint analysis. It matches all calls to mark_safe() and django.utils.html.mark_safe() regardless of whether user-controlled data flows into the argument. This is an audit rule intended for security reviews and compliance inventories. Use PYTHON-DJANGO-SEC-050 (taint-based rule) for CI-integrated detection of confirmed XSS flows. For complete audit coverage, this rule is used alongside taint-based rules. The .where() clause constrains matches to Python files in Django project structures.
Similar Rules
Explore related security rules for Python
Django XSS via Direct HttpResponse with User Input
User input flows directly to HttpResponse without HTML escaping, enabling Cross-Site Scripting (XSS) attacks.
Django SafeString Subclass Audit
Class extends SafeString or SafeData, bypassing Django's auto-escaping for all instances. Audit to confirm the class properly sanitizes content.
Django XSS in HTML Email Body via EmailMessage
User input flows into HTML email body content without sanitization, enabling HTML injection in emails.