Django mark_safe() Usage Audit

MEDIUM

mark_safe() bypasses Django's automatic HTML escaping. Audit all usages to confirm content is properly sanitized before being marked safe.

Rule Information

Language
Python
Category
Django
Author
Shivasurya
Last Updated
2026-03-22
Tags
python, django, xss, mark-safe, auto-escaping, audit, CWE-79, OWASP-A03
CWE References
CWE-79: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')

Interactive Playground

Experiment with the vulnerable code and security rule below. Edit the code to see how the rule detects different vulnerability patterns.

pathfinder scan --ruleset python/PYTHON-DJANGO-SEC-051 --project .

About This Rule

Understanding the vulnerability and how it is detected

This audit rule flags all usages of django.utils.html.mark_safe() in Django applications regardless of whether user-controlled data is detected flowing into it. It is a visibility rule designed to surface all mark_safe() call sites for manual security review.

Django's template engine automatically escapes variables ({{ variable }}) to prevent XSS. mark_safe() is a signal to the template engine that a string is already safe HTML and should not be escaped. When mark_safe() is called on a string that contains unescaped user input, the protection is bypassed and XSS becomes possible.

mark_safe() is legitimately used in custom template tags and filters that generate controlled HTML, but it is frequently misused by developers who apply it to strings containing user data without prior escaping. This audit rule identifies all call sites so security reviewers can verify each one is used correctly.
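The distinction can be sketched with a minimal model of Django's escaping machinery. The `SafeString`, `mark_safe`, and `conditional_escape` definitions below are simplified stand-ins for the real ones in `django.utils.safestring` and `django.utils.html`, used here so the example runs without a Django install:

```python
import html

class SafeString(str):
    """Minimal stand-in for django.utils.safestring.SafeString."""

def mark_safe(s):
    # Stand-in for django.utils.safestring.mark_safe: tags a string as
    # safe without performing any escaping.
    return SafeString(s)

def conditional_escape(s):
    # Stand-in for django.utils.html.conditional_escape: escapes unless
    # the value is already marked safe.
    return s if isinstance(s, SafeString) else SafeString(html.escape(str(s)))

# Legitimate use: a static, developer-controlled HTML literal.
icon = mark_safe('<span class="icon icon-user"></span>')

# Misuse: user input marked safe without escaping -- the pattern
# reviewers must catch when triaging this rule's findings.
user_bio = '<script>alert(1)</script>'
flagged = mark_safe(f'<p>{user_bio}</p>')

# Correct: escape the untrusted value before marking the result safe.
fixed = mark_safe(f'<p>{conditional_escape(user_bio)}</p>')
```

Both the flagged and the fixed call sites look identical to a pattern matcher, which is exactly why this rule surfaces all of them for human review.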

Security Implications

Potential attack scenarios if this vulnerability is exploited

1

Auto-escaping Bypass Leading to XSS

Django templates auto-escape {{ variable }} to prevent XSS. mark_safe() tells the template engine to skip escaping. If a mark_safe() call is applied to a string that contains unescaped user input, that input is rendered raw in the browser and can contain malicious script tags or event handler attributes.
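The bypass can be demonstrated with a small stand-in for the template engine's variable rendering. `render_variable` is a hypothetical helper that mimics what `{{ value }}` does under auto-escaping; it is not a Django API:

```python
import html

class SafeString(str):
    """Minimal stand-in for django.utils.safestring.SafeString."""

def mark_safe(s):
    return SafeString(s)

def render_variable(value):
    # Mimics {{ value }} under auto-escaping: escape everything except
    # values already marked safe.
    return str(value) if isinstance(value, SafeString) else html.escape(str(value))

payload = '<img src=x onerror=alert(1)>'

escaped = render_variable(payload)              # auto-escaping neutralizes it
bypassed = render_variable(mark_safe(payload))  # mark_safe() skips escaping
```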

2

Latent Risk from Refactoring

A mark_safe() call that is currently safe (applied to a hardcoded HTML string) becomes unsafe if a developer later adds user input to the string before the mark_safe() call. This rule flags all mark_safe() usages so they are reviewed whenever the surrounding code changes.
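A hypothetical before/after showing how such a regression slips in (the badge helpers are illustrative, and `mark_safe` is modeled with a minimal stand-in):

```python
class SafeString(str):
    """Minimal stand-in for django.utils.safestring.SafeString."""

def mark_safe(s):
    return SafeString(s)

# Revision 1: safe -- the argument is a hardcoded literal.
def new_badge():
    return mark_safe('<span class="badge">New</span>')

# Revision 2: an innocuous-looking refactor threads a user-controlled
# label into the same mark_safe() call, silently introducing XSS.
def labeled_badge(label):
    return mark_safe(f'<span class="badge">{label}</span>')

attack = labeled_badge('"><script>alert(1)</script>')
```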

3

Template Tag and Filter Vulnerabilities

Custom template tags and filters that use mark_safe() to return HTML are a common location for XSS vulnerabilities. If the tag or filter incorporates arguments passed from template context (which may originate from user input) without escaping them, the result is XSS.
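A sketch of the pattern in a custom filter. The `highlight` filter is illustrative; in a real project it would live in a `templatetags` module and be decorated with `@register.filter`, and the helpers below are stand-ins for Django's:

```python
import html

class SafeString(str):
    """Minimal stand-in for django.utils.safestring.SafeString."""

def mark_safe(s):
    return SafeString(s)

# Vulnerable: both arguments arrive from template context and may carry
# user input, but neither is escaped before the HTML is marked safe.
def highlight(text, term):
    return mark_safe(str(text).replace(str(term), f'<mark>{term}</mark>'))

# Fixed: escape the context data first, then build the markup.
def highlight_fixed(text, term):
    esc_text = html.escape(str(text))
    esc_term = html.escape(str(term))
    return mark_safe(esc_text.replace(esc_term, f'<mark>{esc_term}</mark>'))

evil_term = '<script>x</script>'
bad = highlight(f'say {evil_term} now', evil_term)
good = highlight_fixed(f'say {evil_term} now', evil_term)
```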

4

Chained mark_safe() and String Concatenation

Applying mark_safe() to the result of string concatenation or an f-string that mixes trusted HTML with untrusted data marks the entire result safe, so the untrusted portion is rendered unescaped. (Note that in current Django, concatenating a SafeString with an ordinary string via + yields a plain string that will still be escaped; the danger is calling mark_safe() on the combined string afterwards.)
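A minimal sketch of the pitfall, again with stand-ins for Django's helpers and a hypothetical `render_variable` mimicking `{{ value }}` rendering:

```python
import html

class SafeString(str):
    """Minimal stand-in for django.utils.safestring.SafeString."""

def mark_safe(s):
    return SafeString(s)

def render_variable(value):
    # Mimics {{ value }}: escape unless the value is marked safe.
    return str(value) if isinstance(value, SafeString) else html.escape(str(value))

header = mark_safe('<h2>Search results</h2>')
query = '<script>steal()</script>'          # untrusted

# Pitfall: blessing the whole concatenation marks the untrusted part safe too.
page = mark_safe(header + f'<p>You searched for {query}</p>')

rendered = render_variable(page)            # the script tag survives
```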

How to Fix

Recommended remediation steps

  1. Use format_html() instead of mark_safe() with f-strings; format_html() automatically escapes all interpolated arguments.
  2. When mark_safe() must be used, ensure all user-controlled values are passed through escape() first, or use bleach.clean() for rich text that allows a controlled subset of HTML.
  3. Never apply mark_safe() directly to request.GET or request.POST values without prior escaping.
  4. Review all custom template tags and filters that return mark_safe() values to verify they escape any context-provided data.
  5. Prefer Django's |escape template filter for in-template escaping and format_html() for Python-side HTML construction over manual mark_safe() patterns.
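The first recommendation can be sketched with a simplified model of format_html(). Real code imports it from django.utils.html; the stand-in below does what the real function does at its core, escaping each argument before substitution and marking the assembled result safe:

```python
import html

class SafeString(str):
    """Minimal stand-in for django.utils.safestring.SafeString."""

def mark_safe(s):
    return SafeString(s)

def conditional_escape(s):
    return s if isinstance(s, SafeString) else SafeString(html.escape(str(s)))

def format_html(fmt, *args, **kwargs):
    # Simplified model of django.utils.html.format_html: escape every
    # argument, substitute, and mark the assembled result safe.
    args = [conditional_escape(a) for a in args]
    kwargs = {k: conditional_escape(v) for k, v in kwargs.items()}
    return mark_safe(fmt.format(*args, **kwargs))

user_name = '<b>Mallory</b>'                # untrusted
link = format_html('<a href="/users/{}">{}</a>', 42, user_name)
```

Because every argument is escaped on the way in, the call site stays safe even if a later refactor swaps a literal for user input.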

Detection Scope

How Code Pathfinder analyzes your code for this vulnerability

This rule uses QueryType pattern matching rather than taint analysis. It matches all calls to mark_safe() and django.utils.html.mark_safe() regardless of whether user-controlled data flows into the argument. This is an audit rule intended for security reviews and compliance inventories. Use PYTHON-DJANGO-SEC-050 (taint-based rule) for CI-integrated detection of confirmed XSS flows. For complete audit coverage, this rule is used alongside taint-based rules. The .where() clause constrains matches to Python files in Django project structures.

Compliance & Standards

Industry frameworks and regulations that require detection of this vulnerability

CWE Top 25
CWE-79 ranked #2 in 2023 Most Dangerous Software Weaknesses
OWASP Top 10
A03:2021 - Injection (XSS)
PCI DSS v4.0
Requirement 6.2.4 and 6.3.2 - inventory of custom code; protect against XSS
NIST SP 800-53
SI-10: Information Input Validation; SI-15: Information Output Filtering
ISO 27001
A.14.2.8 - System security testing

Frequently Asked Questions

Common questions about Django mark_safe() Usage Audit

Why does this rule flag mark_safe() calls that are currently safe?

This is an audit rule that provides visibility into all mark_safe() usages, not just unsafe ones. The purpose is to create a reviewable inventory. A mark_safe() call on a static string is safe today, but if a developer later modifies the code to include a user-controlled value in that string, the mark_safe() call makes it unsafe. Auditing all call sites catches such regressions before they reach production.

How does format_html() differ from mark_safe()?

format_html() is like Python's str.format() but it escapes all interpolated arguments using escape() before substituting them, and returns a SafeString. It is the recommended way to construct HTML strings from user data in Django. mark_safe() simply tags an existing string as safe without performing any escaping; it is appropriate only when the string has already been escaped or when it is a static HTML literal.

Can bleach.clean() output be passed to mark_safe()?

Yes. bleach.clean() sanitizes HTML by stripping or escaping disallowed tags and attributes. Its output is safe to pass to mark_safe(). Ensure your bleach.clean() call uses a restrictive allowlist appropriate for your use case and that strip_comments=True (the default) is left enabled to prevent comment-based injection.

Does format_html() cost more than mark_safe()?

No meaningful performance difference. format_html() calls the same escape() function that you would call manually before mark_safe(). The escaping is a simple string replacement operation with negligible performance impact. Use format_html() as the default approach for all HTML construction.

How should this audit rule be combined with taint-based detection?

Run PYTHON-DJANGO-SEC-050 (taint-based XSS detection) to find confirmed flows from user input to HttpResponse. Run this audit rule (SEC-051) to find all mark_safe() usages for manual review. Cross-reference: any mark_safe() call where the argument contains user-controlled data without prior escape() or bleach.clean() processing is an XSS vulnerability.

New feature

Get these findings posted directly on your GitHub pull requests

The Django mark_safe() Usage Audit rule runs in CI and posts inline review comments on the exact lines — no dashboard, no SARIF viewer.

See how it works