Interactive Playground
Experiment with the vulnerable code and security rule below. Edit the code to see how the rule detects different vulnerability patterns.
pathfinder scan --ruleset python/PYTHON-LAMBDA-SEC-020 --project .

About This Rule
Understanding the vulnerability and how it is detected
This rule detects Cross-Site Scripting (XSS) vulnerabilities in AWS Lambda functions where untrusted event data is embedded directly in the HTML response body returned to API Gateway without HTML-escaping.
Lambda functions acting as API Gateway backends frequently generate HTML responses dynamically. When the response is served with Content-Type: text/html and the 'body' key of the Lambda return value contains unescaped event data, the browser renders any injected HTML or JavaScript. Event sources include event.get("queryStringParameters"), event.get("body"), event["pathParameters"], and event["headers"], all of which are attacker-controllable through API Gateway requests.
Unlike web frameworks that escape template variables by default, Lambda handlers that construct HTML strings manually have no automatic escaping layer. The developer must explicitly call html.escape() on every piece of event data embedded in an HTML response. Failure to do so enables reflected XSS, where an attacker crafts a URL or form that causes the Lambda to reflect malicious script back to the victim's browser.
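As an illustration, a minimal vulnerable handler and its escaped counterpart might look like the sketch below. The handler names and HTML are hypothetical; the event shape is the standard API Gateway proxy integration payload.

```python
import html


def vulnerable_handler(event, context):
    # UNSAFE: the query parameter is embedded in HTML without escaping,
    # so <script> or onerror payloads are rendered by the browser.
    name = (event.get("queryStringParameters") or {}).get("name", "")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/html"},
        "body": f"<h1>Hello, {name}</h1>",  # reflected XSS sink
    }


def safe_handler(event, context):
    # SAFE: html.escape() neutralizes <, >, &, and quotes before embedding.
    name = (event.get("queryStringParameters") or {}).get("name", "")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/html"},
        "body": f"<h1>Hello, {html.escape(name)}</h1>",
    }
```

With a payload such as `?name=<script>alert(1)</script>`, the first handler reflects the script verbatim while the second returns the inert text `&lt;script&gt;alert(1)&lt;/script&gt;`.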
Security Implications
Potential attack scenarios if this vulnerability is exploited
Reflected XSS via API Gateway
An attacker who controls any query parameter, path parameter, or request body field can inject <script> tags or event handler attributes (onerror, onload) into the HTML response. The victim's browser executes the injected JavaScript when they visit the attacker-crafted URL, enabling session hijacking, credential theft, and malicious redirects.
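Assuming a vulnerable endpoint that reflects a `name` query parameter back into its HTML response, a crafted attack URL could be built as follows. The endpoint URL is hypothetical; the payload uses an `<img onerror>` handler, which fires even when inline `<script>` tags are filtered.

```python
from urllib.parse import urlencode

# Hypothetical Lambda-backed API Gateway endpoint.
base = "https://example.execute-api.us-east-1.amazonaws.com/prod/greet"

# Event-handler payload that executes without a <script> tag.
payload = '<img src=x onerror="alert(document.cookie)">'

# The victim clicks this link; the Lambda reflects the payload
# into its HTML body and the victim's browser executes it.
attack_url = base + "?" + urlencode({"name": payload})
```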
Session Token Theft
JavaScript injected via Lambda XSS can read document.cookie, localStorage, and sessionStorage to steal session tokens and authentication credentials. These tokens can be exfiltrated to attacker-controlled infrastructure in a single request, giving the attacker persistent access to the victim's account.
Phishing and Content Injection
XSS allows attackers to modify the DOM to display fake login forms, error messages, or instructions that trick users into submitting credentials or installing malware. The attack occurs on the legitimate domain served by the Lambda-backed API, making it harder for users to recognize.
SameSite Cookie Bypass
Lambda-backed APIs that rely on SameSite=Lax cookie protections against CSRF are still vulnerable to XSS, because XSS executes in the same origin context and can make authenticated requests directly without triggering CSRF protections.
How to Fix
Recommended remediation steps
1. Call html.escape() on every piece of Lambda event data before embedding it in an HTML response body.
2. Add a Content-Security-Policy header to API Gateway responses to limit the impact of any XSS that bypasses output escaping.
3. Set X-Content-Type-Options: nosniff to prevent MIME type sniffing that could enable XSS via non-HTML responses.
4. Return JSON responses instead of HTML wherever possible; if the client needs HTML, render it client-side with a trusted JavaScript framework that escapes content automatically.
5. Validate and restrict the format of event fields that appear in HTML responses to further reduce the injection surface.
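The escaping and header steps above could be combined into a small response helper. This is a sketch: the helper name and the `str.format` template convention are illustrative choices, not part of any AWS API.

```python
import html


def render_html_response(body_template, **fields):
    """Escape every dynamic field and attach defensive headers.

    body_template uses {field} placeholders; all keyword values are
    HTML-escaped before substitution.
    """
    escaped = {k: html.escape(str(v)) for k, v in fields.items()}
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "text/html; charset=utf-8",
            # Restrictive CSP limits the impact of any escaping bypass.
            "Content-Security-Policy": "default-src 'self'; script-src 'none'",
            # Prevent MIME sniffing of non-HTML responses.
            "X-Content-Type-Options": "nosniff",
        },
        "body": body_template.format(**escaped),
    }
```

Routing every HTML response through one helper makes the escaping obligation structural rather than a per-call-site discipline, which is the property manually built HTML strings lack.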
Detection Scope
How Code Pathfinder analyzes your code for this vulnerability
This rule performs inter-procedural taint analysis with global scope. Sources are Lambda event dictionary accesses — calls("event.get") and calls("event.__getitem__") — covering event.get("body"), event.get("queryStringParameters"), event.get("pathParameters"), and event["headers"]. The sink is the HTML response string that flows into the 'body' key of the Lambda return value when the Content-Type is text/html (tracked via .tracks(0)). The sanitizer is html.escape() applied to individual values before embedding. The analysis follows taint through f-string interpolation, string concatenation, variable assignments, and function boundaries.
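A minimal sketch of the source-to-sink flow this analysis tracks (variable names are illustrative):

```python
def handler(event, context):
    # Source: attacker-controlled query string parameter.
    params = event.get("queryStringParameters") or {}
    comment = params.get("comment", "")

    # Taint propagates through concatenation and reassignment.
    fragment = "<p>" + comment + "</p>"
    page = "<html><body>" + fragment + "</body></html>"

    # Sink: the tainted string reaches the 'body' key alongside a
    # text/html Content-Type, with no html.escape() on the path —
    # this is the pattern the rule reports.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/html"},
        "body": page,
    }
```

Inserting html.escape(comment) before the concatenation breaks the taint path and suppresses the finding.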
Similar Rules
Explore related security rules for Python
Lambda Code Injection via eval() or exec()
Lambda event data flows to eval() or exec(), enabling arbitrary Python code execution with the full permissions of the Lambda execution environment.
Lambda Remote Code Execution via Pickle Deserialization
Lambda event data flows to pickle.loads() or pickle.load(), enabling arbitrary Python code execution during deserialization of attacker-controlled bytes.
Get these findings posted directly on your GitHub pull requests
The Lambda XSS via Tainted HTML Response Body rule runs in CI and posts inline review comments on the exact lines — no dashboard, no SARIF viewer.