pathfinder scan --ruleset python/PYTHON-LAMBDA-SEC-022 --project .

About This Rule
Understanding the vulnerability and how it is detected
This rule detects code injection vulnerabilities in AWS Lambda functions where untrusted event data flows into Python's eval() or exec() functions.
Lambda functions receive input from the event dictionary populated by API Gateway, SQS, SNS, S3, DynamoDB Streams, and other triggers. Fields like event.get("body"), event.get("queryStringParameters"), and event["Records"] are attacker-controllable. There is no sanitization layer between the event payload and application code.
Python's eval() evaluates a string as a Python expression. exec() executes a string as Python statements. When Lambda event data reaches either function, an attacker can inject arbitrary Python code that executes with the full capabilities of the Lambda execution environment: all imported modules, the boto3 SDK with the execution role's AWS credentials, the filesystem (especially /tmp), and outbound network access.
Lambda code injection is particularly severe because the execution role's temporary AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN) are available as environment variables. A single eval() injection can exfiltrate these credentials, giving the attacker the full permissions of the Lambda's IAM role across all attached AWS services, with no additional authentication required.
There is no safe way to use eval() or exec() with event data from any Lambda trigger.
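A minimal sketch of the vulnerable pattern described above. The handler shape follows the standard Python Lambda signature; the "expr" body field is illustrative, not part of the rule:

```python
import json

def lambda_handler(event, context):
    # UNSAFE: event["body"] is attacker-controlled (API Gateway, etc.)
    body = json.loads(event.get("body") or "{}")
    expression = body.get("expr", "")
    # Sink: arbitrary Python executes with the Lambda role's credentials
    result = eval(expression)
    return {"statusCode": 200, "body": json.dumps({"result": str(result)})}
```

Any string an attacker places in the body is executed verbatim; the benign-looking arithmetic case and a credential-stealing payload take the exact same code path.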
Security Implications
Potential attack scenarios if this vulnerability is exploited
Direct AWS Credential Exfiltration
The Lambda execution environment automatically provides temporary AWS credentials as environment variables. Injected code like __import__('os').environ can immediately access and exfiltrate AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN, giving the attacker the full IAM role permissions without any additional authentication steps.
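A hedged illustration of the exfiltration primitive, simulating the environment variable locally (the token value is a placeholder, not a real credential):

```python
import os

# Placeholder standing in for the real temporary credential
os.environ["AWS_SESSION_TOKEN"] = "placeholder-token"

# A string an attacker could submit in any event field:
payload = "__import__('os').environ.get('AWS_SESSION_TOKEN')"

# When the vulnerable handler eval()s the payload, the credential leaks
leaked = eval(payload)
```

In a real attack the payload would also transmit the value outward, e.g. via an HTTP request from the Lambda's network access.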
Boto3 SDK Abuse via Execution Role
Lambda execution environments have boto3 pre-installed and the execution role credentials pre-configured. Injected code can import boto3 and call any AWS API permitted by the execution role: reading S3 buckets, querying DynamoDB, publishing to SNS/SQS, reading Secrets Manager, invoking other Lambda functions, and modifying IAM resources if the role has those permissions.
Complete Python Runtime Compromise
eval() injection gives attackers direct access to the Python runtime without any shell intermediary. Unlike command injection that requires a shell, eval() lets attackers traverse the Python object hierarchy, access arbitrary modules, modify the Lambda's global state, and install persistence mechanisms that affect subsequent warm invocations.
Python Sandbox Escape Is Not Feasible
Restricted namespaces passed to eval() (globals={}, locals={}) provide no meaningful protection. Python's rich object model allows traversal of the class hierarchy via __class__, __bases__, __subclasses__, and __init__ to access built-in functions and modules without direct name references. There is no safe sandbox for eval() with untrusted input.
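This traversal can be demonstrated directly: even with builtins stripped and an empty locals dictionary, the object hierarchy remains reachable from a bare literal:

```python
# Attacker payload: reach object.__subclasses__() from a tuple literal
payload = "().__class__.__base__.__subclasses__()"

# Stripping builtins and emptying locals does not block the traversal
subclasses = eval(payload, {"__builtins__": {}}, {})
```

The returned list contains every loaded class, from which known escape chains recover `os`, `subprocess`, and the rest of the runtime.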
How to Fix
Recommended remediation steps
1. Remove all uses of eval() and exec() with Lambda event data; there is no safe sanitizer for these functions.
2. Use ast.literal_eval() to safely parse Python literals (dicts, lists, strings, numbers) without executing arbitrary code.
3. For mathematical expressions, implement a custom recursive evaluator over the AST using ast.parse() with strict node-type validation.
4. For function dispatch based on event data, use an explicit allowlist dictionary mapping string names to callable objects.
5. For JSON payloads, use json.loads(), which parses data without executing code.
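The steps above can be sketched together; the function names and the action allowlist are illustrative:

```python
import ast
import operator

def parse_literal(raw: str):
    """Safely parse a Python literal (dict, list, str, number, bool, None)."""
    return ast.literal_eval(raw)

# Binary and unary operators permitted in arithmetic expressions
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_math(expr: str):
    """Evaluate arithmetic over a strict AST allowlist; reject everything else."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"disallowed node: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval").body)

# Explicit dispatch allowlist: event strings map to known callables only
_ACTIONS = {
    "resize": lambda target: f"resizing {target}",
    "archive": lambda target: f"archiving {target}",
}

def dispatch(action: str, target: str):
    handler = _ACTIONS.get(action)
    if handler is None:
        raise ValueError(f"unknown action: {action}")
    return handler(target)
```

For JSON payloads, the standard json.loads() already parses without executing code, so no additional guard is needed on that path.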
Detection Scope
How Code Pathfinder analyzes your code for this vulnerability
This rule performs inter-procedural taint analysis with global scope. Sources are Lambda event dictionary access calls: calls("event.get"), calls("event.__getitem__"), including event.get("body"), event.get("queryStringParameters"), event.get("pathParameters"), and event["Records"]. Sinks are calls("eval") and calls("exec") with tainted input tracked via .tracks(0). There are no recognized sanitizers for eval() or exec() — any Lambda event data reaching these sinks is a confirmed vulnerability. The analysis follows taint across file and module boundaries.
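An illustrative flow that this kind of inter-procedural analysis would flag: the tainted value crosses a function boundary before reaching the sink (the helper name is hypothetical):

```python
def extract_expression(event):
    # Source: attacker-controlled query string parameter
    params = event.get("queryStringParameters") or {}
    return params.get("expr", "0")

def lambda_handler(event, context):
    expr = extract_expression(event)
    # Sink: taint from event.get() reaches eval() across a call boundary
    return {"statusCode": 200, "body": str(eval(expr))}
```

Purely local (intra-procedural) checks miss this pattern; tracking the return value of extract_expression back to its event.get() source is what makes the analysis inter-procedural.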
Compliance & Standards
Industry frameworks and regulations that require detection of this vulnerability
References
External resources and documentation
Similar Rules
Explore related security rules for Python
Lambda XSS via Tainted HTML Response Body
Lambda event data is embedded directly in an HTML response body returned to API Gateway, enabling Cross-Site Scripting attacks against end users.
Lambda Remote Code Execution via Pickle Deserialization
Lambda event data flows to pickle.loads() or pickle.load(), enabling arbitrary Python code execution during deserialization of attacker-controlled bytes.
Lambda Command Injection via os.system()
Lambda event data flows to os.system(), enabling arbitrary OS command execution inside the Lambda execution environment.
Frequently Asked Questions
Common questions about Lambda Code Injection via eval() or exec()
New feature
Get these findings posted directly on your GitHub pull requests
The Lambda Code Injection via eval() or exec() rule runs in CI and posts inline review comments on the exact lines — no dashboard, no SARIF viewer.