Lambda Code Injection via eval() or exec()

CRITICAL

Lambda event data flows to eval() or exec(), enabling arbitrary Python code execution with the full permissions of the Lambda execution environment.

Rule Information

Language
Python
Category
AWS Lambda
Author
Shivasurya
Last Updated
2026-03-22
Tags
python, aws, lambda, code-injection, eval, exec, rce, taint-analysis, inter-procedural, CWE-95, OWASP-A03
CWE References
CWE-95: Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection')

Interactive Playground

Experiment with the vulnerable code and security rule below. Edit the code to see how the rule detects different vulnerability patterns.

pathfinder scan --ruleset python/PYTHON-LAMBDA-SEC-022 --project .

About This Rule

Understanding the vulnerability and how it is detected

This rule detects code injection vulnerabilities in AWS Lambda functions where untrusted event data flows into Python's eval() or exec() functions.

Lambda functions receive input from the event dictionary populated by API Gateway, SQS, SNS, S3, DynamoDB Streams, and other triggers. Fields like event.get("body"), event.get("queryStringParameters"), and event["Records"] are attacker-controllable. There is no sanitization layer between the event payload and application code.

Python's eval() evaluates a string as a Python expression. exec() executes a string as Python statements. When Lambda event data reaches either function, an attacker can inject arbitrary Python code that executes with the full capabilities of the Lambda execution environment: all imported modules, the boto3 SDK with the execution role's AWS credentials, the filesystem (especially /tmp), and outbound network access.

Lambda code injection is particularly severe because the execution role's temporary AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN) are available as environment variables. A single eval() injection can exfiltrate these credentials, giving the attacker the full permissions of the Lambda's IAM role across all attached AWS services, with no additional authentication required.

There is no safe way to use eval() or exec() with event data from any Lambda trigger.
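As a concrete illustration, a minimal hypothetical handler showing the exact source-to-sink flow this rule flags might look like:

```python
# HYPOTHETICAL vulnerable handler -- the pattern this rule flags.
# A request body of "__import__('os').environ" would return the
# execution role's credentials to the attacker.
def lambda_handler(event, context):
    expression = event.get("body")  # taint source: attacker-controlled
    result = eval(expression)       # taint sink: arbitrary code execution
    return {"statusCode": 200, "body": str(result)}
```

A benign body such as `"1 + 2"` and a credential-stealing payload take the same path; nothing in the handler distinguishes them.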

Security Implications

Potential attack scenarios if this vulnerability is exploited

1. Direct AWS Credential Exfiltration

The Lambda execution environment automatically provides temporary AWS credentials as environment variables. Injected code like __import__('os').environ can immediately access and exfiltrate AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN, giving the attacker the full IAM role permissions without any additional authentication steps.

2. Boto3 SDK Abuse via Execution Role

Lambda execution environments have boto3 pre-installed and the execution role credentials pre-configured. Injected code can import boto3 and call any AWS API permitted by the execution role: reading S3 buckets, querying DynamoDB, publishing to SNS/SQS, reading Secrets Manager, invoking other Lambda functions, and modifying IAM resources if the role has those permissions.

3. Complete Python Runtime Compromise

eval() injection gives attackers direct access to the Python runtime without any shell intermediary. Unlike command injection that requires a shell, eval() lets attackers traverse the Python object hierarchy, access arbitrary modules, modify the Lambda's global state, and install persistence mechanisms that affect subsequent warm invocations.

4. Python Sandbox Escape Is Not Feasible

Restricted namespaces passed to eval() (globals={}, locals={}) provide no meaningful protection. Python's rich object model allows traversal of the class hierarchy via __class__, __bases__, __subclasses__, and __init__ to access built-in functions and modules without direct name references. There is no safe sandbox for eval() with untrusted input.
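A short sketch of why restricted namespaces fail: even with builtins stripped, pure attribute traversal still reaches every loaded class.

```python
# eval() with empty builtins is not a sandbox. This payload uses no
# names from globals or locals -- attribute access alone walks
# tuple -> object -> every class loaded in the interpreter.
payload = "().__class__.__base__.__subclasses__()"
classes = eval(payload, {"__builtins__": {}}, {})
# "classes" is now a live list of every subclass of object,
# from which an attacker can recover importers and os-level access.
```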

How to Fix

Recommended remediation steps

1. Remove all uses of eval() and exec() with Lambda event data; there is no safe sanitizer for these functions.
2. Use ast.literal_eval() to safely parse Python literals (dicts, lists, strings, numbers) without executing arbitrary code.
3. For mathematical expressions, implement a custom recursive evaluator over the AST using ast.parse() with strict node-type validation.
4. For function dispatch based on event data, use an explicit allowlist dictionary mapping string names to callable objects.
5. For JSON-like data structures, use json.loads(), which parses data without executing code.
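The safe alternatives above can be sketched as follows; the operation names and payloads here are illustrative, not part of the rule:

```python
import ast
import json

# JSON bodies: json.loads() parses data without executing code.
data = json.loads('{"op": "sum", "values": [1, 2, 3]}')

# Python literals: ast.literal_eval() accepts only literal syntax
# and raises ValueError on calls, attribute access, or imports.
config = ast.literal_eval("{'retries': 3, 'regions': ['us-east-1']}")

# Function dispatch: an explicit allowlist instead of eval(name)().
OPERATIONS = {"sum": sum, "max": max, "min": min}

def dispatch(op_name, values):
    op = OPERATIONS.get(op_name)
    if op is None:
        raise ValueError(f"unsupported operation: {op_name!r}")
    return op(values)
```

Unknown operation names fail closed with an exception instead of being handed to the interpreter.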

Detection Scope

How Code Pathfinder analyzes your code for this vulnerability

This rule performs inter-procedural taint analysis with global scope. Sources are Lambda event dictionary access calls: calls("event.get"), calls("event.__getitem__"), including event.get("body"), event.get("queryStringParameters"), event.get("pathParameters"), and event["Records"]. Sinks are calls("eval") and calls("exec") with tainted input tracked via .tracks(0). There are no recognized sanitizers for eval() or exec() — any Lambda event data reaching these sinks is a confirmed vulnerability. The analysis follows taint across file and module boundaries.

Compliance & Standards

Industry frameworks and regulations that require detection of this vulnerability

CWE Top 25
CWE-95 (Eval Injection) appears in the CWE Most Dangerous Software Weaknesses list
OWASP Top 10
A03:2021 - Injection
PCI DSS v4.0
Requirement 6.2.4 - protect against injection attacks
NIST SP 800-53
SI-10: Information Input Validation; SI-3: Malicious Code Protection
AWS Security Best Practices
Never execute untrusted code; apply least-privilege execution roles


Frequently Asked Questions

Common questions about Lambda Code Injection via eval() or exec()

Why is eval() injection more dangerous in Lambda than on a traditional server?

In a traditional server, eval() injection gives the attacker Python runtime access but requires additional steps to reach cloud credentials. In Lambda, the execution role's temporary AWS credentials are pre-loaded as environment variables in every invocation. Injected code immediately has boto3 access with the full execution role permissions. The attacker can call any AWS API the role allows in the same invocation, with no lateral movement step.
Does restricting the globals and locals passed to eval() make it safe?

No. Restricting the globals dict does not prevent sandbox escapes. Python's object model allows traversal of the class hierarchy via ().__class__.__base__.__subclasses__() and similar patterns to access arbitrary built-in classes and modules without direct name access. Multiple CVEs exist for products that attempted restricted eval() sandboxes and were bypassed. The only safe approach is to never call eval() with event data at all.
What is the difference between eval() and exec()?

eval() evaluates a single expression and returns a value. exec() executes statements and can include loops, assignments, imports, and function definitions. Both give an attacker full Python code execution in the Lambda environment. exec() is technically more powerful (multi-statement programs), but eval() is sufficient to import modules, call functions, and exfiltrate data. Both must be treated as equally dangerous.
How can I safely evaluate mathematical expressions from event data?

Implement a safe arithmetic evaluator using ast.parse() with mode='eval', then walk the resulting AST to verify all nodes are in an allowlist of safe node types (BinOp, UnaryOp, Constant, and safe operator types). Recursively evaluate only the validated AST nodes. The secure_example in this rule demonstrates this pattern. For more complex expression languages, use a dedicated safe evaluation library with a defined grammar.
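A minimal sketch of such an evaluator follows; the operator allowlist is an illustrative subset, not the rule's own secure_example:

```python
import ast
import operator

# Allowlist: AST operator node type -> implementation.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_math_eval(expr: str):
    """Evaluate basic arithmetic without ever calling eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Anything else (names, calls, attributes) fails closed.
        raise ValueError(f"disallowed expression node: {type(node).__name__}")

    return _eval(ast.parse(expr, mode="eval"))
```

Because the walker fails closed on any unrecognized node, payloads built from names, calls, or attribute access are rejected before anything executes.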
Does wrapping eval() in try/except mitigate the vulnerability?

No. The try/except catches exceptions but does not prevent code execution. The attacker's injected code runs, and all non-exception side effects (reading environment variables, making outbound HTTP calls, importing modules) occur before any exception is raised. Successful exploitation may raise no exception at all if the payload is crafted to execute silently.

New feature

Get these findings posted directly on your GitHub pull requests

The Lambda Code Injection via eval() or exec() rule runs in CI and posts inline review comments on the exact lines — no dashboard, no SARIF viewer.
