# PYTHON-LAMBDA-SEC-022: Lambda Code Injection via eval() or exec()

> **Severity:** CRITICAL | **CWE:** CWE-95 | **OWASP:** A03:2021

- **Language:** Python
- **Category:** AWS Lambda
- **URL:** https://codepathfinder.dev/registry/python/aws_lambda/PYTHON-LAMBDA-SEC-022
- **Detection:** `pathfinder scan --ruleset python/PYTHON-LAMBDA-SEC-022 --project .`

## Description

This rule detects code injection vulnerabilities in AWS Lambda functions where
untrusted event data flows into Python's eval() or exec() functions.

Lambda functions receive input from the event dictionary populated by API Gateway,
SQS, SNS, S3, DynamoDB Streams, and other triggers. Fields like event.get("body"),
event.get("queryStringParameters"), and event["Records"] are attacker-controllable.
There is no sanitization layer between the event payload and application code.

Python's eval() evaluates a string as a Python expression. exec() executes a string
as Python statements. When Lambda event data reaches either function, an attacker
can inject arbitrary Python code that executes with the full capabilities of the
Lambda execution environment: all imported modules, the boto3 SDK with the execution
role's AWS credentials, the filesystem (especially /tmp), and outbound network access.

Lambda code injection is particularly severe because the execution role's temporary
AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN) are
available as environment variables. A single eval() injection can exfiltrate these
credentials, giving the attacker the full permissions of the Lambda's IAM role
across all attached AWS services, with no additional authentication required.

There is no safe way to use eval() or exec() with event data from any Lambda trigger.
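To make the blast radius concrete, here is a minimal local simulation of the credential-exfiltration path. The payload string and the fake credential value are illustrative, not taken from a real incident; the point is that a single eval()'d expression is enough:

```python
import os

# Simulate the Lambda environment locally with a fake credential.
os.environ["AWS_SESSION_TOKEN"] = "example-session-token"

# Hypothetical attacker-supplied event field: one expression, no
# statements, yet it imports os and dumps every AWS_* variable.
payload = ("{k: v for k, v in __import__('os').environ.items()"
           " if k.startswith('AWS_')}")

stolen = eval(payload)  # exactly what the vulnerable handler executes
print(stolen["AWS_SESSION_TOKEN"])
```

In a real invocation, `stolen` would contain the execution role's live temporary credentials, ready to be sent to an attacker-controlled endpoint.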


## Vulnerable Code

```python
import json

# SEC-022: eval with event data from API Gateway query string
def handler_eval(event, context):
    expr = (event.get('queryStringParameters') or {}).get('expr')
    result = eval(expr)  # attacker-controlled expression executes here
    return {"statusCode": 200, "body": json.dumps({"result": result})}

# SEC-022: exec with event data
def handler_exec(event, context):
    code = event.get('code')
    exec(code)  # attacker-controlled statements execute here
    return {"statusCode": 200}
```

## Secure Code

```python
import ast
import json
import operator

# Safe arithmetic evaluator using AST node validation
SAFE_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}
SAFE_NODES = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
              ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub)

def safe_eval_arithmetic(expression):
    tree = ast.parse(expression, mode='eval')
    if not all(isinstance(n, SAFE_NODES) for n in ast.walk(tree)):
        raise ValueError("Unsafe expression")
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        elif isinstance(node, ast.Constant):
            if not isinstance(node.value, (int, float)):
                raise ValueError("Only numeric constants allowed")
            return node.value
        elif isinstance(node, ast.BinOp):
            return SAFE_OPS[type(node.op)](_eval(node.left), _eval(node.right))
        elif isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        raise ValueError(f"Unsupported node: {type(node).__name__}")
    return _eval(tree)

def lambda_handler(event, context):
    params = event.get('queryStringParameters', {}) or {}

    # SECURE: ast.literal_eval() for safe literal parsing (no code execution)
    config_str = params.get('config', '{}')
    try:
        config = ast.literal_eval(config_str)
        if not isinstance(config, dict):
            raise ValueError("Must be a dict")
    except (ValueError, SyntaxError):
        return {'statusCode': 400, 'body': 'Invalid config'}

    # SECURE: Custom AST evaluator for arithmetic only
    expr = params.get('expr', '1+1')
    try:
        result = safe_eval_arithmetic(expr)
    except (ValueError, SyntaxError):
        return {'statusCode': 400, 'body': 'Invalid expression'}

    return {'statusCode': 200, 'body': json.dumps({'result': result, 'config': config})}

```
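For contrast with eval(), ast.literal_eval() accepts only Python literal syntax and raises on anything that could execute code. A quick self-contained check (the hostile strings are illustrative):

```python
import ast

# Literals parse fine...
config = ast.literal_eval("{'retries': 3, 'tags': ['a', 'b']}")

# ...but calls, attribute access, and dunder tricks are rejected.
rejected = 0
for hostile in ("__import__('os')", "().__class__", "open('/etc/passwd')"):
    try:
        ast.literal_eval(hostile)
    except ValueError:
        rejected += 1  # literal_eval refuses to evaluate non-literal nodes

print(config, rejected)
```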

## Detection Rule (Python SDK)

```python
from rules.python_decorators import python_rule
from codepathfinder import calls, flows, QueryType
from codepathfinder.presets import PropagationPresets

class Builtins(QueryType):
    fqns = ["builtins"]

_LAMBDA_SOURCES = [
    calls("event.get"),
    calls("event.items"),
    calls("event.values"),
    calls("*.get"),
]


@python_rule(
    id="PYTHON-LAMBDA-SEC-022",
    name="Lambda Code Injection via eval/exec",
    severity="CRITICAL",
    category="aws_lambda",
    cwe="CWE-95",
    tags="python,aws,lambda,code-injection,eval,exec,OWASP-A03,CWE-95",
    message="Lambda event data flows to eval()/exec()/compile(). Never eval untrusted data.",
    owasp="A03:2021",
)
def detect_lambda_code_injection():
    """Detects Lambda event data flowing to eval/exec/compile."""
    return flows(
        from_sources=_LAMBDA_SOURCES,
        to_sinks=[
            Builtins.method("eval", "exec", "compile"),
            calls("eval"),
            calls("exec"),
            calls("compile"),
        ],
        sanitized_by=[
            calls("ast.literal_eval"),
        ],
        propagates_through=PropagationPresets.standard(),
        scope="global",
    )
```

## How to Fix

- Remove all uses of eval() and exec() with Lambda event data; there is no safe sanitizer for these functions.
- Use ast.literal_eval() for safely parsing Python literals (dicts, lists, strings, numbers) without executing arbitrary code.
- For mathematical expressions, implement a custom recursive evaluator over the AST using ast.parse() with strict node type validation.
- For function dispatch based on event data, use an explicit allowlist dictionary mapping string names to callable objects.
- For JSON-like data structures, use json.loads() which is safe and standardized.
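The allowlist-dispatch point above can be sketched as follows; the action names and handler functions are made up for illustration:

```python
import json

# Hypothetical business operations -- the only code reachable from event data.
def create_order(payload):
    return {"order": payload, "status": "created"}

def cancel_order(payload):
    return {"order": payload, "status": "cancelled"}

# Explicit allowlist: event strings map to callables; nothing else is reachable.
ACTIONS = {
    "create": create_order,
    "cancel": cancel_order,
}

def lambda_handler(event, context):
    action = ACTIONS.get(event.get("action"))
    if action is None:
        return {"statusCode": 400, "body": "Unknown action"}
    result = action(event.get("payload"))
    return {"statusCode": 200, "body": json.dumps(result)}

resp = lambda_handler({"action": "create", "payload": {"sku": "X1"}}, None)
```

Because dispatch goes through a fixed dictionary rather than eval() or getattr(), an unexpected action string can only produce a 400 response, never code execution.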

## Security Implications

- **Direct AWS Credential Exfiltration:** The Lambda execution environment automatically provides temporary AWS credentials
as environment variables. Injected code like `__import__('os').environ` can
immediately access and exfiltrate AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
and AWS_SESSION_TOKEN, giving the attacker the full IAM role permissions without
any additional authentication steps.

- **Boto3 SDK Abuse via Execution Role:** Lambda execution environments have boto3 pre-installed and the execution role
credentials pre-configured. Injected code can import boto3 and call any AWS API
permitted by the execution role: reading S3 buckets, querying DynamoDB, publishing
to SNS/SQS, reading Secrets Manager, invoking other Lambda functions, and
modifying IAM resources if the role has those permissions.

- **Complete Python Runtime Compromise:** eval() injection gives attackers direct access to the Python runtime without any
shell intermediary. Unlike command injection that requires a shell, eval() lets
attackers traverse the Python object hierarchy, access arbitrary modules, modify
the Lambda's global state, and install persistence mechanisms that affect
subsequent warm invocations.

- **Python Sandbox Escape Is Not Feasible:** Restricted namespaces passed to eval() (globals={}, locals={}) provide no
meaningful protection. Python's rich object model allows traversal of the class
hierarchy via `__class__`, `__bases__`, `__subclasses__`, and `__init__` to access
built-in functions and modules without direct name references. There is no safe
sandbox for eval() with untrusted input.


## FAQ

**Q: Why is eval() injection in Lambda more severe than in a traditional server?**

In a traditional server, eval() injection gives the attacker Python runtime access
but requires additional steps to access cloud credentials. In Lambda, the execution
role's temporary AWS credentials are pre-loaded as environment variables in every
invocation. Injected code immediately has boto3 access with the full execution role
permissions. The attacker can call any AWS API the role allows in the same
invocation, without any lateral movement step.


**Q: Can passing a restricted globals or locals dict to eval() make it safe?**

No. Restricting the globals dict does not prevent sandbox escapes. Python's object
model allows traversal of the class hierarchy via `().__class__.__base__.__subclasses__()`
and similar patterns to access arbitrary built-in classes and modules without
direct name access. Multiple CVEs exist for products that attempted restricted
eval() sandboxes and were bypassed. The only safe approach is to not call eval()
with event data at all.
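A short demonstration of why the restricted-namespace approach fails. This is the standard CPython escape idiom; which subclass index is useful varies by version, so the sketch only shows that the full class hierarchy is reachable even with builtins stripped:

```python
# Even with builtins removed, a bare tuple reaches every loaded class.
classes = eval("().__class__.__base__.__subclasses__()",
               {"__builtins__": {}}, {})

# From this list an attacker can locate importers, file classes, etc.
print(len(classes))
```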


**Q: What is the difference between eval() and exec() risk in Lambda?**

eval() evaluates a single expression and returns a value. exec() executes statements
and can include loops, assignments, imports, and function definitions. Both give an
attacker full Python code execution in the Lambda environment. exec() is technically
more powerful (multi-statement programs), but eval() is sufficient to import modules,
call functions, and exfiltrate data. Both must be treated as equally dangerous.
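For instance, a single eval() expression is enough to import a module and call into it, with no statements involved:

```python
# eval() handles only one expression, yet that expression can import
# modules and call anything reachable from them.
value = eval("__import__('math').sqrt(16)")
print(value)
```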


**Q: Our Lambda uses eval() for a formula evaluation feature. How do I replace it?**

Implement a safe arithmetic evaluator using ast.parse() with mode='eval', then
walk the resulting AST with ast.walk() to verify all nodes are in an allowlist of
safe node types (BinOp, UnaryOp, Constant, and safe operator types). Recursively
evaluate only the validated AST nodes. The Secure Code section of this rule demonstrates
this pattern. For more complex expression languages, use a dedicated safe evaluation
library with a defined grammar.


**Q: Does wrapping eval() in try/except make it safe?**

No. The try/except catches exceptions but does not prevent code execution. The
attacker's injected code runs and all non-exception side effects (reading environment
variables, making outbound HTTP calls, importing modules) occur before any exception
is raised. Successful exploitation may not raise any exception at all if the payload
is crafted to execute silently.
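A local illustration of this ordering (the environment-variable marker and the payload are contrived): the side effect lands before the exception is ever raised, so the except clause changes nothing.

```python
import os

os.environ.pop("MARKER", None)

# Hypothetical payload: the first list element sets an env var (the "side
# effect"); the second raises NameError -- after the damage is done.
payload = "[__import__('os').environ.__setitem__('MARKER', 'pwned'), nope][1]"

try:
    eval(payload)
except NameError:
    pass  # the exception is caught...

print(os.environ.get("MARKER"))  # ...but the side effect already happened
```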


## References

- [CWE-95: Eval Injection](https://cwe.mitre.org/data/definitions/95.html)
- [OWASP Code Injection](https://owasp.org/www-community/attacks/Code_Injection)
- [Python ast.literal_eval() documentation](https://docs.python.org/3/library/ast.html#ast.literal_eval)
- [AWS Lambda Security Best Practices](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html)
- [OWASP Injection Prevention Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Injection_Prevention_Cheat_Sheet.html)
- [AWS Lambda Execution Environment](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtime-environment.html)

---

Source: https://codepathfinder.dev/registry/python/aws_lambda/PYTHON-LAMBDA-SEC-022
Code Pathfinder — Open source, type-aware SAST with cross-file dataflow analysis
