OWASP TOP 10 LLM VULNERABILITIES

Home

OWASP

LLM 01

Prompt Injection

Attackers manipulate input prompts to compromise model outputs and behavior.

How AHAGuardians mitigate this risk?

Robust Prompt Sanitization

Filters and neutralizes malicious prompt injections.

Controlled Prompt Templates

Enforces approved prompt structures, preventing free-form input.

Dynamic Prompt Parameterization

Safely inserts user data into prompts, minimizing injection risk.

LLM 02

Sensitive Information Disclosure

Unintended exposure of sensitive information during model operation.

How AHAGuardians mitigate this risk?

Input/Output Content Filtering

Detects and redacts PII, preventing exposure.

Data Loss Prevention (DLP) Integration

Enforces data security policies.

Contextual Data Masking

Dynamically masks sensitive data within outputs.

LLM 03

Supply Chain

Vulnerabilities arising from compromised model development and deployment elements.

How AHAGuardians mitigate this risk?

Model Provenance Tracking

Tracks model components for quick identification of compromises.

Secure Model Deployment Pipelines

Ensures only authorized models are deployed.

Dependency Vulnerability Scanning

Scans dependencies for known vulnerabilities.

LLM 04

Data and Model Poisoning

Introducing malicious data or poisoning the model to manipulate its behavior.

How AHAGuardians mitigate this risk?

Training Data Validation

Validates data for anomalies and poisoning

Model Performance Monitoring

Monitors for deviations indicating poisoning.

Differential Privacy Techniques

Mitigates poisoned data impact during training.

LLM 05

Improper Output Handling

Flaws in managing and safeguarding generated content, risking unintended consequences.

How AHAGuardians mitigate this risk?

Output Content Moderation

Filters harmful or inappropriate outputs.

Output Validation and Sanitization

Validates and sanitizes outputs.

Human-in-the-Loop Review

Enables human review for critical applications.

LLM 06

Excessive Agency

Overly permissive model behaviors that may lead to undesired outcomes.

How AHAGuardians mitigate this risk?

Controlled Model Access

Defines fine-grained access control policies.

Workflow Automation with Human Oversight

Requires human approval for critical actions.

Output Confinement

Restricts the scope of LLM outputs.

LLM 07

System Prompt Leakage

Leakage of internal prompts exposing the operational framework of the LLM.

How AHAGuardians mitigate this risk?

Secure Prompt Storage

Securely stores and encrypts system prompts.

Prompt Access Control

Implements strict access control for prompts.

Prompt Versioning and Auditing

Maintains version history and logs access.

LLM 08

Vector and Embedding Weaknesses

Weaknesses in vector storage and embedding representations that may be exploited.

How AHAGuardians mitigate this risk?

Secure Vector Database Integration

Integrates with secure vector databases.

Embedding Analysis and Monitoring

Analyzes and monitors embeddings for anomalies.

Adversarial Embedding Detection

Detects manipulative embeddings.

LLM 09

Misinformation

LLMs inadvertently generating or propagating misinformation.

How AHAGuardians mitigate this risk?

Bias Detection and Mitigation

Detects and mitigates biases in outputs.

Source Attribution

Provides source attribution for outputs.

Fact Verification Integratio

Integrates with fact-checking services.

LLM 10

Unbounded Consumption

Uncontrolled resource consumption by LLMs, causing service disruptions.

How AHAGuardians mitigate this risk?

Resource Quotas and Limits

Defines resource quotas for LLM usage.

Usage Monitoring and Alerting

Monitors usage and alerts on unusual activity.

Cost Management

Tracks spending and optimizes resource allocation.