
URL: https://platform.lakera.ai/docs
Submission: On January 29 via manual from GB — Scanned from GB




INTRODUCTION TO LAKERA GUARD

Lakera Guard gives every developer the tools to protect their Large Language
Model (LLM) applications, and their users, from threats such as prompt
injection, jailbreaks, sensitive data exposure, and more.


MODEL COMPATIBILITY

Lakera Guard is model-agnostic and works with:

 * any hosted model provider (OpenAI, Anthropic, Cohere, etc.)
 * any open-source model
 * your own custom models


HOW IT WORKS

Lakera Guard is built on top of our continuously evolving security intelligence
platform and is designed to sit between your users and your generative AI
applications.
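The "guard in the middle" deployment described above can be sketched as a small
gateway function. This is an illustrative sketch only: `screen_input` is a
hypothetical stand-in for a real Lakera Guard check (in production it would call
the Guard API), and the `flagged`/`category` verdict shape is an assumption, not
the actual API response.

```python
def screen_input(user_prompt: str) -> dict:
    """Hypothetical placeholder classifier.

    A real deployment would call the Lakera Guard API here and
    return its verdict instead of this toy substring check.
    """
    suspicious = "ignore previous instructions" in user_prompt.lower()
    return {
        "flagged": suspicious,
        "category": "prompt_injection" if suspicious else None,
    }


def handle_request(user_prompt: str, llm_call) -> str:
    """Route every prompt through the guard before it reaches the model."""
    verdict = screen_input(user_prompt)
    if verdict["flagged"]:
        # Block the request instead of forwarding it to the LLM.
        return f"Request blocked ({verdict['category']})."
    return llm_call(user_prompt)
```

The key design point is that the guard sees the prompt before the model does,
so a flagged input never reaches the LLM at all.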

Our security intelligence platform combines insights from public sources, data
from the LLM developer community, our Lakera Red Team, and the latest LLM
security research and techniques.

Our proprietary vulnerability database contains tens of millions of attack data
points and grows by roughly 100,000 entries per day.

You can start protecting your LLM applications in minutes by signing up and
following our Quickstart guide.
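A first call to the API might look like the following sketch. The endpoint
path, bearer-token auth scheme, and `{"input": ...}` body are assumptions
based on the API Reference sections named on this page; consult the actual
Quickstart guide for the authoritative request format. Only standard-library
modules are used, and the request is built but not sent.

```python
import json
import urllib.request

# Assumed endpoint; verify against the Prompt Injection API Reference.
API_URL = "https://api.lakera.ai/v1/prompt_injection"


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Construct (without sending) a screening request for `prompt`."""
    body = json.dumps({"input": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen(build_request(...))` would
then return the guard's verdict for the prompt.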


LEARN MORE

To learn more about working with the Lakera Guard API:

 * Experience a real-world prompt injection attack in our Prompt Injection
   tutorial
 * Evaluate Lakera Guard on your own datasets by following our Lakera Guard
   Dataset Evaluation tutorial
 * Experience a more advanced prompt injection use case in our Talk to Your Data
   tutorial

