Alerts
Events
DCR
Explore Cyware Products
Alerts
Events
DCR
Go to listing page
LLM Guard: Open-Source Toolkit for Securing Large Language Models
Security Products & Services
September 21, 2023
Help Net Security
The open-source toolkit provides evaluators for inputs and outputs of LLMs, offering features such as sanitization, detection of harmful language, data leakage prevention, and protection against prompt injection and jailbreak attacks.
Read More
LLM Guard
Open Source Tool
Large Language Model (LLM)
Data Leakage Prevention
Prompt Injection Attack
Publisher
Previous
UK Passes the Online Safety Bill — And No, It Doesn’t B ...
Laws, Policy, Regulations
Next
T-Mobile App Glitch Let Users See Other People’s Accoun ...
Breaches and Incidents