Alerts
Events
DCR
Explore Cyware Products
Alerts
Events
DCR
Go to listing page
AI models can acquire backdoors from surprisingly few malicious documents
Threat Intel & Info Sharing
October 10, 2025
arstechnica
Recent research reveals that large language models (LLMs) can develop backdoor vulnerabilities from as few as 250 malicious documents embedded in their training data. The study involved training LLMs ranging from 600 million to 13 billion parameters.
Read More
ChatGPT
Claude
Publisher
Previous
RondoDox botnet targets 56 n-day flaws in worldwide att ...
Malware and Vulnerabilities
Next
All SonicWall Cloud Backup Users Had Firewall Configura ...
Breaches and Incidents