Skip to main content
ToolAI Ethics & Safetyv0.9

Vigil

by deadbits · open-source · Last verified 2026-03-17

Open-source prompt injection scanner for detecting and preventing attacks on LLM applications. Provides multiple detection methods including similarity matching, canary tokens, and heuristic analysis.

https://github.com/deadbits/vigil-llm
D
DPoor
Adoption: DQuality: B+Freshness: B+Citations: DEngagement: F

Specifications

License
MIT
Pricing
open-source
Capabilities
prompt-injection-scanning, canary-tokens, similarity-detection, heuristic-analysis
Integrations
Use Cases
prompt-injection-detection, input-scanning, security-auditing
API Available
Yes
SDK Languages
python
Deployment
self-hosted
Rate Limits
N/A (open-source)
Data Privacy
Self-hosted, user-managed; runs locally
Tags
prompt-injection, scanner, open-source, security
Added
2026-03-17
Completeness
100%

Index Score

31.45
Adoption
28
Quality
70
Freshness
72
Citations
25
Engagement
0

Explore the full AI ecosystem on Agents as a Service