Rebuff
by Protect AI · open-source · Last verified 2026-03-17
Self-hardening prompt injection detector that uses multiple defense layers to protect LLM applications. Combines heuristic analysis, LLM-based detection, and vector similarity to identify prompt injection attacks.
https://github.com/protectai/rebuff ↗D
D—Poor
Adoption: DQuality: B+Freshness: B+Citations: DEngagement: F
Specifications
- License
- Apache-2.0
- Pricing
- open-source
- Capabilities
- prompt-injection-detection, multi-layer-defense, heuristic-analysis, vector-similarity, self-hardening
- Integrations
- openai, langchain
- Use Cases
- prompt-injection-defense, input-sanitization, security-hardening, attack-detection
- API Available
- Yes
- SDK Languages
- python, typescript
- Deployment
- self-hosted
- Rate Limits
- N/A (open-source)
- Data Privacy
- Self-hosted, user-managed
- Tags
- prompt-injection, security, detection, defense
- Added
- 2026-03-17
- Completeness
- 100%
Index Score
38.3Adoption
35
Quality
74
Freshness
72
Citations
38
Engagement
0