Red Teaming Script
by AaaS · open-source · Last verified 2026-03-01
Automated red teaming toolkit that generates and tests adversarial prompts against LLM applications. Covers jailbreak attempts, prompt injection variants, social engineering patterns, and boundary probing with categorized attack vectors and success tracking.
https://aaas.blog/script/red-teaming-script ↗C
C—Below Average
Adoption: CQuality: AFreshness: ACitations: CEngagement: F
Specifications
- License
- MIT
- Pricing
- open-source
- Capabilities
- attack-generation, jailbreak-testing, injection-variants, success-tracking, vulnerability-reporting
- Integrations
- openai, anthropic, pytest, pandas
- Use Cases
- security-testing, adversarial-evaluation, defense-validation, pre-deployment-hardening
- API Available
- No
- Language
- python
- Dependencies
- openai, anthropic, pytest, pandas, rich
- Environment
- Python 3.11+
- Est. Runtime
- 15-45 minutes depending on attack vector count
- Tags
- script, automation, red-teaming, adversarial, security
- Added
- 2026-03-17
- Completeness
- 100%
Index Score
46.7Adoption
46
Quality
84
Freshness
88
Citations
46
Engagement
0