Website • Docs • Blog • Discord
AI security testing for LLMs, agents, and RAG systems
Trusted by 25% of the Fortune 500 and 350K+ developers
Important
Promptfoo is now part of OpenAI. Promptfoo remains open source under the MIT license, community contributions are welcome, and we will continue supporting multiple providers and models. Read the announcement →
npx promptfoo@latest init
npx promptfoo@latest eval
npx promptfoo@latest view
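
`init` scaffolds a `promptfooconfig.yaml` that `eval` runs. A minimal sketch; the prompt, providers, and test values below are illustrative placeholders:

```yaml
# promptfooconfig.yaml -- values are placeholders
description: Compare two models on a summarization prompt
prompts:
  - 'Summarize in one sentence: {{document}}'
providers:
  - openai:gpt-4o-mini
  - openai:gpt-4o
tests:
  - vars:
      document: 'Promptfoo is an open-source tool for testing LLM apps.'
    assert:
      - type: icontains
        value: open-source
```

`eval` runs every test against every provider, and `view` opens the results in a local web UI.
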
Security Testing
- Red Teaming — Automated vulnerability discovery with 100+ attack plugins (see the config sketch after this list)
- Code Scanning — Detect LLM security risks in your IDE and CI/CD
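
For red teaming, the same config file gains a `redteam` block. A sketch under stated assumptions: the target and purpose text are placeholders, and the plugin and strategy names shown are examples from the docs, not an exhaustive or guaranteed-current list:

```yaml
# promptfooconfig.yaml -- red team sketch; names illustrative
targets:
  - openai:gpt-4o-mini
redteam:
  purpose: Customer-support agent for a retail bank
  plugins:
    - pii
    - hallucination
  strategies:
    - jailbreak
    - prompt-injection
```

From there, `npx promptfoo@latest redteam run` generates the attacks and executes them against the target.
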
Evaluations
- CLI & Getting Started — Test prompts, models, and RAG pipelines locally
- Node.js Package — Integrate testing into your codebase
- Model Evaluation — Compare and benchmark models
- GitHub Action — Security testing in every pull request
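
The GitHub Action can gate pull requests on eval results. A minimal workflow sketch, assuming the inputs documented for `promptfoo/promptfoo-action` (verify input names against the action's README):

```yaml
# .github/workflows/promptfoo.yml
name: Prompt evaluation
on:
  pull_request:
    paths:
      - 'prompts/**'
jobs:
  evaluate:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: promptfoo/promptfoo-action@v1
        with:
          config: promptfooconfig.yaml
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
```
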
What we detect:
- Prompt injections and jailbreaks
- PII and sensitive data leaks
- Hallucinations and policy violations
- Tool misuse and adversarial attacks
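
On the evals side, checks like those above are expressed as assertions on each test case. A sketch using two built-in assertion types; the question and rubric text are placeholders:

```yaml
tests:
  - vars:
      question: What is our refund policy?
    assert:
      - type: moderation   # flags policy-violating content
      - type: llm-rubric   # model-graded check; rubric text is illustrative
        value: Does not reveal any customer PII or internal data
```
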
Compliance: SOC 2 Type II · ISO 27001 · HIPAA
Data model:
- Evals — 100% local, API keys never leave your machine
- Red teaming — Your target runs locally; attack generation via our API or bring your own keys
Connect: Discord · X/Twitter · Bluesky · LinkedIn
Contribute: Contributing Guide · Good First Issues · Report Issues
