gws-modelarmor
Google Model Armor filters user-generated prompts and model responses for safety using configurable templates.
Setup & Installation
What This Skill Does
Google Model Armor filters user-generated prompts and model responses for safety using configurable templates. It sits between your application and an LLM, screening content before it reaches the model or the user.
Instead of building custom content moderation logic, you get Google's safety filters wired directly into your LLM pipeline with a single CLI call.
When to use it
- Filtering user prompts before sending them to an LLM API
- Scanning model responses for harmful content before displaying to end users
- Creating reusable safety templates across multiple AI applications
- Blocking prompt injection attempts in user-facing chat interfaces
- Auditing content policy violations in LLM-powered workflows
Similar Skills
best-practices
A checklist of modern web development standards covering HTTPS, CSP headers, input sanitization, deprecated API avoidance, and HTML validity.
auth0-android
Adds authentication to native Android apps using the Auth0 SDK.
auth0-angular
Adds authentication to Angular apps using the @auth0/auth0-angular SDK.
auth0-aspnetcore-api
Adds JWT access token validation to ASP.
