# firebase-ai-logic-basics
Integrates Firebase AI Logic into web and mobile apps so you can call Gemini models directly from client-side code without managing a backend.
## What This Skill Does
Supports text generation, multimodal input (images, audio, video, PDFs), structured JSON output, streaming responses, and on-device inference via Gemini Nano.
Handles Firebase SDK setup, API provider selection, and App Check configuration so you skip the boilerplate and go straight to calling Gemini from your client code.
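The setup this skill automates can be sketched for a web app roughly as follows, assuming the Firebase JS SDK (v11+), where AI Logic ships as the `firebase/ai` module. The config values and model name here are placeholders; copy real values from your Firebase console, and note that App Check initialization is elided.

```typescript
// Minimal sketch: call Gemini through Firebase AI Logic from a web client.
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// Placeholder project config -- replace with the values from your console.
const firebaseConfig = {
  apiKey: "YOUR_API_KEY",
  projectId: "your-project-id",
  appId: "YOUR_APP_ID",
};

const app = initializeApp(firebaseConfig);

// Select the Gemini Developer API backend; VertexAIBackend is the
// alternative provider. (App Check setup would also go here.)
const ai = getAI(app, { backend: new GoogleAIBackend() });

// The model name is configured server-side friendly: changing it here is the
// only client change needed to move versions.
const model = getGenerativeModel(ai, { model: "gemini-2.5-flash" });

async function describe(): Promise<string> {
  const result = await model.generateContent("Describe Firebase in one sentence.");
  return result.response.text();
}
```

Because the model is addressed by name through the SDK rather than by a hard-coded endpoint, swapping providers or model versions stays a one-line change.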
## When to use it
- Adding image captioning to a web app using Gemini's multimodal input
- Generating structured JSON responses from user prompts in a Firebase project
- Streaming chat responses in a multi-turn conversation UI
- Switching between on-device and cloud inference for offline-capable apps
- Updating the Gemini model version across all clients without redeploying code
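The streaming multi-turn case above can be sketched as follows, again assuming the Firebase JS SDK (v11+) `firebase/ai` module. The config values, conversation history, and `appendToTranscript` helper are placeholders for illustration.

```typescript
// Hedged sketch of a multi-turn streaming chat with Firebase AI Logic.
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

const app = initializeApp({
  apiKey: "YOUR_API_KEY",
  projectId: "your-project-id",
  appId: "YOUR_APP_ID",
});
const ai = getAI(app, { backend: new GoogleAIBackend() });
const model = getGenerativeModel(ai, { model: "gemini-2.5-flash" });

// Seed the chat with prior turns so the model has conversation context.
const chat = model.startChat({
  history: [
    { role: "user", parts: [{ text: "Hi, I have a question about Firestore." }] },
    { role: "model", parts: [{ text: "Sure, what would you like to know?" }] },
  ],
});

async function ask(question: string): Promise<void> {
  // sendMessageStream resolves with an async iterable of response chunks.
  const result = await chat.sendMessageStream(question);
  for await (const chunk of result.stream) {
    // Render each partial text chunk in the UI as it arrives.
    appendToTranscript(chunk.text());
  }
}

// Hypothetical stand-in for whatever your UI uses to render streamed text.
function appendToTranscript(text: string): void {
  console.log(text);
}
```

Streaming chunks into the transcript as they arrive keeps the UI responsive instead of blocking on the full response.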
## Similar Skills

### mcp-builder

A development guide for building MCP (Model Context Protocol) servers that connect LLMs to external APIs and services.

### skill-creator

A skill for building, testing, and refining other skills.

### template

A starter scaffold for building new agent skills.

### answers

Provides AI-generated answers grounded in live web search results through Brave's OpenAI-compatible chat completions endpoint.
