AI-powered novel-to-video production platform
Turn your novel, script, or story idea into a complete short drama video — fully automated.
Live Demo · Quick Start · FAQ · Contributing
AIDrama Studio is an open-source platform that turns text (novels, scripts, story ideas) into short drama videos using AI. The entire pipeline is automated:
Novel Text → AI Script Analysis → Storyboard → Images → Videos → Voiceover → Final Drama
You paste in your story, and the AI handles everything: extracting characters, generating consistent character images, creating shot-by-shot storyboards, producing video clips, adding voice acting, and assembling the final product.
No video editing skills required. No design skills required. Just your story.
AI-generated short dramas — from text to video, fully automated:
| Feature | Description |
|---|---|
| AI Script Analysis | Paste a novel chapter and AI extracts characters, scenes, props, and plot structure automatically |
| Character Workshop | Generate consistent character images across all shots — same face, same outfit |
| Scene Generation | AI creates location backgrounds matching your story's setting |
| Smart Storyboard | Auto-generates shot-by-shot breakdown with camera angles, composition, and lighting |
| Video Generation | Multiple AI video models (Kling, Seedance 2.0, Vidu, etc.) generate each shot |
| AI Voiceover | Design unique voices for each character, generate lip-synced dialogue |
| Bilingual UI | Full Chinese / English interface, one-click switch |
| One Key, All Models | Just configure EvoLink — one API key unlocks text, image, video, and voice |
With a single EvoLink API key, you get access to:
| Type | Models |
|---|---|
| Text/LLM | GPT-4o, Gemini, Claude, Doubao, Qwen, and more |
| Image | FLUX, Seedance, Kling, Imagen, DALL-E, and more |
| Video | Kling, Seedance 2.0, Vidu, Veo, MiniMax, and more |
| Voice/TTS | CosyVoice, and more |
No need to register with multiple AI providers. EvoLink aggregates all major models behind a single API key. Register once, use everything. Advanced users can also configure individual provider keys if preferred.
This is the easiest way. You only need Docker Desktop installed.
```bash
# 1. Clone the project
git clone https://github.com/EvoLinkAI/ai-short-drama.git
cd ai-short-drama

# 2. Start everything (MySQL, Redis, MinIO, App — all included)
docker compose up -d

# 3. Wait ~30 seconds, then open your browser
open http://localhost:23000
```

That's it! The first startup takes a bit longer as it builds the Docker image.
What you'll see:
- App: http://localhost:23000
- Queue Dashboard: http://localhost:23010 (admin/changeme)
To stop:

```bash
docker compose down
```

To update:

```bash
git pull
docker compose down
docker compose up -d --build
```

For developers who want to modify the code.
Prerequisites:
- Node.js 18+
- Docker (for MySQL + Redis)
```bash
# 1. Clone and install
git clone https://github.com/EvoLinkAI/ai-short-drama.git
cd ai-short-drama
npm install

# 2. Set up environment
cp .env.example .env
# Edit .env — at minimum, set your AI provider API keys

# 3. Start database services
docker compose up mysql redis -d

# 4. Initialize database
npx prisma db push

# 5. Start dev server (with hot reload)
npm run dev
```

Visit http://localhost:3000
- Register an account — Click "Sign Up" on the landing page
- Get an EvoLink API key — Go to evolink.ai, register, and copy your API key
- Configure in Settings — Paste your EvoLink API key in Settings → API Configuration
- Create your first project — Click "Start Creating" and paste your story text
- Let AI do its thing — The platform will analyze your text, generate characters, create storyboards, and produce videos
| Category | Technology |
|---|---|
| Framework | Next.js 15 (App Router) + React 19 |
| Language | TypeScript (strict mode) |
| Database | MySQL 8.0 + Prisma ORM |
| Queue | Redis 7 + BullMQ (4 worker pools) |
| Storage | MinIO / S3-compatible (pluggable) |
| Auth | NextAuth.js v4 (JWT sessions) |
| i18n | next-intl (Chinese + English) |
| Styling | Tailwind CSS v4 |
| Video Editor | Remotion |
| Testing | Vitest (unit / integration / system / regression) |
```
src/
├── app/             # Next.js pages + 120+ API routes
├── components/      # Shared UI components
├── features/        # Video editor (Remotion)
├── lib/
│   ├── workers/     # BullMQ worker pools (image, video, voice, text)
│   ├── generators/  # Multi-provider media generation
│   ├── llm/         # Multi-provider LLM gateway
│   ├── billing/     # Usage tracking ledger
│   ├── storage/     # Pluggable storage (MinIO, local, COS, EvoLink)
│   ├── task/        # Task lifecycle + SSE streaming
│   └── media/       # Media object management
└── i18n/            # Internationalization config
messages/            # i18n translations (zh + en)
prisma/              # Database schema
tests/               # Multi-tier test suite
scripts/             # 30+ architecture guard scripts
```
Copy `.env.example` to `.env` and configure it.

Important: `API_ENCRYPTION_KEY` is used to encrypt all provider API keys in the database. If you change it after users have saved their keys, those keys become permanently unreadable. Generate a strong key once and keep it forever.

See `.env.example` for the full list of variables with explanations.
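One way to generate strong secrets once, as the note above recommends, is with `openssl` (a sketch; `NEXTAUTH_SECRET` and `API_ENCRYPTION_KEY` are the variable names this README mentions, and you would append the output to your `.env`):

```shell
# Generate two 32-byte random hex secrets, printed in .env format.
# Run once, save the output, and never change API_ENCRYPTION_KEY afterwards.
echo "NEXTAUTH_SECRET=$(openssl rand -hex 32)"
echo "API_ENCRYPTION_KEY=$(openssl rand -hex 32)"
```

Redirect with `>> .env` if you want the values written directly into the file.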
Q: Do I need a GPU?
A: No! All AI processing happens through cloud APIs (OpenAI, Google, etc.). Your server only needs to run the web app and coordinate tasks. A basic VPS or even a Raspberry Pi works.
Q: How much does it cost to use?
A: The platform itself is free and open-source. You pay for the AI API usage to providers like OpenAI, Google, etc. A typical 2-minute short drama costs roughly $1-5 in API fees depending on which models you choose.
Q: Docker says "port already in use"

A: Another service is using the same port. Either stop that service, or edit `docker-compose.yml` to change the port mapping (e.g., `"23000:3000"` → `"8080:3000"`).
Q: `docker compose up` hangs at building

A: The first build takes 5-10 minutes because it installs dependencies and compiles the app. Be patient. If it fails, check that Docker has enough memory allocated (at least 4 GB recommended).
Q: `npm install` fails with node-gyp errors

A: Make sure you're using Node.js 18+. Run `node -v` to check. If you're on an older version, install nvm and run `nvm use` in the project directory.
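A quick version check along those lines can be scripted (a sketch; it assumes `node` is on your PATH):

```shell
# Extract the major version from `node -v` (e.g. v20.11.1 -> 20)
# and fail fast if it is below 18.
major=$(node -v | sed 's/^v\([0-9]*\).*/\1/')
if [ "$major" -lt 18 ]; then
  echo "Need Node.js 18+, found $(node -v)" >&2
  exit 1
fi
echo "Node.js version OK: $(node -v)"
```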
Q: Database connection refused

A: Make sure MySQL is running. For Docker: `docker compose up mysql -d`. For a local MySQL: check that the server is started and that `DATABASE_URL` in `.env` matches your setup.
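For reference, Prisma's MySQL connection string follows the shape `mysql://USER:PASSWORD@HOST:PORT/DATABASE`. A local example (the credentials and database name below are placeholders, not values from this repo):

```
DATABASE_URL="mysql://root:password@localhost:3306/ai_short_drama"
```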
Q: Video generation is slow
A: Video generation depends on the AI provider. Kling and Seedance typically take 1-3 minutes per shot. The platform processes shots in parallel, so a 20-shot episode might take 5-10 minutes total.
Q: Images look inconsistent between shots
A: This is a common challenge with AI-generated content. Tips:
- Use the Character Workshop to generate reference images first
- The platform uses these references to maintain consistency
- Generating 2-4 candidates per shot and picking the best one helps
Q: Can I edit the generated storyboard?
A: Yes! Every shot's description, camera angle, and prompt can be edited manually before generating images/videos. The AI gives you a starting point that you can refine.
Q: What video formats are supported?
A: The platform generates MP4 (H.264) by default. Each shot is a separate video that you can download individually or as a ZIP package.
Q: How do I deploy to a cloud server?
A: Clone the repo on your server and run Docker Compose:

```bash
git clone https://github.com/EvoLinkAI/ai-short-drama.git
cd ai-short-drama
# Edit docker-compose.yml → change NEXTAUTH_SECRET, API_ENCRYPTION_KEY, etc.
docker compose up -d
```

Then set up a reverse proxy (Nginx/Caddy) for HTTPS. See the Quick Start section for details.
Q: How do I enable HTTPS?
A: We include a Caddyfile. Install Caddy, point your domain to the server, and run:

```bash
caddy run --config Caddyfile
```

Caddy auto-provisions TLS certificates.
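For orientation, a minimal Caddyfile for this kind of setup looks like the following (a sketch: `drama.example.com` is a placeholder domain, and port 23000 assumes the default Docker port mapping; the repo's bundled Caddyfile may differ):

```
drama.example.com {
    reverse_proxy localhost:23000
}
```

Caddy will obtain and renew the TLS certificate for the named domain automatically, provided the domain's DNS points at the server and ports 80/443 are reachable.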
We welcome contributions! See CONTRIBUTING.md for guidelines.
- Bug reports: Open an issue
- Feature requests: Open an issue
- Pull requests: Fork → branch → PR
AIDrama Studio — Free AI-powered novel-to-video production platform with 20+ models (Kling, Seedance 2.0, FLUX, Veo, GPT-4o). Fully automated pipeline from text to complete short drama video. Self-hosted, customizable, MIT licensed.
Built by EvoLinkAI · aidrama.dev · Powered by EvoLink API
MIT — free for personal and commercial use.
If this project helps you, give it a ⭐!


