agentman-video-production-pipeline
Complete workflow for creating AI-narrated promotional videos with synchronized motion graphics. Use when producing demo videos, product explainers, or social media content that combines React/Framer Motion animations with ElevenLabs AI voiceover.
Category: video-production · Version: v1.2.0
Tags: video, motion-graphics, elevenlabs, voiceover, framer-motion, react, social-media, content-creation, tiktok, vertical-video
# Video Production Pipeline
A comprehensive workflow for creating professional AI-narrated videos with synchronized motion graphics. This skill covers the entire pipeline from concept to final render, with real examples from production.
## Prerequisites
Before starting video production, ensure brand consistency by referencing:
- **agentman-styleguide**: For colors, typography, component patterns, and visual design system
- **agentman-brand-voice**: For tone, messaging, and content strategy
## Pipeline Overview
```
1. Brainstorm & Script → 2. Motion Design → 3. Voice Generation → 4. Transcription → 5. Timing Sync → 6. Assembly → 7. Export
```
---
## Phase 1: Motion Design with React/Framer Motion
### Project Structure (Real Example)
```
src/
├── app/
│   ├── App.tsx                    # Default horizontal app
│   ├── AppVertical.tsx            # Default vertical app
│   ├── AppOpenClawVertical.tsx    # Episode 1: OpenClaw Intro
│   ├── AppMessageFlow.tsx         # Episode 2: Message Flow
│   └── components/
│       ├── openclaw-vertical/     # 9 frames for Episode 1
│       │   ├── FrameOCV1.tsx      # Hook
│       │   ├── FrameOCV2.tsx      # Problem
│       │   └── ...
│       └── message-flow/          # 7 frames for Episode 2
│           ├── Frame1_Hook.tsx
│           ├── Frame2_Setup.tsx
│           └── shared/
│               ├── FlowBox.tsx    # Reusable step box
│               └── FlowArrow.tsx  # Animated connector
public/
├── openclaw_natural.mp3           # Episode 1 voiceover
└── message_flow.mp3               # Episode 2 voiceover
```
### Main Orchestrator Component (Production Code)
```tsx
// AppOpenClawVertical.tsx
import { useState, useEffect, useRef } from 'react';
import { AnimatePresence, motion } from 'motion/react';
import { FrameOCV1 } from './components/openclaw-vertical/FrameOCV1';
// ...FrameOCV2 through FrameOCV9 imported the same way

// Frame timings synced to voiceover at 1.2x speed.
// Declared at module scope so the array identity is stable and the
// progression timer isn't restarted by unrelated re-renders.
const frames = [
  { component: FrameOCV1, duration: 6000 },  // Hook - Scattered reality
  { component: FrameOCV2, duration: 5000 },  // Problem - The gap
  { component: FrameOCV3, duration: 5000 },  // Solution intro
  { component: FrameOCV4, duration: 10000 }, // Architecture diagram
  { component: FrameOCV5, duration: 12000 }, // How it works
  { component: FrameOCV6, duration: 12000 }, // Key benefits
  { component: FrameOCV7, duration: 9000 },  // Implications
  { component: FrameOCV9, duration: 10000 }, // Event-driven use cases
  { component: FrameOCV8, duration: 6000 },  // CTA + Branding
];

export default function AppOpenClawVertical() {
  const [currentFrame, setCurrentFrame] = useState(0);
  const [muteAudio, setMuteAudio] = useState(false);
  const audioRef = useRef<HTMLAudioElement>(null);

  // Handle ?mute URL param for silent recording
  useEffect(() => {
    const params = new URLSearchParams(window.location.search);
    setMuteAudio(params.has('mute'));
    if (audioRef.current && !params.has('mute')) {
      audioRef.current.playbackRate = 1.2;
      audioRef.current.play().catch(() => {});
    }
  }, []);

  // Frame progression timer
  useEffect(() => {
    if (currentFrame >= frames.length) return;
    const timer = setTimeout(() => {
      setCurrentFrame((prev) => prev + 1);
    }, frames[currentFrame].duration);
    return () => clearTimeout(timer);
  }, [currentFrame]);

  const CurrentFrameComponent = frames[currentFrame]?.component;

  return (
    <div className="w-screen h-screen overflow-hidden" style={{ backgroundColor: '#1f1f1e' }}>
      {!muteAudio && <audio ref={audioRef} src="/openclaw_natural.mp3" />}
      <AnimatePresence mode="wait">
        {CurrentFrameComponent && (
          <motion.div
            key={currentFrame}
            initial={{ opacity: 0 }}
            animate={{ opacity: 1 }}
            exit={{ opacity: 0 }}
            transition={{ duration: 0.3 }}
            className="w-full h-full"
          >
            <CurrentFrameComponent />
          </motion.div>
        )}
      </AnimatePresence>
    </div>
  );
}
```
### URL Routing for Multiple Videos
```tsx
// main.tsx - Route different video versions via URL params
const isMessageFlow = window.location.search.includes('message-flow');
const isOpenClawVertical = window.location.search.includes('openclaw-vertical');

const getApp = () => {
  if (isMessageFlow) return <AppMessageFlow />;
  if (isOpenClawVertical) return <AppOpenClawVertical />;
  return <App />;
};

// Access: localhost:5173?message-flow&mute
```
### Frame Components (Real Examples)
**Hook Frame - Dramatic Number Reveal:**
```tsx
// Frame1_Hook.tsx - "9 steps" dramatic reveal
import { motion } from 'motion/react';

export function Frame1_Hook() {
  return (
    <div className="w-full h-full flex flex-col items-center justify-center px-6">
      {/* Large "9" with dramatic entrance */}
      <motion.div
        initial={{ opacity: 0, scale: 0.5 }}
        animate={{ opacity: 1, scale: 1 }}
        transition={{ duration: 0.5, ease: 'easeOut' }}
      >
        <motion.p
          className="text-[120px] leading-none"
          style={{ color: '#D97757', fontWeight: 700 }}
          animate={{ scale: [1, 1.02, 1] }}
          transition={{ duration: 2, repeat: Infinity, delay: 1.5 }}
        >
          9
        </motion.p>
        <motion.p
          className="text-[32px] tracking-wide"
          style={{ color: '#FAF9F5', fontWeight: 500 }}
          initial={{ opacity: 0, y: 20 }}
          animate={{ opacity: 1, y: 0 }}
          transition={{ duration: 0.4, delay: 0.6 }}
        >
          steps
        </motion.p>
      </motion.div>
      <motion.p
        className="text-[14px] mt-6"
        style={{ color: '#D97757', fontWeight: 500 }}
        initial={{ opacity: 0 }}
        animate={{ opacity: 1 }}
        transition={{ duration: 0.3, delay: 2.0 }}
      >
        Let me show you.
      </motion.p>
    </div>
  );
}
```
**Problem Frame - Split Comparison:**
```tsx
// FrameOCV2.tsx - You vs AI comparison
import { motion } from 'motion/react';

export function FrameOCV2() {
  return (
    <div className="w-full h-full flex flex-col items-center justify-center px-6">
      {/* Split comparison - stacked */}
      <div className="flex flex-col items-center gap-4 mb-8">
        {/* You */}
        <motion.div
          initial={{ opacity: 0, y: -20 }}
          animate={{ opacity: 1, y: 0 }}
          transition={{ duration: 0.4, delay: 0.2 }}
        >
          <div className="px-6 py-3 rounded-lg" style={{ backgroundColor: 'rgba(250, 249, 245, 0.1)' }}>
            <span className="text-[14px]" style={{ color: '#FAF9F5', fontWeight: 500 }}>
              You: Scattered across 10 apps
            </span>
          </div>
        </motion.div>
        {/* Divider */}
        <motion.div
          className="w-12 h-[2px]"
          style={{ backgroundColor: 'rgba(250, 249, 245, 0.2)' }}
          initial={{ scaleX: 0 }}
          animate={{ scaleX: 1 }}
          transition={{ duration: 0.3, delay: 0.4 }}
        />
        {/* AI */}
        <motion.div
          initial={{ opacity: 0, y: 20 }}
          animate={{ opacity: 1, y: 0 }}
          transition={{ duration: 0.4, delay: 0.3 }}
        >
          <div className="px-6 py-3 rounded-lg" style={{ backgroundColor: 'rgba(217, 119, 87, 0.2)' }}>
            <span className="text-[14px]" style={{ color: '#D97757', fontWeight: 500 }}>
              AI: Stuck in a browser tab
            </span>
          </div>
        </motion.div>
      </div>
      {/* Punch line */}
      <motion.p
        className="text-[20px] mt-8 text-center"
        style={{ color: '#D97757', fontWeight: 600 }}
        initial={{ opacity: 0, y: 20 }}
        animate={{ opacity: 1, y: 0 }}
        transition={{ duration: 0.4, delay: 1.0 }}
      >
        The best AI in the world is useless... if you never use it.
      </motion.p>
    </div>
  );
}
```
**Flow Diagram with Glow Effects:**
```tsx
// Frame3_Steps1to3.tsx - Steps with highlight glow
const steps = [
  { number: 1, title: 'BAILEYS', subtitle: 'WhatsApp connection', icon: '📱', delay: 0.3, glowDelay: 0.5 },
  { number: 2, title: 'MONITOR', subtitle: 'Normalizes formats', icon: '📋', delay: 4.0, glowDelay: 4.2 },
  { number: 3, title: 'ROUTER', subtitle: 'Decides destination', icon: '🔀', delay: 8.0, glowDelay: 8.2 },
];

{steps.map((step) => (
  <motion.div
    key={step.number}
    className="w-full px-4 py-4 rounded-xl flex items-center gap-4"
    style={{ backgroundColor: 'rgba(250, 249, 245, 0.08)' }}
    initial={{ opacity: 0, scale: 0.9 }}
    animate={{
      opacity: 1,
      scale: 1,
      boxShadow: [
        '0 0 0 rgba(74,158,255,0)',
        '0 0 20px rgba(74,158,255,0.5)',
        '0 0 0 rgba(74,158,255,0)',
      ],
    }}
    transition={{
      duration: 0.3,
      delay: step.delay,
      boxShadow: { duration: 1.5, delay: step.glowDelay },
    }}
  >
    <div className="w-8 h-8 rounded-full flex items-center justify-center" style={{ backgroundColor: '#4A9EFF' }}>
      <span className="text-[14px]" style={{ color: '#FAF9F5', fontWeight: 600 }}>{step.number}</span>
    </div>
    <span className="text-[20px]">{step.icon}</span>
    <div>
      <span className="text-[14px]" style={{ color: '#FAF9F5', fontWeight: 600 }}>{step.title}</span>
      <span className="text-[11px] block" style={{ color: '#FAF9F5', opacity: 0.5 }}>{step.subtitle}</span>
    </div>
  </motion.div>
))}
```
**AI Highlight with Sparkle Particles:**
```tsx
// Special treatment for AI step (Step 6)
<motion.div
  className="w-full px-4 py-4 rounded-xl relative"
  style={{
    backgroundColor: 'rgba(217, 119, 87, 0.2)',
    border: '2px solid #D97757',
  }}
  initial={{ opacity: 0, scale: 0.8 }}
  animate={{
    opacity: 1,
    scale: 1.05,
    boxShadow: [
      '0 0 0 rgba(217,119,87,0)',
      '0 0 40px rgba(217,119,87,0.6)',
      '0 0 20px rgba(217,119,87,0.3)',
    ],
  }}
  transition={{ duration: 0.5, delay: 9.0, ease: 'easeOut' }}
>
  {/* Sparkle particles */}
  {[0, 1, 2, 3].map((i) => (
    <motion.div
      key={i}
      className="absolute w-1 h-1 rounded-full"
      style={{ backgroundColor: '#D97757', top: '50%', left: '50%' }}
      initial={{ opacity: 0, scale: 0 }}
      animate={{
        opacity: [0, 1, 0],
        scale: [0, 1, 0],
        x: [0, (i % 2 === 0 ? 1 : -1) * 30],
        y: [0, (i < 2 ? -1 : 1) * 20],
      }}
      transition={{ duration: 1, delay: 10.0 + i * 0.1, repeat: 2 }}
    />
  ))}
</motion.div>
```
### Reusable Components
**FlowBox - Consistent Step Styling:**
```tsx
// shared/FlowBox.tsx
import { ReactNode } from 'react';
import { motion } from 'motion/react';

interface FlowBoxProps {
  title: string;
  icon?: ReactNode;
  subtitle?: string;
  delay?: number;
  isHighlighted?: boolean;
  isAI?: boolean;
  dimmed?: boolean;
}

export function FlowBox({ title, icon, subtitle, delay = 0, isAI = false, dimmed = false }: FlowBoxProps) {
  return (
    <motion.div
      className="relative px-4 py-3 rounded-xl flex items-center gap-3"
      style={{
        backgroundColor: isAI ? 'rgba(217, 119, 87, 0.2)' : 'rgba(250, 249, 245, 0.08)',
        border: isAI ? '2px solid #D97757' : '1px solid rgba(250, 249, 245, 0.1)',
      }}
      initial={{ opacity: 0, scale: 0.9 }}
      animate={{ opacity: dimmed ? 0.2 : 1, scale: isAI ? 1.05 : 1 }}
      transition={{ duration: 0.3, delay }}
    >
      {icon && <span className="text-[16px]">{icon}</span>}
      <div>
        <span className="text-[14px]" style={{ color: isAI ? '#D97757' : '#FAF9F5', fontWeight: 600 }}>{title}</span>
        {subtitle && <span className="text-[10px] block" style={{ color: '#FAF9F5', opacity: 0.5 }}>{subtitle}</span>}
      </div>
    </motion.div>
  );
}
```
**FlowArrow - Animated Connector:**
```tsx
// shared/FlowArrow.tsx
import { motion } from 'motion/react';

export function FlowArrow({ delay = 0, color = '#4A9EFF' }: { delay?: number; color?: string }) {
  return (
    <motion.div
      className="flex justify-center py-1"
      initial={{ opacity: 0, y: -5 }}
      animate={{ opacity: 1, y: 0 }}
      transition={{ duration: 0.2, delay }}
    >
      <svg width="20" height="20" viewBox="0 0 20 20" fill="none">
        <path d="M10 4V16M10 16L6 12M10 16L14 12" stroke={color} strokeWidth="2" strokeLinecap="round" />
      </svg>
    </motion.div>
  );
}
```
```
### Animation Patterns Reference
```tsx
// Fade up entrance
initial={{ opacity: 0, y: 20 }}
animate={{ opacity: 1, y: 0 }}
transition={{ duration: 0.4, delay: 0.2 }}

// Slide in from left (for lists)
initial={{ opacity: 0, x: -20 }}
animate={{ opacity: 1, x: 0 }}
transition={{ duration: 0.3, delay: item.delay }}

// Scale entrance (for logos/CTAs)
initial={{ opacity: 0, scale: 0.9 }}
animate={{ opacity: 1, scale: 1 }}
transition={{ duration: 0.5, delay: 0.2 }}

// Glow highlight effect
animate={{
  boxShadow: ['0 0 0 rgba(74,158,255,0)', '0 0 20px rgba(74,158,255,0.5)', '0 0 0 rgba(74,158,255,0)'],
}}
transition={{ duration: 1.5 }}

// Subtle pulse (for CTAs)
animate={{ scale: [1, 1.02, 1] }}
transition={{ duration: 2, repeat: Infinity }}

// Staggered list items
{items.map((item, idx) => (
  <motion.div
    initial={{ opacity: 0, x: -20 }}
    animate={{ opacity: 1, x: 0 }}
    transition={{ duration: 0.3, delay: 0.3 + idx * 0.15 }}
  />
))}
```
---
## Phase 2: Script Writing with Emotional Markers
Write scripts with emotional markers that guide AI voice generation. **Reference agentman-brand-voice for tone and messaging guidelines.**
**Episode 1: OpenClaw Intro (Production Script)**
```
[thoughtful] Every day you check WhatsApp. Then Gmail. Slack.
[slightly annoyed] Your AI? Stuck in a browser tab you forget to open.
[frustrated] Your conversations are scattered. Your context is fragmented.
[emphatic] The best AI in the world is useless... if you never use it.
[hopeful] What if your AI assistant just... showed up where you already are?
[confident] Introducing OpenClaw.
[descriptive] Your messaging world, unified. WhatsApp, Telegram, Slack, Discord, Gmail, iMessage.
[warm] Same assistant. Same memory. Different front doors.
[enthusiastic] Your data stays local. Your conversations stay private.
[inspiring] The future of AI isn't another app. It's invisible infrastructure that just works.
```
**Episode 2: Message Flow (Production Script)**
```
[confident] 9 steps happen between you hitting send and getting an AI response. Let me show you what's actually going on.
[educational] This is OpenClaw — a local gateway that routes your messages to AI. Here's one WhatsApp message, start to finish.
[measured] Step 1: Baileys — that's the WhatsApp connection — receives your message.
[steady] Step 2: The monitor normalizes it. Different platforms send different formats. This makes them all look the same.
[thoughtful] Step 3: The router figures out where this should go. Is it a DM? A group? Different rules apply.
[slightly serious] Step 4: Gate check. Are you on the allowlist? If not, you get a pairing code, not a response.
[matter-of-fact] Step 5: Session lookup. The system finds your conversation history.
[emphasis] Step 6: Now the AI sees your message. Claude, GPT, whatever you've configured.
[continuing] Step 7: The agent builds a reply.
[descriptive] Step 8: Dispatcher formats it for WhatsApp — text, images, audio, whatever the response needs.
[concluding] Step 9: Delivery. Message goes back through Baileys to your phone.
[insightful] The AI only touches step 6. Everything else? Infrastructure you never think about. That's the whole point.
[warm, direct] If you build with AI agents, follow for more architecture breakdowns.
```
### Emotional Marker Reference
| Marker | Voice Effect |
|--------|--------------|
| `[thoughtful]` | Slower pace, contemplative tone |
| `[excited]` | Higher energy, faster pace |
| `[frustrated]` | Slight tension, emphasis on pain points |
| `[hopeful]` | Upward inflection, lighter tone |
| `[confident]` | Strong, assured delivery |
| `[warm]` | Friendly, approachable tone |
| `[emphasis]` | Key point, slightly louder |
| `[measured]` | Even pacing, educational tone |
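
Because the markers are plain bracketed prefixes, a script can be split into (marker, text) pairs with a few lines of code — handy for sanity-checking a script before sending it to voice generation. A minimal Python sketch (the `split_marker` helper and its regex are illustrative, not part of any tool):

```python
import re

# Matches a leading [marker] tag, e.g. "[warm, direct] Follow for more."
MARKER_RE = re.compile(r'^\s*\[([^\]]+)\]\s*')

def split_marker(line: str) -> tuple:
    """Return (marker, text) for a script line; marker is None if absent."""
    m = MARKER_RE.match(line)
    if m:
        return m.group(1), line[m.end():]
    return None, line.strip()
```

For example, `split_marker("[thoughtful] Every day you check WhatsApp.")` yields `("thoughtful", "Every day you check WhatsApp.")`, so a whole script can be scanned for lines that are missing a marker.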
---
## Phase 3: Voice Generation with ElevenLabs
### Using Text-to-Dialogue API
```bash
curl -X POST "https://api.elevenlabs.io/v1/text-to-dialogue" \
  -H "xi-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "[thoughtful] Every day you check WhatsApp. Then Gmail. Slack.\n\n[frustrated] Your AI? Stuck in a browser tab you forget to open.",
    "voice_id": "YOUR_VOICE_ID",
    "voice_settings": {
      "stability": 0.35,
      "similarity_boost": 0.85,
      "style": 0.25,
      "use_speaker_boost": true
    },
    "model_id": "eleven_multilingual_v2"
  }' \
  --output voiceover.mp3
```
### Recommended Voice Settings for Natural Speech
```json
{
  "stability": 0.35,
  "similarity_boost": 0.85,
  "style": 0.25,
  "use_speaker_boost": true
}
```
| Setting | Value | Effect |
|---------|-------|--------|
| `stability` | 0.35 | Lower = more expressive, natural variation |
| `similarity_boost` | 0.85 | High = closer to original voice |
| `style` | 0.25 | Moderate style amplification |
| `use_speaker_boost` | true | Enhanced voice clarity |
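
For scripted generation, the request body from the curl example above can be assembled programmatically. A Python sketch under that assumption — the `build_tts_payload` helper is illustrative, and the ElevenLabs API reference remains the authority on the request schema:

```python
def build_tts_payload(script_lines: list, voice_id: str) -> dict:
    """Assemble the JSON body used in the curl example.

    Script lines keep their emotional markers and are joined with blank
    lines; the voice settings mirror the recommended values above.
    """
    return {
        "text": "\n\n".join(script_lines),
        "voice_id": voice_id,
        "voice_settings": {
            "stability": 0.35,
            "similarity_boost": 0.85,
            "style": 0.25,
            "use_speaker_boost": True,
        },
        "model_id": "eleven_multilingual_v2",
    }
```

Building the payload from the script file (rather than pasting text into the curl command) keeps the script and the generated audio in sync across regenerations.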
---
## Phase 4: Transcription with Whisper
```bash
# Install whisper
pip install openai-whisper
# Install ffmpeg (required)
brew install ffmpeg
# Transcribe with word timestamps
whisper voiceover.mp3 --model base --output_format json --word_timestamps True
```
### Parse Transcript for Timing
```python
import json

with open('voiceover.json', 'r') as f:
    data = json.load(f)

for segment in data['segments']:
    print(f"{segment['start']:.1f}s - {segment['end']:.1f}s: {segment['text']}")
```
---
## Phase 5: Frame Duration Synchronization
### Calculate Adjusted Timings for 1.2x Speed
```
Original Duration: 90.6 seconds
Speed Factor: 1.2x
Adjusted Duration: 90.6 ÷ 1.2 = 75.5 seconds
```
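
The same division applies to every Whisper timestamp: divide by the playback rate to get the on-screen time. A small Python sketch (helper names are illustrative):

```python
def adjust_for_playback(seconds: float, rate: float = 1.2) -> float:
    """Convert a timestamp from the original recording to its position
    in the sped-up playback (e.g. 90.6 s -> 75.5 s at 1.2x)."""
    return seconds / rate

def adjust_segments(segments: list, rate: float = 1.2) -> list:
    """Rescale Whisper segment boundaries to the sped-up timeline."""
    return [
        {**seg, "start": seg["start"] / rate, "end": seg["end"] / rate}
        for seg in segments
    ]
```

Running the Phase 4 transcript through `adjust_segments` gives the timeline the frame durations below should match.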
### Production Frame Timings (Episode 1)
```tsx
// Synced to voiceover at 1.2x speed
const frames = [
  { component: FrameOCV1, duration: 6000 },  // 0-5.8s → Hook
  { component: FrameOCV2, duration: 5000 },  // 6.2-11.3s → Problem
  { component: FrameOCV3, duration: 5000 },  // 11.9-16.3s → Solution intro
  { component: FrameOCV4, duration: 10000 }, // 16.9-26.3s → Architecture
  { component: FrameOCV5, duration: 12000 }, // 26.8-38.4s → How it works
  { component: FrameOCV6, duration: 12000 }, // 39.1-51.0s → Benefits
  { component: FrameOCV7, duration: 9000 },  // 51.7-60.6s → Implications
  { component: FrameOCV9, duration: 10000 }, // 61.1-70.4s → Use cases
  { component: FrameOCV8, duration: 6000 },  // 70.9-75.5s → CTA
];
```
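
Since the frame durations together have to cover the adjusted audio length, it is worth checking the sum before recording. A Python sketch (the `check_total_duration` helper is illustrative; the durations are the Episode 1 values, which total 75.0 s against the 75.5 s adjusted voiceover):

```python
def check_total_duration(durations_ms: list, audio_seconds: float,
                         tolerance_s: float = 1.0) -> bool:
    """True if the frame durations (in ms) cover the adjusted audio
    length to within `tolerance_s` seconds."""
    total_s = sum(durations_ms) / 1000
    return abs(total_s - audio_seconds) <= tolerance_s

# Episode 1 frame durations, in the order they play
episode1 = [6000, 5000, 5000, 10000, 12000, 12000, 9000, 10000, 6000]
```

A failing check usually means a frame duration was edited without rebalancing the others; adjust in 500 ms increments as described under Troubleshooting.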
---
## Phase 6: Assembly and Testing
### Recording Workflow
1. Run dev server: `npm run dev`
2. Open `localhost:5173?openclaw-vertical&mute`
3. Screen record at 1080x1920 (vertical)
4. Import to CapCut/Premiere
5. Add audio track at 1.2x speed
6. Sync audio start with first frame
---
## Phase 7: Export Settings
### Vertical Video (TikTok/Reels)
- Resolution: 1080x1920 (9:16)
- Frame Rate: 30fps
- Codec: H.264
- Bitrate: 10-15 Mbps
### TikTok Safe Zone
Keep content away from top ~150px and bottom ~200px to avoid UI overlap. Use standard font sizes (not bumped) for vertical videos.
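
Those margins translate into a simple bounding box when positioning elements. A Python sketch using this guide's rule-of-thumb pixel values (they are not official TikTok numbers and may change with the app's UI):

```python
def safe_zone(width: int = 1080, height: int = 1920,
              top_margin: int = 150, bottom_margin: int = 200) -> dict:
    """Pixel bounds of the region clear of TikTok UI overlays,
    per this guide's rule-of-thumb margins (not official values)."""
    return {
        "x": 0,
        "y": top_margin,
        "width": width,
        "height": height - top_margin - bottom_margin,
    }
```

For the default 1080x1920 frame this leaves a 1080x1570 region starting 150 px from the top, which is where headlines and CTAs should sit.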
---
## Color Palette
**Reference agentman-styleguide for full design system.**
| Element | Hex | Usage |
|--------------|-----------|--------------------------|
| Background | #1f1f1e | Base dark |
| Primary Text | #FAF9F5 | Headlines, key text |
| Accent | #D97757 | Highlights, CTAs, AI |
| Secondary | #4A9EFF | Flow arrows, connections |
| Success | #25D366 | Delivery, WhatsApp |
---
## Hashtag Strategy
Optimal count: 5-7 tags
Example for AI/tech content:
```
#claudeai #aiassistant #agenticai #opensource #devtools
```
---
## Troubleshooting
### Voice Accent Issues
- Use Text-to-Dialogue API instead of standard TTS
- Ensure voice_id matches your cloned voice
- Try `eleven_multilingual_v2` model for accent consistency
### Timing Misalignment
- Re-transcribe audio if segments don't match
- Adjust frame durations in 500ms increments
- Account for transition animations (typically 300ms)
### TikTok Content Overlap
- Don't bump font sizes for vertical videos
- Keep critical content in center 60% of screen
- Test with TikTok's preview before publishing