灵感菇

A natural ecosystem of AI skills: your single sentence spreads into endless connections.



ai-avatar-video

Installs: 4,894 · GitHub Stars: 3 · Updated: May 15, 2026

Description

Create AI avatar, talking-head, and lip-sync videos on RunComfy via the `runcomfy` CLI. Routes across ByteDance OmniHuman (audio-driven full-body avatar), Wan-AI Wan 2-7 (audio-driven mouth sync via `audio_url` on a portrait), HappyHorse 1.0 (Arena #1 t2v / i2v with in-pass audio), and Seedance v2 Pro (multi-modal cinematic with reference audio + reference subject). Picks the right model for the user's actual intent — UGC voiceover, virtual presenter, dubbed product demo, lip-synced character, dialog scene — and ships each model's documented prompting patterns plus the minimal `runcomfy run` invoke. Triggers on "talking head", "lip sync", "avatar video", "make X speak", "audio to video", "audio driven avatar", "virtual presenter", "AI spokesperson", "dubbed video", "UGC avatar", "HeyGen alternative", "Synthesia alternative", "digital human", "make this portrait talk", "video from voiceover", or any explicit ask to put words in a face.

Security Audit

Risk notice before use

Not audited

Rule Audit

Not audited

AI Audit

Not audited
Tags: prompt, agent, agents, avatar video, talking head, lip sync, videos, runcomfy