Sync Labs' Lipsync 2.0 Model Now Available on fal

fal is thrilled to announce our partnership with Sync Labs to bring you Lipsync 2.0, the world’s most natural video-to-video lipsyncing model. To experience Lipsync 2.0, visit the model gallery on fal today.

Introducing Lipsync 2.0

Lipsync 2.0 is a zero-shot lipsyncing model that preserves a speaker's unique style without additional training or fine-tuning. It is a leap forward in realism, expressiveness, control, quality, and speed across live-action, animated, and AI-generated video, delivering unmatched flexibility for creators and developers alike.

  • Zero-shot: no need to wait for an “actor,” “clone,” or “avatar” to train; use it instantly (see the sketch after this list)
  • Learns and generates a speaker’s unique style of speech
  • Works across live-action, animated, and AI-generated humans
  • Powers video translation, word-level editing, and character re-animation workflows (including realistic AI UGC)
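
To make "use it instantly" concrete, here is a minimal sketch of calling Lipsync 2.0 through fal's Python client. The endpoint id and argument names are assumptions for illustration only; check the model page in the fal gallery for the exact schema. You'll need `pip install fal-client` and a FAL_KEY credential.

```python
# Minimal sketch: sending a video plus a new audio track to Lipsync 2.0 on fal.
# NOTE: the endpoint id "fal-ai/sync-lipsync/v2" and the argument names
# "video_url" / "audio_url" are assumptions -- confirm the exact schema on
# the model page in the fal gallery.
import fal_client  # pip install fal-client; auth is read from the FAL_KEY env var

result = fal_client.subscribe(
    "fal-ai/sync-lipsync/v2",  # assumed endpoint id
    arguments={
        "video_url": "https://example.com/original_take.mp4",  # source footage
        "audio_url": "https://example.com/new_dialogue.wav",   # dialogue to sync to
    },
    with_logs=True,  # stream progress logs while the request runs
)

# The response is expected to include a URL for the lipsynced output video.
print(result["video"]["url"])
```

Because the model is zero-shot, this is the entire workflow: no per-speaker training job has to run before the request.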

Style Preservation: Lipsync 2.0 learns a representation of how a person speaks directly from the input video. Notice how, even across different languages, the speaking style of Nicolas Cage is preserved.

Temperature Control: Adjust how expressive the generated lipsync is.
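
If the fal endpoint exposes this knob as a request parameter, it would be passed alongside the media inputs, as in the hypothetical sketch below. The parameter name and range are assumptions, not something confirmed here; check the model's schema on fal for the actual option.

```python
# Hypothetical sketch: asking for a more expressive generation.
# "temperature" and the 0-1 range are assumed for illustration only.
import fal_client

expressive = fal_client.subscribe(
    "fal-ai/sync-lipsync/v2",  # assumed endpoint id, as in the earlier sketch
    arguments={
        "video_url": "https://example.com/original_take.mp4",
        "audio_url": "https://example.com/new_dialogue.wav",
        "temperature": 0.8,  # higher = more expressive delivery (assumed name/scale)
    },
)
```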

Active speaker detection: Handle long videos with multiple speakers. A state-of-the-art active speaker detection pipeline associates each unique voice with a unique face and applies lipsync only when that person is detected to be actively speaking.

Flawless animation: Works across animated characters, from Pixar-level animation to AI-generated characters. Translation is only the beginning; with the power to edit dialogue in any video in post-production, we're on the cusp of reimagining how we create, edit, and consume video forever.

Record Once & Edit Dialogue Forever: A world where you only ever have to hit record once. Lipsync 2.0 is the only model that lets you edit dialogue while preserving the original speaker's style, without needing to train or fine-tune beforehand.

Dub Any Video with AI: In an age where we can generate any video by typing a few lines of text, we don’t have to limit ourselves to what we can capture with a camera.

Try It Now

Head over to the fal model gallery to dive into the Lipsync 2.0 integration. Stay tuned to our blog, Twitter, or Discord for the latest updates, new model launches, and product enhancements.

– The fal Team