VEO 3.1 is now available on fal We are pleased to announce the release of Veo 3.1, now available on fal from day 0. This latest evolution of Google DeepMind’s Veo model series brings a new level of cinematic control to video generation, combining expressive visuals with native audio generation for synchronized, sound-integrated storytelling straight out
Introducing Moondream 3 Preview on fal Moondream 3 Preview is now live on fal! The model is designed for real-world vision tasks, from drones and robotics to medical imaging and retail. In this update, Moondream gains a larger context window and has become much smarter. The model can now understand more sophisticated prompts and produce longer,
Ovi is Now Available on fal We’re excited to introduce Ovi, now available on fal on day 0. Built by Character AI, Ovi is the first open-source video model with native audio generation, delivering synchronized picture and sound from a single prompt with no post-production required. To help you get started, use coupon code Ovi
fal is now available through Google Cloud Marketplace Develop and scale real-time generative image, video, audio, and vision apps faster with fal on Google Cloud Marketplace. Today, we’re excited to share that fal’s Generative Media Platform is now available on Google Cloud Marketplace. Teams can evaluate and purchase our production-ready model APIs directly through Google Cloud,
Hunyuan Image 3.0 is now available on fal We’re excited to introduce Hunyuan Image 3, the next leap forward in open-source text-to-image generation. Built for creators with a vision, this model transforms words into visuals with unprecedented accuracy and artistry. Whether you’re aiming for photorealistic renders, experimental aesthetics, or characters captured mid-motion, Hunyuan Image 3 delivers
VEED Fabric 1.0 on fal: Turn Any Image Into a Talking Video Image-to-video tools are everywhere right now. VEED's latest model, Fabric 1.0, allows you to generate faster and longer video outputs. It can animate any image, giving end-users the flexibility to go beyond preset avatars and create a wide variety of content. What is VEED Fabric 1.0?
Wan 2.5 Preview is now available on fal We are excited to announce the release of Wan 2.5, now available from day 0 on fal. This latest model in the Wan family introduces new capabilities for both text-to-video and image-to-video generation. For the first time ever, Wan videos come alive with native audio generation, opening the door
KLING 2.5 Turbo Pro is Now Available on fal We are pleased to announce the release of Kling 2.5 Turbo Pro, now available exclusively on fal. This latest model in the Kling family introduces enhanced capabilities for both text-to-video and image-to-video generation, expanding creative possibilities across a wide range of use cases. Kling 2.5 is designed for
Isaac 0.1: the First Perceptive-language Model on fal While AI has transformed digital interactions, the physical world - with its dynamic environments and real-time demands - has remained out of reach. That’s why we’re excited to announce our partnership with Perceptron. Founded by the team behind Meta’s Chameleon multimodal models, Perceptron introduces breakthrough capabilities in
Seedream 4.0 on fal: Fast, Consistent, 4K-Ready Image Creation As image-editing models surge, teams keep asking for the same three things: speed, quality, and consistency. Seedream 4.0 delivers all three. It turns out 2K images in seconds and scales up to 4K when you need production headroom. From day 0 on fal, these capabilities are in the hands
Stable Audio 2.5 Now Available on fal We’re excited to announce the release of Stable Audio 2.5, developed by Stability AI and now available through fal: the latest audio model from Stability AI and the first designed with enterprise-grade use cases in mind. Stable Audio 2.5 delivers a leap forward in both quality and creative
AI Where Creation Happens: fal x IMG.LY At fal, we live on the cutting edge of generative media models, but models are only one piece of a usable experience. Where and how AI is consumed and how users interact with it often gets too little attention. That’s why we’re thrilled to announce a partnership with
Sonauto Now Available on fal Today, we're excited to announce our partnership with Sonauto, bringing their latest v2.2 music generation model to the fal platform on day one. This integration delivers the highest-quality vocals and more creative instrumentation with greater depth than other music models—all accessible through fal's
Introducing Gemini 2.5 Flash Image Edit aka "nano-banana" For weeks now, a secret image editing model has been making waves on LMArena. The consensus speculation was that Google was behind the so-called "nano-banana", and today this speculation has been confirmed. nano-banana is Google's Gemini 2.5 Flash Image editing model, and we are
Mirelo SFX Now Available on fal Mirelo and fal are teaming up to bring complete experiences to AI video creation. With Mirelo SFX seamlessly integrated into fal’s platform, every video gains perfectly synced, production-ready sound effects with no extra prompts or effort required. Together, we are making it effortless for creators and enterprises to
Introducing Qwen Image Edit Recently, Alibaba released Qwen Image, a powerful and versatile image model. However, this was just a preview of their ultimate plan: Qwen Image Edit, a SOTA open-weights image editing model. Text Qwen Image Edit excels at text editing, beyond what we have seen from previous image editing models. For
Powering Creative Workflows with Weavy x fal In creative production, the difference between good and great often comes down to two things: having the right tool for the job, and getting it in place fast. That’s exactly what fal’s extensive model library delivers - instant access to hundreds of high-quality, ready-to-run AI models, from proven
Introducing Qwen Image We are proud to announce fal's inference and training endpoints for Alibaba's Qwen Image model. It has been an incredible month for generative media. We've been blessed with not one but two unbelievably high-quality image models: Wan 2.2 14b and Qwen Image. I
Moonvalley's Marey Realism v1.5 is now on fal - The most advanced generative filmmaking yet. We are excited to announce Marey Realism v1.5 by Moonvalley, now available on fal. Marey Realism v1.5 is trained exclusively on fully licensed, high-definition content. This breakthrough technology empowers filmmakers and visual creators to produce professional-grade video content with unprecedented control and commercial freedom. What is Marey Realism
Introducing Wan 2.2 14b Text to Image Alibaba released the Wan 2.2 family of video models recently, but they buried the lede: Wan 2.2 is probably the best open-source image model in the world today. We are excited to announce we have added both Wan 2.2 text-to-image inference and training on fal.
Introducing Ideogram Character Consistent Characters from a Single Image We are excited to announce Ideogram Character, now available on fal. Ideogram Character enables developers and creators to generate consistent, photorealistic characters from just a single reference image. This capability opens new possibilities for consistent storytelling, creative projects, and character-driven content. What is Ideogram
Introducing FLUX.1 Krea [dev] On fal Recently, through a partnership with Black Forest Labs, Krea has released an open-weights fine-tuned version of FLUX.1: FLUX.1 Krea [dev]. fal has shipped API endpoints for inference and training at blazing fal speeds. Exploring FLUX.1 Krea [dev] When FLUX.1 was released last year
Bria’s 3.2 Text-to-Image and Video Background Removal Now on fal We’re excited to welcome two powerful Bria models to fal: Bria 3.2 Text-to-Image and Bria Video Background Removal. These models set a new standard for creative teams who demand both quality and responsibility from AI. Bria 3.2 Text to Image: Stunning, Compliant, and Ready to Build * Visuals
New LTXV 0.9.8 Model Live on fal We are excited to announce a major update to LTXV (LTX-Video) by Lightricks, now the first generative video model capable of long-form generation, with high controllability that outranks all open-source models at lightning-fast speed. This breakthrough introduces a new class of creative storytelling tools for developers and creators. LTXV 0.9.
Announcing FLUX.1 Kontext [dev] Inference & Training Open-weights, Fast Inference and LoRA Support Following last month's successful launch of FLUX.1 Kontext [pro] and [max] models, we're excited to announce the release of BFL's FLUX.1 Kontext [dev] with open weights. This new version delivers exceptional performance with immediate fast inference