Announcing red_panda by Recraft fal is partnering with recraft.ai to offer the new Recraft V3 (aka red_panda) model. To play around with the new model now, check out the demo page here on fal. Codename "red_panda" – shining in user evaluations The Recraft V3 model, codenamed "red_panda"
Announcing Stable Diffusion 3.5 Large & Turbo by stability.ai fal is partnering with stability.ai to offer the new Stable Diffusion 3.5 Large & Turbo models. To play around with the new models now, check out the demo page here on fal. Where the models shine Stable Diffusion 3.5 stands out as one of the most customizable
Training FLUX Style LoRA on fal FLUX has taken over the image generation space, but getting exactly the style you want can be difficult. This is where style LoRAs can help. Training a style LoRA on fal is easy, but there are some tips and tricks for getting the best results. The Basics: Collecting a Good Dataset
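For a rough sense of what a training call looks like, here is a minimal sketch using the fal Python client. The endpoint id and argument names are assumptions for illustration and may not match the trainer exactly; check the model page on fal for the real schema.

```python
# A minimal sketch (not the exact recipe from the post): kicking off a style LoRA
# training job through the fal Python client. The endpoint id and argument names
# below are assumptions; check the trainer's page on fal for the exact schema.
import fal_client

# Upload a zip archive of style reference images and get back a hosted URL.
images_data_url = fal_client.upload_file("style_dataset.zip")

result = fal_client.subscribe(
    "fal-ai/flux-lora-fast-training",        # assumed trainer endpoint id
    arguments={
        "images_data_url": images_data_url,  # assumed parameter name
        "trigger_word": "MYSTYLE",           # token that will invoke the style
        "is_style": True,                    # assumed flag: style LoRA rather than subject LoRA
    },
)
print(result)  # expected to include a URL to the trained LoRA weights
```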
Announcing FLUX1.1 [pro] by BFL and FLUX.1 [dev] speed improvements fal is proud to partner with Black Forest Labs to offer both the new FLUX1.1 [pro] (aka 🫐) and the 2x accelerated FLUX.1 [dev] models. To play around with the new FLUX1.1 [pro] now, check out the demo page here on fal. Codename "blueberry 🫐"
Generative media needs speed. fal has raised $23M to accelerate. fal has raised $23M in Seed and Series A funding. The $14M Series A was led by Kindred Ventures with participation from Andreessen Horowitz, First Round Capital, and angel investors including Perplexity founder Aravind Srinivas, Vercel founder Guillermo Rauch, Balaji Srinivasan, and Hugging Face CTO Julien Chaumond. The Seed round
Announcing Flux by Black Forest Labs: The Next Leap in Text-to-Image Models Flux, the largest SOTA open-source text-to-image model to date, developed by Black Forest Labs, the original team behind Stable Diffusion, is now available on fal. Flux pushes the boundaries of creativity and performance with an impressive 12B parameters, delivering aesthetics reminiscent of Midjourney. To play around with the model
AuraSR V2 Today we released the second version of our single-step GAN upscaler: AuraSR. We released AuraSR v1 last month and were encouraged by the community response, so we immediately started training a new version. AuraSR is based on the Adobe GigaGAN paper, using lucidrains’ implementation as a starting point. The
Introducing AuraFlow v0.1, an Open Exploration of Large Rectified Flow Models Open-source AI is in jeopardy. As community interest in AI models skyrocketed over the past year, we noticed that development of new open-source foundational models came to a halt. Some even boldly announced that open-source AI is dead. Not so fast! We are excited to present you the first release
Introducing AuraSR - An open reproduction of the GigaGAN Upscaler Today we are releasing AuraSR, a 600M parameter upsampler model derived from the GigaGAN paper. This model upscales low-resolution images to 4x their original resolution and can be applied repeatedly. We are publishing this model under a truly open source license. AuraSR excels in upscaling images generated by text-to-image models.
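As a quick illustration of the "apply repeatedly" workflow, here is a hedged sketch using the aura-sr Python package; the checkpoint id and method names are assumptions recalled from the project's README and should be verified against the repository.

```python
# A hedged sketch of running AuraSR locally with the aura-sr package.
# The checkpoint id and method names are assumptions; verify against the repo.
from PIL import Image
from aura_sr import AuraSR

model = AuraSR.from_pretrained("fal/AuraSR")   # assumed Hugging Face checkpoint id

image = Image.open("low_res.png")
upscaled = model.upscale_4x(image)             # one pass: 4x the input resolution
upscaled_again = model.upscale_4x(upscaled)    # the model can be applied repeatedly
upscaled_again.save("high_res.png")
```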
fal News: Stable Diffusion 3 available now, ComfyUI workflows and more Greetings friends of fal! At fal, we are building the fastest and most reliable inference cloud for generative AI. We are thrilled to announce some major updates. Here’s what’s new: Stable Diffusion 3 Now Available Experience this state-of-the-art text-to-image model from our friends at stability.
Introducing On-demand Preference Fine-Tuning for Image Models We’re excited to announce a simpler way to make our AI tools work better for you at fal.ai. Our latest feature lets enterprise users fine-tune their models with user preferences collected directly through their apps. This new service starts today, making it easier to tailor your model to
Introducing fal Model Endpoints Effortlessly serve your Python functions through a managed web server.
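The sketch below shows roughly what serving a function could look like; the fal_serverless import, the isolated decorator, and the serve flag are assumptions modeled on fal's serverless function API and may differ from the current interface.

```python
# A minimal sketch of turning a Python function into a managed web endpoint.
# The import path, decorator name, and serve flag are assumptions; consult the
# fal docs for the exact interface and how to deploy the endpoint.
from fal_serverless import isolated  # assumed import path

@isolated(requirements=["pyjokes"], serve=True)  # assumed decorator and flags
def tell_joke() -> str:
    # Dependencies listed above are installed in the managed environment on demand.
    import pyjokes
    return pyjokes.get_joke()
```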
Introducing fal We are launching fal-serverless, a cloud-based Python runtime that simplifies infrastructure for teams building data pipelines, analytics, ML training, and inference.