Building long-term trust in a world where creation moves at the speed of thought
By Sean Bonawitz, Head of Trust & Safety at fal
My first job was as an EOD platoon commander in the US Navy, where I was responsible for protecting the people around us and making sure they could keep moving forward. The mission was always to protect life, minimize damage, and enable access. Simple enough to understand. Hard to live by when the clock is running and the consequences of moving too fast or too slow are immediate and real.
Trust was foundational, and never implicit. It was something we actively built and maintained over time. Trust between you and your team. Trust in your own judgment under pressure. Trust that the people depending on you come to understand, over time, the difference between what you can rush and what you cannot.
This thought process has followed me through every industry. And in AI, the unique pace brings everything we do into sharper focus.
Not because the stakes are literally life and death, though in some corners of trust and safety they genuinely are. But because the underlying logic is the same: you are always operating inside a time constraint, you will never have all the information you want, and the worst thing you can do is freeze. The second worst thing is to move so fast you stop thinking clearly.
What has changed across every role I've held is the definition of trust itself and what responsibility actually looks like as the stakes change around you. That definition has had to grow and stretch with every industry I've worked in. What hasn't changed is this: how you conduct yourself under pressure is how trust gets built or is lost.
Nobody in AI is truly ahead of the trust and safety problem.
The AI industry does not slow down for anyone. Not for regulators. Not for ethicists. Not even for the companies building inside of it. I joined fal because I believe in what we're building. But I want to be honest about something that is often glossed over in industry conversations about AI trust and safety: nobody has this figured out perfectly. Not us. Not the biggest names in the space. We are all writing the playbook in real time, often one step behind where we wish we were, simply because the industry shifted overnight. That's the reality of working at the edge of a fast-moving frontier.
Across all of it, I learned how trust and safety actually works. Not the theories, not the concepts, but the actual mechanics of it. How it's built under pressure. How it breaks when you move too fast or too slow. How the people depending on you need to trust that you know the difference.
That's what I carry into fal. Timing is a constraint, not a luxury. The question is never really "are we ready?" It's "what can we do right now, and what does responsible movement look like from here?"
Here is exactly where fal stands
I want to be specific. At fal, we operate as AI infrastructure. That framing matters. We are not a content or social media platform. We are not curating a feed or pushing content to users. We are closer in structure and responsibility to a cloud provider than to a social network. Our posture centers on one clear principle: when we gain actual knowledge of a violation, we act immediately.
Right now, we prioritize enforcing on:
- Child safety: We are integrating Thorn, whose technology helps us detect and take down child sexual abuse material (CSAM) before it spreads, and then report it to NCMEC, the legally mandated channel for flagging and responding to it. This is a proactive safeguard, not a reactive one. We invested here early because any successful and ethical business in this space must continually uphold the highest standards in child safety.
- Non-consensual intimate imagery: We are partnering with StopNCII.org, an international charity providing hash-matching that allows victims to register hashes of their images and trigger automated detection across partner platforms. We invested here because the creation of non-consensual intimate imagery violates our Terms of Service and Acceptable Use Policy, and contradicts our community values of respecting each other’s protected rights.
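The hash-matching pattern described above can be sketched roughly as follows. This is a minimal illustration, not StopNCII's or Thorn's actual implementation: all class and function names here are hypothetical, and production systems use perceptual hashes (such as Meta's PDQ) that tolerate resizing and re-encoding, where the cryptographic hash below only matches byte-identical files.

```python
import hashlib


def image_hash(image_bytes: bytes) -> str:
    # Illustrative stand-in for a real fingerprint. Production NCII
    # systems use perceptual hashes (e.g. PDQ), which survive resizing
    # and re-encoding; SHA-256 matches only byte-identical files.
    return hashlib.sha256(image_bytes).hexdigest()


class HashRegistry:
    """Hypothetical registry of victim-submitted hashes.

    The key privacy property: victims hash their images locally and
    register only the hash, so the image itself never leaves their
    device. Partner platforms check uploads against the registry.
    """

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def register(self, image_bytes: bytes) -> str:
        # Called on the victim's side; only the resulting hash is stored.
        h = image_hash(image_bytes)
        self._hashes.add(h)
        return h

    def matches(self, uploaded_bytes: bytes) -> bool:
        # Called on the platform's side for incoming uploads; a hit
        # triggers enforcement (blocking, takedown, reporting).
        return image_hash(uploaded_bytes) in self._hashes
```

The design choice worth noting is that matching happens on fingerprints rather than on the imagery itself, which is what lets independent platforms cooperate on detection without ever exchanging the underlying material.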
The threats in this space move fast, which means the program has to be built for endurance, not just the problems visible today.
Why building trust and safety well is a long game
Trust and safety work, the programs, the partnerships, the infrastructure underneath all of it, holds best when built with the long run in mind.
What we have is clarity about what we're here to do, and we've done the work to prepare for it. That focus is what allows a company to execute well over the long run rather than just react to the last thing that surfaced. You don't build for that by patching things as they break. You build systems designed to absorb new pressure.
What I've observed at other companies, and tried to learn from, is what happens at both extremes. Some tried to solve for everything and ended up paralyzed, every decision becoming a philosophical debate that slowed everything down. Others moved so fast they stopped noticing things they shouldn't have missed. Neither is the model. What I've taken from watching both is that the goal isn't perfection. It's constant improvement and durability. Can you hold the line on what matters most, stay honest about what you don't yet know, and keep building without losing sight of either? At fal, that is the standard we’re building toward.
The right moment, the right time
fal was already focused on trust and safety before I arrived. There are moments in a company's growth, and in an industry's growth, when the work needs to crystallize into something more dedicated. Generative AI has hit that moment. The speed at which this space is moving and the breadth of what's now possible mean that fal needed someone whose entire job was to hold this line. The timing was right, for the company and for me.
I’ve been happily surprised and humbled by what I’ve discovered since joining fal. I expected a fast-moving technical team, and the team members working on safety are no exception. I also found a group of people with no ego about it. The founders are building something they genuinely believe in, and that comes through in everything we do. There's no posturing, no territoriality, no one trying to look good at the expense of moving forward together towards the best solution.
That matters more than it might sound. Trust inside a company is what makes trust outside of it possible. I trust the people I'm working with to be straight with me about what they know and don't know, and I owe them the same. If they can trust that I'm here to protect what they're building rather than slow it down, our team will be strong. What I have found at fal is a team that already understood that balance, even before anyone had put a formal name to it.
The clock is always running. What drew me here is that these are people who know it, and build accordingly.
Sean Bonawitz is Head of Trust & Safety at fal. Before joining fal he held trust and safety roles at YouTube, TikTok, Patreon, and TextNow, and served in the U.S. Navy as an EOD platoon commander.