Say X but do Y

Welcome to The Fixer, a weekly newsletter from The WayFinders Group. We’re organisational repair specialists: we repair damage, rebuild trust, and restore performance. On Fridays, we examine unfolding corporate crises — breakdowns that reveal what happens when damage goes unrepaired, and what you can do in the face of a fiasco.

Friday’s Fiasco

Say X but do Y

Earlier this week, Elon Musk’s X announced “tougher restrictions” to stop Grok generating nonconsensual sexualised images. 

source: thetimes.com

Grok is Elon Musk’s AI chatbot, created by his company xAI. It has come under fire from women and politicians alike for allowing users to request that any available image be altered to create sexualised digital nude or scantily dressed versions of the people in the photograph.

The Times reported that X’s safety team said there would be “zero tolerance” for child sexual exploitation and that high-priority violations would be removed immediately. “Clear rules … Strong enforcement … No exceptions,” it added.

Developers had allegedly imposed restrictions on image editing and geoblocked the ability to generate such images in jurisdictions where doing so is illegal.

However, post-announcement, Grok was still mass-producing the content that had supposedly been blocked. According to Metro, “one user made dozens of requests to fabricate nonconsensual intimate images in only two hours”. 

xAI had received explicit warnings in the last six months, safety teams who raised concerns had been cut, and team leadership had departed prior to the scandal. When Grok exploded, 7,751 sexualised images were appearing per hour; the Internet Watch Foundation found criminal imagery of children aged 11 to 13, and dark web forums cited Grok as their preferred tool.

Musk’s first step was to limit access to Grok to paid subscribers, a move both curious and ethically concerning. When the first step is neither to disable a harmful tool nor to fix the technical flaw to prevent further harm, one has to ask why. The old adage applies, of course: all publicity is good publicity, if your singular goal is to monetise the situation. Here, basic duty of care competed with premium subscription engagement, “free speech” brand positioning, and growth metrics. Which means the rules of the game we’re in are not about ethics or compliance, but about culture.

So, if xAI’s board asked for our advice, here’s where we’d start:

No ethics. You’ve made that clear. Child protection competed with engagement metrics and engagement won. But that means the rules you’re operating under aren’t about technical capability - they’re about organisational culture. When the warning came in August and you cut safety teams in November, those were choices about what matters when compliance conflicts with growth. Repair starts with acknowledging what those choices revealed about your operating principles.

No compliance. You’re not avoiding the cost of compliance. You’re choosing which costs to pay: implement proper safeguards now, or fight regulators, face bans, lose markets, and implement the same safeguards later under far worse commercial terms. The EU can fine up to 6% of global revenue, Ofcom (UK) up to 10% of turnover, and Indonesia and Malaysia have already banned Grok. But compliance isn’t the barrier here. The barrier is that your remediation was performative: you announced restrictions that you didn’t enforce. Regulators investigate the gap between stated policy and actual practice, so we’d need to address that first - implementation capacity, not just the policy language.

No repair. Nearly three in five Britons want X banned if Grok cannot be reined in, according to More In Common polling. Four in five fear AI misuse will get worse with time. Technology Secretary Liz Kendall said the Government “shall not rest” until platforms meet their legal duties. But you’re treating this as a regulatory relations problem when it’s a trust restoration problem. Until your restrictions demonstrably prevent the harm, rather than merely announce that they will at some point, you’re in the unquantified damage zone. And regulators, victims, and users will all continue to submit their invoices to the court of public opinion, where there is no right of set-off.

Online safety isn’t a “nice to have”. It’s not censorship. It’s having standards that don’t bend when they become expensive to enforce, because the alternative is far more expensive.

In the face of a fiasco

four steps a board can take to quantify the damage

1. Map what you knew, when you knew it, and what you did with it. 

Not a timeline of the scandal - a timeline of the warnings. When did concerns first surface? Who raised them? What decisions were made in response? The gap between warning and crisis reveals your operating principles under pressure. Quantify what acting on warnings would have cost vs what ignoring them is costing you now in enforcement, litigation, and market access.

2. Track who raised concerns and what happened to them. 

Don’t just document the concern - document the person. When people flagging risk depart before the risk materialises, that’s a governance indicator. Calculate severance costs, knowledge loss, and implementation capacity gaps. Add up what you paid people to leave vs what you’re paying because they left. This shows whether you valued the warning or silenced your people.

3. Test whether your remediation works before announcing it works. 

If external parties can disprove your fix within 24 hours of announcement, you’re compounding the damage. Quantify the gap: how many violations occurred between “we fixed it” and when it was demonstrably fixed? That window is your liability exposure, and regulators (and, in some cases, insurers) measure it precisely. Announced restrictions are a holding announcement. Working restrictions are evidence of amends.

4. Calculate repair costs vs ongoing damage costs. 

Your board sees compliance as expensive. But the real comparison isn’t compliance vs non-compliance - it’s repair now vs accumulated damage later. Quantify the four categories: 

  • relationship damage (stakeholder trust erosion), 

  • operational damage (implementation capacity loss), 

  • regulatory damage (fines, bans, enforcement), and 

  • reputational damage (market access, commercial relationships). 

These don’t offset - they compound. Our Organisational Repair Index™ measures how much damage is accumulating while you’re in reputation management instead of repair. 

Want to quantify the damage before regulators do? Our diagnostic converts intangible damage into a quantified risk map using the Organisational Repair Index™, showing leaders exactly where damage sits, how trust and performance are affected, and what’s needed to repair it.