
Pulse

From Hype to Accountability: The New AI Compliance Frontier

As artificial intelligence becomes embedded across operations, regulators are looking past the promise and into the process

David Yatom Hay, General Counsel of Soft2Bet

David Yatom Hay, General Counsel of Soft2Bet, argues that explainability, audit readiness, and human oversight – not technological ambition alone – will define the next chapter of iGaming compliance.

I recently took part in a VIXIO webinar that offered a look ahead at what iGaming compliance might look like in 2026, with a strong focus on responsible gambling, innovation, and the growing role of AI.

As General Counsel at Soft2Bet, I see these themes shaping how products are built, how player care is delivered, and how businesses stay resilient as expectations continue to shift. It was a productive hour, and the takeaways below are the ones I expect teams to keep returning to throughout 2026.

AI needs to stand up to scrutiny

Any AI system used for profiling, decision-making, or risk management must produce outcomes that can be clearly and simply explained. You need to be clear on what the system is designed to do, why it exists, what data it relies on, and where human judgment sits in the process when outcomes are challenged.

AI needs to be developed with the expectation that it could be audited at any time. This means you need to track data and model versions, record the limits and rules used in decision-making, and maintain documentation that is easy to review.

If the system flags a player or triggers an intervention, you should be able to explain the “why”. Just as importantly, you need guardrails that keep the system anchored to its stated purpose, so it cannot drift into anything that feels like aggressive engagement by another name.
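The record keeping described above can be made concrete. The sketch below is a minimal, hypothetical shape for an audit-ready decision record, assuming one entry is written per automated decision; all field names (`model_version`, `data_snapshot`, `thresholds`, and so on) are illustrative, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One audit-ready entry per automated decision."""
    player_ref: str       # pseudonymised player identifier
    model_name: str
    model_version: str    # pinned model version at decision time
    data_snapshot: str    # version or hash of the input dataset
    purpose: str          # the system's stated purpose
    inputs: dict          # the features the model actually saw
    outcome: str          # e.g. "flagged", "no_action"
    rationale: str        # plain-language "why" for the outcome
    thresholds: dict      # rules and limits applied in this decision
    reviewed_by_human: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    player_ref="p-10293",
    model_name="rg-risk-screen",
    model_version="2.4.1",
    data_snapshot="features-2026-01-15",
    purpose="responsible-gambling risk screening",
    inputs={"deposit_velocity": 4.2, "session_hours_7d": 31},
    outcome="flagged",
    rationale="deposit velocity above configured limit",
    thresholds={"deposit_velocity_max": 3.0},
)
```

A structure like this answers the auditor's questions directly: which model version decided, on what data, under which rules, and whether a human reviewed the outcome.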

Control comes before capability

Before anyone gets excited about models, teams need a clear view of which AI tools are actually being used across the business. An AI inventory sounds basic, but it is often the fastest way to surface risk, especially when employees are already using such tools day to day.

From there, the sensible move is to classify use cases by risk and revisit that classification when the use changes, because a tool that feels harmless for drafting can become sensitive the moment it touches decision-making or player-related analysis.

Ownership has to be equally clear. AI spans product, engineering, data, privacy, security, legal, and compliance, so oversight requires a straightforward decision-making and escalation process.

AI literacy is most effective when it is built into day-to-day workflows, including procurement, where AI checks sit alongside privacy and security checks, backed by policies that protect confidential information when external tools are involved.
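An inventory-plus-classification workflow of the kind described above could be sketched as follows. This is an illustrative toy, not a recommended implementation; the risk tiers, tool names, and keyword-based escalation rule are all assumptions made for the example:

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"       # e.g. internal drafting, summaries
    HIGH = "high"     # profiling, decision-making, player analysis

# Minimal inventory: each tool has a named owner, a stated use, and a tier.
inventory = {
    "draft-assistant": {"owner": "legal", "use": "contract drafting",
                        "tier": RiskTier.LOW},
    "risk-screen": {"owner": "compliance", "use": "player risk profiling",
                    "tier": RiskTier.HIGH},
}

def reclassify(tool: str, new_use: str) -> RiskTier:
    """Re-tier a tool when its use changes: anything touching
    decision-making or player-related analysis escalates to HIGH."""
    sensitive = ("decision", "player", "profiling", "intervention")
    tier = (RiskTier.HIGH
            if any(word in new_use.lower() for word in sensitive)
            else RiskTier.LOW)
    inventory[tool] = {**inventory.get(tool, {}),
                       "use": new_use, "tier": tier}
    return tier

# A drafting tool becomes sensitive the moment it touches player analysis:
tier = reclassify("draft-assistant",
                  "summarising player complaints for decisions")
# tier is RiskTier.HIGH
```

The point of the sketch is the trigger: reclassification happens when the *use* changes, not on a fixed calendar.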

The real work starts after launch

AI systems often perform best when first deployed, as the data is familiar, benchmarks are fresh, and the application closely matches the original design. The risk usually shows up later, once the model is embedded in everyday workflows and the business starts treating it as “business as usual”.

That’s when false positives can quietly stack up, triggering interventions that feel justified in isolation but add up to the wrong outcome over time. Drift adds another layer, because both data and behaviour change, and yesterday’s thresholds can become today’s blind spots.

What separates mature teams is the discipline around what happens next. Performance monitoring, incident handling, and audit trails must remain intact long after deployment, with clear ownership for investigating anomalies and making safe adjustments.
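One simple monitoring discipline implied above is tracking the live flag rate against the rate measured at deployment. The sketch below is a minimal example under assumed numbers; the tolerance band and the baseline figure are illustrative, and real monitoring would segment by market, cohort, and model version:

```python
def flag_rate_alert(total_flags: int, total_decisions: int,
                    baseline_rate: float, tolerance: float = 0.5) -> bool:
    """Alert when the live flag rate drifts beyond the allowed band
    around the rate measured at deployment (the baseline)."""
    if total_decisions == 0:
        return False
    live_rate = total_flags / total_decisions
    return abs(live_rate - baseline_rate) > tolerance * baseline_rate

# Baseline at launch: 2% of decisions flagged.
# Months later the same thresholds flag 4% -- drift worth investigating.
drifted = flag_rate_alert(total_flags=40, total_decisions=1000,
                          baseline_rate=0.02)
```

A check like this does not say *why* the rate moved, only that someone with clear ownership should investigate before false positives quietly stack up.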

In 2026, trust will hinge less on whether an AI model looks good in a demo and more on how confidently you can manage it when it misfires in the real world.

RG still needs humans in the loop

Used well, AI can help responsible gambling teams do the one thing they rarely have enough of, which is prioritise. It can surface patterns at scale and highlight where attention is needed, but I do not believe AI should be the final voice making the call.

The moment you rely solely on an AI model, you risk turning a sensitive, human issue into an automated outcome that is hard to defend and even harder to get right consistently.

There is a practical reason for that, too. If you point AI at a player database without careful thresholds and context, it will find “risk” almost everywhere. I mentioned on the webinar that you can end up with outputs that effectively label 50 per cent or 60 per cent of a database as potentially high risk.

That is not a workable result for any business, and it does not translate into meaningful player care because no team can intervene at that scale with the nuance it requires. The approach I trust is to use AI to create sensible categories and surface signals, then rely on humans to review, apply judgment, and decide on proportionate interventions.
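The "rank, don't decide" approach above can be shown in a few lines. This is a hypothetical sketch, assuming the model outputs a risk score per player; the scores, threshold, and team capacity are invented for illustration:

```python
def triage_for_review(scores: dict[str, float], capacity: int) -> list[str]:
    """Use the model to rank, not to decide: surface only as many
    cases as the human team can actually review, highest score first."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [player for player, _ in ranked[:capacity]]

# A naive threshold of 0.5 would flag 3 of these 5 players (60% of
# the base); capacity-based triage sends only the top 2 to humans.
scores = {"a": 0.91, "b": 0.72, "c": 0.55, "d": 0.31, "e": 0.12}
queue = triage_for_review(scores, capacity=2)
# queue == ["a", "b"]
```

The model still does the heavy lifting of surfacing signals at scale, but the final, proportionate intervention remains a human judgment.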

Data protection also needs to sit close to this work, because responsible gambling models can involve large volumes of behavioural and financial indicators that carry long-term obligations once collected and used.

The product has to do more of the work

Across iGaming, the levers that once made acquisition easier are narrowing. Marketing, sponsorship and promotions are all being pulled into sharper focus, and that changes where innovation delivers the most value. The brands that stay strong will be the ones that earn retention through experience, rather than relying on constant incentives to keep players close.

At Soft2Bet, we have leaned into retention-led product innovation, including a gamification feature that sits across casino and sportsbook layers. The goal is to create an environment where players return because the experience is enjoyable and rewarding in its own right.

In practice, that can also support healthier play patterns, as engagement is spread over time rather than driven by short, incentive-heavy bursts.

AI can complement this with smarter personalisation, but the discipline lies in intent. If a system is designed to identify potentially harmful behaviour, its purpose has to stay clean. It should support player care, not drift into tactics that increase intensity under a different label.

Supplier expectations are evolving

The market is shifting quickly, and with that comes a more structured approach to oversight. For suppliers, the emphasis is moving toward visibility: knowing where content is appearing, spotting issues earlier, and having a clear route for escalation and follow-up when something doesn't look right.

In day-to-day terms, that can mean deeper audits, more detailed information requests, and a sharper focus on monitoring practices.

One comparison felt particularly relevant. AML evolved into a structured process with a clear rhythm of prevention, detection, monitoring, and reporting. A similar rhythm is beginning to take shape here, with greater emphasis on scalable, repeatable processes rather than one-off fixes when something is flagged.

The most important part is that this work starts long before launch. It depends on stronger tracking and better visibility by design, and on games and platforms that can scale across markets as requirements become more layered over time.
