Pulse
Artificial Intelligence: Ethical Concerns
By Lewis - 24 April 2025
There are serious questions about how AI should be used in gambling. Is it too powerful? Could it create games that are too addictive? These are the kinds of questions being posed by Dr Kasra Ghaharian, director of research at the University of Nevada, Las Vegas (UNLV) International Gaming Institute. He is a global gaming expert who specialises in online settings and AI applications, machine learning, consumer protection, and payments modernisation. He is also one of the authors of a ground-breaking study on the ethics of AI and gambling, published in July 2024. We caught up with him to hear his take on some of these issues.
What are the potential risks and challenges associated with the lack of regulation and oversight for the use of AI in the gambling industry?
Dr. Kasra Ghaharian: When we talk about regulating AI in the gambling sector, there are a few different routes for how we could regulate it within the sector. But AI regulations and guidelines are also being drafted outside of the sector. The most well-known at this point is the EU AI Act, which is going to be a binding piece of legislation that AI developers as well as AI implementers need to adhere to. The consensus appears to be that the EU AI Act is primarily targeted at the developers of AI, in particular generative AI. However, the Act is very much aligned with a sector-based regulatory framework, where specific sector regulators will look at the EU AI Act, scrutinise their own industry, and craft regulations there. For example, you could see the aviation regulatory body looking at the EU AI Act to inform its own regulatory practices. And we could certainly see the same thing happening for the gambling sector. If we look at the risk-based framework of the EU AI Act, it's mixed in terms of where gambling falls.
I think the majority of gambling use cases are probably not going to fall into the higher-risk or prohibited categories. But who's to say that analysing and tracking a person's gambling behaviour, to craft very targeted, engaging content and messaging that alters their behaviour towards a product that is potentially harmful and addictive, could not be classed as a high-risk or prohibited use case? Someone could end up saying that's a higher-risk use case and needs to be scrutinised as such. So stakeholders definitely need to keep an eye on that and put their best foot forward.
Within the gambling sector, though, at least in regulated markets, we are well regulated, and there are solid pillars in place in terms of the legislation and regulations that keep gambling safe, keep it fair, and try to minimise the associated harms. So perhaps existing gambling regulations could be sufficient to protect consumers from the dangers and risks of AI. However, I do think there needs to be specific sector regulatory oversight to mitigate the risks and concerns, which could be things like exploitation of players, algorithmic bias, and non-transparent practices by operators: the kinds of things that are quite well known now if you stay up to date with AI ethics and AI regulation.
The other thing I'd say, and I personally think this is the biggest risk for our sector, is a lack of AI literacy. I really think we need to make a more concerted effort to educate all stakeholders about this technology so that we can make strides in regulating it, because the level of AI literacy is just too disparate at this point. So for me, the biggest risk is actually that we don't make efforts to improve AI literacy across stakeholders in the industry.
Could AI develop games that are too addictive?
Dr. Kasra Ghaharian: Look at TikTok. TikTok is creating highly engaging content. I don't know the inner workings of TikTok, but I have to think they're using things like machine learning and other data-driven methods to engage users as much as they can, because that's their business model. Same thing for gambling: engaging users with interesting, novel content that appeals to them is part of the business model. I do think the gambling industry at least has existing regulations in terms of what you can change about game mechanics and things like that. But what happens if we get to a stage, which is probably going to be soon, where AI video can be generated in real time as you watch it? What does that mean for an online slot machine? I don't know if regulators or existing regulations speak to that level of advanced generative AI. So, to your point, I think it is a concern. I haven't done a deep dive into it, but I think it's a legitimate concern.
AI is going to be able to identify players with a gambling addiction. Is there a danger that the benefits of using AI to address gambling-related problems might be exaggerated?
Dr. Kasra Ghaharian: I think there are two things I'd comment on here. One is the AI risk detection algorithm space. There have been tremendous strides in terms of how that technology has benefited harm prevention. It goes without saying that it provides a lot of benefit, there are a lot of fantastic products, and we should by no means discount the impact those products have made. However, it's a tricky thing to do. It's very hard to define what a gambling disorder is.
It's very hard to define that with tracking data alone, and that's the crux of this whole issue, because there's no binary flag we can use. Compare that with AI image and video generation: the reason you've seen such advances there is that we have the Internet, which has loads and loads of images. So if we want to craft an AI that can generate an image of a house, we can tell it what a house looks like. We can't tell an AI, based on online tracking behaviour, what a problem gambler looks like. We have an idea, but we don't technically know. That makes it very challenging for these systems to actually do their job. But we've come up with ways to develop proxies for that and to at least build more of a screening tool, I'd say.
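To make the proxy idea concrete, here is a minimal sketch of how a screening model might be trained when no ground-truth label for gambling disorder exists: a proxy outcome (here, a hypothetical later self-exclusion flag) stands in for the thing we actually want to detect. The feature names, the synthetic data, and the logistic-regression model are illustrative assumptions, not any operator's real system.

```python
# Minimal sketch of proxy-label risk screening from wagering data.
# All features, labels, and data below are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical behavioural features derived from tracking data.
X = np.column_stack([
    rng.poisson(8, n),          # sessions per week
    rng.exponential(50, n),     # average stake
    rng.exponential(2, n),      # deposits per day
    rng.uniform(0, 6, n),       # hours played between midnight and 6am
])

# Proxy label: a synthetic "later self-excluded" outcome loosely tied to the features,
# because no binary "problem gambler" flag exists in the data itself.
logits = 0.08 * X[:, 0] + 0.01 * X[:, 1] + 0.4 * X[:, 2] + 0.3 * X[:, 3] - 4.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# The output is a risk score for screening, not a diagnosis.
scores = model.predict_proba(X_test)[:, 1]
print("AUC against the proxy label:", round(roc_auc_score(y_test, scores), 3))
```

The point of the sketch is the limitation Dr. Ghaharian describes: the model can only ever be as good as the proxy label it is trained against, which is why such systems work better as screening tools than as detectors of a disorder.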
AI is also used for commercial purposes like marketing, advertising, and content generation. What I would actually like to see is AI for consumer protection woven into those commercial purposes. So when someone is crafting a new advertising campaign using AI, maybe there can be safeguards in place. Maybe each company should have an internal ethical AI board that all projects have to be approved through. Maybe the technical developers should understand the consumer protection piece so they know where the line should be drawn. Or maybe the player protection element should be woven into the marketing activities. I think there needs to be much more collaboration between departments on this.
Are we knowledgeable enough to regulate AI in gambling?
Dr. Kasra Ghaharian: If I were the CEO of a company, I'd probably be hiring an AI champion or, you know, a chief AI officer, whatever the title is. I think there needs to be that person spearheading the discussion and looking at all of these issues. I think that would be a good first step. And I even think regulators maybe need to do the same.
One of the concerns commonly expressed about AI is that it could lead to job losses. Are these concerns justified?
Dr. Kasra Ghaharian: In terms of job displacement, I don't think it's an issue to ignore. I think maybe it's exaggerated a little bit. I don't think we're at a stage where large language models can replace coders, but they are becoming very, very sophisticated at a very fast rate, and I think we'd be ignorant to ignore it. In terms of the land-based sector, there would have to be massive advances in robotics for it ever to replace front-of-house roles and that kind of thing. And there needs to be research done on the acceptance of that. Do people want a robot dealing cards, or do they want a person?
I think my bigger moral concern is around transparency, and I'll speak to a specific use case that demonstrates the broader problem. Let's look at the case of using AI for risk detection in problem gambling. A plethora of companies have developed algorithms, based on behavioural wagering data, that claim to be able to detect individuals who are at risk of experiencing gambling-related harms. We have no idea how these algorithms work. We don't know how effective they are.
We also have no idea how much they're benefiting the consumer, which makes it very hard for regulators to enforce any clear regulation or guidelines around this practice. That's actually something we'll be working on over the next twelve months: a benchmark for these risk detection algorithms. The idea is that we'll have a standardised suite of benchmark datasets, any stakeholder can use our platform to test their algorithm, and we'll provide a ranking of the performance. This is something already being done for large language models. We're currently conceptualising this and hoping to get started on it next year, as part of a wider AI research initiative at UNLV's International Gaming Institute.
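As a rough illustration of what such a benchmark could look like, here is a minimal sketch: each submitted risk-detection algorithm is scored against a standardised suite of datasets and the results are ranked. The dataset names, the scoring interface, and the AUC metric are assumptions made for illustration, not the actual design of the UNLV platform.

```python
# Minimal sketch of a risk-detection benchmark: score each submitted algorithm
# against a standardised suite of datasets and rank by mean AUC.
# The datasets here are synthetic stand-ins; a real suite would be curated.
from typing import Callable, Dict, Tuple
import numpy as np
from sklearn.metrics import roc_auc_score

BenchmarkSuite = Dict[str, Tuple[np.ndarray, np.ndarray]]  # name -> (features, labels)

def make_synthetic_suite(seed: int = 0) -> BenchmarkSuite:
    """Stand-in for the standardised benchmark datasets."""
    rng = np.random.default_rng(seed)
    suite = {}
    for name in ("operator_a", "operator_b", "operator_c"):
        X = rng.normal(size=(1_000, 4))
        y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1_000)) > 1.0
        suite[name] = (X, y)
    return suite

def evaluate(algorithms: Dict[str, Callable[[np.ndarray], np.ndarray]],
             suite: BenchmarkSuite) -> Dict[str, float]:
    """Score each algorithm (a function returning risk scores) and rank by mean AUC."""
    results = {}
    for algo_name, score_fn in algorithms.items():
        aucs = [roc_auc_score(y, score_fn(X)) for X, y in suite.values()]
        results[algo_name] = float(np.mean(aucs))
    return dict(sorted(results.items(), key=lambda kv: kv[1], reverse=True))

if __name__ == "__main__":
    suite = make_synthetic_suite()
    # Two toy "vendor" algorithms standing in for submitted risk-detection models.
    algorithms = {
        "vendor_heuristic": lambda X: X[:, 0],
        "vendor_weighted": lambda X: X[:, 0] + 0.5 * X[:, 2],
    }
    for name, auc in evaluate(algorithms, suite).items():
        print(f"{name}: mean AUC {auc:.3f}")
```

The appeal of this design, as with the large language model leaderboards it echoes, is that vendors can keep their algorithms proprietary while regulators and operators still get a comparable, standardised measure of performance.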
How do you think AI will affect, in very general terms, the gambling industry overall?
Dr. Kasra Ghaharian: I think it will make operations more efficient, more sophisticated. I think there will be benefits to the consumer experience. We have to remember that this is an entertainment product. People are engaging with it for a reason. And the large majority of people do engage with it at, you know, levels within their means, and they consume gambling as a form of entertainment.
So I think there will be benefits to the consumer in that respect. And I think work like yours, getting the word out there, is important in terms of understanding that there is risk with it. I think the same conversation was probably had when the Internet came out. The Internet made gambling more accessible, more convenient; you could play in your pyjamas, in your living room. People started to understand that, and that's why we've seen strides in terms of consumer protection and regulation. There are regulations that speak to online gambling specifically now.
So, I think hopefully the same thing will happen with AI, and it’s important that more of the discussions that we’re having right now take place.