
Using Artificial Intelligence to identify problem behaviours

26 June 2019

In today’s high-tech world, more and more industries are looking for ways to leverage technology to improve customer service, minimise risk and enhance business productivity and profitability. G3 interviews the Director General of Spain’s Directorate General for Gambling Regulation, Juan Espinosa Garcia, about why using technologies like AI and analytics is the way forward to long-term sustainability in the gaming industry.

What are you seeking to share with the IAGA audience concerning AI technology?

From a regulatory perspective, Artificial Intelligence should not be considered as another burden, but rather an opportunity to maintain the stability of the market. I believe this is a relevant issue to discuss at this important point in the history of betting in America as the market seeks to embrace new opportunities, including online. I would like to share the experience of a European regulator dealing with these issues with fellow regulators in America.

AI is undoubtedly a very wide topic to discuss. However, as we see the mainstream adoption of Artificial Intelligence in commercial businesses, especially electronic commerce, the widespread use of algorithms within the Internet marketplace means that more and more people are interacting with AI on a daily basis. The point to make as regards Responsible Gambling is that, alongside the commercial capacity of AI to improve revenues for operators, the targeting of individuals to spur loyalty can also be a helpful tool and ally for responsible gambling policies.

Specifically, we have seen AI integrated into Responsible Gambling practices and methodologies, whereby operators have created devices and tools to identify potentially problematic behaviour before it becomes more serious. The same tracking and profiling tools used to incentivise a player, which can be viewed in a negative way in terms of overly aggressive marketing, can be used in a positive way to protect the health of citizens within the framework of general data protection.

My view is that operators have the opportunity with AI to actively promote the identification of problematic patterns of behaviour. For example, an operator using AI can programme algorithmic variables to establish when player behaviour should be raising red flags. In turn, these red flags trigger automatic responses that can address and alter the pattern of consumption, reacting with immediate effect, but with varying degrees of intervention. The player can be addressed directly concerning their behaviour and sent messages alerting them to the potential impact of their pattern of consumption. In Spain, we have examples of these systems in action and I would like to share this experience with the IAGA audience.
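To make the mechanism described above more concrete, here is a minimal sketch of how behavioural red flags might be mapped to graduated responses. The session metrics, thresholds and intervention tiers are illustrative assumptions invented for this example; they are not the variables used by Spanish operators or prescribed by the regulator.

```python
# Illustrative sketch only: hypothetical metrics, thresholds and tiers,
# not a description of any specific operator's or regulator's system.
from dataclasses import dataclass


@dataclass
class SessionStats:
    hours_played_24h: float        # total play time in the last 24 hours
    net_loss_7d: float             # net losses over the last 7 days (EUR)
    deposits_24h: int              # number of deposits in the last 24 hours
    cancelled_withdrawals_7d: int  # withdrawals reversed back into play


def red_flags(s: SessionStats) -> list[str]:
    """Return the behavioural markers this account currently triggers."""
    flags = []
    if s.hours_played_24h > 6:
        flags.append("extended_play")
    if s.net_loss_7d > 500:
        flags.append("escalating_losses")
    if s.deposits_24h > 5:
        flags.append("deposit_chasing")
    if s.cancelled_withdrawals_7d > 0:
        flags.append("withdrawal_reversal")
    return flags


def intervention(flags: list[str]) -> str:
    """Map the number of red flags to a graduated response."""
    if not flags:
        return "none"
    if len(flags) == 1:
        return "on_screen_message"       # alert the player to their pattern of play
    if len(flags) == 2:
        return "mandatory_break_prompt"  # interrupt the session, suggest limits
    return "account_review"              # escalate to a responsible-gambling team


# Example with an invented session
stats = SessionStats(hours_played_24h=7.5, net_loss_7d=650.0,
                     deposits_24h=3, cancelled_withdrawals_7d=1)
print(red_flags(stats))                 # ['extended_play', 'escalating_losses', 'withdrawal_reversal']
print(intervention(red_flags(stats)))   # 'account_review'
```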

How do you ensure that these algorithms are used for positive, legitimate purposes?

This is a debate surrounding AI both within and outside the gambling community. It is imperative that regulators establish the requirements that ensure these algorithms operate within an ethical framework. We cannot allow a policy in which “anything goes” in the field of Artificial Intelligence. The operator has a responsibility to progressively and proactively ask their customers whether they can afford the money they are gambling. To achieve this effectively, Artificial Intelligence will have a key role in establishing best practice in this area. In addition to player protection, it should also be noted that Artificial Intelligence within gaming can be used to ensure compliance with Anti-Money Laundering requirements, triggering the need to identify the source of funds and wealth of players using the same tools.
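As a similarly hedged illustration of the AML point, the snippet below sketches one way a cumulative-deposit trigger for a source-of-funds review could look. The 30-day window and the EUR 10,000 threshold are assumptions made up for the example, not figures drawn from Spanish or EU AML rules.

```python
# Illustrative sketch only: the 30-day window and EUR 10,000 threshold are
# invented for this example, not taken from Spanish or EU AML requirements.
def needs_source_of_funds_check(deposits_last_30d: list[float],
                                threshold_eur: float = 10_000.0) -> bool:
    """Flag an account for a source-of-funds / source-of-wealth review once
    cumulative deposits over the monitoring window reach the threshold."""
    return sum(deposits_last_30d) >= threshold_eur


print(needs_source_of_funds_check([4_000.0, 3_500.0, 3_000.0]))  # True
```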

A priority for regulators is to ensure that AI is only used for purposes of fair play. In Europe, operators are not allowed to use commercial data for the profiling of players due to data protection regulation. However, as a regulator, we also recognise that while the rules are clear, this is an area that is very difficult to enforce without the cooperation of the operator. We understand that regulators must work with operators to achieve these goals. We cannot rely exclusively on the creation of rules and regulations to govern this technology, but rather work with operators to change the mindset of the business.

The industry must reconcile the short-term objective of making the most profit possible with a dimension more related to long-term sustainability that is centred around the protection of the customer. There are ways to devise games with a high return to player, and games with a lower return to player that compensate by incorporating greater amusement and time on device. I believe that the operators that succeed in monetising this type of gaming offer will be the ones that have a long-term future in this business. It is not only smart regulation that will drive this ethical gaming model forward, but smart operators that incorporate this business model to protect both the player and their business in the future.

Should operators voluntarily adopt this model or is this something that regulators will enforce?

As a regulator you have to draw the red lines as clearly as possible. We must establish what are the acceptable and unacceptable uses of AI, and we must set this out clearly in the rules. We are seeking to establish regulation that will require operators to engage in these practices, but at the same time allow a degree of flexibility for the industry to explore what can be achieved with AI.

As the regulator, we set the objective to proactively protect the customer, but we are not seeking to micro-manage to such a degree that we devise the algorithms that must be implemented. The industry is in a better position to explore that potential. We must provide the parameters and framework for operators to work within, but not over-regiment the application of this technology.

There’s a wider public debate at the moment in which AI is being blamed for all manner of incidents and accidents. How do you ensure that AI is used solely for positive purposes?

In gambling, I understand that the objective of the operator is to make the business as successful as possible. However, this is a dangerous proposition if AI is optimised for this purpose. AI must not be used to manipulate the conduct of the customer in such a manner that they lose control. That said, I also believe we need to make the case that AI can be a ‘friend and ally’ for the protection of the player, as opposed to simply being an anti-consumer tool.

I believe there is a case to be made for fostering a business model that shifts away from the current norm in which 70-80 per cent of business is generated from 30 per cent of the customers. To put it clearly, from a marketing and product standpoint, rather than create high intensity customers, we need to widen the player base by incorporating fun games that do not require a huge investment from the player, but can be sustained over time, much in the same fashion as social gaming. This can only be achieved by shifting the business model and mindset of the operator. It will not be easy, but I believe that without such a move, the businesses will suffer in the medium term.
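For readers who want to see how such a concentration figure is computed, the short sketch below calculates the share of revenue contributed by the top 30 per cent of players from a list of per-player spend; the spend figures themselves are invented for illustration.

```python
# Illustrative calculation of the revenue-concentration figure mentioned above;
# the per-player spend data is invented for this example.
def top_share(spend_per_player: list[float], top_fraction: float = 0.30) -> float:
    """Fraction of total revenue contributed by the top `top_fraction` of players."""
    ranked = sorted(spend_per_player, reverse=True)
    cutoff = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:cutoff]) / sum(ranked)


# Example: a heavily skewed spend distribution
spend = [2000, 1500, 900, 120, 80, 60, 40, 30, 20, 10]
print(f"Top 30% of players contribute {top_share(spend):.0%} of revenue")  # ~92%
```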

I believe that if operators do not widen their player base, the industry cannot be both profitable in the long term and support the goal of protecting the consumer and society as a whole. The business model that overly privileges high rollers over more casual customers must end. We must fight against this trend while fostering the image of consumer protection 24/7 through the use of AI to ensure best practice.

Are operators voluntarily moving towards this goal?

Regulation can only extend so far. We must leave a margin for operators to explore their own corporate social responsibility goals. We must let the industry develop its own solutions to these problems. I do not believe that it is efficient to prescribe a restrictive regime. It is better to find a middle ground for regulators and operators in which social responsibility works for everyone, and to achieve this it has to be a joint effort. However, it would be naïve to ignore that clear enforcement signals from the authorities, in the event of inaction by the industry, do help move things forward. In Spain, we have specific experience of this working extremely well through a combination of industry input and regulatory guidance and surveillance, and I am looking forward to sharing it with the audience at the IAGA Summit.
