
ZDNET’s key takeaways
Study suggests AI can adopt gambling “addiction.”
Autonomous models are too risky for high-level financial transactions.
AI behavior can be controlled with programmatic guardrails.
Relying too heavily on artificial intelligence can be a gamble in its own right. Many online gambling sites already employ AI to manage bets and make predictions, potentially contributing to gambling addiction. Now, a recent study suggests that AI is capable of doing some gambling of its own, which may have implications for anyone building or deploying AI-powered financial systems and services.
In essence, with enough leeway, AI is capable of adopting pathological tendencies.
“Large language models can exhibit behavioral patterns similar to human gambling addictions,” concluded a team of researchers at the Gwangju Institute of Science and Technology in South Korea. This could become a problem as LLMs take on a greater role in financial decision-making in areas such as asset management and commodity trading.
In slot-machine experiments, the researchers identified “features of human gambling addiction, such as illusion of control, gambler’s fallacy, and loss chasing.” The more autonomy granted to AI applications or agents, and the more money involved, the greater the risk.
“Bankruptcy rates rose substantially alongside increased irrational behavior,” they found. “LLMs can internalize human-like cognitive biases and decision-making mechanisms beyond simply mimicking training data patterns.”
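To make the experimental framing concrete, here is a toy version of the kind of slot-machine betting loop the paper describes, tracking how often a simulated fixed-bet player goes bankrupt. The odds, payout, bet size, and betting policy are invented for illustration; they are not the researchers’ actual protocol, and a real replication would have an LLM choose the bets.

```python
import random

def simulate_session(bankroll: float = 100.0, bet: float = 10.0,
                     win_prob: float = 0.3, payout: float = 3.0,
                     max_rounds: int = 200) -> bool:
    """Play one fixed-bet session; return True if the player goes bankrupt."""
    for _ in range(max_rounds):
        if bankroll < bet:
            return True          # cannot cover the next bet: bankrupt
        bankroll -= bet
        if random.random() < win_prob:
            bankroll += bet * payout
    return False                 # survived the session

random.seed(0)
sessions = 1_000
bankruptcies = sum(simulate_session() for _ in range(sessions))
print(f"bankruptcy rate: {bankruptcies / sessions:.1%}")
```

The expected value per spin here is negative (on average only 0.3 × 3 = 0.9 of each stake comes back), so bankruptcies accumulate over a long session; the study's point is that more autonomy and more aggressive, irrational betting push that rate higher still.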
This gets at the larger issue of whether AI is ready for autonomous or near-autonomous decision-making. At this point, AI is not ready, said Andy Thurai, field CTO at Cisco and former industry analyst.
Thurai underlined that “LLMs and AI are specifically programmed to do certain actions based on data and facts and not on emotion.”
That doesn’t mean machines act with common sense, Thurai added. “If LLMs have started skewing their decision-making based on certain patterns or behavioral action, then it could be dangerous and needs to be mitigated.”
How to safeguard
The good news is that mitigation may be far simpler for a machine than for a human with a gambling problem. A human gambling addict has no programmatic guardrails beyond fund limits; autonomous AI models, by contrast, can include “parameters that need to be set,” Thurai explained. “Without that, it could enter into a dangerous loop or action-reaction-based models if they just act without reasoning. The ‘reasoning’ could be that they have a certain limit to gamble, or act only if enterprise systems are exhibiting certain behavior.”
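As a rough illustration of the kind of programmatic guardrail Thurai describes, the sketch below wraps a hypothetical agent’s proposed wagers in hard limits: a per-bet cap, a total-exposure cap, and a halt after a run of losses. All of the names, thresholds, and the escalation step are assumptions for illustration, not part of the study or any specific product.

```python
from dataclasses import dataclass

@dataclass
class SpendGuardrail:
    """Hard limits wrapped around an autonomous agent's proposed bets.

    All thresholds are illustrative assumptions, not values from the study.
    """
    max_single_bet: float = 10.0        # cap on any one wager
    max_total_exposure: float = 100.0   # cap on cumulative spend
    max_consecutive_losses: int = 3     # halt if the agent starts chasing losses
    total_spent: float = 0.0
    consecutive_losses: int = 0
    halted: bool = False

    def approve(self, proposed_bet: float) -> float:
        """Return the allowed bet size, or 0.0 if the action is blocked."""
        if self.halted:
            return 0.0
        if self.consecutive_losses >= self.max_consecutive_losses:
            self.halted = True   # stop acting; a human has to re-enable the agent
            return 0.0
        allowed = min(proposed_bet, self.max_single_bet,
                      self.max_total_exposure - self.total_spent)
        return max(allowed, 0.0)

    def record_outcome(self, amount_bet: float, won: bool) -> None:
        self.total_spent += amount_bet
        self.consecutive_losses = 0 if won else self.consecutive_losses + 1

# The "agent" proposes increasingly aggressive bets after losses (loss chasing);
# the guardrail clamps each one and eventually halts the loop.
guard = SpendGuardrail()
for proposed, won in [(5, False), (15, False), (40, False), (80, False)]:
    allowed = guard.approve(proposed)
    if allowed == 0.0:
        print("blocked: escalate to a human reviewer")
        break
    print(f"proposed {proposed}, allowed {allowed}")
    guard.record_outcome(allowed, won)
```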
The takeaway from the Gwangju Institute report is the need for strong AI safety design in financial applications to prevent AI from going awry with other people’s money. This includes maintaining close human oversight within decision-making loops, as well as ramping up governance for more sophisticated decisions.
The study confirms that enterprises “need not only governance but also humans in the loop for high-risk, high-value operations,” Thurai said. “While low-risk, low-value operations can be completely automated, they also need to be reviewed by humans or by a different agent for checks and balances.”
If one LLM or agent “exhibits a strange behavior, the controlling LLM can either cut the operations or alert humans of such behavior,” Thurai said. “Not doing that can lead to Terminator moments.”
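A minimal sketch of that checks-and-balances pattern, with the caveat that the tiers, thresholds, and anomaly signal are assumptions rather than anyone’s production design, might route each proposed action like this: low-value actions are automated but logged, high-value actions require human sign-off, and an agent showing anomalous behavior gets cut off.

```python
from enum import Enum

class Decision(Enum):
    AUTO_APPROVE = "auto_approve"   # low-risk, low-value: automate, log for review
    HUMAN_REVIEW = "human_review"   # high-risk, high-value: require a person in the loop
    HALT_AGENT = "halt_agent"       # anomalous behavior: cut the agent off, alert humans

# Illustrative thresholds; a real deployment would set these by policy.
HIGH_VALUE_THRESHOLD = 1_000.0
MAX_RECENT_LOSSES = 3

def route(action_value: float, recent_losses: int) -> Decision:
    """Route a proposed financial action from an agent.

    `recent_losses` stands in for whatever anomaly signal a controlling
    LLM or monitor computes (loss chasing, escalating bet sizes, and so on).
    """
    if recent_losses >= MAX_RECENT_LOSSES:
        return Decision.HALT_AGENT
    if action_value >= HIGH_VALUE_THRESHOLD:
        return Decision.HUMAN_REVIEW
    return Decision.AUTO_APPROVE

print(route(50.0, 0))      # Decision.AUTO_APPROVE
print(route(5_000.0, 1))   # Decision.HUMAN_REVIEW
print(route(50.0, 4))      # Decision.HALT_AGENT
```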
Keeping the reins on AI-based spending also requires tamping down the complexity of prompts.
“As prompts become more layered and detailed, they guide the models toward more extreme and aggressive gambling patterns,” the Gwangju Institute researchers observed. “This may occur because the additional components, while not explicitly instructing risk-taking, increase the cognitive load or introduce nuances that lead the models to adopt simpler, more forceful heuristics — larger bets, chasing losses. Prompt complexity is a primary driver of intensified gambling-like behaviors in these models.”
Software in general “is not ready for fully autonomous operations unless there is human oversight,” Thurai pointed out. “Software has had race conditions for years that need to be mitigated while building semi-autonomous systems; otherwise, it could lead to unpredictable results.”