
Fraud and the Machines

By iGB Editorial Team
With mobile penetration exponentially increasing the number of people playing online games, fraudsters' gravitation towards this space is both logical and relentless, rendering established fraud detection techniques obsolete, argue Matt Cook and Mark Tomallo from Panopticon Labs.

This article summarizes the observations of more than 50 leading online video game developers and operators, gathered from real-world investigations, bans, stings, and interviews. The subjects were executives at game publishers and operators responsible for the long-term health and financial performance of the games in their portfolios. Most of the publishers we interviewed were operating between three and seven active games on a variety of platforms (console, PC, web, and mobile), with another two or three in development, and were primarily based in the US, Europe, or both. The results clearly illustrate the online gaming industry's growing need for expertise in data science and predictive analytics, as well as a strong desire for tools built specifically to address its unique and complex needs.

As the amount of money flowing through online casinos and video games has grown, so have the challenges and risks posed by online fraudsters, cheaters, and hackers. Since we entered the space in 2013, our customer engagements and non-customer interviews have confirmed that fraud and abuse are among the most challenging issues faced by both real-money and virtual-currency-based online game operators, as bad guys see the rewards available to them by "following the money" away from harder targets like online banking and financial services towards less well-protected ones.

Every day, players from around the world – from dedicated, hardcore gamers who invest weeks or months honing their in-game skills, to casual players of Facebook or casino games – face organized teams of professional cybercriminals bent on applying their significant technical skills to breaking into game accounts. Unlike legitimate players, these bad actors aim to steal the virtual and real-world value contained within players' accounts. Their activities, if unchecked, inevitably result in a catastrophic decline in player happiness and satisfaction, and the subsequent destruction of operators' revenues and profit margins. The bottom line: fraud kills games.

While the schemes employed by bad actors vary depending on whether a game pays out in real money or virtual currency, both social and real-money casinos offer enough financial value and profit opportunity to attract rings of technologically savvy, professional fraudsters. Bad actors do, and always will, actively target any online game that uses real money, or virtual currencies that hold real-world value in a gray-market resale environment.

Bad guys: “following the money” 

One thing that history has proven about the internet is that bad guys always "follow the money." This was true in the late 1990s and early 2000s, in the early days of online banking, bill pay, and e-commerce, and it's just as true today with online games.

In 2013, Kaspersky Labs surveyed the online game industry and discovered that more than 5,000 new types of malware targeting online games and gamers were released daily. Kaspersky also tracked more than 50,000 daily redirect attempts from valid online game and publisher URLs towards fraudulent phishing sites. Much of this malware consisted of variants of older toolsets built to target and break into banking sites in the early to mid-2000s.

Because much of the recent growth in games and gaming revenue (particularly in the casual gaming sector) is being generated by free-to-play titles, the industry is struggling to codify a set of best practices for protecting players and hardening financial processes against fraud. In many ways, this is similar to the challenges faced by banks and stock trading sites a decade ago, when those industries moved their customers' money and transactions online.

In response to that shift, and the massive fraud that soon followed, the banking industry invested billions in systems and processes to protect its customers and to harden the banks' systems against attack. Because games represent "softer" targets than banks, they're attractive to fraudsters who have the skills to recycle and repurpose the tools and techniques originally developed to break into those hardened systems.

Given that games are a relatively "soft" target compared to banks, and that, thanks to smartphones and mobile devices, more people than ever are playing – and moving money through – online games, fraudsters' move towards this space is not only logical, it's pragmatic.

The evolving threat

In response to these attacks on their games' operations and players, nearly all of the game operators surveyed reported that the most common technique used to find and eliminate bad actors was forensic investigation of confirmed bad play sessions, followed by the creation of manual rules intended to define the characteristics of the undesirable activity.

These rules were then used to generate reports from in-game log data, resulting in a list of suspected bad actors. These lists were then reviewed by support or risk management employees, who manually retrieved account details, historic play logs, and other evidence on a case-by-case basis, often from a multitude of distributed, unconnected servers (account management, login, chat, gameplay, messaging, transaction, etc.).
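
To make the pattern concrete, a rules-based report is little more than a set of hand-written filters run over session logs. The sketch below (in Python) is purely illustrative – the field names, thresholds, and rule labels are hypothetical, not drawn from any operator we interviewed:

from dataclasses import dataclass

@dataclass
class Session:
    account_id: str
    trades_per_hour: float
    gold_transferred: int
    account_age_days: int

def flag_suspects(sessions):
    """Apply hand-written rules to session logs; every threshold must be
    tuned by hand and re-tuned whenever the game or its players change."""
    suspects = []
    for s in sessions:
        if s.trades_per_hour > 50:                               # rule 1: trade spam
            suspects.append((s.account_id, "trade_spam"))
        elif s.gold_transferred > 100_000 and s.account_age_days < 2:
            suspects.append((s.account_id, "new_account_mule"))  # rule 2: mule
    return suspects

day_log = [
    Session("a1", 4.0, 2_000, 340),
    Session("a2", 72.0, 500, 12),       # trips rule 1
    Session("a3", 3.0, 250_000, 1),     # trips rule 2
]
print(flag_suspects(day_log))  # [('a2', 'trade_spam'), ('a3', 'new_account_mule')]

Each rule encodes yesterday's known-bad behavior, which is exactly why, as respondents describe below, such reports age badly.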

Many problems and much dissatisfaction with rules-based reports were identified, including difficulty of setup, expense of maintenance, and high false-positive rates. Respondents also complained that rules-based reports were difficult to use (due to the disconnectedness of their games' systems) and slow to react to new threats.

Many of the professionals interviewed also reported that they could not keep rules updated and tuned to actual, current behavior as quickly as the bad guys could innovate new fraud techniques. "You get into this churn," one respondent reported, "where you're always two steps behind [the bad guys] no matter what you do." Changes in the game itself can also lead to diminishing returns from rules-based systems. Several respondents specifically called out the fact that any time new game content was introduced, existing rules needed to be adjusted to account for organic shifts in player behavior that could mask intentional bad activity. In some extreme cases, new game content completely invalidated entire classes of rules-based reporting, leaving the operator virtually blind to emerging threats.

The machine age

Over the past decade, statistical anomaly detection has become an accepted best practice for quickly and efficiently identifying bad actors operating in large, complex user populations in industries like financial services, e-commerce, wire transfer, stock and futures trading, and online bill pay. In 2012 the FFIEC updated its guidelines and requirements to mandate the technique's use, specifically: "Layered security controls should include processes to detect and respond to suspicious or anomalous activity and, for business accounts, administrative controls."

This ruling replaced an earlier decision, made in 2005, that mandated the use of common authentication tools such as secret questions, IP-based challenge processes, and email-based authentication, because security experts familiar with online financial threats recognized that "certain types of device identification and challenge questions should no longer be considered effective controls." Reactive systems, specifically rules-based reports, were also identified by regulators as insufficient controls in this high-speed, high-risk environment.

Unlike manually created rules or reports, the heart of an anomaly detection-based risk engine is a set of mathematical models built to analyze 100% of players' activities. These models generate alerts based on actions that are statistically suspicious, or that deviate from the norm. Anomaly detection algorithms can also benefit from machine learning techniques that automatically revise the models as behavior changes over time.
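
A minimal sketch of that statistical core, assuming a single hypothetical feature per player (gold traded per hour) and a robust "modified z-score" test – one of many possible anomaly measures, not necessarily the one any given engine uses:

import statistics

def anomaly_alerts(values_by_player, threshold=3.5):
    """Flag players whose behavior deviates sharply from the population,
    using a robust modified z-score (median/MAD) so a few extreme actors
    cannot skew the baseline they are measured against."""
    values = list(values_by_player.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1.0
    alerts = []
    for player, value in values_by_player.items():
        score = 0.6745 * (value - med) / mad   # Iglewicz-Hoaglin modified z-score
        if abs(score) > threshold:
            alerts.append((player, round(score, 1)))
    return alerts

# Hypothetical feature: gold traded per hour, per player
gold_per_hour = {"p1": 120, "p2": 135, "p3": 128, "p4": 9_500, "p5": 131}
print(anomaly_alerts(gold_per_hour))   # flags only p4

The essential difference from a rules-based report is that no hand-picked threshold appears in the data: the "norm" is computed from the population itself.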

A major benefit of these self-learning algorithms for game operators is that the detection system can automatically account for large shifts in player behavior as new in-game content is introduced. This constant adjustment, paired with adjustable alerting thresholds, can also dramatically reduce false positives compared to rules-based reports, while simultaneously categorizing suspected bad actors into specific suspect classes, speeding up the fraud research process.
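
One simple way a detection system can "learn" a moving norm is an exponentially weighted baseline, sketched below. This illustrates the general technique only – it is not Panopticon's production model, and all numbers are invented:

class AdaptiveBaseline:
    """Exponentially weighted baseline that drifts with the population, so
    organic shifts (e.g. new game content) raise the norm itself rather
    than raising false positives."""

    def __init__(self, alpha=0.05, warmup=50):
        self.alpha = alpha        # learning rate: higher adapts faster
        self.warmup = warmup      # samples to observe before alerting
        self.n, self.mean, self.var = 0, 0.0, 0.0

    def score(self, x):
        """Return x's deviation (in standard deviations) from the current
        baseline, then fold x into the baseline."""
        self.n += 1
        if self.n == 1:
            self.mean = x
            return 0.0
        delta = x - self.mean
        std = self.var ** 0.5
        z = delta / std if std > 0 and self.n > self.warmup else 0.0
        self.mean += self.alpha * delta                        # baseline drifts
        self.var = (1 - self.alpha) * self.var + self.alpha * delta ** 2
        return z

# Organic play varies; one gold-farming spike at the end trips the alert.
history = [95, 100, 105, 110, 100] * 100
baseline = AdaptiveBaseline()
for gold in history + [5_000]:
    if baseline.score(gold) > 4.0:     # adjustable alert threshold
        print(f"alert: {gold} gold moved in one session")

If a patch legitimately doubles typical gold flow, a baseline like this follows the shift within a few hundred sessions, where a static rule would start flooding analysts with false positives.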

Further, because gray-market actors do not play like actual gamers, and their activities add no value to the games they prey on, they make those games less fun for the legitimate players who encounter them, hurting the game's brand and, eventually, driving those players out of the game altogether.

For fraudsters, games are just another financial resource to be used up before moving to another target. But for their operators, games represent an investment in time and money that can equal millions of dollars and which can also, in some cases, support hundreds of jobs.

Game operators must be proactive in their dealings with hackers, cheaters, and fraudsters; not just because of the high cost of credit card fraud and customer support cases, but also to protect the value of their intellectual property, the attractiveness of their artfully crafted virtual worlds, and, perhaps most importantly, to retain that most priceless of resources: their players' goodwill and the reputation of their brand.

As one gamer succinctly put it: “If I go into a game and it’s full of jerks, I’m gone – there’s just so many other games out there that I could play instead.”
