How to Protect Your OTT Service from a Credential Stuffing Attack

The threat posed by credential stuffing attacks on OTT streaming services became crystal clear recently. Within hours of the much-hyped launch of a popular streaming service, user accounts were hacked and offered for sale at a discount. The breach quickly morphed into a PR challenge as thousands of subscribers turned to social media to vent their frustrations about locked accounts and trouble accessing the service.

As this experience illustrates, credential stuffing attacks are an emerging challenge for OTT security teams. Streaming service subscriptions, driven by free trials, cord-cutting, and exclusive content, have generated large collections of user information, making OTT services increasingly attractive targets for data theft. Reselling access to breached accounts isn't the only motive for hackers. They can also scrape valuable private details from breached user accounts, such as addresses, phone numbers, browsing history, and credit card data. The hacker can then sell this information across the dark web or cause further damage through social engineering and phishing attacks.

The damage zone of a credential stuffing attack goes well beyond the impact on a user's privacy and finances. Credential stuffing attacks use botnets capable of automating millions of login requests per hour, wreaking havoc on application infrastructure. Even with a low success rate, such a high volume of requests drives up the cost of operating the streaming platform as extra CPU cycles, memory, and data ingress/egress fees accumulate. Because login requests lean heavily on backend systems, which are relatively expensive to run, especially in the cloud, they are among the costliest requests to absorb during an attack. Ultimately, a high level of unchecked nefarious activity degrades the service for legitimate users trying to authenticate, browse, and stream content.

How can a streaming service neutralize this growing threat? This tech article will review what’s required to manage bots in today’s world and what it takes for a streaming service to minimize the impact — and reduce the probability — of a credential-stuffing attack.

The anatomy of a credential stuffing attack

Cybercriminals can start a credential stuffing attack by obtaining stolen credentials through several means, including discovering misconfigured databases, phishing attacks, infecting users’ devices with malware, or buying hacked credentials on the dark web. Next, attackers route countless login requests through distributed proxy servers to obscure the attack’s origin and amplify the requests. Criminals can purchase access to proxy services, at affordable hourly rates, from bot herders on dark web forums. Lastly, attackers create scripts to automate authentication requests using the list of breached credentials, usually preying on reused or simplistic passwords, to gain access to services. Attackers may also purchase toolkits on the dark web, such as CAPTCHA solvers, browser emulators, or fingerprint spoofing scripts, to help counteract existing defenses.

Defending against credential stuffing attacks

Stopping such attacks requires the ability to distinguish bots from humans. Unfortunately, bot operators continually find new ways to circumvent bot detection methods. The latest generation of bots is almost indistinguishable from humans.

As bots have grown more sophisticated, simple mitigation strategies that may have worked in the past, like blocking the bot’s request, the IP address, or the user-agent (UA), are no longer sufficient. Attackers today are most likely using one of the cheap and plentiful rotating IP proxy services instead of attacking from static IPs, which helps them circumvent rate limiting and simple access control list (ACL) protection. Moreover, blocking isn’t advisable because it serves as a useful feedback mechanism for bot operators, telling them to evolve their automation to defeat the detection method.

Bot detection techniques have had to become more sophisticated to match the increasing sophistication of bot attacks. Today's state-of-the-art bot detection involves three forms of analysis, performed on both the server side and the client side. They are:

  1. Request fingerprinting
  2. Client fingerprinting
  3. Behavioral fingerprinting

You’ll need a combination of all three to defeat sophisticated credential-stuffing attacks.

Attack detection method 1: Request fingerprinting

Request fingerprinting is usually done on the server side as soon as the server receives the full request from the client. A client request carries a combination of network (IP), connection, encryption, and other HTTP metadata that can be analyzed and used to generate a request fingerprint. This fingerprint includes core details such as the IP address, TCP handshake, TLS handshake (i.e., TLS protocol, ciphers, and extensions), HTTP headers and header order, plus information derived from the metadata, such as the ASN and device type. Put together, these request characteristics can yield a unique signature, or fingerprint, for each client.

Figure 1. A small sample of request characteristics that can work together to create a unique request fingerprint.

From the fingerprint, we can start to look for anomalies. For example, if a request claims to come from a Chrome UA, does it include headers in the order expected for the Chrome version indicated in the user agent? Does it use the typical HTTP and TLS protocols? Does the ClientHello message list the protocol and ciphers in the preferred order typical of that Chrome version? In addition to analyzing the request metadata, the server can also perform some limited behavioral analysis, such as looking at the number of requests, their frequency, and whether there is a browsing pattern, to help determine whether the requests are automated.
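
As a rough illustration, the sketch below (TypeScript on the server side) combines a few of these attributes into a fingerprint hash and flags one obvious inconsistency: a Chrome user agent whose header order does not match a Chrome profile. The attribute names and the expected header order are assumptions made for the example, not a definitive implementation.

```typescript
import { createHash } from "node:crypto";

// Parsed request metadata. The field names here (asn, ja3, headerOrder) are
// illustrative assumptions; real collectors expose similar data under other names.
interface RequestAttributes {
  ip: string;            // client IP address
  asn: string;           // autonomous system number derived from the IP
  ja3: string;           // summary of the TLS ClientHello (version, ciphers, extensions)
  userAgent: string;     // advertised User-Agent header
  headerOrder: string[]; // HTTP header names in the order they arrived
}

// Combine the request metadata into a single, stable fingerprint string.
export function requestFingerprint(req: RequestAttributes): string {
  const material = [req.asn, req.ja3, req.userAgent, req.headerOrder.join(",")].join("|");
  return createHash("sha256").update(material).digest("hex");
}

// Example anomaly check: does a request that claims to be Chrome send its first
// headers in an order resembling what Chrome actually sends? The expected prefix
// below is a placeholder; in practice it would come from a per-version profile.
const expectedChromeHeaderPrefix = ["host", "connection", "user-agent", "accept"];

export function headerOrderLooksLikeChrome(req: RequestAttributes): boolean {
  if (!/Chrome\//.test(req.userAgent)) return true; // check only applies to Chrome UAs
  const order = req.headerOrder.map((h) => h.toLowerCase());
  return expectedChromeHeaderPrefix.every((name, i) => order[i] === name);
}
```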

Request fingerprinting is a necessary first step but is insufficient on its own.

Attack detection method 2: Client fingerprinting

The challenge with request fingerprinting is that attackers can now spoof request fingerprints that, more often than not, appear identical to those of a real client. If the attackers make a mistake, request fingerprinting will catch it, but you can't count on that happening regularly.

Fundamentally, request fingerprinting only tells half the story. The server needs to see what’s happening on the client side and generate a client fingerprint to supplement the request fingerprint to gain more insight. This gives bot detection systems a more complete picture of the client and makes it harder for attackers to avoid detection.

To generate a client fingerprint, the server can inject a small piece of JavaScript (JS) to run on the client side by rewriting the HTML returned for the requested page. Alternatively, the server can inject a script tag that points to a remote JS file that the client downloads when loading the login page. The JS performs checks on the client side and collects device information, such as whether JS and cookies are enabled, and examines the OS, canvas, renderer, browser, JS engine, and more to generate a complete client fingerprint.

A normal browser is expected to support cookies and have JS enabled (otherwise users couldn't properly log in and use your streaming service); the absence of either raises suspicion. Client fingerprinting can also identify other suspicious elements that are not typical of the advertised device and may indicate a fake client, such as a Safari browser UA paired with the Blink browser engine, or Chrome paired with the SpiderMonkey JS engine.

These collected details can be beaconed to a remote server via API calls for further analysis, or encrypted and set as a cookie or header that accompanies subsequent client requests. The same techniques for collecting and generating client fingerprints can be adapted for non-browser streaming applications, such as iPhone/Android apps, Roku, or Samsung TVs, via different SDKs.
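
A minimal browser-side sketch of what such an injected script might collect and beacon is shown below. The /fp-collect endpoint and the exact set of attributes are illustrative assumptions; production collectors gather far more signals and obfuscate how they do it.

```typescript
// Illustrative client fingerprint collection. The /fp-collect endpoint is a
// hypothetical example; real bot-management scripts collect far more signals.
function collectClientFingerprint(): Record<string, string | boolean> {
  // Drawing text to a canvas and reading it back exposes small differences
  // between OS, GPU, and font stacks, which feed the canvas fingerprint.
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  let canvasData = "";
  if (ctx) {
    ctx.textBaseline = "top";
    ctx.font = "16px Arial";
    ctx.fillText("fingerprint-probe", 2, 2);
    canvasData = canvas.toDataURL();
  }
  return {
    jsEnabled: true,                       // the script only runs if JS runs at all
    cookiesEnabled: navigator.cookieEnabled,
    userAgent: navigator.userAgent,
    language: navigator.language,
    screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    canvas: canvasData.slice(-64),         // truncated here; a real collector would hash it
  };
}

// Beacon the attributes to the detection backend; alternatively they could be
// encrypted and set as a cookie or header to ride along with later requests.
navigator.sendBeacon("/fp-collect", JSON.stringify(collectClientFingerprint()));
```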

Figure 2. A small sample of characteristics that can work together to create a unique client fingerprint.

While the combination of request and client fingerprinting was effective against early-generation bots, more advanced bots are built on the same clients humans use, including Chrome, Firefox, and Safari, and may also employ headless browsers such as Headless Chrome. Unlike basic bots that may lack functionality such as support for JavaScript and cookies, these advanced bots use a proper browser and JS engine to perform well-formed TCP and TLS handshakes and HTTP requests consistent with their device type.
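
For a sense of what detection has to contend with, a few widely known automation signals that client-side collection can also report are sketched below. A well-configured headless browser can spoof every one of them, which is why they are treated as extra fingerprint inputs rather than verdicts.

```typescript
// Well-known (and spoofable) automation signals that can be added to the
// client fingerprint. None of these is conclusive on its own.
function automationSignals(): Record<string, boolean> {
  return {
    // Set by WebDriver-based automation (Selenium, default Puppeteer, etc.).
    webdriverFlag: navigator.webdriver === true,
    // Headless Chrome historically identified itself in the User-Agent string.
    headlessUserAgent: /HeadlessChrome/.test(navigator.userAgent),
    // A client claiming to be a desktop browser but exposing no plugins or
    // preferred languages is unusual for a real user.
    emptyPlugins: navigator.plugins.length === 0,
    emptyLanguages: navigator.languages.length === 0,
  };
}
```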

Low-and-slow attacks can distribute requests across thousands of IP addresses, nullifying any rate-based detection method. To further compound the problem, real browsers on real user devices can be hijacked and used for credential stuffing activity, and such attacks are almost certain to be missed with these approaches alone.

Attack detection method 3: Behavioral fingerprinting

To truly beat credential stuffing, you must add intelligent behavioral fingerprinting. When users interact with a streaming service, they are not just making requests for content; they are moving, clicking, tapping, and browsing around the app. Behavioral fingerprinting studies these actions by collecting user telemetry data on the client side, usually via JS. That telemetry may include mouse movement patterns, keystrokes, the timing of actions, or even readings from device sensors such as a phone's accelerometer or gyroscope to measure a user's movement patterns and positioning.

Based on the data collected, behavioral fingerprints are generated and sent for real-time or offline analysis. Is the user exhibiting a random or non-organic pattern? Is the mouse moving in linear patterns, or is the scroll speed faster than a human could achieve? Is the phone always at a fixed-degree angle throughout the entire browsing session? Is the number of login requests per second humanly possible?
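
The sketch below shows the browser-side half of this: sampling mouse movement and keystroke timing, computing a couple of crude summary statistics (such as how often the pointer moves in perfectly straight lines), and beaconing them for analysis. The /behavior endpoint, the five-second flush interval, and the specific statistics are illustrative assumptions.

```typescript
// Illustrative behavioral telemetry collection. The /behavior endpoint and the
// summary statistics are assumptions for the sake of the example.
interface MouseSample { x: number; y: number; t: number; }

const mouseSamples: MouseSample[] = [];
const keyTimes: number[] = [];

document.addEventListener("mousemove", (e) => {
  mouseSamples.push({ x: e.clientX, y: e.clientY, t: performance.now() });
});
document.addEventListener("keydown", () => keyTimes.push(performance.now()));

setInterval(() => {
  if (mouseSamples.length < 3) return;
  // Crude "robotic movement" measure: how often three consecutive points are
  // exactly collinear. Human pointer paths are rarely perfect straight lines.
  let collinear = 0;
  for (let i = 2; i < mouseSamples.length; i++) {
    const [a, b, c] = [mouseSamples[i - 2], mouseSamples[i - 1], mouseSamples[i]];
    const cross = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    if (cross === 0) collinear++;
  }
  const summary = {
    mouseSamples: mouseSamples.length,
    collinearRatio: collinear / (mouseSamples.length - 2),
    keystrokes: keyTimes.length,
  };
  navigator.sendBeacon("/behavior", JSON.stringify(summary));
  mouseSamples.length = 0;
  keyTimes.length = 0;
}, 5000);
```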

This is the battleground of data scientists and researchers, who must employ machine learning techniques to continually analyze the data and determine whether a request is automated, in part because the combinations of request, device, and behavioral attributes gathered grow exponentially. As bots have improved their ability to mimic human behavior via behavioral hijacking, relying on basic behavioral characteristics such as mouse movements alone is no longer adequate and can increase the false positive rate, degrading the experience of real users.

These types of bots present the most difficult challenge for mitigating credential stuffing. Stopping the most sophisticated bots requires more data, such as the client’s browsing behavior throughout the session, to analyze the client’s intent and thus identify if the request is malicious. For example, is it normal behavior when a user visits a streaming service’s login page directly without going through the homepage? Is it normal for a user to immediately navigate to the account page after logging into the streaming service and not perform any other action? These data points can precisely identify the intent of bots. The user interaction with the streaming service throughout the entire session and other behavioral data can produce a richer, more complete fingerprint with a lower chance for false positives.
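
As a simple illustration of such session-level signals, the sketch below extracts a few "intent" features, such as whether the session skipped the homepage and went straight to the login page. The event shape, paths, and feature names are assumptions for the example; in practice, features like these would feed a trained model rather than hand-written rules.

```typescript
// Illustrative session-intent feature extraction.
interface SessionEvent { path: string; timestampMs: number; }

interface IntentFeatures {
  skippedHomepage: boolean;                 // landed directly on /login without browsing
  pagesBeforeLogin: number;                 // how much browsing preceded the login attempt
  msFromLoginToAccountPage: number | null;  // an immediate jump to /account is suspicious
}

export function extractIntentFeatures(events: SessionEvent[]): IntentFeatures {
  const loginIdx = events.findIndex((e) => e.path === "/login");
  const accountIdx = events.findIndex((e) => e.path === "/account");
  return {
    skippedHomepage: loginIdx === 0,
    pagesBeforeLogin: Math.max(loginIdx, 0),
    msFromLoginToAccountPage:
      loginIdx >= 0 && accountIdx > loginIdx
        ? events[accountIdx].timestampMs - events[loginIdx].timestampMs
        : null,
  };
}
```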

Managing bots

Once you've successfully detected a bot attempting to make a login request, what's the correct response? Is it to block the bot and hope it goes away? In most cases, that is the wrong action. Suppose you respond with a 4xx error, such as a 401 Unauthorized response. Attackers recognize when their current techniques stop working and, through trial and error, update their automation tools to overcome your detection mechanism. In this case, you've inadvertently helped the attackers by providing a feedback loop that alerts them to evolve their methods.

While it's inevitable that sophisticated bot operators will eventually detect that they are being mitigated and evolve their methods, there are some good practices to avoid or delay that evolution. When a bot is detected, instead of blocking its requests, the server can send the standard response code expected for a successful login attempt, such as 200 OK, coupled with a static boilerplate body that does not expose any sensitive data.

Bot operators are more likely than not to assume that a successful response means their current method works and that the stolen credentials are valid, even when neither is true, keeping the attacker in the dark. Another option is to tarpit the bot request by not providing any response at all, leaving the request hanging until it times out. This works best on a large, globally distributed platform with ample server capacity, such as a content delivery network (CDN). These methods of misinformation are likely more effective than simply blocking bot requests.
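
One way to wire these responses together is sketched below as Express-style middleware in TypeScript. The botVerdict() helper is a stand-in for the fingerprinting pipeline described earlier, and the decoy body and 30-second tarpit are illustrative choices, not a definitive implementation.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Stand-in for the detection pipeline: in reality this would consult the
// request, client, and behavioral fingerprint scores computed earlier.
function botVerdict(req: express.Request): "clean" | "decoy" | "tarpit" {
  const demo = req.get("x-demo-verdict"); // placeholder signal for this sketch
  return demo === "decoy" || demo === "tarpit" ? demo : "clean";
}

app.post("/login", (req, res) => {
  switch (botVerdict(req)) {
    case "decoy":
      // Look like a successful login, but hand back nothing usable, so the
      // attacker cannot tell whether the credentials actually worked.
      res.status(200).json({ status: "ok", token: "0000-0000-0000-0000" });
      return;
    case "tarpit":
      // Never answer; hold the connection until the bot gives up. Best done on
      // a platform with plenty of spare capacity, such as a CDN edge.
      setTimeout(() => res.socket?.destroy(), 30_000);
      return;
    default:
      // The normal authentication path would go here.
      res.status(200).json({ status: "ok" });
  }
});

app.listen(3000);
```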

Another strategy for managing bots, one with less impact on the user experience in the event of a false positive, is to require a suspected bot to solve a CAPTCHA. Only after completing the CAPTCHA will the login succeed. This allows real users to continue even if they are misidentified as bots, and it provides valuable feedback for adjusting your detection methods to reduce false positives.
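
Extending the sketch above (it reuses the same app and botVerdict), a challenge-based variant might look like the following. The verifyCaptchaToken() helper is a placeholder for whichever CAPTCHA provider's verification API is in use.

```typescript
// Placeholder for a call to the CAPTCHA provider's server-side verification API.
async function verifyCaptchaToken(token: string): Promise<boolean> {
  return token.length > 0; // assumption: replace with a real verification call
}

app.post("/login-with-challenge", async (req, res) => {
  if (botVerdict(req) === "clean") {
    res.status(200).json({ status: "ok" }); // normal authentication path
    return;
  }
  const token = req.get("x-captcha-token");
  if (!token || !(await verifyCaptchaToken(token))) {
    // Ask the client to solve a challenge first. Real users pass and continue;
    // each solved challenge is also feedback for tuning false positives.
    res.status(403).json({ status: "challenge_required" });
    return;
  }
  res.status(200).json({ status: "ok" }); // challenge solved, proceed with login
});
```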

Keep streaming safe

Preventing credential stuffing attacks is an important priority for any OTT streaming service. As these services gain in popularity, so too do the security risks. A multi-layer approach to application security and bot management can accurately identify even the most sophisticated bots used to power credential stuffing attacks and prevent such attacks from impacting your customer experience or reputation.

Learn more about how our cloud security capabilities can protect your online presence from credential-stuffing attacks, DDoS attacks, and more.