Original source: Edgecast
Conventional wisdom says businesses must balance the cost of security against performance and user experience. The implication is that security is a tax on every digital interaction, and that to achieve optimal responsiveness and load times, security must be minimized, or better yet, optimized. But is there a case where adding security to your application stack actually increases performance? We've found that the answer is yes. In this blog, we'll explain how we reached this conclusion: we'll examine the security-related causes of application performance degradation, show how the edge can alleviate them, and describe how one customer realized a 9% performance gain.
The Edgecast platform processes billions of requests daily. Our threat intelligence engine leverages this massive data source to analyze security patterns and update the signatures that power our web application firewall (WAF), bot mitigation and other security services. On an average day, we see more than 16,000 malicious requests probing a typical website. Most of these probes are automated, scanning for unprotected web applications and vulnerabilities 24/7. These requests can bog down your services, especially if your application is already under heavy load from legitimate users.
Removing malicious traffic before it enters your application is a best practice for improving performance. If you're using a WAF, you're already doing this. But where the WAF sits relative to your origin matters. Running the WAF at the CDN edge ensures malicious traffic is dropped there and never proxied back to the origin.
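To illustrate the idea, here is a minimal sketch of an edge filter (this is a generic illustration, not Edgecast's implementation; the signatures and the `proxy_to_origin` helper are hypothetical): each incoming request is matched against known attack signatures, and matches are dropped before any proxying to the origin occurs.

```python
import re

# Hypothetical signature list; a real WAF ruleset is far larger
# and updated continuously from threat intelligence.
SIGNATURES = [
    re.compile(r"(\.\./)+"),                # path traversal probes
    re.compile(r"(?i)\bunion\s+select\b"),  # basic SQL injection pattern
    re.compile(r"(?i)<script\b"),           # reflected XSS probe
]

def is_malicious(path: str, query: str) -> bool:
    """Return True if the request matches any known attack signature."""
    target = f"{path}?{query}"
    return any(sig.search(target) for sig in SIGNATURES)

def proxy_to_origin(path: str, query: str) -> int:
    """Stand-in for the real round trip to the origin server."""
    return 200

def handle_request(path: str, query: str) -> int:
    """Drop malicious requests at the edge; only clean traffic reaches origin."""
    if is_malicious(path, query):
        return 403  # blocked at the edge, never proxied back
    return proxy_to_origin(path, query)
```

The key point is the ordering: the signature check runs before the origin proxy call, so blocked requests consume no origin resources at all.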
A CDN leverages caching to reduce the load on the origin. But modern CDNs also optimize requests for dynamic content between the CDN edge and the origin servers, and this applies to both malicious and legitimate traffic. Most web application threats target dynamic functions in the web application, such as login services, database queries and APIs. Unless malicious requests are filtered out, the CDN will treat them as legitimate, working to accelerate the good and the bad together. If you're performance-minded and using a CDN to improve the user experience and control costs, your application stack may unwittingly optimize every request, including malicious ones, which can degrade origin resources or even take them offline.
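A rough sketch shows why this happens (the routing rules and file extensions below are illustrative assumptions, not any real CDN's logic): static assets are served from cache, while everything dynamic is forwarded to the origin, so without a WAF in front, malicious probes against dynamic endpoints ride the same accelerated path as legitimate traffic.

```python
# Hypothetical classifier: static assets come from cache, dynamic
# requests (logins, APIs, database-backed pages) go to the origin.
CACHEABLE_EXTENSIONS = {".css", ".js", ".png", ".jpg", ".woff2"}

def route(path: str) -> str:
    """Decide whether a request is served from cache or sent to origin."""
    if any(path.endswith(ext) for ext in CACHEABLE_EXTENSIONS):
        return "cache"
    # Without a WAF in front of this decision, probes against /login
    # or /api/* are forwarded just as eagerly as legitimate requests.
    return "origin"
```

Note that the cacheable branch shields the origin automatically, but the dynamic branch (the one attackers target) does not.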
A WAF is designed to solve this challenge. It can be deployed at any point in the application architecture, and its position will always impact your overall application performance, for good or bad. Whether the WAF runs on the server itself, in the data center as a separate appliance, via a third-party service, or integrated into the edge fabric or CDN, it is another layer that inspects every HTTP request. Latency and round-trip time are factors, especially when multiple third-party solutions are chained together, which is becoming more common as organizations deploy separate WAF, bot mitigation and DDoS protection services.
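A back-of-the-envelope comparison makes the chaining concern concrete (all per-hop latencies are made-up illustrative figures, not measurements): each separately hosted service adds its own network round trip plus inspection time, while in-line edge processing shares a single hop.

```python
def total_added_latency_ms(hops_ms: list[float]) -> float:
    """Each chained service adds its own round trip plus inspection time."""
    return sum(hops_ms)

# Illustrative figures only: three separately chained third-party services
chained = total_added_latency_ms([25.0, 30.0, 20.0])  # WAF + bot mitigation + DDoS

# In-line edge processing: one hop, all inspections on the same request path
inline = total_added_latency_ms([25.0])

print(f"chained: {chained} ms, in-line: {inline} ms")  # chained: 75.0 ms, in-line: 25.0 ms
```

The arithmetic is trivial, but it is the per-request multiplier that matters: every one of those milliseconds is paid on every HTTP request the application serves.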
Moving the WAF to the edge through a CDN provider can net immediate performance gains because the WAF is designed to integrate with the edge logic, leveraging in-line processing to reduce hops.
At an architectural level, the WAF is like any other component in your application stack: its design impacts overall performance. That design includes raw horsepower, such as CPU, RAM and SSD, and software components, such as the WAF's operating system and ruleset. New threats are discovered daily, and over time the accumulating signatures can bloat the WAF runtime with ever more logic to process on each request.
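As a generic illustration of one way rulesets are optimized (not necessarily Edgecast's approach; the signatures here are hypothetical), many signatures can be compiled into a single combined pattern so the request is scanned once rather than once per rule. Python's `re` is used only to show the structure; linear-time engines such as RE2 or Hyperscan make the single-pass property literal.

```python
import re

# Hypothetical signature fragments
signatures = [r"\.\./", r"union\s+select", r"<script", r"etc/passwd"]

def match_naive(request: str) -> bool:
    """Naive: one scan of the request per signature; cost grows with ruleset size."""
    return any(re.search(sig, request, re.IGNORECASE) for sig in signatures)

# Combined: all signatures compiled into a single alternation,
# so the request is handed to the matcher once regardless of rule count.
combined = re.compile("|".join(f"(?:{sig})" for sig in signatures), re.IGNORECASE)

def match_combined(request: str) -> bool:
    return combined.search(request) is not None
```

Both functions return identical verdicts; the difference is how the matching cost scales as thousands of new signatures are added over time.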
If optimizing performance is your objective, it's essential to understand your WAF provider's approach. Edgecast doesn't simply add new signatures to our WAF. We optimize them to take advantage of our system architecture and configure them to work efficiently together, which is why we beat leading open-source security software in performance tests. For the details, see our technical article "Improving Application Performance with Faster Security Rules."
A prospective client was evaluating our platform to improve the performance of their website. During the proof of concept, our network delivered performance gains as expected. But interestingly, activating the WAF yielded an additional 27% performance gain.
Upon further analysis, we discovered the WAF was blocking a sustained layer 7 attack. Since the attack was not volumetric, it didn’t result in a catastrophic outage. And because the client was not using a WAF, they were unaware of the attack and unsure of its duration.
The performance metrics referenced in this test are defined as follows:

- DNS: the time required to resolve the domain name into an IP address.
- Time to First Byte: the time from the request being issued to receiving the first byte of data from the primary URL for the test(s). This is calculated as DNS + Connect + SSL + Send + Wait.
- Webpage response: the time from the request being issued to receiving the last byte of the final element on the page. For web tests, the agent waits up to 2 seconds of no network activity after document complete before ending the test.
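The Time to First Byte formula above reduces to simple addition over the connection phases; a quick sketch with made-up component timings:

```python
def time_to_first_byte(dns: float, connect: float, ssl: float,
                       send: float, wait: float) -> float:
    """TTFB = DNS + Connect + SSL + Send + Wait, all in milliseconds."""
    return dns + connect + ssl + send + wait

# Illustrative component timings in milliseconds (not measured values)
ttfb = time_to_first_byte(dns=20, connect=35, ssl=45, send=5, wait=120)
print(ttfb)  # 225
```

In practice the "wait" component, the time the origin spends producing the first byte, is usually the largest term, which is why shedding malicious load at the edge shows up directly in TTFB.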
HTTP transmission delays are caused by many variables along the path from the client's browser to the web application and back again. The WAF is yet another component in the application stack that can be optimized, just like the others, to accelerate application content. The next time you look for ways to improve cache hit ratios and time to first byte, start by scrutinizing your WAF. Running security on a performance-optimized edge cloud can remove bottlenecks and deliver immediate double-digit performance gains.
Let's connect so you can start realizing performance gains today.
Get the information you need. When you’re ready, chat with us, get an assessment or start your free trial.