
Traffic Jams in the Cloud: Are Overloads Sabotaging Your Application's Reliability?

Tanveer Gill · 11 min read

Imagine a bustling highway system, a complex network of roads, bridges, tunnels, and intersections, each designed to handle a certain amount of traffic. Now, consider the events that lead to traffic jams: accidents, road work, or a sudden influx of vehicles. These incidents cause traffic to back up, and often, a jam in one part of the highway triggers a jam in another. A bottleneck on a bridge, for example, can lead to a jam on the road leading up to it. Congestion creates many complications, from delays and longer travel times to frustrated drivers, wasted hours, and burned fuel. These disruptions don't just hurt the drivers; they hit the whole economy, as goods are delayed and services are disrupted when employees arrive late (and angry) at work.

But highway systems are not left to the mercy of these incidents. Over the years, they have evolved to incorporate a multitude of strategies to handle such failures and unexpected events. Emergency lanes, traffic lights, and highway police are all part of the larger traffic management system. When congestion occurs, traffic may be re-routed to alternate routes. During peak hours, on-ramps are metered to control the influx of vehicles. If an accident occurs, the affected lanes are closed, and traffic is diverted to other lanes. Despite their complexities and occasional hiccups, these strategies aim to manage traffic as effectively as possible.

The highway congestion scenario is not much different from the world of modern cloud applications. Servers sometimes face reduced capacity due to failures, misconfigurations, or sudden spikes in demand, causing a sort of digital traffic jam in the cloud. This can lead to slow application responses, timeouts, or cascading failures that end in a complete outage. The repercussions can be severe: for a business, lost revenue, customer churn, and a damaged reputation. If the issue isn't promptly addressed, it can erode trust in the application and further shrink the user base. It is also stressful for the engineering team, who scramble to identify the root cause and remediate the issue before end users are impacted.

Just as highway systems have evolved strategies to manage traffic, web applications need to be designed to handle failures and unexpected events. While engineers investigate the true root cause of an issue, the application should be able to degrade gracefully under the increased load. This can be achieved with strategies like load shedding, throttling, circuit breaking, and traffic prioritization. In this blog, we will delve into the triggers of overload in web applications and explore strategies for managing them.

Overloads in Modern Applications

Modern cloud applications are composed of numerous interconnected components. The advent of service-oriented architectures, powered by managed services, containers, and orchestration engines, has significantly enhanced the agility and scalability of these applications. But this complexity also introduces a web of dependencies between services. A failure in one component can trigger failures in others. For example, a cache failure could impose a higher load on the database, or a subset of servers going offline could overload the remaining servers.

Overloads are a common issue in the cloud, often causing 'traffic jams' in the system. The API-driven nature of modern applications means there are many potential failure points, with overload at the root of most of them. A quick survey of the VOID incident database reveals a common theme: overloads in which some part of the system hits a limit, falls over, and sets off a cascading failure.

Let's explore the common triggers for these overloads.

External Triggers

Insufficient Capacity Allocation

Just as a two-lane road struggles to handle the traffic of a six-lane highway, a service with insufficient capacity can easily become overloaded. This often happens when the capacity allocation does not account for peak traffic or fails to anticipate growth.

[Figure: Traffic]

Unexpected Overload due to New Product Launch or Viral Campaign

An unexpected surge in traffic, often referred to as the 'Slashdot effect', can occur after a new product launch or a post going viral. Such a surge can overwhelm a service that isn't prepared to handle the increased load.

Retry Storm after a Temporary Failure

Retries are a common strategy for dealing with temporary service failures. However, they can create a self-reinforcing feedback loop after a spike or an intermittent failure: the added load of retries keeps the service from serving any request within its timeout, leaving it in a state of overload long after the initial trigger has resolved.

[Figure: Retry Storm]
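One common way to blunt a retry storm on the client side is to cap the number of retries and add jittered exponential backoff, so that clients spread their retries out instead of hammering a struggling service in lockstep. Below is a minimal Go sketch of the idea; `callService` is a hypothetical stand-in for a real RPC or HTTP call, and the attempt counts and delays are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// callService is a hypothetical stand-in for a real RPC or HTTP call.
func callService() error {
	return errors.New("service temporarily unavailable")
}

// retryWithBackoff retries a call with capped, jittered exponential backoff,
// so that many clients retrying at once do not synchronize into a retry storm.
func retryWithBackoff(maxAttempts int, baseDelay, maxDelay time.Duration) error {
	delay := baseDelay
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := callService()
		if err == nil {
			return nil
		}
		if attempt == maxAttempts {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		// Full jitter: sleep a random duration in [0, delay).
		time.Sleep(time.Duration(rand.Int63n(int64(delay))))
		// Exponential growth, capped so waits do not become unbounded.
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return nil
}

func main() {
	if err := retryWithBackoff(4, 100*time.Millisecond, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
```

The jitter matters as much as the backoff itself: without it, clients that failed at the same moment will all retry at the same moment again.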

Internal Triggers

Performance Regressions due to Bugs in New Deployments

Service upgrades, while necessary for maintaining and improving applications, can sometimes introduce new bugs or performance regressions. These issues can lead to slower response times, so even the regular incoming load might exceed what the provisioned capacity can handle, causing overloads.

Slowdowns in Upstream Dependencies

Modern applications often rely on a variety of upstream services or third-party dependencies. When these services slow down, requests back up, increasing response latency in the dependent service. This backup can create a ripple effect, causing overloads in other parts of the system.

Cache Failure Leading to Higher Load on Database

A cache failure can lead to a higher load on the database, as more requests are sent directly to the database instead of being handled by the cache. This can cause a significant increase in load, leading to potential overloads.

Query of Death and Noisy Neighbor in Multi-tenant Systems

In multi-tenant systems, a poorly optimized query (often referred to as the ‘query of death’) or a ‘noisy neighbor’ (a tenant that consumes more than its fair share of resources) can lead to an increase in load, potentially causing overloads.

Failovers

Failovers, while designed to enhance system reliability, can sometimes lead to overloads. This can happen when the failover process results in a sudden shift of load to the standby systems, overwhelming them.

Effective load management is critical for the reliability of cloud applications. By understanding these triggers of overload and implementing strategies to manage them, we can ensure that our applications remain reliable, even in the face of unexpected traffic spikes or service failures. Let's look at those strategies next.

Strategies for Managing Load

Auto Scaling

One could throw more compute at the problem, adding service instances until the overload resolves. Auto-scaling does this automatically, scaling the service up or down based on the incoming load. This approach has its limitations: scaling up can take too long to prevent cascading failures once an overload is already underway.

Another challenge is that auto-scaling often requires an in-depth understanding of the application's behavior and load patterns. Without sufficient data or insights, it can be difficult to set the right scaling policies. Incorrectly configured auto-scaling can lead to over-provisioning, under-provisioning, or constant fluctuation in resource allocation, all of which can negatively impact the application's performance and cost efficiency.

Furthermore, auto-scaling may inadvertently exacerbate the problem by shifting the bottleneck to another service or resource, much like adding more lanes to a congested highway might just move the traffic jam elsewhere.

Therefore, it is recommended to use auto-scaling alongside a load throttling technique to ensure the service doesn't get overloaded while it is being scaled in the background. This combined approach allows for more effective load management, ensuring that the service can handle spikes in demand while maintaining optimal performance and cost efficiency.

Circuit Breaking

Circuit breaking is a common service protection strategy implemented at the client. It involves monitoring the health of upstream services and breaking the circuit when they are unhealthy. This strategy prevents a downstream service from overloading the upstream services, which can lead to a cascading failure. The downside to this approach is that it completely turns off the tap, potentially leading to a poor user experience as it blocks critical and non-critical traffic equally, much like closing a busy highway would disrupt all traffic, not just the non-essential vehicles.
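To make the pattern concrete, here is a deliberately simplified Go sketch of a circuit breaker: after a configurable number of consecutive failures it 'opens' and fails fast, then lets a trial call through once a cool-down period has elapsed. It is an illustration of the pattern, not a production implementation (real libraries add half-open probing policies, metrics, and per-endpoint state), and the thresholds and the `flaky` upstream call are invented for the example:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// ErrOpen is returned while the circuit is open and calls are failing fast.
var ErrOpen = errors.New("circuit open: upstream considered unhealthy")

// Breaker is a minimal circuit breaker: after maxFailures consecutive
// failures it opens and rejects calls immediately, then lets a trial call
// through once the cool-down period has elapsed.
type Breaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int
	cooldown    time.Duration
	openedAt    time.Time
	open        bool
}

func NewBreaker(maxFailures int, cooldown time.Duration) *Breaker {
	return &Breaker{maxFailures: maxFailures, cooldown: cooldown}
}

// Call wraps an upstream call. While the circuit is open, calls fail fast
// instead of adding load to an already struggling upstream service.
func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.open {
		if time.Since(b.openedAt) < b.cooldown {
			b.mu.Unlock()
			return ErrOpen
		}
		b.open = false // cool-down elapsed: allow a trial call
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.open = true
			b.openedAt = time.Now()
		}
		return err
	}
	b.failures = 0
	return nil
}

func main() {
	b := NewBreaker(3, 5*time.Second)
	// flaky is a hypothetical upstream call that always fails in this demo.
	flaky := func() error { return errors.New("upstream timeout") }
	for i := 0; i < 5; i++ {
		fmt.Println(b.Call(flaky))
	}
}
```

Failing fast while the circuit is open is what relieves pressure on the unhealthy upstream, but, as noted above, it does so indiscriminately for all traffic.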

Concurrency Limiting

Concurrency limiting is a load throttling technique implemented at the server level. It involves limiting the number of requests that a service can process at a given time. This strategy ensures that the service doesn't get overloaded, even when the incoming load exceeds the provisioned capacity. The requests exceeding the concurrency limit can be handled in a variety of ways, including:

  • Request Queuing: When the service is overloaded, excess requests are queued and processed in a specific order, which can increase response times for the queued requests. The ordering can be based on a variety of factors, such as priority or time of arrival.
  • Load Shedding: Requests are dropped when the concurrency limit of the service is exceeded. The client may retry the request later.

Concurrency limiting is a useful strategy for managing load. However, it can be challenging to determine the correct concurrency limit for a service. Setting the limit too low can lead to a poor user experience, while setting it too high can result in overload.
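A counting semaphore is the simplest way to express a concurrency limit. The Go sketch below (the limit, timeout, and handler are illustrative) briefly queues a request when all slots are busy and sheds it if the wait exceeds a bound, combining the two options above:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// ErrOverloaded is returned when a request is shed.
var ErrOverloaded = errors.New("server overloaded, request shed")

// Limiter bounds how many requests are processed concurrently, using a
// buffered channel as a counting semaphore.
type Limiter struct {
	slots chan struct{}
}

func NewLimiter(maxConcurrent int) *Limiter {
	return &Limiter{slots: make(chan struct{}, maxConcurrent)}
}

// Do runs handler if a slot is free, waits up to queueTimeout for a slot
// (request queuing), and sheds the request once the wait expires (load shedding).
func (l *Limiter) Do(queueTimeout time.Duration, handler func()) error {
	select {
	case l.slots <- struct{}{}: // slot acquired
	case <-time.After(queueTimeout): // queued too long: shed the request
		return ErrOverloaded
	}
	defer func() { <-l.slots }() // release the slot when done
	handler()
	return nil
}

func main() {
	l := NewLimiter(2) // at most 2 requests in flight
	for i := 0; i < 4; i++ {
		go func(id int) {
			err := l.Do(50*time.Millisecond, func() { time.Sleep(200 * time.Millisecond) })
			fmt.Println("request", id, "->", err)
		}(i)
	}
	time.Sleep(time.Second) // let the goroutines finish in this demo
}
```

Real servers usually pair such a limit with per-request deadlines and queue-length metrics; the buffered channel just keeps the example short.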

Rate Limiting Users & Devices

Rate limiting is a strategy that prevents a subset of users or devices from overwhelming the service. It ensures no single user or device monopolizes the system's resources for an extended period. However, setting rate limits too low can lead to a poor user experience, while setting them too high can open the door to abuse and overload. And while rate limiting is effective in preventing abuse, it falls short in preventing overloads because it does not account for service health.
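A per-user token bucket is a common way to implement this. The sketch below uses Go's golang.org/x/time/rate package and keeps one limiter per user or device ID; the rate, burst, and user ID are illustrative values:

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/time/rate"
)

// PerUserLimiter keeps one token-bucket limiter per user or device ID.
type PerUserLimiter struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter
	rps      rate.Limit
	burst    int
}

func NewPerUserLimiter(rps rate.Limit, burst int) *PerUserLimiter {
	return &PerUserLimiter{
		limiters: make(map[string]*rate.Limiter),
		rps:      rps,
		burst:    burst,
	}
}

// Allow reports whether the given user may make a request right now.
func (p *PerUserLimiter) Allow(userID string) bool {
	p.mu.Lock()
	lim, ok := p.limiters[userID]
	if !ok {
		lim = rate.NewLimiter(p.rps, p.burst)
		p.limiters[userID] = lim
	}
	p.mu.Unlock()
	return lim.Allow()
}

func main() {
	limiter := NewPerUserLimiter(2, 2) // 2 requests/second, burst of 2, per user
	for i := 0; i < 4; i++ {
		fmt.Println("user-123 allowed:", limiter.Allow("user-123"))
	}
}
```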

Adaptive Rate Limiting

Imagine a smart traffic light system that adjusts the timing of its signals based on real-time traffic conditions. When traffic is light, it allows vehicles to pass through more quickly. During rush hour, it manages the flow of vehicles to prevent congestion. This dynamic adjustment helps maintain a smooth flow of traffic, regardless of the conditions.

In the context of cloud applications, adaptive rate limiting dynamically adjusts the rate limit based on service health, for example by tracking response latencies, error rates, and infrastructure saturation. This strategy ensures that the service doesn't get overloaded, even when the incoming load exceeds the provisioned capacity. Implementing adaptive rate limits throughout the stack helps prevent cascading failures and improves the reliability of cloud applications.
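As a rough illustration of the idea, the following Go sketch adjusts an admission limit with additive increase / multiplicative decrease (AIMD), using observed latency as the only health signal. The thresholds and step sizes are made up for the example; real controllers, including Aperture's, use richer signals and control loops:

```go
package main

import (
	"fmt"
	"time"
)

// AdaptiveLimit adjusts an admission rate with AIMD (additive increase,
// multiplicative decrease), driven here by a single health signal: latency.
type AdaptiveLimit struct {
	limit         float64       // current admission rate, requests per second
	minLimit      float64       // never throttle below this floor
	maxLimit      float64       // never probe above this ceiling
	targetLatency time.Duration // latency considered "healthy"
}

// Update is called periodically with the recently observed average latency.
func (a *AdaptiveLimit) Update(observed time.Duration) {
	if observed > a.targetLatency {
		// Service looks unhealthy: back off sharply.
		a.limit *= 0.5
		if a.limit < a.minLimit {
			a.limit = a.minLimit
		}
		return
	}
	// Service looks healthy: gently probe for more capacity.
	a.limit += 10
	if a.limit > a.maxLimit {
		a.limit = a.maxLimit
	}
}

func main() {
	a := &AdaptiveLimit{limit: 500, minLimit: 50, maxLimit: 2000, targetLatency: 250 * time.Millisecond}
	for _, ms := range []int{200, 400, 600, 180, 190} {
		observed := time.Duration(ms) * time.Millisecond
		a.Update(observed)
		fmt.Printf("observed %v -> limit %.0f rps\n", observed, a.limit)
	}
}
```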

But how do we ensure that the most important requests are processed first? This question leads us to the concept of traffic prioritization.

Prioritization

Imagine a highway during rush hour when emergency vehicles need to get through. In such scenarios, other vehicles yield, allowing these critical vehicles to pass first. This is a form of prioritization, ensuring that the most important ‘requests’ (the emergency vehicles) are processed before others.

In a similar vein, when adaptive rate limiting puts requests exceeding the rate limit into a queue, prioritization becomes key. This strategy involves identifying critical requests and prioritizing them over non-critical ones. For instance, a service might prioritize requests from paying customers over free users. This ensures that the most important requests are processed first, while the non-critical ones are queued or dropped. This is particularly useful for applications that have a mix of critical and non-critical requests.

For example, FluxNinja's open source project Aperture incorporates this concept with a weighted fair queuing scheduler. This allows traffic to be prioritized based on business value, ensuring that the most important requests get through, even during periods of high traffic.

[Figure: Workload Prioritization]
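To illustrate the idea in its simplest form, the Go sketch below admits the highest-priority requests first when capacity is scarce. This is strict priority rather than weighted fair queuing (which additionally guarantees lower-priority classes a share of capacity so they are not starved), and the request names, priorities, and capacity are invented for the example:

```go
package main

import "fmt"

// Request carries a priority derived from business value, e.g. checkout
// traffic from paying customers outranks anonymous browsing.
type Request struct {
	ID       string
	Priority int // higher value = more important
}

// admit accepts up to capacity requests, highest priority first; the rest
// are left to be queued or dropped.
func admit(queue []Request, capacity int) (admitted, rest []Request) {
	// Sort a copy by descending priority (simple selection sort for a toy example).
	sorted := append([]Request(nil), queue...)
	for i := range sorted {
		best := i
		for j := i + 1; j < len(sorted); j++ {
			if sorted[j].Priority > sorted[best].Priority {
				best = j
			}
		}
		sorted[i], sorted[best] = sorted[best], sorted[i]
	}
	if capacity > len(sorted) {
		capacity = len(sorted)
	}
	return sorted[:capacity], sorted[capacity:]
}

func main() {
	queue := []Request{
		{ID: "checkout-paid", Priority: 200},
		{ID: "browse-free", Priority: 50},
		{ID: "checkout-free", Priority: 100},
		{ID: "browse-paid", Priority: 150},
	}
	admitted, rest := admit(queue, 2) // capacity for only 2 requests right now
	fmt.Println("admitted:", admitted)
	fmt.Println("queued or dropped:", rest)
}
```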

Conclusion

Just as cities have evolved their traffic management systems over time, tech giants like Google, Meta, Netflix, and LinkedIn have developed sophisticated systems to handle overload and traffic management issues. These systems, however, are often tailored to their specific application architectures, making them complex and challenging to generalize or adopt for other applications.

At FluxNinja, we believe that every web application deserves the best traffic management, regardless of its size or complexity. Just as every city, from small towns to bustling metropolises, benefits from efficient traffic systems, every web application can benefit from sophisticated load management capabilities. Our vision is to democratize these capabilities, making them accessible to all developers and operators of web applications, whether they're running a small blog or a large-scale e-commerce site.

We have developed Aperture, an open source tool that brings the power of advanced load management to your applications. Aperture provides a diverse set of load management strategies, including auto-scaling, rate limiting for users and devices, adaptive rate limiting, and prioritization.

We invite you to sign up for Aperture Cloud, offering Aperture as a service with robust traffic analytics, alerts, and policy management. Together, we can prevent 'traffic jams' in the cloud and ensure our applications run seamlessly, regardless of the load.