
DDoS Symptoms: Confirm DDoS vs a Traffic Spike in 10 Minutes

If your website or API suddenly slows down, starts timing out, or goes offline, the first question is simple: is this a DDoS attack or a normal outage? This guide is for site owners, DevOps engineers, IT admins, and small business teams who need a fast way to spot DDoS symptoms and decide what to check next.

A distributed denial-of-service (DDoS) attack is a deliberate attempt to disrupt a service by flooding it with traffic from many sources, so it looks like normal demand, just at a volume and pattern your systems cannot handle.

This article gives you two things: a 60-second symptoms checklist (so you can triage quickly) and a step-by-step confirmation workflow (so you can separate DDoS from legitimate traffic spikes, misconfigurations, or infrastructure failures and respond with confidence).

60-second DDoS Symptoms Summary

DDoS symptoms are signs that your website, API, or network is being overwhelmed by unusually high or abnormal traffic, causing slow performance, timeouts, 5xx errors (often 503/504), or complete unavailability. Because legitimate traffic spikes can look similar, confirm by checking for abnormal traffic patterns (endpoint hot spots, repeated requests, uniform client profiles) and resource saturation (bandwidth, connections, CPU, memory).

Quick checklist

If you see two or more of these at the same time, treat it as a likely DDoS until proven otherwise:

  • Sudden slowdown or intermittent outages, especially across multiple regions
  • Timeouts increase (users report pages hanging or APIs not responding)
  • 5xx errors spike (commonly 503 or 504)
  • Traffic surges without a known campaign or event
  • One page, route, or API endpoint becomes the hotspot (login, search, checkout, /api/*)
  • Bandwidth, packets per second, or connections jump sharply
  • CPU or memory climbs unexpectedly, and the app or database starts struggling
  • Other services on the same network degrade at the same time

Fast diagnostic table (Symptom → Check → Meaning → Next action)

Symptom: Sudden slowness or partial outage
What to check: latency dashboards; synthetic checks from 2 to 3 regions; CDN or WAF analytics
What it usually means: could be DDoS or an internal bottleneck
Next action: confirm traffic patterns, then raise edge protections

Symptom: 503 or 504 spike
What to check: error rate by endpoint; origin health; upstream timeouts
What it usually means: server overload or upstream failure; DDoS is one possible cause
Next action: identify the hottest endpoint, rate limit or challenge, protect the origin

Symptom: Traffic spike with no known trigger
What to check: requests per second; traffic time series; top URLs, top IPs and ASNs
What it usually means: suspicious if concentrated or unnatural
Next action: rate limit at edge, block obvious bad sources, enable bot controls

Symptom: One endpoint hammered
What to check: top paths; cache hit ratio; app traces
What it usually means: common in application-layer floods
Next action: add per-route limits, caching, bot checks, and targeted rules

Symptom: Bandwidth maxed but CPU not maxed
What to check: network throughput; packets per second; interface saturation
What it usually means: often volumetric flooding
Next action: engage CDN or scrubbing, escalate to upstream mitigation

Symptom: New connections fail or handshake problems
What to check: load balancer or firewall connection tables; SYN indicators
What it usually means: often state exhaustion (for example, SYN floods)
Next action: enable SYN protections, tune limits, offload to DDoS provider

Symptom: CPU or memory jumps plus DB or cache strain
What to check: CPU, memory, DB connections; queue depth; thread pools
What it usually means: often application overload
Next action: protect expensive routes, degrade gracefully, scale cautiously

Symptom: Other services degrade on the same network
What to check: shared link utilization; firewall health; QoS signals
What it usually means: collateral impact from link saturation
Next action: prioritize critical services, upstream filtering or scrubbing

Symptom: Periodic bursts (for example, every 10 minutes)
What to check: time-series charts; WAF event timeline
What it usually means: automation, probing, or burst DDoS
Next action: tighten thresholds, tune alerts, preserve evidence

 

Under attack right now? Get the Indusface SOC on the bridge.

If you are seeing application-layer DDoS symptoms, you do not have to triage this alone. Once you reach out to us on the Under Attack page, Indusface security engineers get on a call with your team to:

  • Validate what is happening using live traffic and platform signals
  • Identify what is getting hit first: bandwidth, connections, or application endpoints
  • Apply the right mitigations through managed protections and keep tuning until traffic stabilizes
  • Stay engaged with your team while the incident is active, so you are not guessing under pressure

Get live help now.

Next: If this looks like DDoS, use the confirmation workflow below to separate DDoS from a legitimate traffic surge and to decide what to block, rate limit, or challenge first.

Before you go deeper, one important note: “DDoS symptoms” are usually the same user-visible problems you see during any overload event. A site can slow down, throw 5xx errors, or partially fail for many reasons, including a legitimate traffic spike, a broken release, a misconfiguration, or an upstream dependency issue.

That is why this guide starts with a fast checklist (to triage quickly) and then moves into a clearer definition of what qualifies as a DDoS symptom and why it is easy to misread. Once you understand the difference between symptoms and proof, the confirmation steps later will feel straightforward and practical.

What Counts as a “DDoS Symptom”

A DDoS symptom is any observable sign that normal users are being denied reliable access to your site, API, or network because your capacity is being overwhelmed. The key word is observable. Symptoms are what users and operators can see (slow pages, timeouts, errors). They are not proof of DDoS by themselves, because the same outward behavior can be caused by legitimate demand spikes or internal failures.

The surface symptoms everyone sees

Most suspected DDoS incidents start with one or more of these visible signals:

  • Slowness: pages load slowly, APIs respond late, checkout/login becomes laggy, or latency spikes appear suddenly.
  • Partial outages: only some parts of the application fail (for example, search works but login fails, or the website loads but specific API routes time out).
  • Full unavailability: users cannot reach the site at all, or the service is “up” but effectively unusable due to repeated timeouts and errors.

These are “surface symptoms” because they describe the experience, not the cause.

Why viral traffic and DDoS can look identical at first glance

At a high level, both a DDoS attack and a legitimate traffic surge do the same thing: they increase load. In both cases you may see:

  • Higher request rates
  • Longer queues
  • Saturated bandwidth or connection limits
  • Higher CPU or memory usage
  • More timeouts and 5xx errors

That overlap is why teams often misread early signals. A campaign launch, a news mention, a seasonal sale, or a partner email blast can produce the same immediate user impact as an attack.

The difference is usually in the shape of the traffic and the way your system fails. Legitimate surges often spread across many pages and include normal user behavior (browsing flows, conversions, diverse devices and networks). DDoS traffic is more likely to show unnatural concentration or repetition (hammering one endpoint, uniform client fingerprints, or timing patterns that do not look human). This guide’s confirmation workflow helps you check those details quickly.

Degradation-of-service is still a denial-of-service

Many people assume “denial-of-service” means a hard outage. In practice, a common outcome is degradation-of-service: the site is technically reachable, but slow enough that real users cannot complete actions reliably.

This matters because degradation can be the attacker’s goal. A slow login page, a broken checkout, or an API that times out intermittently can create real business damage while being harder to label as an “incident” than a full outage. Treat sustained, unexplained degradation the same way you would treat downtime: verify whether traffic and resource signals point to DDoS and respond early before the impact escalates.

DDoS Symptoms by Category and Examples

When people say, “we might be under a DDoS,” they are usually reacting to one of five symptom categories. Grouping symptoms this way helps you move from vague panic (“the site is slow”) to a more precise diagnosis (“we are seeing endpoint concentration plus bandwidth saturation”), which is what you need to respond quickly.

1. Availability symptoms – Can users reach the service at all?

Availability symptoms are the most obvious because they show up as “the site is down” or “the API is unreachable.” They can be total or partial.

Common availability symptoms

  • Intermittent outages: the site works for a minute, then drops, then comes back.
  • Full inaccessibility: users cannot load the site or connect to the API at all.
  • Timeouts: requests hang until they fail, especially on critical flows (login, checkout, search, /api/*).
  • Connection resets: clients get disconnected mid-request or can’t establish connections reliably.
  • Queueing: requests eventually complete, but only after long waits (a sign your stack is overwhelmed rather than broken).

Examples

  • Your status page says “up,” but real users cannot log in and get repeated timeouts.
  • External monitors show periodic failures even though your origin servers never fully crash.

Symptom → Where you will notice it first

  • Users/support tickets, uptime checks, API clients, synthetic monitoring, load balancer health.

2. Performance symptoms – The service is up, but it feels unusable

Performance symptoms often show up before a full outage. This is where “degradation-of-service” lives: the application technically works, but it is slow enough to lose users and revenue.

Common performance symptoms

  • Slow page loads or API latency spikes that begin suddenly.
  • Increased TTFB (Time to First Byte): the browser connects, but the server takes longer to respond.
  • Elevated tail latency: the average might look okay, but p95/p99 response times jump sharply (a classic overload signal).

Examples

  • Most pages load, but checkout takes 15–30 seconds and many sessions abandon.
  • Your p50 latency doubles, but p95 becomes 10x, and that is what users feel.
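
If your monitoring tool does not chart tail latency directly, you can approximate p95/p99 from raw request timings exported from logs or APM. Below is a minimal sketch in Python; the sample latency values are placeholders for your own data.

  # Minimal sketch: approximate p50/p95/p99 from raw request latencies (milliseconds).
  # The sample values are placeholders; in practice, load them from logs or an APM export.
  from statistics import quantiles

  latencies_ms = [120, 135, 150, 142, 2100, 138, 160, 9800, 155, 147]

  def percentile(values, pct):
      # quantiles(..., n=100) returns the 1st..99th percentile cut points
      return quantiles(values, n=100)[pct - 1]

  for p in (50, 95, 99):
      print(f"p{p}: {percentile(latencies_ms, p):.0f} ms")

If p95 and p99 jump while p50 barely moves, you are looking at the classic overload pattern described above.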

Symptom → Where you will notice it first

  • APM dashboards, CDN/WAF analytics, real user monitoring, application logs, database/cache metrics.

3. Error symptoms – What users see as “it broke”

Errors are the visible “output” of overload. During suspected DDoS, two codes tend to get mentioned a lot: 503 and 504.

Common error symptoms

  • 503 spikes: often means the service is unavailable or overloaded and cannot handle requests right now.
  • 504 spikes: often means a gateway or proxy timed out waiting for an upstream service (for example, a load balancer waiting on an origin).

What these errors mean, and what they don’t

  • A spike in 503/504 can be a DDoS symptom because overload is a common outcome of DDoS.
  • But 503/504 are not proof of DDoS. You can see the same pattern from:
    • a bad deployment,
    • a database outage,
    • an exhausted connection pool,
    • a dependency failure,
    • a misconfigured load balancer,
    • or simply a legitimate surge in users.

Treat 503/504 as a signal to investigate traffic and capacity, not as a verdict.

Symptom → Where you will notice it first

  • Server logs, reverse proxy/CDN logs, APM error rate graphs, user reports, monitoring alerts.

4. Traffic pattern symptoms – Does the traffic “look wrong”?

Traffic pattern is where DDoS starts to separate itself from legitimate demand. In real incidents, this is often the quickest way to build confidence that you are dealing with an attack.

Common traffic-pattern symptoms

  • Source concentration: bursts from a single IP, a narrow IP range, or traffic that looks suspiciously uniform (same user-agent patterns, same geo mix, same device profile).
  • Endpoint concentration: one URL, route, or API endpoint gets hammered disproportionately (login, search, /api/auth, /wp-login.php, a product page, a GraphQL endpoint).
  • Unnatural timing: repeated spikes every fixed interval, sudden bursts at odd hours, or patterns that feel automated rather than user driven.

Examples

  • Requests to one endpoint jump 50x while the rest of the site remains normal.
  • A large percentage of traffic claims to be the same browser version or arrives in clean, repeating waves.

Symptom → Where you’ll notice it first

  • CDN/WAF dashboards, load balancer access logs, web server logs, NetFlow/traffic analytics.
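
If you want a quick, rough read on concentration without waiting for dashboards, a short script over a recent slice of access logs can show how much traffic goes to one path or comes from one client profile. Below is a minimal sketch in Python, assuming an nginx/Apache combined-format log; the filename and regex are assumptions to adapt.

  # Minimal sketch: how concentrated is traffic by path and by user agent?
  # Assumes a combined-format access log; the filename and regex are illustrative.
  import re
  from collections import Counter

  LOG_RE = re.compile(r'"(?:GET|POST|PUT|DELETE|HEAD) (?P<path>\S+) [^"]*".*?"(?P<ua>[^"]*)"$')

  paths, agents = Counter(), Counter()
  with open("access.log") as f:
      for line in f:
          m = LOG_RE.search(line)
          if m:
              paths[m.group("path")] += 1
              agents[m.group("ua")] += 1

  total = sum(paths.values())
  if total:
      top_path, path_hits = paths.most_common(1)[0]
      top_ua, ua_hits = agents.most_common(1)[0]
      print(f"top path: {top_path} ({path_hits / total:.0%} of requests)")
      print(f"top user agent: {top_ua[:60]} ({ua_hits / total:.0%} of requests)")

One path or one user agent accounting for a very large share of requests is exactly the kind of unnatural concentration this section describes.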

5. Resource and collateral symptoms – What is getting saturated?

This category answers: what is the bottleneck? DDoS attacks often create very specific saturation signals, and they can spill over into unrelated services.

Common resource/collateral symptoms

  • CPU or memory pressure: application servers spike in CPU, RAM, threads, or queues.
  • Bandwidth saturation: network throughput maxes out (sometimes before CPU is stressed).
  • Connection pressure: connection tables fill up on load balancers, firewalls, or reverse proxies.
  • Collateral impact: other services on the same network degrade because the shared link or shared infrastructure is under strain.

Examples

  • Your CPU is normal, but the network link is pinned at its maximum capacity, and everything slows down.
  • Your website struggles, and suddenly email gateways, VPN access, or other customer-facing services also become unstable.

Symptom → Where you will notice it first

  • Infrastructure monitoring (bandwidth, pps, connections), load balancer/firewall stats, server CPU/RAM graphs, cloud metrics.

How to use this section: If you can describe what you are seeing in at least two categories (for example, “503 spikes” plus “endpoint concentration” plus “bandwidth saturation”), you are ready for the confirmation workflow in the next section.

DDoS Symptoms by Attack Type

Here is a useful way to think about DDoS symptoms. The attacker is trying to exhaust one of three things: your bandwidth, your connection capacity, or your application’s ability to do work. The symptoms look different depending on which “limit” is being hit.

1. Volumetric attacks – Bandwidth floods

Volumetric attacks try to consume all available bandwidth between you and the Internet. The goal is simple: create congestion so real users cannot get through.

What you observe

  • Widespread slowness, often across multiple services, not just one page
  • High packet loss or jitter, especially during peak waves
  • External monitors show the site as intermittently reachable, then unreachable

Typical clue

  • Network throughput is maxed out before application CPU is maxed out. Your servers might look “fine” on CPU, but the link is saturated, so users still cannot reach you.

Examples

  • Your inbound traffic graph is pinned near the maximum capacity of your connection, and everything connected to that link degrades at the same time.
  • Users report that nothing loads, and your own team may struggle to SSH or access admin panels because the network path is congested.

You typically notice symptoms of volumetric floods in network throughput and packets-per-second dashboards, ISP or cloud network metrics, and edge or scrubbing provider telemetry.
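
As a rough on-host check of that clue, you can sample network counters and CPU together for a short window. Below is a minimal sketch using the psutil library; the link capacity value is an assumption you must set to your actual uplink.

  # Minimal sketch: is the link saturating before CPU does?
  # Requires psutil; LINK_CAPACITY_BITS is an assumption (set it to your uplink size).
  import psutil

  LINK_CAPACITY_BITS = 1_000_000_000   # example: 1 Gbps uplink
  INTERVAL = 10                        # seconds to sample

  before = psutil.net_io_counters()
  cpu = psutil.cpu_percent(interval=INTERVAL)   # blocks for INTERVAL seconds
  after = psutil.net_io_counters()

  inbound_bps = (after.bytes_recv - before.bytes_recv) * 8 / INTERVAL
  print(f"inbound: {inbound_bps / 1e6:.0f} Mbps "
        f"({inbound_bps / LINK_CAPACITY_BITS:.0%} of link), CPU: {cpu:.0f}%")
  if inbound_bps / LINK_CAPACITY_BITS > 0.9 and cpu < 60:
      print("link is saturating before CPU, which is consistent with a volumetric flood")

Note that host counters only see traffic that actually arrives; if the upstream link is already dropping packets, your edge, ISP, or scrubbing provider graphs are the better source of truth.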

2. Protocol and state exhaustion

These attacks, such as SYN floods, aim at the connection layer. Instead of flooding you with “full requests,” the attacker tries to overwhelm the systems that track connections and sessions, like load balancers, firewalls, proxies, and sometimes the server itself.

What you observe

  • Users cannot establish new connections reliably, even if your site still works for some existing sessions
  • Spikes in connection failures, retries, or handshake timeouts
  • Load balancer or firewall stress indicators, especially connection tracking and state table pressure

What “half-open” behavior means

A SYN flood abuses the TCP handshake. The attacker sends a large number of initial SYN packets to start connections but does not complete the handshake. That leaves many connections in a “half-open” state, consuming resources and filling queues or state tables. Once those limits are hit, legitimate users struggle to connect, even if your application code is not the main problem.
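
If you have shell access to the server or proxy in question, you can get a quick read on half-open pressure by counting connection states. Below is a minimal sketch using the psutil library; run it on the suspected host, and note that listing sockets may require elevated privileges on some platforms.

  # Minimal sketch: count half-open (SYN_RECV) vs established TCP connections.
  # Requires psutil; may need elevated privileges to see all sockets on some systems.
  from collections import Counter
  import psutil

  states = Counter(conn.status for conn in psutil.net_connections(kind="tcp"))
  half_open = states.get(psutil.CONN_SYN_RECV, 0)
  established = states.get(psutil.CONN_ESTABLISHED, 0)

  print(f"half-open (SYN_RECV): {half_open}, established: {established}")
  if half_open > 5 * established and half_open > 100:
      print("far more half-open than established connections, consistent with a SYN flood")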

Typical clue

  • You see “cannot connect” symptoms and stress on network devices, while your logs show fewer completed requests than you would expect given the incoming packet rate.

Examples

  • Your web servers have normal CPU, but the load balancer shows unusually high connection attempts and dropped or timed out handshakes.
  • New sessions fail first, while users who are already connected keep working for longer than expected.

You notice symptoms of protocol and state exhaustion in load balancer metrics, firewall and proxy connection tables, SYN and handshake metrics, and network security device dashboards.

3. Application-layer/Layer-7 DDoS

Application-layer DDoS attacks target the app itself. The traffic can look legitimate because it uses real HTTP requests and normal-looking URLs. The attacker’s advantage is that each request forces your application, cache, or database to do work.

What you observe

  • Application CPU rises and stays elevated
  • Database or cache strain, such as rising connections, slow queries, or queue depth
  • 5xx errors increase as the app or upstream services start failing under load
  • Clear endpoint hotspots, where one route or function gets hammered

Typical clue

  • Your app and backend resources get stressed first, not the network link. Latency increases, tail latency becomes ugly, and error rates climb even though bandwidth may not be fully saturated.

Examples

  • Dashboard, search, checkout, or a specific API route becomes the bottleneck, and everything else looks relatively normal.
  • Requests appear “valid,” but the volume and repetition are unnatural, and caching does not help because the attacker is hitting dynamic or expensive endpoints.

You typically notice symptoms of layer-7 DDoS attacks in APM traces, application logs, WAF or CDN request analytics, database metrics, cache hit ratios, and endpoint-level latency and error dashboards.

DDoS or Legit Spike? A Step-by-Step Confirmation Workflow

Once you can recognize the “shape” of an incident (bandwidth, connections, or application overload), the next problem is the one that causes the most confusion in the moment: is this a DDoS attack, or did real users simply show up all at once?

The good news is you do not need perfect certainty to act. You just need a practical way to separate three buckets quickly:

  1. Likely DDoS
  2. Likely legitimate demand spike
  3. Likely internal regression or upstream outage

Use the workflow below during the incident. Time-box it to 10 to 20 minutes, then make a call and start mitigation.

Step 1: Confirm impact from multiple vantage points

Start by verifying that the issue is real and not limited to one network path.

  • External uptime check: Validate reachability from outside your environment.
  • Synthetic transaction: Test a real action, not just the homepage. Try search, checkout, or a key API call.
  • Compare across regions and ISPs: Check from at least two regions and two different networks if you can. If only one ISP or geography is impacted, you might be dealing with routing issues, ISP trouble, or a localized dependency problem.

In this step you are trying to learn if the impact is global or isolated.
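
If you do not already have synthetic checks in place, a throwaway script is better than refreshing a browser. Below is a minimal sketch using Python's requests library; the URL is a placeholder for a real transaction endpoint, and ideally you run it from at least two different networks.

  # Minimal sketch: repeatedly probe one key endpoint and record status and latency.
  # The URL is a placeholder; point it at a real transaction (search, checkout, API call).
  import time
  import requests

  URL = "https://www.example.com/api/health"
  TIMEOUT = 10  # seconds

  for attempt in range(1, 6):
      start = time.monotonic()
      try:
          resp = requests.get(URL, timeout=TIMEOUT)
          elapsed = time.monotonic() - start
          print(f"attempt {attempt}: HTTP {resp.status_code} in {elapsed:.2f}s")
      except requests.RequestException as exc:
          print(f"attempt {attempt}: failed ({type(exc).__name__})")
      time.sleep(2)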

Step 2: Check traffic analytics

This is usually the fastest way to spot “attack-shaped” traffic. Now look at what the traffic is doing. In a real DDoS, the pattern often gives itself away.

Focus on:

  • Top endpoints: Which URLs, routes, or API methods are spiking? One endpoint becoming a hotspot is a strong clue.
  • Top sources: Top IPs and IP ranges. If you have it, also look at ASN distribution.
  • Geo mix: Sudden changes in geography can be a clue, but geo alone is not a verdict.
  • Client fingerprints: Look for traffic that is “too uniform” to be real. Examples include identical user agents at unusual proportions, repetitive headers, identical request shapes, or requests that never behave like a normal session.

In this step you are trying to learn whether the traffic looks like normal users moving through flows, or like automation hammering a target.
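
If edge analytics are slow or unavailable, the same questions can be answered roughly from a recent access-log slice. Below is a minimal sketch in Python; the filename and the assumption that the client IP is the first field of each line are both placeholders to adapt to your log format.

  # Minimal sketch: top source IPs and busiest minutes from an access log.
  # Assumes the client IP is the first space-separated field and the timestamp
  # looks like [10/Oct/2025:13:55:36 +0000]; adjust both for your log format.
  import re
  from collections import Counter

  ip_counts, per_minute = Counter(), Counter()
  ts_re = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2})")

  with open("access.log") as f:
      for line in f:
          ip_counts[line.split(" ", 1)[0]] += 1
          m = ts_re.search(line)
          if m:
              per_minute[m.group(1)] += 1   # bucketed by minute

  print("top sources:")
  for ip, hits in ip_counts.most_common(10):
      print(f"  {ip:<15} {hits}")
  print("busiest minutes:")
  for minute, hits in per_minute.most_common(5):
      print(f"  {minute}  {hits} requests")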

Step 3: Check infrastructure signals

This step helps you identify what limit is being hit first: bandwidth, connections, or application capacity.

Look at:

  • Network throughput and packets per second: Are you near link capacity? Is pps climbing rapidly?
  • Connection counts: Are load balancers, firewalls, reverse proxies, or app servers hitting connection limits?
  • Load balancer health: Are targets healthy? Are new connections failing? Are retries and handshake issues increasing?
  • If applicable, SYN backlog or connection table pressure: If you can see these metrics, spikes here are a classic sign of state exhaustion behavior.

In this step you are trying to learn whether the network saturates first, the connection layer saturates first, or the app and backends saturate first.

Step 4: Make the call on DDoS vs likely demand vs likely outage

Use this quick decision table to avoid overthinking. You are looking for the simplest explanation that fits what you saw in steps 1 to 3.

What you observe: traffic is broad across many pages, sources are diverse, behavior matches real sessions, and conversions or key events rise
Most likely explanation: legitimate demand spike
What to do next: scale carefully, protect expensive routes with rate limits, keep monitoring for bot abuse

What you observe: traffic is narrow and repetitive, one API endpoint is the hotspot, client profiles look uniform, requests do not resemble real sessions
Most likely explanation: likely DDoS (often application-layer)
What to do next: apply targeted controls first: per-route rate limits, bot challenges, WAF rules, cache where safe

What you observe: errors and latency start right after a deploy or config change, traffic patterns look normal
Most likely explanation: likely internal regression or dependency outage
What to do next: roll back, fix configuration, validate upstream dependencies, then reassess

What you observe: network throughput is pinned near capacity and multiple services degrade at once, CPU is not the first thing to spike
Most likely explanation: likely volumetric flood or upstream congestion
What to do next: engage upstream mitigation (CDN, scrubbing, or ISP), prioritize critical services

What you observe: new connections fail more than requests, load balancer or firewall shows connection pressure and handshake failures
Most likely explanation: likely protocol or state exhaustion
What to do next: enable connection protections, tune limits, offload to a DDoS provider, watch connection tables

 

Rule of thumb: If you cannot confidently decide, treat it as “unknown overload” and turn on conservative protections while you continue investigating.

Step 5: Capture evidence

Even if you restore service quickly, capture evidence while it is happening. It makes mitigation cleaner, post-incident reviews faster, and future prevention easier.

Capture:

  • Time window: start time, peak time, mitigation change times
  • Top targets: most-hit API endpoints/URIs and their request rates
  • Top talkers: top IPs, IP ranges, ASNs (if available), user agents
  • Sample logs: short slices of access logs and WAF/CDN events from the peak window
  • Screenshots of metrics: throughput, pps, connection counts, error rates, latency (p95/p99), CPU/memory, load balancer health
  • Actions taken: what you changed, when you changed it, and what happened after

Once you classify what you are dealing with (volumetric, state exhaustion, or application-layer), the mitigation path becomes much more predictable, and you can focus on the controls that move the needle fastest.

Where to Check First When You Suspect a DDoS Attack

At this point you have a decent read on whether this is likely DDoS, legitimate demand, or an internal issue. The next challenge is speed. During an incident, the fastest teams have the right tools and, more importantly, know exactly where to look first. There are three common starting points: behind a CDN or WAF, in the cloud, or on-prem.

Use the path that matches your setup.

If you are behind a CDN or WAF

Start at the edge. In many DDoS scenarios, the edge layer will show the clearest story before your application logs do.

Look at:

  • Traffic overview: requests per second, bandwidth, and spikes over time
  • Top targets: top URLs, routes, and API endpoints under load
  • Top sources: countries, IPs, IP ranges, and ASNs (if available)
  • Security events: WAF rule matches, bot signals, challenge rates, and blocks over time
  • Cache behavior: cache hit ratio and whether the hotspot endpoints are cacheable
  • Origin signals from the edge: origin response codes (5xx), origin latency, timeouts to origin
  • Logs you can filter fast: edge access logs and WAF event logs for the peak window

What you are trying to answer

  • Is the edge seeing a narrow hotspot or broad traffic?
  • Are mitigations such as blocks and CAPTCHA challenges already triggering, or is most traffic passing through to the origin?
  • Is the origin struggling even when the edge is absorbing traffic?
  • Does the traffic look “human” (diverse flows), or “automated” (repetitive and uniform)?

If you are in the cloud or heavily cloud-hosted

Cloud incidents can look noisy because multiple layers scale and fail independently. Your goal is to identify what is saturating first, and whether autoscaling is helping or making things worse.

Look at:

  • Load balancer metrics: request rate, target response time, 4xx/5xx, unhealthy targets, active connections
  • Compute metrics: CPU, memory (if available), network in/out, instance or container restarts
  • Application health: p95/p99 latency, error rate by endpoint, queue depth, thread pools, DB connection pool usage
  • Network signals: throughput, packets per second (if available), connection counts, dropped packets
  • Autoscaling activity: scale-out events, cooldowns, and whether new capacity is actually reducing latency/errors
  • Logging for correlation: WAF/CDN logs (if present), load balancer access logs, application logs during the peak window

What you are trying to answer

  • Is the bottleneck bandwidth, connections, or application/backend capacity?
  • Are errors and latency concentrated on one endpoint (common in application-layer floods), or spread broadly?
  • Is the load balancer dropping connections or marking targets unhealthy, suggesting connection or state pressure?
  • Is autoscaling helping, or are you scaling into a bottleneck (for example, database limits) and just burning cost?
  • Does the traffic pattern look like real users (diverse paths and sessions), or like repetitive automation?
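
If the console is slow during the incident, metrics can also be pulled programmatically. Below is a minimal sketch, assuming an AWS Application Load Balancer and the boto3 SDK; the load balancer dimension value is a placeholder for your own ALB, and the same idea applies to other clouds with their respective APIs.

  # Minimal sketch (assumption: AWS ALB + boto3): request count and ELB 5xx per minute
  # for the last 30 minutes. Replace the LoadBalancer dimension value with your own ALB.
  from datetime import datetime, timedelta, timezone
  import boto3

  cloudwatch = boto3.client("cloudwatch")
  end = datetime.now(timezone.utc)
  start = end - timedelta(minutes=30)
  dimensions = [{"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"}]  # placeholder

  for metric in ("RequestCount", "HTTPCode_ELB_5XX_Count"):
      resp = cloudwatch.get_metric_statistics(
          Namespace="AWS/ApplicationELB",
          MetricName=metric,
          Dimensions=dimensions,
          StartTime=start,
          EndTime=end,
          Period=60,
          Statistics=["Sum"],
      )
      points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
      print(metric, [int(p["Sum"]) for p in points])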

If you are on-prem (or run your own network edge)

On-prem incidents often reveal themselves at the network and device layer first. Your quickest win is to confirm whether you are saturating an interface, exhausting connection state, or getting hit with abnormal packet patterns.

Look at:

  • Firewall and router health: interface utilization, packet drops, CPU utilization, connection table usage
  • Link saturation: inbound and outbound throughput, errors, and discards on key interfaces
  • Flow visibility: NetFlow or sFlow (if you have it) to identify top talkers and top destinations
  • IDS/IPS alerts: spikes in signatures related to floods, scanning, or protocol abuse
  • Load balancer and reverse proxy stats: active connections, handshake failures, queueing, and upstream timeouts
  • Server-level logs: access logs for hotspot endpoints, error rate changes, and connection reset patterns

What you are trying to answer

  • Is the Internet link saturated, or is a device (firewall, router, load balancer) becoming the choke point?
  • Are packet drops and errors increasing on key interfaces during the slowdown window?
  • Are connections failing to establish (state exhaustion), or are established connections slow (application overload)?
  • Is the attack hitting one service or endpoint, or degrading everything that shares the edge link?
  • Which sources are driving the most traffic (top talkers), and what are they targeting?

What to Do Immediately When You See DDoS Symptoms

Once you know where to look, the response becomes much less chaotic. The goal in the first 15 to 30 minutes is not “perfect attribution.” The goal is to keep the service usable for real users while you narrow the attack shape.

Here is a safe, practical sequence that works in most environments.

1) Switch into incident mode

  • Declare an incident owner and a small response team.
  • Open one comms channel for the incident and keep it active, whether that is a chat thread, a bridge call, or a war room.
  • Notify the right providers early: CDN/WAF vendor, cloud provider support, ISP or upstream partner if bandwidth saturation is suspected.
  • Keep external communication running: if you have a status page or customer comms process, use it. Silence creates confusion.

2) Turn on or raise protections at the edge first

If you have a CDN/WAF, the edge is usually the safest place to apply controls because it reduces load before it reaches your origin.

Start with controls that are reversible and targeted:

  • Rate limit the hotspot endpoints such as login, dashboard, search, checkout and other expensive API routes
  • Enable challenge flows such as crypto challenges or CAPTCHAs for suspicious traffic patterns
  • Increase bot protections especially on endpoints that should not be hit at high frequency
  • Block clearly malicious patterns such as obvious scanners, malformed requests and repeated identical payloads
  • Apply geo blocking in regions where the app was not designed to be used

A simple operating rule is to make one meaningful change, note the time, and watch the effect for a few minutes before stacking more changes.
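
These controls normally live in the CDN/WAF console, not in code. If you have no edge control at all and need a stopgap inside the application, a rough per-client, per-route limiter can buy time. Below is a minimal sketch; the routes, limits, and keying scheme are purely illustrative, and an edge rate limit remains the better place for this control.

  # Minimal sketch of a per-client, per-route sliding-window rate limiter.
  # The routes, limits, and the idea of keying on (client_ip, route) are illustrative.
  import time
  from collections import defaultdict

  WINDOW_SECONDS = 60
  LIMITS = {"/login": 20, "/search": 60, "/api/checkout": 30}  # requests per window
  _hits = defaultdict(list)

  def allow(client_ip: str, route: str) -> bool:
      limit = LIMITS.get(route)
      if limit is None:
          return True                      # only hotspot routes are limited
      now = time.monotonic()
      window = _hits[(client_ip, route)]
      window[:] = [t for t in window if now - t < WINDOW_SECONDS]
      if len(window) >= limit:
          return False                     # caller should return HTTP 429
      window.append(now)
      return True

Call allow() at the top of the handler and return 429 when it is False; as with any other control, note the time you enabled it and watch the effect before stacking more changes.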

3) Protect the origin and reduce blast radius

During many incidents, the origin is not necessarily failing because it is weak. It could be failing because it is doing too much work per request.

Actions that often help quickly:

  • Restrict direct access to the origin so traffic must come through the CDN/WAF; attackers bypassing the edge and hitting the origin directly is a common problem.
  • Add tighter per-route controls on expensive endpoints, especially APIs that hit the database.
  • Tune timeouts and queues carefully to avoid cascading failure. Shorter timeouts can reduce resource lockup, but do not cut so aggressively that real users cannot complete actions.

4) Reduce expensive functionality temporarily and buy time

This is one of the most underused levers. A short-term reduction in “cost per request” can stabilize the system while mitigations ramp up.

Examples include:

  • Serve a static fallback for non-critical pages
  • Temporarily disable search, recommendations, or other heavy features
  • Disable or slow down non-essential API endpoints
  • Pause background jobs that compete for CPU/DB connections
  • If safe, increase caching for pages that can tolerate it

The intent is not to permanently degrade the product. It is to keep critical flows usable.
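
One way to make this switch fast and reversible is a kill switch read from the environment, so heavy features can be turned off without a deploy. Below is a minimal sketch using Flask; the flag name, route, and fallback response are all illustrative.

  # Minimal sketch: an environment-driven kill switch for expensive features.
  # Example: export DISABLED_FEATURES="search,recommendations" before restarting the app.
  import os
  from flask import Flask, jsonify

  app = Flask(__name__)

  def feature_enabled(name: str) -> bool:
      disabled = os.environ.get("DISABLED_FEATURES", "")
      return name not in [d.strip() for d in disabled.split(",")]

  @app.route("/search")
  def search():
      if not feature_enabled("search"):
          # Cheap static fallback instead of the expensive query path
          return jsonify({"results": [], "notice": "Search is temporarily limited"}), 503
      return jsonify({"results": ["...real search results here..."]})  # placeholder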

5) Keep monitoring availability, traffic, compute, and costs

During active mitigation, keep a tight watch on:

  • Availability: uptime checks and synthetic transactions
  • Traffic: request rate, top endpoints, top sources, challenge/block rates
  • Performance: p95/p99 latency, error rates by endpoint
  • Infrastructure: bandwidth, packets per second, connections, CPU/memory, DB health
  • Costs: in cloud environments, large spikes in traffic and logging can create unexpected bills during and after the event

6) Capture evidence while you act

Do not postpone this. Capture the peak window, top endpoints, top sources, and screenshots of key metrics. It will help you work with providers, explain the incident internally, and prevent a repeat.

Once the service is stable, you can tighten protections, tune rules with confidence, and do a short post-incident review to close bypasses and reduce “expensive endpoints” exposure.

Seeing DDoS symptoms right now? Bring the Indusface SOC into the Incident

If your site or API is slowing down or timing out and you suspect a DDoS attack, you do not have to triage this alone. Once you reach out to us on the Under Attack page, Indusface security engineers join a live call to confirm an application-layer DDoS using real-time traffic signals and apply managed mitigations until the attack subsides.

Under DDoS Attack? Get live help now.

Common Misconceptions about DDoS Symptoms

When a site is slow or unreachable, it is easy to jump to conclusions. That is normal. The problem is that a few popular assumptions lead teams to chase the wrong fix, waste time, or block real users. Use the clarifications below as quick guardrails.

Misconception 1: “A 503 error always means it’s a DDoS”

A 503 simply means the service is unavailable right now. It can happen during a DDoS because overload is one common cause, but it is not a DDoS verdict by itself.

A 503 spike can also be caused by:

  • A bad deployment or configuration change
  • An upstream dependency failing
  • Database or cache saturation
  • A capacity limit being hit during real traffic growth

Treat 503 as a signal to investigate traffic patterns and resource saturation. Do not treat it as proof of an attack.

Misconception 2: “Lag in a game proves I’m being DDoSed”

Lag is not enough evidence. Many things can cause it, including Wi-Fi interference, ISP congestion, routing issues, local device load, or the game server struggling.

If you suspect a network attack at home, look for supporting signals such as:

  • Packet loss during the same window as the lag
  • Repeated disconnects or connection resets
  • Router or modem logs that show abnormal inbound traffic
  • Confirmation from your ISP that they are seeing unusual traffic toward your connection

Without those signals, lag is far more likely to be a normal connectivity issue than a DDoS event.

Misconception 3: “If I block one IP, the DDoS stops”

Sometimes blocking a single IP helps during a simple DoS event. With DDoS, the traffic usually comes from many sources, often changing quickly. Blocking one IP can feel satisfying, but it rarely changes the outcome.

What works better is blocking or slowing traffic based on patterns. Some mitigation methods include:

  • Rate limiting a hotspot endpoint
  • Challenging suspicious clients
  • Blocking known bad signatures or automation signals
  • Pushing mitigation upstream to a CDN, WAF, or scrubbing provider

If you do block IPs, treat that as a quick, temporary action, not the core strategy.

Indusface

Indusface is a leading application security SaaS company that secures critical Web, Mobile, and API applications of 5000+ global customers using its award-winning fully managed platform that integrates web application scanner, web application firewall, DDoS & BOT Mitigation, CDN, and threat intelligence engine.

Frequently Asked Questions (FAQs)

What are the most common DDoS symptoms?

The most common symptoms are the same ones users report during any overload event. These include:

  • Pages or APIs suddenly become slow
  • Timeouts increase
  • 5xx errors rise, often 503 or 504
  • Parts of the site fail while others still work
  • Bandwidth, connection counts, CPU, or memory spikes without a clear reason

The key is seeing multiple symptoms together, plus traffic patterns that do not look like normal users.

Can a DDoS attack cause 503 errors?

Yes. A 503 can happen during a DDoS when the server is overloaded and cannot handle more requests.
But a 503 does not automatically mean DDoS. The same error can be caused by a bad deployment, an upstream dependency failing, a database bottleneck, or a legitimate traffic surge that exceeded capacity.

Use 503 as a signal to investigate traffic shape and resource saturation, not as proof of an attack.

How do I tell DDoS vs a marketing spike?

Start with behavior and concentration:

  • A marketing spike is usually broad across many pages and includes normal journeys such as browsing, add-to-cart, logins, and conversions.
  • A DDoS is more likely to be narrow and repetitive, with one endpoint hammered and request patterns that do not look like real sessions.

These checks can help you diagnose fast:

  • Are conversions and key events rising or flat?
  • Is one route or API endpoint the clear hotspot?
  • Do client fingerprints look too uniform, such as the same user agent or headers at unnatural proportions?
  • Do the spikes arrive in fixed waves or other patterns that do not look human?

What does a DDoS look like in server logs?

It depends on the attack type, but common log patterns include:

  • Sudden request-rate jumps, often focused on one URL or API route
  • Repeated requests that do not follow normal user flows
  • A high volume of similar requests from many sources, sometimes with minimal variation in headers
  • A rising ratio of errors and timeouts compared to successful responses
  • For application-layer floods, lots of hits on expensive endpoints such as login, search, checkout, and dynamic API calls

Logs become much more useful when you pair them with the time window and top endpoints from your edge or load balancer dashboards.

What are the symptoms of a SYN flood vs an HTTP flood?

A simple way to remember it is this: SYN floods attack connections, HTTP floods attack work.

SYN flood symptoms:

  • New connections fail to establish
  • Handshake timeouts or connection failures increase
  • Load balancers and firewalls show connection table or state pressure

HTTP flood symptoms:

  • App CPU rises and stays elevated
  • Database or cache strain increases
  • Endpoint hotspots appear, with certain routes driving most errors and latency
  • 5xx errors climb as the application and dependencies get overloaded

How long do DDoS attacks usually last?

There is no single typical duration. Some attacks are short bursts designed to test defenses. Others are sustained campaigns that last hours or days, or they come in waves that pause and return.

If impact is high and not improving within minutes, treat it as potentially sustained. Escalate early to upstream mitigation and keep monitoring, rather than waiting for it to stop on its own.

What should I collect as evidence during a suspected DDoS?

Capture evidence while it is happening. It makes mitigation and post-incident analysis much easier.

Collect:

  • Start time, peak time, and when you applied each mitigation change
  • Top endpoints and their request rates during the peak window
  • Top sources, including IPs, ranges, and ASNs if you have them
  • Sample access logs and edge or WAF event logs from the peak window
  • Screenshots of key metrics: throughput, packets per second, connection counts, error rates, p95 and p99 latency, CPU and memory, load balancer health
  • A short note of what worked and what did not

This is usually enough to support your provider escalations and prevent repeat attacks.
