
Credential Stuffing Symptoms: A Diagnostic Guide for DevOps & SREs

The alert comes in at 2 AM. Login error rates are elevated. Account lockout volume is three times the daily average. Support is getting tickets from users who say they never tried to log in. You check for a recent deployment. Nothing in the last six hours. You check the auth service. It looks stable. The load is unremarkable. Whatever is happening does not look like your infrastructure breaking. It looks like your users are. 

This is credential stuffing. Attackers take username and password combinations leaked from other breaches, run them through an automated tool, and try them against your login endpoint at scale. The accounts that match get flagged for takeover. The rest get discarded. The attack is quiet by design. Individual IP addresses stay below lockout thresholds. Request rates look close to normal. No single signal screams attack. You piece it together from four different dashboards. 

Verizon’s DBIR consistently ranks credential abuse as the leading cause of data breaches, and most teams only confirm it after significant account exposure has already occurred. The gap is not awareness. It is the absence of a clear diagnostic path from scattered auth anomalies to a confident call. 

This guide gives you that path. It is structured for SREs and DevOps engineers who are first in line when auth behavior goes wrong, whether or not a security team is in the loop. If you have already confirmed the attack and need immediate help, go here first: Under Attack. 

Section 1: Rule Out the Obvious Internal Causes First 

Before you look outward, look inward. Credential stuffing and a self-inflicted auth failure produce nearly identical symptoms in the first ten minutes. The difference is in where the failure concentrates. 

Recent deployment or auth service config change. Check your change management log for anything pushed to the auth service, API gateway, or feature flags in the last two to four hours. A deployment-caused failure localizes: one endpoint breaks while others stay clean, or errors cluster in one region or pod. If a rollback stabilizes metrics within three to five minutes, you are done. If it does not, move on. 

Password policy or session configuration change. A new password length requirement, a changed session timeout, or a shifted token expiry can trigger a wave of failed logins that looks like an attack. The tell is cohort concentration: failures appear for users whose credentials or sessions fall outside the new rules, not randomly across your entire user base. 

Identity provider or SSO misconfiguration. If failures are appearing on one login path while direct authentication stays clean, the problem is at the integration layer. Check your IdP config and SAML/OIDC settings before assuming account-level compromise. 

Rate limiter or CAPTCHA rule change. A loosened or disabled rule can make a credential stuffing attack that was already running look like it suddenly started. If the timing of your anomaly lines up with a rate limiter or CAPTCHA change, flag it before concluding the attack is new. 

Before you proceed, confirm all four: 

  1. No deployment, config push, or feature flag change in the last four hours 
  2. No password policy, session timeout, or token expiry change recently 
  3. IdP and SSO integration returning clean on all login paths 
  4. Rate limiter and CAPTCHA rules unchanged and active 

All four checked? The failure is not self-inflicted. Move to Section 2. 

Any box unchecked? Investigate that change first before assuming an external attack. 

Section 2: What to Pull, Where to Find It, and What It Means 

Once you have ruled out internal causes, work on confirming credential stuffing from your log streams. The pattern only becomes clear when you read several sources side by side. 

As a best practice, open these views in parallel and keep the same time window across all of them. Start with the last 15 minutes. If the pattern is unclear, widen to 60 minutes. 

Group results into 5-minute buckets so you can see whether the failure pattern is steady and distributed rather than a short burst or a single bad deploy. 
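The bucketing step above can be sketched in a few lines of Python. This is a minimal sketch assuming your auth log has already been parsed into (ISO timestamp, outcome) tuples; that record shape is an assumption, not a real schema, so map it to your own pipeline.

```python
# Sketch: group failed logins into 5-minute buckets.
# Assumes records are (iso_timestamp, outcome) tuples - adapt to your log schema.
from collections import Counter
from datetime import datetime

def bucket_failures(records, bucket_minutes=5):
    """Count failed logins per bucket, keyed by the bucket's start time."""
    buckets = Counter()
    for ts, outcome in records:
        t = datetime.fromisoformat(ts)
        minute = (t.minute // bucket_minutes) * bucket_minutes
        key = t.replace(minute=minute, second=0, microsecond=0)
        if outcome == "fail":
            buckets[key] += 1
    return dict(buckets)

records = [
    ("2024-01-15T02:11:04", "fail"),
    ("2024-01-15T02:11:30", "success"),
    ("2024-01-15T02:13:59", "fail"),
    ("2024-01-15T02:16:02", "fail"),
]
print(bucket_failures(records))
```

A steady count across consecutive buckets points at sustained replay; a single hot bucket points at a burst or a deploy.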

If you are short on time, run these six checks in order and move to Section 3 when the picture is clear: 

  • Auth failures by 5-minute bucket 
  • Failed accounts by attempt count 
  • Auth-route volume at the edge 
  • Unique IP and ASN spread on auth routes 
  • Lockouts, MFA challenges, and password reset events 
  • Rate-limit and bot-control hits 

The question you are trying to answer is: are login failures rising broadly across many accounts, concentrated on auth routes, distributed across many low-volume sources, and steady enough to suggest credential replay rather than a broken login path? If yes, move to Section 3. 

1. Auth logs: start with the failure shape 

This is the fastest way to tell whether the failure pattern is broad or narrow.  

What to open: Failed login attempts for the last 15 minutes, grouped into 5-minute windows by outcome and unique account. 

What suspicious looks like: Failed logins climb, success ratio drops, and many accounts are touched once or twice instead of one account being hammered repeatedly. 

02:11 login_attempts=1480 success=912 fail=568 fail_ratio=38%
02:16 login_attempts=1512 success=601 fail=911 fail_ratio=60%
02:21 login_attempts=1544 success=433 fail=1111 fail_ratio=72% 
02:22 failed_accounts=1087 
02:22 accounts_with_1_fail=814 
02:22 accounts_with_2_fails=201 
02:22 accounts_with_3plus_fails=72
 

What it usually means: This is a broad, shallow pattern. That fits credential stuffing. If one or two usernames dominate instead, think brute force. 
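The broad-versus-narrow check can be expressed as a small aggregation. This is an illustrative sketch that assumes failed-login events carry a "user" field; the field name is an assumption about your schema.

```python
# Sketch: compute the failure "shape" - how many accounts failed once, twice,
# or three-plus times. Many single-fail accounts fits credential stuffing.
from collections import Counter

def failure_shape(failed_logins):
    per_account = Counter(e["user"] for e in failed_logins)
    shape = {"1_fail": 0, "2_fails": 0, "3plus_fails": 0}
    for count in per_account.values():
        if count == 1:
            shape["1_fail"] += 1
        elif count == 2:
            shape["2_fails"] += 1
        else:
            shape["3plus_fails"] += 1
    return shape

events = [{"user": u} for u in ["alice", "raj", "maya", "raj", "tom", "tom", "tom"]]
print(failure_shape(events))  # a broad, shallow shape leans toward stuffing
```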

2. Edge (WAF or CDN) logs: check whether auth routes are taking the hit 

Next, open the edge view for login and auth endpoints only.  

What to open: WAF or CDN traffic for /login, /signin, /oauth/token, /session, /password/reset, and MFA routes.  

Get details on request count by auth route, unique source IPs, status code mix, country mix, ASN mix and user-agent distribution. 

What suspicious looks like: Auth routes get noisy while the rest of the app stays mostly normal. Request volume may be only modestly higher than usual, but 401s, 403s, lockouts, or challenges rise fast. 

02:25 /login rps=184 status_401=126 status_200=21
02:25 /oauth/token rps=61 status_401=48 status_200=6
02:25 /password/reset rps=22 status_200=19
02:25 /home rps=410 status_200=407 status_5xx=0 

What it usually means: This is not a broad application outage. The problem is concentrated on identity entry points. 
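If your edge logs are exportable, the per-route status mix above can be rebuilt with a simple group-by. This sketch assumes requests have already been parsed into (path, status) pairs, which is an assumption about your export format.

```python
# Sketch: per-route status-code mix from parsed edge logs, to see whether
# 401s concentrate on auth routes while the rest of the app stays normal.
from collections import defaultdict

def route_status_mix(requests):
    """requests: iterable of (path, status). Returns {path: {status: count}}."""
    mix = defaultdict(lambda: defaultdict(int))
    for path, status in requests:
        mix[path][status] += 1
    return {path: dict(codes) for path, codes in mix.items()}

reqs = [("/login", 401)] * 3 + [("/login", 200), ("/home", 200), ("/home", 200)]
print(route_status_mix(reqs))
```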

3. Source spread: look for many IPs doing very little each 

This is where stuffing starts to separate from noisier login attacks. 

What to open: Failed login traffic grouped by unique source IP, ASN, and country. 

What suspicious looks like: Unique IP count is high, but each IP contributes only one or two attempts. 

02:20 window=5m unique_src_ips=1842 attempts=2630 p95_attempts_per_ip=2
02:20 window=5m unique_asns=163 hosting_or_proxy_asn_share=58%
02:20 window=5m unique_countries=17 

02:18:10 POST /login 401 src=198.51.100.23 user=alice@example.com
02:18:12 POST /login 401 src=203.0.113.44 user=raj@example.com
02:18:14 POST /login 401 src=198.51.100.201 user=maya@example.com
02:18:17 POST /login 401 src=203.0.113.92 user=tom@example.com 

What it usually means: The traffic is being distributed on purpose to stay under simple per-IP thresholds. 
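The source-spread metrics in the excerpt above can be computed like this. The "src" field name and the simple p95 approximation are both illustrative choices, not tuned recommendations.

```python
# Sketch: unique-IP spread on failed auth traffic. High unique-IP count with
# low per-IP attempts is the distributed-replay signature described above.
from collections import Counter

def source_spread(failed_logins):
    per_ip = Counter(e["src"] for e in failed_logins)
    counts = sorted(per_ip.values())
    # Crude p95: index into the sorted per-IP attempt counts.
    p95 = counts[min(len(counts) - 1, int(len(counts) * 0.95))]
    return {
        "unique_src_ips": len(per_ip),
        "attempts": sum(counts),
        "p95_attempts_per_ip": p95,
    }

events = [{"src": ip} for ip in ["a", "b", "c", "d", "e", "a"]]
print(source_spread(events))
```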

4. Client identity: user agents look too neat 

Mature attacks try to look normal, and they usually go beyond a single fake user agent. 

What to open: Failed auth requests grouped by user agent and, if available, client fingerprints such as JA3, JA4, or HTTP/2 settings.  

What suspicious looks like: Either some user agents repeat too often, or a short list of browser strings rotates in a machine-like way. 

02:23:01 POST /login 401 ua="python-requests/2.31.0" 
02:23:02 POST /login 401 ua="python-requests/2.31.0" 
02:23:04 POST /login 401 ua="python-requests/2.31.0" 

 
02:24:11 POST /login 401 ua="Mozilla/5.0 Chrome/131.0"
02:24:12 POST /login 401 ua="Mozilla/5.0 Safari/605.1.15"
02:24:13 POST /login 401 ua="Mozilla/5.0 Mobile/15E148"
02:24:14 POST /login 401 ua="Mozilla/5.0 Chrome/131.0" 

What it usually means: The traffic is pretending to be diverse, but the pattern is too clean to be organic.  

5. Fingerprint mismatch: browser label and client behavior do not match 

This is a stronger signal when you have it, but do not depend on it. 

What to open: Client fingerprint clusters for failed login traffic. 

What suspicious looks like: Different browser labels collapse into the same tiny fingerprint cluster. 

02:26 ua="Mozilla/5.0 Chrome/131.0" ja3=ab12cd34 count=412
02:26 ua="Mozilla/5.0 Safari/605.1.15" ja3=ab12cd34 count=397
02:26 ua="Mozilla/5.0 Mobile/15E148" ja3=ab12cd34 count=405 

What it usually means: The traffic is wearing different masks, but the client behavior underneath is the same. 
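Detecting that "many masks, one client" pattern is a small set operation. This sketch assumes each failed-auth event carries "ua" and "ja3" fields; both names are assumptions about your fingerprinting pipeline.

```python
# Sketch: flag fingerprint clusters where several distinct browser labels
# share one TLS fingerprint - the mismatch signal described above.
from collections import defaultdict

def masked_clusters(events, min_uas=3):
    """Return JA3 hashes claimed by min_uas or more distinct user agents."""
    uas_by_ja3 = defaultdict(set)
    for e in events:
        uas_by_ja3[e["ja3"]].add(e["ua"])
    return {ja3: sorted(uas) for ja3, uas in uas_by_ja3.items() if len(uas) >= min_uas}

events = [
    {"ua": "Chrome/131.0", "ja3": "ab12cd34"},
    {"ua": "Safari/605.1.15", "ja3": "ab12cd34"},
    {"ua": "Mobile/15E148", "ja3": "ab12cd34"},
    {"ua": "Chrome/131.0", "ja3": "ff99aa00"},
]
print(masked_clusters(events))
```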

6. App and auth-backend signals: check whether the pressure is spreading inward 

This tells you whether the problem is still a failed-login event or starting to affect downstream auth systems. 

What to open: Session creation, password reset, MFA challenge, account lockout, auth service latency, and user lookup load. 

What suspicious looks like: Failed logins rise, lockouts spread across unrelated users, MFA challenges increase, and password reset requests start to follow failed attempts. 

02:21:10 LOGIN_FAIL user=alice@example.com src=198.51.100.24 
02:21:28 LOGIN_FAIL user=raj@example.com src=198.51.100.91 
02:21:45 MFA_CHALLENGE user=alice@example.com src=203.0.113.8 
02:22:04 PASSWORD_RESET_REQUESTED user=alice@example.com src=203.0.113.8 
02:22:19 ACCOUNT_LOCKED user=raj@example.com src=198.51.100.91  

 
02:25 auth_requests=1820/min auth_failures=1294/min p95_latency=420ms 
02:25 user_lookup_reads=6100/min cache_hit_rate=41% 
02:25 mfa_challenges=388/min account_lockouts=177/min 

7. Control effectiveness: see whether the attack is living below your thresholds 

This is where you find out whether your defenses are actually shaping the traffic. 

What to open: Rate limiter, account lockout, challenge, and bot-control logs. 

What suspicious looks like: Lots of sources cluster just under the threshold, while only a small number trigger enforcement. 

02:27 rule=login_rate_limit threshold=5/min action=challenge hits=112
02:27 rule=account_lockout threshold=10/15m action=lock hits=29
02:27 top_pattern: src_ips_with_4_attempts_in_1m=641 
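The threshold-hugging pattern in that log excerpt can be checked programmatically. This is a sketch under the assumption that you already have a per-minute attempt count per source IP; the threshold of 5/min just mirrors the example rule above.

```python
# Sketch: find sources clustering one attempt below a per-IP rate limit,
# versus sources actually tripping it. Many "just under" IPs with few trips
# suggests the attack is tuned to your threshold.
def threshold_huggers(attempts_per_min_by_ip, threshold=5):
    just_under = sum(1 for n in attempts_per_min_by_ip.values() if n == threshold - 1)
    tripped = sum(1 for n in attempts_per_min_by_ip.values() if n >= threshold)
    return {"just_under_threshold": just_under, "tripped": tripped}

sample = {"198.51.100.23": 4, "203.0.113.44": 4, "198.51.100.201": 5, "203.0.113.92": 2}
print(threshold_huggers(sample))
```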

Section 3: Credential Stuffing vs Brute Force vs Password Spraying vs ATO 

This is where teams waste time. The auth graph looks bad in all four cases. The fastest way to separate them is to ask three questions. Are many attempts hitting the same account? Is the same password being tried across many accounts? Or have some logins already succeeded and turned into suspicious post-login activity? 

A quick way to scan it is this: 

Pattern             | Primary Tell                                   | First Check
Brute force         | Same account hit many times                    | Attempts per account
Credential stuffing | Many accounts hit once or twice                | Unique accounts per IP and fail ratio
Password spraying   | Same password fingerprint across many accounts | Repeated password fingerprint
Account takeover    | Suspicious activity after login success        | Sensitive post-login actions

Brute force versus credential stuffing 

Brute force is narrow. One account or a small set of accounts gets hammered again and again. Credential stuffing is broad. Many accounts get touched once or twice, often from many IPs. 

Brute force looks like this: 

02:11:04 POST /login 401 src=198.51.100.24 user=alice@example.com
02:11:07 POST /login 401 src=198.51.100.24 user=alice@example.com
02:11:11 POST /login 401 src=198.51.100.24 user=alice@example.com
02:11:15 POST /login 401 src=198.51.100.24 user=alice@example.com
02:11:18 POST /login 401 src=198.51.100.24 user=alice@example.com 

Credential stuffing looks like this: 

02:11:04 POST /login 401 src=198.51.100.24 user=alice@example.com
02:11:05 POST /login 401 src=203.0.113.8 user=raj@example.com
02:11:07 POST /login 401 src=198.51.100.91 user=maya@example.com
02:11:09 POST /login 401 src=203.0.113.77 user=tom@example.com
02:11:10 POST /login 401 src=198.51.100.141 user=anna@example.com  

Check attempts per account and unique accounts targeted per source IP. If one account is getting repeated failures, think brute force. If many accounts are getting low-repeat failures, think stuffing. 
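That check can be wired into a rough first-pass classifier. The thresholds below are illustrative, not tuned recommendations, and the "user" field is an assumption about your schema.

```python
# Sketch: rough brute-force vs stuffing triage over failed-login events,
# using the two tells above. Thresholds are illustrative only.
from collections import Counter

def classify_failures(failed_logins, hammer_threshold=5, breadth_threshold=10):
    per_account = Counter(e["user"] for e in failed_logins)
    max_per_account = max(per_account.values())
    if max_per_account >= hammer_threshold:
        return "brute_force_like"   # one account hammered repeatedly
    if len(per_account) > breadth_threshold and max_per_account <= 2:
        return "stuffing_like"      # many accounts, low repeats each
    return "inconclusive"

broad = [{"user": f"user{i}"} for i in range(12)]
narrow = [{"user": "alice"}] * 6
print(classify_failures(broad), classify_failures(narrow))
```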

Password spraying versus credential stuffing 

Password spraying is one weak password, or a tiny set of passwords, tried across many accounts. Credential stuffing uses different username and password pairs. The usernames may still be broad, but the password pattern is different. 

Password spraying looks like this: 

02:14:01 POST /login 401 src=203.0.113.14 user=alice@example.com pwd_fp=fp_91ac
02:14:04 POST /login 401 src=203.0.113.14 user=raj@example.com pwd_fp=fp_91ac
02:14:07 POST /login 401 src=203.0.113.14 user=maya@example.com pwd_fp=fp_91ac
02:14:10 POST /login 401 src=203.0.113.14 user=tom@example.com pwd_fp=fp_91ac 

Credential stuffing looks like this: 

02:14:01 POST /login 401 src=203.0.113.14 user=alice@example.com pwd_fp=fp_91ac
02:14:04 POST /login 401 src=203.0.113.21 user=raj@example.com pwd_fp=fp_77de
02:14:07 POST /login 401 src=203.0.113.39 user=maya@example.com pwd_fp=fp_3bc1
02:14:10 POST /login 401 src=203.0.113.48 user=tom@example.com pwd_fp=fp_5a92 

Check whether the same password fingerprint repeats across many usernames. Do not log raw passwords. Use a safe internal fingerprint or equivalent auth-side comparison signal. If the password value repeats across many accounts, think spraying. If the password pattern varies by account, think stuffing. 
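One safe way to build the fingerprint mentioned above is a keyed HMAC truncated to a short prefix. This is a sketch, not a vetted design: the secret key must live server-side only, should be rotated, and the fingerprint should never leave your auth pipeline.

```python
# Sketch: keyed password fingerprint for spray detection without ever
# logging raw passwords. The key is a server-side secret (placeholder here).
import hashlib
import hmac

def password_fingerprint(password: str, secret_key: bytes) -> str:
    """Short keyed digest, safe to compare across accounts in auth-side logs."""
    digest = hmac.new(secret_key, password.encode(), hashlib.sha256).hexdigest()
    return "fp_" + digest[:8]

key = b"server-side-secret-placeholder"
# Same password yields the same fingerprint across accounts - the spraying tell.
print(password_fingerprint("Winter2024!", key) == password_fingerprint("Winter2024!", key))  # True
```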

Account takeover versus active stuffing 

Credential stuffing is the login attempt phase. Account takeover starts after some of those logins succeed. That is when the signal moves from failed auth to suspicious post-login behavior. 

Active stuffing looks like this: 

02:18:00 POST /login 401 src=198.51.100.24 user=alice@example.com
02:18:02 POST /login 401 src=203.0.113.8 user=raj@example.com
02:18:05 POST /login 401 src=198.51.100.91 user=maya@example.com 

Account takeover looks like this: 

02:18:00 POST /login 200 src=198.51.100.24 user=alice@example.com
02:18:12 POST /account/email/change 200 src=198.51.100.24 user=alice@example.com
02:18:40 POST /mfa/disable 200 src=198.51.100.24 user=alice@example.com
02:19:03 POST /api/payment-method/add 200 src=198.51.100.24 user=alice@example.com  

Check successful logins followed by sensitive actions from a new geo, ASN, device, or session pattern. If you only see failed logins, stay on the stuffing path. If you see successful logins followed by risky account activity, escalate to account takeover response. 
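The escalation check can be sketched as a scan for successful logins followed quickly by sensitive actions. The route list and event shape below are illustrative assumptions, not a complete ATO ruleset; a real detector would also weigh new geo, ASN, and device signals.

```python
# Sketch: flag users whose successful login is followed within a short window
# by a sensitive action. events: (epoch_seconds, user, path, status), time-sorted.
SENSITIVE = {"/account/email/change", "/mfa/disable", "/api/payment-method/add"}

def takeover_candidates(events, window_seconds=300):
    last_login = {}
    flagged = set()
    for ts, user, path, status in events:
        if path == "/login" and status == 200:
            last_login[user] = ts
        elif path in SENSITIVE and status == 200:
            if user in last_login and ts - last_login[user] <= window_seconds:
                flagged.add(user)
    return flagged

events = [
    (0, "alice", "/login", 200),
    (5, "raj", "/login", 401),
    (12, "alice", "/account/email/change", 200),
    (20, "raj", "/mfa/disable", 200),  # no successful login first - not flagged
]
print(takeover_candidates(events))
```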

Section 4: Confirmation Checklist Before You Act 

Do not wait for perfect certainty. Use this as a fast gate before you commit to a credential stuffing response path. Everything below should be answerable from the views you already have open. 

  1. [ ] Login failure rate is elevated, but per-IP attempt count stays low
    Yes / No 
  2. [ ] Failed logins or account lockouts are spread across many unrelated users, not concentrated on one or two accounts
    Yes / No 
  3. [ ] Source IPs span many ASNs, proxy-heavy networks, or residential ranges
    Yes / No 
  4. [ ] User agent patterns or TLS fingerprints on the login endpoint look synthetic, overly uniform, or mechanically rotated
    Yes / No 
  5. [ ] Internal cause has already been ruled out, including recent deployment, config change, rate-limit change, CAPTCHA change, IdP issue, or SSO misconfiguration
    Yes / No 

If three or more answers are yes, and the last item is yes, you are likely dealing with credential stuffing. Move to response. 

If fewer than three answers are yes, keep investigating. You may be looking at brute force, password spraying, a noisy auth regression, or an early-stage incident that has not fully declared itself yet. 

If the last item is no, stop here and investigate the internal cause before treating this as an external attack. 
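The gate logic above is simple enough to encode directly, for example in a runbook script. This sketch assumes the checklist answers are collected as booleans keyed by item number, with item 5 being the internal-cause rule-out.

```python
# Sketch: the Section 4 decision gate. answers maps checklist item 1-5 to
# True/False; item 5 is "internal cause has been ruled out".
def stuffing_gate(answers):
    if not answers[5]:
        return "investigate_internal_cause"
    if sum(answers.values()) >= 3:
        return "likely_credential_stuffing"
    return "keep_investigating"

print(stuffing_gate({1: True, 2: True, 3: True, 4: False, 5: True}))
```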

If the attack is active and you need immediate support, go here: Under Attack. 

Indusface

Indusface is a leading application security SaaS company that secures critical Web, Mobile, and API applications of 5000+ global customers using its award-winning fully managed platform that integrates web application scanner, web application firewall, DDoS & BOT Mitigation, CDN, and threat intelligence engine.

Frequently Asked Questions (FAQs)

How do I tell if a login failure spike is credential stuffing or a bad deployment?

A bad deployment usually localizes to a route, region, pod, or user cohort and often stabilizes after rollback. Credential stuffing spreads across many unrelated accounts and usually persists when no internal change explains the pattern. OWASP also recommends tracking failed logins by account, not just IP, because distributed login abuse can hide in an IP-first view. 

What log sources should I check first during a suspected credential stuffing attack?

Start with auth logs grouped by unique account and time window, not source IP. Then check WAF or CDN logs for pressure on /login, /oauth/token, and /password/reset, followed by lockouts, MFA challenges, and rate-limiter hits. OWASP’s authentication guidance specifically says failed-login counters should be associated with the account rather than the source IP. 

What does credential stuffing look like in auth logs?

Credential stuffing usually shows a rising failure ratio, many unrelated accounts with one or two failed attempts each, and high source-IP diversity with low attempts per IP. Total login volume may rise only slightly, which is why the failure shape matters more than raw request count. MITRE describes it as credential reuse across unrelated services. 

What is the difference between credential stuffing and brute force attacks?

Brute force usually means many password attempts against the same account or a very small set of accounts. Credential stuffing spreads attempts across many accounts using real credential pairs from previous breaches. MITRE treats credential stuffing as distinct from generic brute force because the attacker is replaying known credentials rather than broadly guessing passwords. 

How is credential stuffing different from password spraying?

Password spraying uses one weak password, or a very small list of passwords, across many accounts. Credential stuffing uses many different username-password pairs from breach data, so the password pattern varies by account.  

When does credential stuffing become account takeover?

Credential stuffing becomes account takeover when successful logins are followed by suspicious post-login actions such as email changes, MFA disablement, new payment methods, or unusual API activity. At that point, the incident has moved from failed-login abuse to active account exploitation. The credential-stuffing stage is the access attempt; the takeover stage is what happens after some of those attempts succeed. 

How can rate-limiter logs confirm credential stuffing?

Rate-limiter logs become highly suggestive when many IPs cluster just below the enforcement threshold instead of repeatedly tripping it. That pattern usually means the attack is tuned to evade per-IP controls rather than crashing into them. OWASP’s authentication guidance is relevant here too, because distributed attacks can spread attempts across many IPs and slip past IP-only controls. 

Can credential stuffing happen without a big traffic spike?

Yes. Credential stuffing can be distributed across many IPs, so the clearest signal is often a rising login failure ratio across many unrelated accounts, not a dramatic surge in total traffic. OWASP’s guidance to track failed logins by account rather than source IP fits this exact pattern. 
