Why Automated Accessibility Scanners Only Catch 30% of Issues (And What to Do About It)

Automated accessibility scanners are fast, scalable, and a valuable first line of defense — but research consistently shows they catch only 30–57% of real WCAG violations. Understanding the gap, what scanners miss, and how to build a layered testing strategy is essential for anyone serious about compliance and inclusion.

<p>You run an automated accessibility scan, the dashboard comes back green, and you breathe a sigh of relief. But here is an uncomfortable truth: that clean report may be hiding the majority of your site's real accessibility barriers. Independent studies consistently show that automated scanners detect somewhere between 30% and 57% of actual WCAG violations — meaning that anywhere from four to seven of every ten problems your disabled users encounter every day are invisible to the tools most teams rely on.</p> <h2>The State of Automated Accessibility Testing</h2> <p>Automated accessibility testing has exploded in popularity, and for good reason. <cite index="4-1">More teams are turning to automation to screen for accessibility issues: 50% of respondents in one 2024 survey said they use automated accessibility tools to identify potential issues, up from 40% in 2023.</cite> The appeal is obvious — scanners are fast, relatively cheap, and can be integrated directly into CI/CD pipelines. They catch predictable, repeatable, rule-based violations at scale: the missing <code>alt</code> attribute, the form input without a label, the button with an empty accessible name.</p> <p>But the coverage ceiling is a stubborn problem that no scanner vendor has been able to break through. <cite index="2-1">According to Deque, "you can find on average 57% of WCAG issues automatically," and even then tools will return components as incomplete where manual review is needed.</cite> That figure of 57% represents the optimistic end of the spectrum, achieved by one of the most mature and widely trusted accessibility engines on the market using a pragmatic, real-world measurement methodology. Other estimates are considerably lower. <cite index="14-7">Automated tools catch approximately 30–40% of WCAG violations, with the remaining 60–70% requiring manual testing.</cite></p> <p>The discrepancy between 30% and 57% comes down to how you define the denominator.
<cite index="12-13,12-14">Deque arrived at the 57% figure by taking a pragmatic, real-world approach rather than a theoretical one — sampling a large number of sites and measuring how many of the actual documented accessibility defects would have been detected using axe-core.</cite> When researchers instead measure coverage against all WCAG success criteria as a theoretical set, the numbers fall sharply. <cite index="11-1">As of this writing, filtering for WCAG 2.2 Levels A and AA to show only approved automated testing rules reveals partial or full coverage for only 17 of 55 Success Criteria.</cite> Either way you slice it, automated testing leaves a significant — and legally dangerous — gap.</p> <p>The problem is compounded by how difficult that gap is to see from the outside. A passing scan actively signals safety, which is exactly when teams are most likely to stop looking. The dashboard is green. Shipping happens. Real users with disabilities hit real barriers.</p> <h2>What Scanners Are Actually Good At</h2> <p>Before diving into the coverage gap, it is worth being clear about what automated tools genuinely do well. They are fast, consistent, and tireless at checking the things that can be determined purely by reading the DOM. <cite index="13-25">Accessibility automation can reliably catch common WCAG violations like missing alt text, empty links, improper form labels, and low color contrast ratios.</cite> These are structural, binary checks — either the attribute exists or it does not, either the contrast ratio passes 4.5:1 or it fails.</p> <p>The WebAIM Million report, which analyses the top one million home pages annually, gives a vivid picture of just how prevalent these detectable errors remain. 
<cite index="8-1">95.9% of home pages had detected WCAG 2 failures.</cite> The six most common categories — low contrast text, missing alt text, missing form labels, empty links, empty buttons, and missing document language — <cite index="8-28,8-29">account for 96% of all detected errors, and these most common errors have been the same for the last seven years.</cite> Automated tools are genuinely helpful at surfacing these high-frequency, low-complexity violations at scale. The trouble is that fixing only these issues still leaves a site with most of its real barriers intact.</p> <h2>Why the Gap Exists: What Scanners Cannot Evaluate</h2> <p>The coverage ceiling is not a failure of engineering — it is a fundamental limitation of what a machine can assess without human judgment. <cite index="19-2">The gap exists because machines cannot understand context, user intent, or subjective issues like whether heading hierarchy makes sense or whether alt text is accurate.</cite> A scanner can confirm that an image has an <code>alt</code> attribute. It cannot tell you whether that attribute reads <em>"photo-123-final-v2.jpg"</em> or a genuinely useful description. <cite index="13-30">Tools can flag that an image has alt text, but only a person can judge if that text actually describes the image well.</cite></p> <p>Here are the major categories of issues that consistently escape automated detection:</p> <ul> <li><strong>Screen reader experience:</strong> <cite index="22-1,22-2">Automated tools cannot listen to how a screen reader announces content. 
They can check ARIA attribute validity but cannot determine if the resulting announcements make sense to users.</cite> A form field might have a technically valid <code>aria-label</code> that reads out as a confusing string of characters to a real NVDA or JAWS user.</li> <li><strong>Logical reading and focus order:</strong> <cite index="24-31,24-32">In practice, the reading order often doesn't make sense when screen reader users access information that may visually read perfectly fine. In a column layout, a screen reader reads the first line of column 1, then column 2, leading to confusion.</cite> Scanners analyse DOM order in isolation, without the context of how visual layout transforms that order for a sighted user.</li> <li><strong>Meaningful link and button text in context:</strong> <cite index="21-24">Automated tools can check whether a link exists and whether it includes text, but they can't always judge if the purpose of that link is clear.</cite> Five "Read more" links on the same page all pass automated checks and all fail real users who need to understand where each one leads.</li> <li><strong>Dynamic content and live regions:</strong> <cite index="24-16,24-17">Automated tools won't be able to catch issues with dynamically loaded content. 
One will have to run the test again after the dynamic update gets added — but even then, the tool can't say if a screen reader will read it or not.</cite></li> <li><strong>Cognitive accessibility and plain language:</strong> <cite index="29-1">Automation can detect structural issues like heading order or label presence, but cannot evaluate readability, clarity, or whether instructions are easy to follow.</cite> A complex multi-step checkout with confusing error messages can be structurally "clean" while being deeply inaccessible to users with cognitive disabilities.</li> <li><strong>Keyboard navigation in complex interactions:</strong> <cite index="29-7">Automation can test basic keyboard focus and operability, but cannot fully validate complex multi-step interactions, custom gestures, or alternative input devices.</cite> A custom date picker widget may be fully keyboard operable in theory and a complete trap in practice.</li> <li><strong>Overlapping visual elements and gradient contrast:</strong> <cite index="21-1">Automated tools can evaluate contrast ratios, but they don't always account for overlapping elements, images behind text, or dynamically changing content that interferes with readability.</cite></li> </ul> <blockquote>A clean automated scan means you have addressed the 30–40% of issues that automation can catch. The remaining 60–70% are untested. Never claim WCAG compliance based solely on automated testing.</blockquote> <p>One particularly striking piece of evidence: <cite index="16-22,16-23,16-24">in one study, government accessibility advocates in the United Kingdom intentionally created a webpage with 142 accessibility barriers, then analysed the page with 13 automated accessibility tools. The best-performing tool was only able to identify 40% of the barriers. The worst-performing tool found just 13%.</cite> Even when the deck was stacked in the tools' favor — using a controlled page with known, documented issues — the results were sobering. 
And combining tools doesn't fully solve it: <cite index="7-1">even using six tools in parallel, half of all WCAG 2 success criteria are not covered and 6 out of 10 violations get missed.</cite></p> <h2>The Legal Risk of Over-Relying on Automation</h2> <p>This is not just a theoretical concern about user experience. The legal stakes for accessibility non-compliance are rising sharply, and a passing automated scan offers almost no protection in a lawsuit. <cite index="31-5">In 2024, more than 4,000 lawsuits were filed in U.S. courts claiming barriers to website or mobile accessibility.</cite> <cite index="34-1">The first half of 2025 alone saw 2,014 ADA website lawsuits — a 37% increase from 2024.</cite></p> <p><cite index="35-22,35-23">Out-of-court settlements average $30,000, while court judgments average $85,000. Defense legal fees of $30,000–$175,000 apply on top in all cases.</cite> Worse, settling once is no guarantee of safety: <cite index="35-8">45–46% of 2025 federal digital accessibility lawsuits targeted companies that had already been sued before.</cite> Getting sued and patching only what automated tools flag, without addressing the broader structural gaps, simply paints a target on your back for the next plaintiff.</p> <p>It is also worth addressing a common misconception about accessibility widgets and overlays as a shortcut to compliance. Data from 2025 shows that <cite index="34-8,34-9">456 ADA lawsuits were filed against websites that had accessibility widgets installed, making up 22.64% of total lawsuits — emphasizing that simply adding an accessibility widget is not a comprehensive solution.</cite> <cite index="33-29">Automated tools can detect only 30% of WCAG issues,</cite> which means any tool or widget that relies purely on automated detection is by definition leaving the majority of issues unaddressed. 
What separates a genuinely valuable accessibility SDK — like Accsible — from the overlay products that have faced legal and regulatory backlash is the combination of automated remediation with a commitment to honest, layered compliance strategy rather than false guarantees.</p> <h2>A Layered Testing Strategy That Actually Works</h2> <p>The answer to the coverage gap is not to abandon automated scanners — it is to use them correctly, as the first layer in a comprehensive strategy, not the last. <cite index="18-6">Of the 86 WCAG 2.2 success criteria, seventy percent require human review to properly interpret criteria and apply them to the grey areas outside automated accessibility technology's purview.</cite> That means human judgment is not optional — it is structurally required by the standard itself.</p> <p>A robust accessibility testing programme typically works in three layers:</p> <ol> <li><strong>Automated scanning (continuous):</strong> Integrate scanners like axe-core into your CI/CD pipeline and run them on every build. Catch the structural, binary violations before they reach production. Set thresholds and fail builds on new critical violations. This is your safety net for the obvious stuff — fast, scalable, and cheap. <cite index="19-10,19-11,19-12">Run automated tools early and often during development. Integrate axe or WAVE into your CI/CD pipeline so issues are caught before code reaches QA. This shifts accessibility testing left, catching issues when they're cheapest to fix.</cite></li> <li><strong>Expert manual audit (periodic):</strong> Conduct structured manual audits against the full WCAG checklist, performed by people with deep accessibility knowledge. <cite index="30-4,30-5">Manual accessibility tests are carried out by trained experts who actively use websites with assistive technologies such as screen readers, keyboard navigation, or magnification software. 
They assess context and user experience — the logical focus order and intuitive feel of navigation, the clarity of forms and error messages, readability within complex content.</cite> Manual audits typically happen quarterly or when major features ship, and they should cover your highest-traffic user journeys end-to-end. <cite index="12-17,12-18">Guided manual accessibility audits sit between fully manual and fully automated testing, narrowing the coverage gap, with some estimates putting coverage as high as 80% with this approach.</cite></li> <li><strong>Assistive technology and user testing (ongoing):</strong> <cite index="25-1,25-2">You can't rely on automated tools alone for determining accessibility problems on your site. Every website project needs a user testing strategy, and it is highly recommended that you include accessibility user groups — screen reader users, keyboard-only users, non-hearing users, users with mobility impairments.</cite> Real users with disabilities find issues that no checklist anticipates. Test with NVDA and JAWS on Windows, VoiceOver on macOS and iOS, and TalkBack on Android. Navigate your entire checkout or sign-up flow using only the keyboard. Actually listen to how your content sounds when read aloud.</li> </ol> <p>When teams implement all three layers, the combined coverage can approach 80–90% of real-world issues — a dramatic improvement over the 30–57% ceiling of automation alone. The goal is not perfection on day one; it is a systematic, documented process that demonstrates genuine good-faith effort and continuously closes the gap.</p> <h2>Integrating Accessibility Into Your Development Workflow</h2> <p>The most important cultural shift is moving accessibility from a pre-launch checklist to a continuous practice. Many organisations make the mistake of treating it as a one-time audit they commission when they fear a lawsuit is coming, rather than a quality standard baked into every sprint. 
By the time an audit reveals problems in a production system, the cost of fixing them is five to ten times higher than it would have been at the design stage.</p> <p>Start by making accessibility criteria part of your definition of done. When a developer ships a new component, a quick automated check should run automatically. When a designer creates a new pattern, colour contrast and focus states should be reviewed before the design is even handed off. When a content editor adds a new image, they should have a clear understanding of what meaningful alt text looks like — not just that alt text is required.</p> <p>For compliance managers, the practical implication is documentation. <cite index="22-21,22-22">Some teams run automated tests but never address the findings. This provides no value and creates documentation that you knew about issues but did not fix them — problematic in legal situations.</cite> An accessibility programme is only defensible if you can show a reasonable, good-faith process of continuous improvement: regular scans, documented findings, a remediation roadmap, and evidence that you are acting on what you learn. WCAG conformance is not a binary you achieve once — it is a posture you maintain.</p> <p>Tools like Accsible exist to support this layered approach — providing an SDK that embeds accessibility improvements directly into the user experience, surfacing real-time issues, and complementing the manual audit process rather than attempting to replace it. The right overlay or SDK is not a magic shield against lawsuits; it is one component of a thoughtful programme that acknowledges what automation can and cannot do.</p> <h2>Key Takeaways</h2> <ul> <li><strong>Automated scanners are a starting point, not a finish line.</strong> Even the best tools detect between 30% and 57% of real WCAG violations. 
A clean scan report does not mean your site is accessible — it means the detectable subset of issues has been addressed.</li> <li><strong>The majority of WCAG success criteria require human judgment.</strong> Screen reader experience, logical reading order, meaningful link text in context, cognitive clarity, and complex keyboard interactions are all areas where automation is structurally incapable of giving you a reliable answer.</li> <li><strong>The legal environment is hostile to complacency.</strong> More than 4,000 website accessibility lawsuits were filed in 2024, federal ADA website filings rose another 37% in the first half of 2025, settlements routinely cost $30,000–$85,000 plus defence fees, and nearly half of defendants had already been sued before — suggesting that surface-level fixes are not enough.</li> <li><strong>A three-layer strategy — automated scanning, expert manual audits, and real assistive technology testing — can push coverage toward 80–90%</strong> and gives you the documented, good-faith compliance posture that courts and regulators expect to see.</li> <li><strong>Shift accessibility left.</strong> Catching issues at the design and development stage costs a fraction of what remediation costs post-launch. Integrate automated checks into CI/CD, make accessibility part of your definition of done, and conduct regular manual audits on your most-trafficked user journeys.</li> </ul>
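As a footnote to the "binary checks" point above: the reason colour contrast is so reliably automatable is that WCAG 2.x defines it as pure arithmetic over exactly two colour values, which is also why text over images or gradients (where there is no single background colour) defeats scanners. A minimal sketch in JavaScript; the formula and the 4.5:1 threshold come from the WCAG definition, while the function names are our own:

```javascript
// WCAG 2.x relative luminance: gamma-expand each sRGB channel, then
// take a weighted sum. This is the arithmetic scanners run verbatim.
function channel(c) {
  const s = c / 255; // 0..255 channel scaled to 0..1
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance([r, g, b]) {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), range 1..21.
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// The binary AA check for normal-size text: pass or fail, no judgment.
const passesAA = (fg, bg) => contrastRatio(fg, bg) >= 4.5;

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // 21, the maximum
console.log(passesAA([118, 118, 118], [255, 255, 255])); // #767676 on white: true
```

Note what the function cannot take as input: a photograph behind the text, a gradient, or an overlapping element. The moment the "background colour" stops being one value, the check stops being decidable, and a human has to look.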
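To make the judgment gap concrete, consider the repeated "Read more" problem described earlier. Detecting the <em>symptom</em> is mechanical, but deciding what each link should say instead requires a human. The sketch below is our own illustrative heuristic, not a rule from axe-core or any real scanner, over a hypothetical list of extracted links:

```javascript
// Illustrative heuristic: flag link texts that are reused for different
// destinations. Every such link passes the automated "link has text"
// rule, yet a screen reader user hearing links out of context cannot
// tell them apart. Deciding the *right* text remains a human call.
function ambiguousLinkTexts(links) {
  const destinations = new Map(); // normalised text -> set of hrefs
  for (const { text, href } of links) {
    const key = text.trim().toLowerCase();
    if (!destinations.has(key)) destinations.set(key, new Set());
    destinations.get(key).add(href);
  }
  return [...destinations.entries()]
    .filter(([, hrefs]) => hrefs.size > 1)
    .map(([text]) => text);
}

const flagged = ambiguousLinkTexts([
  { text: 'Read more', href: '/pricing' },
  { text: 'Read more', href: '/blog/intro-to-wcag' },
  { text: 'Contact us', href: '/contact' },
]);
console.log(flagged); // ['read more']
```

Even this heuristic only raises a question; it cannot answer it. That asymmetry, cheap detection of symptoms versus human-only resolution, is the whole argument for the layered strategy above.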
Tags: Testing, Automation, Audit, WCAG, Manual testing, Compliance