  • How to Use VoiceOver to Evaluate Web Accessibility

    Most page reviews start the same way. You scan the layout, tab through a few controls, and make sure nothing obvious is broken. That catches a lot. It just doesn’t tell you what the page sounds like when a screen reader is in charge. Turn on VoiceOver, and the same interface can expose gaps you would never notice visually. A button has no name. The heading structure is thinner than it looks. A menu opens, but nothing announces that anything changed.

    On macOS, Apple’s built-in screen reader gives you a direct way to hear what your site is exposing to assistive technology. We’ll cover a practical setup, the few commands that matter most, and a repeatable testing flow you can use on any page.

    Why VoiceOver Testing Catches Issues Automated Tools Miss

    VoiceOver is a screen reader that ships with Mac devices. When it is turned on, it announces what is on the screen, the role of each element, and its state. It also replaces mouse-first navigation with keyboard navigation so people can move through pages without relying on sight.

    Testing with this tool shows how your site appears to the macOS accessibility layer in Safari. You hear whether headings form a usable outline. You find out whether regions are exposed as landmarks. You also learn whether labels give enough context when they are spoken on their own, which is a big deal for links, buttons, and form fields.

    Small issues stand out more when they are spoken aloud. A “Learn more” link might seem harmless when it sits next to a product title. But in a list of links, it turns into five identical options with no meaning. An icon-only button might look clear on screen, but it can be announced as “button” with no name. A form field might look labeled, but if that label is not programmatically connected, the field can be read as “edit text” and nothing else.
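    Each of these gaps comes down to a missing programmatic connection. A minimal sketch of the fixes, with hypothetical ids and URLs, and a "visually-hidden" utility class assumed to exist in your stylesheet:

    ```html
    <!-- Icon-only button: without an accessible name, VoiceOver announces
         just "button". aria-label supplies the spoken name; aria-hidden
         keeps the decorative icon out of the announcement. -->
    <button aria-label="Search">
      <svg aria-hidden="true" focusable="false" width="16" height="16">
        <circle cx="7" cy="7" r="5" fill="none" stroke="currentColor" />
        <line x1="11" y1="11" x2="15" y2="15" stroke="currentColor" />
      </svg>
    </button>

    <!-- Form field: the for/id pair is the programmatic connection. Without
         it, the field can be read as "edit text" and nothing else. -->
    <label for="email">Email address</label>
    <input id="email" type="email" autocomplete="email">

    <!-- Repeated "Learn more" links: visually hidden text gives each link
         a distinct accessible name when heard as a list. -->
    <a href="/products/widget">
      Learn more<span class="visually-hidden"> about the Widget</span>
    </a>
    ```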

    VoiceOver is not the only screen reader that matters. NVDA and JAWS on Windows and mobile tools on phones and tablets each have their own patterns. This approach treats macOS testing as one important view into accessibility, not a substitute for broader coverage.

    Web Content Accessibility Guidelines (WCAG) requirements still apply the same way. On desktop, the impact of focus order, labeling, and structure becomes more obvious when you stop scanning visually and start moving through a page one item at a time.

    Set Up VoiceOver in Safari for Reliable Accessibility Testing

    A stable setup makes your testing more dependable over time. You do not need anything complex. A Mac, Safari, and a space where you can hear speech clearly are enough. Headphones help if your office or home is noisy.

    Before you run your first pass, set up the few things that will affect your results:

    • Turn VoiceOver on with Command + F5
      • On some laptops, you may need fn + Command + F5
    • Choose your VO modifier key.
      • Control + Option together, or Caps Lock
    • In Safari preferences, go to the Advanced tab and enable “Press Tab to highlight each item on a webpage.”

    That Safari setting is easy to miss, but it matters. Without it, you can tab and still skip controls you need to test.

    Next, spend a minute in the screen reader settings. Adjust the speech rate to something you can follow without strain. If it is too fast, you will miss details during longer sessions. If it is too slow, you will start rushing and skipping steps. Set the voice and language so they match the primary language of the site.

    Different macOS versions and device setups can change where settings live or how certain items are announced. You do not need the “perfect” configuration. What helps most is using one setup consistently, so results are comparable from sprint to sprint.

    Turn VoiceOver On and Off Fast During Development

    You can enable the screen reader through System Settings under Accessibility, but for regular work, it is worth using the keyboard toggle. Most teams rely on Command + F5 so they can switch quickly.

    That quick toggle matters in real development cycles. You might review a component visually, turn VoiceOver on, test it, turn it off, adjust code, and repeat. If enabling and disabling feel slow, the step stops happening.

    There is a learning curve. When the screen reader is active, you will use the VO key with other keys for many commands. You also want to know how to pause speech quickly. Pressing Control to stop reading is one of those small skills that make testing feel manageable instead of chaotic.

    If you ever want a safe practice mode, turn on Keyboard Help with VO + K. VoiceOver will tell you what keys do without typing or triggering actions. Press VO + K again to exit.

    VoiceOver Commands for Web Testing You’ll Use Most

    You do not need every Mac keyboard shortcut to run useful tests. A small set covers most web content and keeps the process repeatable across a team.

    For reading:

    • Read from the top: VO + A
    • Stop reading: Control
    • Move to next item: VO + right arrow
    • Move to previous item: VO + left arrow

    For interactive elements:

    • Move focus forward: Tab
    • Move focus backward: Shift + Tab
    • Activate an item: Return or VO + Space

    Start with simple, linear navigation. Read from the top and move through items one by one. Ask yourself whether the reading order matches what you see on screen. Listen for controls that do not make sense when heard out of context, like “button” with no name or multiple links with the same vague label.

    This pace will feel slower than visual scanning. That is part of the value. The slowness makes gaps in structure, naming, and focus behavior hard to ignore.

    Use the VoiceOver Rotor to Check Headings, Links, and Landmarks

    Once you can move through a page linearly, the Rotor helps you evaluate structure much faster.

    Open the Rotor with VO + U. Use the left and right arrow keys to switch between categories and the up and down arrow keys to move through the items in that category. Common Rotor lists include headings, links, landmarks, form controls, tables, and images.

    This is where structural problems become obvious. If the page outline is weak, the Rotor will tell you fast.

    Start with headings. Move through the hierarchy and listen for a clear outline. Missing levels, unclear section names, or long stretches with no headings usually mean the structure is not supporting nonvisual navigation.

    Next, move to landmarks. This reveals whether regions like header, navigation, main, and footer are present and used in a way that matches the layout. Too few landmarks can make the page feel flat. Too many can create noise.
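    When the Rotor shows a flat or noisy page, the fix is usually a small set of native landmarks with a clear heading outline. A sketch of that structure; the section names and links are illustrative:

    ```html
    <!-- header, nav, main, and footer are exposed in the Rotor as the
         banner, navigation, main, and content information landmarks. An
         aria-label distinguishes multiple nav regions from one another. -->
    <header>
      <nav aria-label="Primary">
        <a href="/">Home</a>
      </nav>
    </header>

    <main>
      <h1>Search results</h1>
      <section aria-labelledby="filters-heading">
        <h2 id="filters-heading">Filters</h2>
      </section>
    </main>

    <footer>
      <p>Contact and legal links</p>
    </footer>
    ```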

    Then scan links and controls. Duplicate or vague link text stands out when you hear it as a list. Controls with incomplete labeling stand out too. An icon button that looks fine can become “button” repeated several times.

    You can also filter Rotor lists by typing. If you are in the headings Rotor and type “nav,” you can jump to headings that contain those characters. Small features like this make VoiceOver testing feel less like wandering and more like a targeted review.

    A Repeatable VoiceOver Workflow for Web Accessibility QA

    You do not need to run a full audit on every page. A light, repeatable workflow is easier to sustain and still catches a large share of issues.

    When you review a new page or a major change:

    1. Turn on the screen reader and let it read the beginning of the page.
    2. Move through the page in order and note anything confusing or missing.
    3. Use the Rotor to review headings, landmarks, links, and form controls.
    4. Complete one core task using only the keyboard.

    Start by listening to how the page begins. Do you hear a clear main heading early? Does navigation appear when it should? Is anything being read that is not visible or not meant to be part of the flow?

    Then move through the content one item at a time. Note skipped content, unexpected jumps, or labels that do not match what you see.

    Next, do your structural passes with the Rotor. Headings give you a quick sense of hierarchy. Landmarks show whether regions are exposed and used correctly. Links and form controls expose labeling patterns and repetition.

    Finally, test a key journey. Pick a task that matters and complete it with no mouse. That might be searching, opening navigation, filling out a form, or moving through a checkout step. If focus jumps, if a dialog opens without being announced, or if you cannot reach a control, it is not a minor problem. It is a broken path for many users.

    Along the way, watch for patterns. Maybe icon buttons from one component set often lack names. Maybe form errors rarely announce. Patterns are the difference between fixing one page and improving the whole system.

    High-Impact VoiceOver Checks for Forms, Menus, and Modals

    Some parts of a site deserve more focused time because they carry more weight for users and for the business.

    Forms and inputs

    Fields should have clear labels, including required fields and fields with special formats. Error messages need to be announced at the right time, and focus should move to a helpful place when validation fails. If the error appears visually but VoiceOver does not announce it, users may never know what went wrong.
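    One common pattern that covers both the connection and the announcement looks like this; the field name and message are hypothetical:

    ```html
    <label for="card-number">Card number</label>
    <input id="card-number" inputmode="numeric"
           aria-invalid="true" aria-describedby="card-number-error">

    <!-- role="alert" causes the message to be announced at the moment it
         is inserted into the page; aria-describedby also makes it part of
         the field's description when focus returns to the input. -->
    <p id="card-number-error" role="alert">
      Enter the 16-digit number printed on your card.
    </p>
    ```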

    Navigation menus and drawers

    Menus should announce when they open or close. Focus should shift into them when they appear and return to a sensible point when dismissed. A menu that opens visually but stays silent is a common failure.
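    The announcement usually comes from an aria-expanded attribute on the trigger, kept in sync by script. A minimal sketch, with illustrative ids:

    ```html
    <button id="menu-button" aria-expanded="false" aria-controls="site-menu">
      Menu
    </button>
    <ul id="site-menu" hidden>
      <li><a href="/about">About</a></li>
    </ul>

    <script>
      // Toggling aria-expanded is what the screen reader reads back: the
      // button is announced with its expanded or collapsed state.
      const button = document.getElementById('menu-button');
      const menu = document.getElementById('site-menu');
      button.addEventListener('click', () => {
        const open = button.getAttribute('aria-expanded') === 'true';
        button.setAttribute('aria-expanded', String(!open));
        menu.hidden = open;
      });
    </script>
    ```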

    Modals and other overlays

    Modals should trap focus while active and hand focus back cleanly when they close. Users should not be able to tab into page content behind the modal. The modal should also have a clear label so it is announced as a meaningful dialog, not just “group.”
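    The native dialog element provides much of this behavior without a custom focus trap; a sketch, with hypothetical ids and content:

    ```html
    <button id="open-checkout">Review order</button>

    <dialog id="checkout-dialog" aria-labelledby="checkout-title">
      <h2 id="checkout-title">Confirm your order</h2>
      <button id="close-checkout">Close</button>
    </dialog>

    <script>
      // In current browsers, showModal() moves focus into the dialog,
      // makes the page behind it inert so Tab cannot reach it, and lets
      // the dialog be announced by its aria-labelledby label. Closing
      // returns focus to the element that opened it.
      const dialog = document.getElementById('checkout-dialog');
      document.getElementById('open-checkout')
        .addEventListener('click', () => dialog.showModal());
      document.getElementById('close-checkout')
        .addEventListener('click', () => dialog.close());
    </script>
    ```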

    Status updates and confirmation messages

    Loading indicators, success messages, and alerts should be announced without forcing users to hunt for them. If an action completes and nothing is announced, users are left guessing.
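    These announcements typically come from a live region that already exists in the page before the update happens. A minimal sketch; the id and message are hypothetical:

    ```html
    <!-- The region must be present (and empty) at load. Screen readers
         announce whatever text is later injected, without a focus change. -->
    <div id="save-status" role="status"></div>

    <script>
      // role="status" implies aria-live="polite", so the message is
      // spoken after current speech finishes rather than interrupting it.
      function announceSaved() {
        document.getElementById('save-status').textContent = 'Changes saved.';
      }
    </script>
    ```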

    Tables and structured data

    If you use data tables, confirm that headers are associated properly so VoiceOver can announce row and column context as you move. Tables with merged headers or empty corner cells can cause confusing output, so they are worth extra attention.
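    Header association is usually a matter of th elements with scope attributes. A small example with illustrative data:

    ```html
    <table>
      <caption>Quarterly revenue by region</caption>
      <thead>
        <tr>
          <th scope="col">Region</th>
          <th scope="col">Q1</th>
          <th scope="col">Q2</th>
        </tr>
      </thead>
      <tbody>
        <!-- scope="row" lets the screen reader announce the row header
             together with the column header as you move between cells. -->
        <tr>
          <th scope="row">North</th>
          <td>$10,000</td>
          <td>$12,500</td>
        </tr>
      </tbody>
    </table>
    ```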

    These areas tend to reveal focus and naming issues quickly, especially when the UI is built from custom components.

    VoiceOver Bug Notes That Lead to Faster Fixes

    Bringing screen reader testing into your workflow is one of those small shifts that pays off quickly. It helps you catch problems earlier, tighten how your components behave, and ship experiences that hold up under real use.

    To keep it sustainable, capture findings in a way that leads to fixes instead of debate:

    • Record the page and the component or flow.
    • List the steps you took.
    • Note what you expected to hear.
    • Note what you actually heard.
    • Explain the impact on completing the task.
    • Suggest a direction, such as a missing label, an incorrect role, or unmanaged focus.

    Short, consistent notes are usually enough. Over time, your team will build a shared understanding of what “sounds right” and what needs cleanup.

    Make VoiceOver Checks Part of Your Release Routine

    VoiceOver testing can feel slow at first, and that is normal. You are training your ear to notice things your eyes will always gloss over. Keep the scope small, stay consistent, and let the habit build. A quick pass on one key flow each release is enough to change how teams design, label, and manage focus over time. And when you hit a pattern that keeps coming back, that is usually a sign the fix belongs in your shared components, not in one-off patches.

    If you want help making this a repeatable part of how you ship, 216digital can support you. We work with teams to weave WCAG 2.1 into their development roadmap in a way that fits their product, pace, and constraints. Schedule a complimentary ADA Strategy Briefing, and we’ll talk through what you’re building, where VoiceOver testing should live, and the next changes that will have the most impact.

    Kayla Laganiere

    February 11, 2026
    How-to Guides
    Accessibility, assistive technology, How-to, screen readers, VoiceOver, Web Accessibility, Website Accessibility
  • Why Accessibility Costs Feel So Unpredictable

    Budgeting for accessibility often feels like a moving target. One release looks manageable. Then a new feature ships, a vendor tool updates without warning, or a marketing push adds a batch of new pages. The estimate shifts, and the same question returns in planning discussions:

    What is this going to cost us, and for how long?

    It’s a fair question. Teams ask it often because they want to make good decisions in a fast-changing environment. Accessibility work involves design, development, content, QA, compliance, and third-party tools. When so many areas are involved, the budget spreads out too. 

    A web accessibility solution should simplify management and help shift from reactive spending to a steady, predictable investment over time.

    Below, we examine why accessibility costs fluctuate, identify sources of hidden work, and outline how to build a stable budgeting model as your digital projects expand.

    Why Web Accessibility Solutions Are Hard to Budget

    Accessibility touches almost everything. Design decisions affect keyboard flow. Development choices affect code structure, semantics, and interaction. Content affects readability, structure, media, and documents. QA needs to test more than visual layouts. Legal and compliance teams need confidence. Purchasing and vendor selection may bring in third-party tools and platforms that introduce their own barriers.

    When work spreads across that many functions, costs spread too. That is one reason budgeting feels unclear. There is rarely a single owner and rarely a single budget line.

    Why Web Accessibility Costs Change After Every Release

    Another reason is that digital products change constantly. Even “simple” sites evolve. New pages are published. Navigation gets adjusted. Features roll out. A CMS update changes templates. A new integration appears in a checkout flow. The scope is always moving.

    This makes one-time estimates unreliable. You can budget for current needs, but must also account for future changes.

    Standards do not make this easier. Standards such as Web Content Accessibility Guidelines (WCAG) and the Americans with Disabilities Act (ADA) describe outcomes and expectations. They do not tell you how long it will take to get there for your specific codebase, content, or workflows.

    Teams still have to turn those requirements into tasks, timelines, and costs while the product continues to evolve.

    Then there is technical debt. Older templates, inherited components, and CMS constraints require extra effort. Addressing accessibility often involves revisiting past decisions made by previous teams, adding to the overall cost.

    This is why treating web accessibility as a one-time project with a fixed end date can feel unpredictable. A web accessibility solution functions best as an ongoing quality process.

    Why Accessibility Audits Don’t Create a “Done” Moment

    Many organizations begin with an audit because it seems like the responsible, structured path: understand the issues, prioritize, fix, and retest. That approach is valid.

    However, the audit model often creates a false sense of completion.

    An audit provides only a snapshot. Even if thorough, it reflects a single moment while the product continues to change. Content updates, frequent releases, third-party widget changes, and redesigns can all occur before fixes are implemented, making the environment different by the time changes are made.

    The other challenge is scale. Audits often test representative pages or flows. Fixes still need to be applied across the full system. If a component is used in dozens of places, a single issue can become a cross-site effort. That can surprise teams who assumed the audit list was “the full scope.”

    Then comes retesting. Retesting confirms progress, but it can also reveal new issues, issues coming back, or missed patterns. Leaders request final numbers, and teams struggle to provide accurate answers without overcommitting.

    This is when budgets for a web accessibility solution begin to feel open-ended. The work continues not because it is endless, but because the product is always evolving.

    A web accessibility solution should reflect this reality and avoid relying on repeated, unexpected cycles as the primary approach.

    Hidden Web Accessibility Costs Teams Don’t Budget For

    Proposals often focus on deliverables such as testing, reporting, remediation guidance, and final checks. Those are real costs. Yet the biggest budget surprises are often internal and operational.

    Internal Time Costs: Development, Design, QA, and Product

    Accessibility work competes with other priorities. Developers may need to refactor shared components instead of building new features. Designers may need to adjust patterns that are already in production. QA adds accessibility checks alongside functional and performance testing. Product teams spend time triaging, prioritizing, and coordinating.

    This time matters because it is part of the true cost, even when it is not tracked as a separate line item.

    Training Costs: Building Accessible Patterns

    Teams do not become fluent overnight. Even experienced developers may need time to align on accessible patterns for modals, menus, focus management, form errors, and complex UI states. Content teams may need guidance on headings, link writing, media alternatives, and document remediation. Designers may need stronger guardrails for contrast, typography, and interaction states.

    Without training and shared conventions, teams spend more time on trial and error. That time becomes cost.

    Content and Marketing Costs: Pages, PDFs, Media

    Accessibility affects how content is created and published. PDFs and marketing assets may require remediation. Videos need captions. Images need meaningful alternatives. Campaign pages need structure checks. Email templates need review. In many organizations, these are high-volume workflows. Small changes multiplied by volume produce major effort.

    Testing Costs: Beyond Automated Scans

    Accessibility testing is not only automated scanning. Teams need keyboard checks, screen reader testing, mobile testing, and coverage for the access tools people use. Some build internal test environments. Others rely on external partners. Either way, expanding testing adds cost, and it often grows as the product grows.

    Design Systems and Ownership: Preventing Repeat Fixes

    If components are not accessible by default, every new feature inherits the same problems. Fixing a design system can feel expensive, but it is often the move that reduces future costs the most. When core components are solid, teams stop paying repeatedly for the same fixes.

    That prevention only holds up if teams also have clear ownership and a process that fits day-to-day work. Someone has to define what “done” means in design, development, QA, and content, monitor barriers, and maintain documentation as the product evolves. When ownership and process are missing, budgets get hit through backtracking and rework.

    These hidden costs are why accessibility can feel unpredictable even when a vendor quote looks straightforward. The quote may be accurate for what it covers. The total effort is simply larger than what is priced.

    How to Make Web Accessibility Costs More Predictable

    Accessibility costs become more predictable when accessibility becomes a capability rather than a cleanup.

    A cleanup mindset says, “We will fix everything and move on.”

    A capability mindset says, “We will build a way of working that keeps this accessible as we continue to ship.”

    This shift is practical. It changes what you budget for.

    Instead of budgeting only for audits and remediation sprints, you budget for:

    • Clear ownership and decision-making
    • Accessibility review steps in design and engineering workflows
    • Content checks that fit into publishing
    • Ongoing testing and regression prevention
    • Access to expertise for complex issues
    • Monitoring so that problems are caught early

    When these pieces exist, the work becomes less reactive. You still fix issues. You also prevent a large portion of them from reappearing.

    A web accessibility solution should support capability-building. That is what changes the cost pattern over time.

    How to Build a Predictable Web Accessibility Budget

    A useful budget is not a perfect estimate. It is a plan that stays stable under change.

    What Drives Web Accessibility Budget Size

    Your footprint, level of exposure, and complexity matter more than revenue alone. Cost tends to rise with:

    • Multiple properties or platforms (web, mobile apps, portals)
    • High-complexity flows (checkout, enrollment, account management)
    • Heavy customization and custom components
    • Large content libraries, especially documents
    • Higher regulatory exposure and public visibility
    • Frequent releases and rapid iteration cycles

    Two organizations can look similar from the outside and still have very different accessibility needs because their digital footprints are shaped differently.

    A Layered Budget Model: Foundation, Operations, Growth

    Layered planning is often more realistic than a single budget line. A helpful model is:

    Foundation layer

    Initial assessments, prioritization, key component and design system improvements, and the first wave of high-impact remediation.

    Operational layer

    Ongoing monitoring, regression checks, advisory support, periodic confirmation testing, and workflow integration.

    Growth layer

    New launches, redesigns, migrations, new vendors, and major feature initiatives.

    This structure makes it easier to explain why costs shift from year to year and where predictability comes from.

    Budgeting Models That Work for How You Ship

    Common models that create clarity:

    • Fixed annual or quarterly allocation for accessibility, similar to security or compliance
    • A hybrid approach, where you invest more in year one, then shift into predictable maintenance
    • Embedded budgeting, where each release dedicates a percentage of effort to accessible implementation and QA

    The “right” model is the one that aligns with how you ship work. Teams that release frequently typically benefit from embedded budgeting. Teams with major planned cycles may prefer hybrid planning.

    Using Ballparks Without Overpromising

    Some organizations set accessibility investment as a consistent slice of digital operations or compliance budgets. The exact number varies, but consistency often matters more than size. A smaller, steady investment can outperform a larger, sporadic one because it reduces emergency work.

    Year one often includes more discovery, remediation, and training. Year two and beyond often shift toward prevention, monitoring, and incremental improvement, which is usually easier to forecast.

    How to Talk About Web Accessibility Budgets Internally

    Accessibility budgeting is also a communication challenge. The goal is to reduce fear and increase clarity.

    Bring finance and leadership in early, before emergency requests begin. Position accessibility as part of predictable risk management and product quality, not a surprise “extra.”

    Shift the conversation away from a single big number. Many leaders ask, “What does accessibility cost?” The more helpful question is, “What do we invest each year to keep this healthy?”

    Align teams on where accessibility work lives. Product, design, development, QA, and content all influence outcomes. When each group understands its role, the cost stops looking random.

    Use language that suits stakeholders:

    • Executives: predictability, brand trust, risk management
    • Product and marketing: enhanced experience, expanded audience, fewer rebuilds
    • Engineering: cleaner systems, fewer issues coming back, reduced firefighting

    Also, define phased goals. Many organizations start with critical paths such as checkout, sign-ups, key forms, and account access. That reduces risk while keeping progress realistic.

    Conclusion: Predictable Costs Come From Process

    Accessibility stays manageable when teams treat it as part of the normal flow of updates, not a separate effort that only surfaces during audits or emergencies. Every new page, feature, template adjustment, or vendor change creates an opportunity to keep progress intact. When those moments get consistent attention, accessibility stops swinging between big pushes and surprise fixes.

    Monitoring plays a major role in that stability. Catching issues early keeps the effort small. Staying aligned during design, development, and content updates prevents the same problems from returning. Over time, this consistency is what makes budgets predictable and progress dependable.

    If you want help building that kind of steady foundation, 216digital is here to support you. Schedule a complimentary ADA Strategy Briefing to talk through your goals, understand your current setup, and map out practical steps that fit the way your team works. 

    Greg McNeil

    January 28, 2026
    Testing & Remediation
    Accessibility, Accessibility Remediation, Accessibility testing, cost, WCAG, Web Accessibility, Web Accessibility Remediation, Website Accessibility
  • Who’s Legally Responsible for Web Accessibility—You or Your Client?

    Accessibility is now a standard part of online business. That is progress. It also brings a harder question: what happens when the work gets challenged? When a demand letter or lawsuit shows up, who is responsible for web accessibility in a legal sense—the agency managing the site, or the organization that owns it?

    In most U.S. disputes, the website owner is usually the first party named. The Americans with Disabilities Act (ADA) generally places the duty on the covered entity providing the goods, services, or programs, even when access happens through a website or app. 

    But that does not mean agencies and contractors are not exposed. Vendors often enter these disputes through contract language, representations, and third-party claims after the client is sued. In some public-sector contexts, particularly in California, plaintiffs have shown a willingness to pull contractors closer to the center of the dispute.

    This article breaks down who gets held accountable, why vendors still face risk, and how courts tend to evaluate who is responsible for web accessibility once a claim is active.

    Who Is Responsible for Web Accessibility Under the ADA?

    When people ask, “Who is legally responsible?” they are often asking more than one question. One is procedural: who gets named first. The other is substantive: who the law places the duty on.

    In most disputes, the first answer is the website owner—the organization offering the public-facing service. The second answer typically points to the same place. The ADA generally ties the obligation to the covered entity providing the goods, services, or programs, including when access happens through a website or app. DOJ guidance is aimed at public-facing businesses and at state and local governments, reinforcing that expectation.

    For private-sector teams, this is the practical baseline. Title III risk typically follows the business offering the goods or services, not the agency building the site. The claim is about access to what the business provides online, so the owner is the party most likely to be named first.

    Public-sector requirements can be more prescriptive, but the structure stays similar. The obligation attaches to the entity delivering the program or service.

    The piece that often gets missed is the next question: who can still be exposed even if they are not named first. That is where vendor risk tends to show up—through contract language, representations, and downstream claims after the client is sued.

    That’s why the key question becomes not only who is named first, but how the record determines who is responsible for web accessibility once a claim is active.

    How Vendors Get Pulled In

    Even when the website owner is the primary legal target, vendors can still get pulled in. Most of the time, it happens through three documentation-driven channels.

    Contract Allocation

    The agreement can shape the dispute before it starts. Accessibility scope, testing language, warranties, exclusions, and post-launch responsibility influence whether the vendor is treated as having assumed obligations—or whether the client remains clearly responsible for web accessibility after launch.

    Third-Party Claims

    After an owner is sued, it may try to shift costs to a platform, developer, or agency through indemnity, contribution, breach of contract, or misrepresentation theories. At that point, the question is not “Is this a Title III claim?” It is “What did the vendor promise, and can the client point to it?” That record can influence how a court views who is responsible for web accessibility obligations in practice.

    Evidence and Expectations

    Proposals, SOWs, marketing pages, emails, and tickets become the record of what was represented, scoped, and delivered. In a dispute, that record can carry as much weight as the implementation itself—and it can shape arguments about who is responsible for web accessibility when expectations and outcomes don’t match.

    When an Access Claim Becomes a Contract Dispute

    A recent example shows how quickly an accessibility dispute can shift into contract territory. In Herrera v. Grove Bay Hospitality Group, LLC, after an accessibility claim, a restaurant tried to bring its website platform into the case through a third-party complaint. The court dismissed it, relying heavily on the platform’s terms, including disclaimers of ADA compliance obligations and warranties that the services would satisfy legal requirements.

    Two takeaways for agencies and platforms:

    1. Adding a vendor is not automatic. A viable legal theory still has to survive the contract language.
    2. Courts focus on what the vendor actually assumed. If sales or scope language implies “we guarantee compliance,” you may be taking on obligations your delivery model cannot reliably support.

    That is why “ADA compliant” is a risky marketing phrase unless it is tied to a defined benchmark, a defined scope, and defensible evidence. Otherwise, it can muddy who is responsible for web accessibility when a claim tests the work.

    Responsibility Depends on the Legal Pathway

    A useful way to answer the responsibility question is to separate the underlying access claim from vendor exposure.

    The underlying ADA-style access claim typically targets the entity providing the service, the owner or operator. Vendor exposure usually flows from contracts and promises—and in some contexts, from specialized theories tied to government contracting and representations.

    That distinction matters because it changes what “responsible” means in practice. Vendors do not control whether the ADA applies to a client. You are deciding what you will commit to in writing, what you will represent, and what you can prove you delivered—especially if you later need to show how responsibility was assigned and who is responsible for web accessibility in each phase of the work.

    Define Responsibility in Contracts

    The most effective way to avoid conflict is to define responsibility early and document it in the agreement. Disputes rarely come from bad intent. They come from unclear scope and assumptions that never made it into writing.

    From a risk standpoint, agencies and vendors tend to get squeezed in two predictable ways. Both usually come back to how accessibility is described in the agreement and how the agreement answers who is responsible for web accessibility over time.

    Two Contract Traps

    A Promise Without a Standard

    If you say “ADA compliant” without defining the benchmark, you invite a fight over what you meant. If you promise accessibility outcomes, tie them to a named standard and a defined target.

    A Standard Without Coverage

    Even when WCAG is named, disputes flare up when the scope is unclear. The question becomes what WCAG applies to in this engagement. For example, does it include PDFs, third-party tools, user-generated content, post-launch edits, or new templates and features?

    In disputes, this often turns on whether the vendor assumed a duty, and whether the agreement supports the boundaries the vendor intended. That record often becomes the practical answer to who is responsible for web accessibility when the site evolves beyond the original scope.

    Websites change. Multiple parties touch the system. Your agreement should reflect that reality instead of treating accessibility as a one-time deliverable.

    What to Clarify in Contracts and SOWs

    Strong agreements spell out the standard, the testing approach, the boundaries, and the handoff so both sides can execute the work and defend what was done if questions come up later—especially when someone asks who is responsible for web accessibility after launch.

    Standard

    Identify what accessibility standard is being followed, for example, WCAG 2.1 AA, and clarify whether it applies to all templates, components, and flows, or only to defined pages.

    Testing and Evidence

    State what methods are included—automated, manual, and assistive tech review—and what proof is delivered, such as issue logs, remediation notes, sign-off steps, and before-and-after documentation.

    Boundaries

    Spell out what is out of scope, such as third-party tools, PDFs, legacy pages, and user-generated content. If content remediation is included, define which content types or volumes are covered, so it is not left to interpretation later.

    Post-Launch Ownership

    Clarify who owns accessibility after launch, what that means in practice, and how post-launch edits, new features, and template changes are handled. This is often where teams lose alignment on who is responsible for web accessibility.

    Ongoing Support

    Describe what ongoing support looks like, such as regression monitoring, periodic audits, or training, and how issues are triaged over time, including workflow, escalation, and response expectations.

    When contracts define the standard, the coverage, and the proof, they give both sides a shared operating model that still works months later, after the site has changed and the original project team has rotated.

    Sales Language Can Expand Risk

    Contracts are only part of the picture. When owners try to bring vendors into a dispute, the evidence they reach for is often straightforward: proposals, emails, marketing pages, and platform claims.

    If your materials suggest “we guarantee compliance,” “our platform ensures accessibility,” or “you won’t need to worry about WCAG,” you may be creating avoidable exposure. Those statements are easy to quote, easy to misunderstand, and hard to defend without clear deliverables and documentation.

    If your materials imply you are responsible for web accessibility end-to-end, that language can be used to argue you assumed duties beyond the SOW.

    The goal is not to hide behind vague language. It is to use wording that matches what you will actually do, what is in scope, and what you can show when someone asks.

    The Bottom Line: Responsible for Web Accessibility

    So, who’s responsible for web accessibility—you or your client?

    In practice, accessibility holds up when responsibility is documented, transparent, and treated as ongoing. That clarity protects people who rely on accessible digital experiences, strengthens partnerships, and keeps accessibility from becoming a source of conflict instead of progress.

    If you treat accessibility as a one-time deliverable, responsibility will always be contested. If you treat it as an ongoing practice, responsibility becomes manageable—and shared with purpose.

    At 216digital, we can help you build a practical strategy to integrate WCAG 2.1 into your development roadmap—on your terms. If you want a clear plan for aligning ADA expectations, scope, and documentation with real-world delivery, schedule an ADA Strategy Briefing.

    Greg McNeil

    January 26, 2026
    Legal Compliance
    Accessibility, ADA Lawsuit, ADA Lawsuits, agency accessibility solutions, Legal compliance, Web Accessibility, Website Accessibility
  • How to Build Accessible Form Validation and Errors

    A form can succeed or fail based on how predictable its validation and recovery patterns are. When users can’t understand what the form expects or how to correct an issue, the flow breaks down, and small problems turn into dropped sessions and support requests. The reliable approach isn’t complex—it’s consistent: clear expectations, helpful correction paths, and markup that assistive technologies can interpret without ambiguity. That consistency is the foundation of accessible form validation.

    Two ideas keep teams focused:

    • Validation is the contract. These are the rules the system enforces.
    • Error recovery is the experience. This is how you help users get back on track.

    You are not “showing error messages.” You are building a recovery flow that answers three questions every time:

    1. Do users notice there is a problem?
    2. Can they reach the fields that need attention without hunting?
    3. Can they fix issues and resubmit without getting stuck in friction loops?

    Accessible Form Validation Begins on the Server

    Server-side validation is not optional. Client-side code can be disabled, blocked by security policies, broken by script errors, or bypassed by custom clients and direct API calls. The server is the only layer that stays dependable in all of those situations.

    Your baseline should look like this:

    • A <form> element that can submit without JavaScript.
    • A server path that validates and re-renders with field-level errors.
    • Client-side validation layered in as an enhancement for speed and clarity.

    A minimal baseline:

    <form action="/checkout" method="post">
     <!-- fields -->
     <button type="submit">Continue</button>
    </form>

    From there, enhance. Client-side validation is a performance and usability layer (fewer round-trips, faster fixes), but it cannot become the source of truth. In code review terms: if disabling JavaScript makes the form unusable, the architecture is upside down.

    Once submission is resilient, you can prevent a large share of errors by tightening the form itself. This foundation keeps your accessible form validation stable even before you begin adding enhancements.
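    As a sketch of that server path, here is a framework-agnostic validator that returns field-level errors the server can re-render into the page. The field names and rules are illustrative, not taken from a specific framework:

    ```javascript
    // Server-side validation: the single source of truth.
    // Returns { ok, errors } so a failed POST can re-render the form
    // with field-level messages instead of a dead end.
    function validateCheckout(fields) {
      const errors = {};

      if (!/^\d{5}$/.test((fields.postal || "").trim())) {
        errors.postal = "ZIP code must be 5 digits.";
      }
      if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test((fields.email || "").trim())) {
        errors.email = "Enter an email in the format name@example.com.";
      }

      return { ok: Object.keys(errors).length === 0, errors };
    }
    ```

    Client-side code can reuse the same rules for faster feedback, but the server-side result is what decides whether the submission is accepted.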

    Make the Form Hard to Misunderstand

    Many “user errors” are design and implementation gaps with a different label. Prevention starts with intent that is clear during keyboard navigation and screen reader use.

    Labels That Do Real Work

    Every control needs a programmatic label:

    <label for="email">Email address</label>
    <input id="email" name="email" type="email">

    Avoid shifting meaning into placeholders. Placeholders disappear on focus and do not behave as labels for assistive technology. If a field is required or has a format expectation, surface that information where it will be encountered during navigation:

    <label for="postal">
     ZIP code <span class="required">(required, 5 digits)</span>
    </label>
    <input id="postal" name="postal" inputmode="numeric">

    This supports basic expectations in WCAG 3.3.2 (Labels or Instructions): users can understand what is needed before they submit.

    Group Related Inputs

    For radio groups, checkbox groups, or multi-part questions, use fieldset + legend so the “question + options” relationship is explicit:

    <fieldset>
     <legend>Contact preference</legend>
    
     <label>
       <input type="radio" name="contact" value="email">
       Email
     </label>
    
     <label>
       <input type="radio" name="contact" value="sms">
       Text message
     </label>
    </fieldset>

    This prevents the common failure where options are read out as a scattered list with no shared context. Screen reader users hear the question and the choices as one unit.

    Use the Platform

    Choose appropriate input types (email, tel, number, date) to use built-in browser behavior and reduce formatting burden. Normalize on the server instead of making users guess the system’s internal rules:

    • Strip spaces and dashes from phone numbers.
    • Accept both 12345-6789 and 123456789, but store one format consistently.
    • Accept lowercase, uppercase, and mixed-case email addresses; normalize to lowercase.

    The more variation you handle server-side, the fewer opaque errors users see.
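    Those normalization rules can be captured in small server-side helpers (the function names are illustrative):

    ```javascript
    // Normalize user input server-side so users aren't forced
    // to guess the system's internal formats.

    function normalizePhone(raw) {
      // Strip spaces, dashes, dots, and parentheses.
      return raw.replace(/[\s\-().]/g, "");
    }

    function normalizeZip(raw) {
      // Keep digits only; accepts both ZIP and ZIP+4 input.
      return raw.replace(/\D/g, "");
    }

    function normalizeEmail(raw) {
      return raw.trim().toLowerCase();
    }
    ```

    Run these before validation so format variations never surface as errors at all.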

    Don’t Hide Labels Casually

    “Visual-only” placeholders and icon-only fields might look clean in a mock-up, but they:

    • Remove a click/tap target that users rely on.
    • Make it harder for screen reader users to understand the field.
    • Lead to guessing when someone returns to a field later.

    If you absolutely must visually hide a label, use a visually-hidden technique that keeps it in the accessibility tree and preserves the click target.
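    One widely used version of that technique (often named `.visually-hidden` or `.sr-only`) removes the element from visual layout while keeping it in the accessibility tree:

    ```css
    /* Hides content visually but keeps it available to screen readers.
       Do not use display: none or visibility: hidden here, since both
       remove the element from assistive technology output as well. */
    .visually-hidden {
      position: absolute;
      width: 1px;
      height: 1px;
      padding: 0;
      margin: -1px;
      overflow: hidden;
      clip: rect(0 0 0 0);
      white-space: nowrap;
      border: 0;
    }
    ```

    A `<label class="visually-hidden" for="search">Search</label>` styled this way still names the input and keeps the label/field association intact.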

    You’ll still have errors, of course—but now they’re about the user’s input, not your form’s ambiguity.

    Write Error Messages That Move Someone Forward

    An error state is only useful if it helps someone correct the problem. Rules that hold up well in production:

    • Describe the problem in text, not just color or icons.
    • Whenever possible, include instructions for fixing it.

    Instead of: Invalid input

    Try: ZIP code must be 5 digits.

    Instead of: Enter a valid email

    Try: Enter an email in the format name@example.com.

    A practical markup pattern is a reusable message container per field:

    <label for="postal">ZIP code</label>
    <input id="postal" name="postal" inputmode="numeric">
    <p id="postalHint" class="hint" hidden>
     ZIP code must be 5 digits.
    </p>

    When invalid, show the message and mark the control:

    <input id="postal"
          name="postal"
          inputmode="numeric"
          aria-invalid="true"
          aria-describedby="postalHint">

    Visually, use styling to reinforce the error state. Semantically, the combination of text and state is what makes it usable across assistive technologies. Clear, actionable messages are one of the most reliable anchors of accessible form validation, especially when fields depend on precise formats. 

    With messages in place, your next decision is the presentation pattern.

    Pick an Error Pattern That Matches the Form

    There is no universal “best” pattern. The decision should reflect how many errors are likely, how long the form is, and how users move through it. Choosing the right pattern is one of the most important decisions in accessible form validation because it shapes how people recover from mistakes.

    Pattern A: Alert, Then Focus (Serial Fixing)

    Best for short forms (login, simple contact form) where one issue at a time makes sense.

    High-level behavior:

    1. On submit, validate.
    2. If there’s an error, announce it in a live region.
    3. Mark the field as invalid and move focus there.

    Example (simplified login form):

    <form id="login" action="/login" method="post" novalidate>
     <label for="username">Username</label>
     <input id="username" name="username" type="text">
     <div id="usernameHint" class="hint" hidden>
       Username is required.
     </div>
    
     <label for="password">Password</label>
     <input id="password" name="password" type="password">
     <div id="passwordHint" class="hint" hidden>
       Password is required.
     </div>
    
     <div id="message" aria-live="assertive"></div>
    
     <button type="submit">Sign in</button>
    </form>
    
    <script>
     const form = document.getElementById("login");
     const live = document.getElementById("message");
    
     function invalidate(fieldId, hintId, announcement) {
       const field = document.getElementById(fieldId);
       const hint = document.getElementById(hintId);
    
       hint.hidden = false;
       field.setAttribute("aria-invalid", "true");
       field.setAttribute("aria-describedby", hintId);
    
       live.textContent = announcement;
       field.focus();
     }
    
     function reset(fieldId, hintId) {
       const field = document.getElementById(fieldId);
       const hint = document.getElementById(hintId);
    
       hint.hidden = true;
       field.removeAttribute("aria-invalid");
       field.removeAttribute("aria-describedby");
     }
    
     form.addEventListener("submit", (event) => {
       reset("username", "usernameHint");
       reset("password", "passwordHint");
       live.textContent = "";
    
       const username = document.getElementById("username").value.trim();
       const password = document.getElementById("password").value;
    
       if (!username) {
         event.preventDefault();
         invalidate("username", "usernameHint",
           "Your form has errors. Username is required.");
         return;
       }
    
       if (!password) {
         event.preventDefault();
         invalidate("password", "passwordHint",
           "Your form has errors. Password is required.");
         return;
       }
     });
    </script>

    Tradeoff: On longer forms, this can feel like “whack-a-mole” as you bounce from one error to the next.

    Pattern B: Summary at the Top (Errors on Top)

    Best when multiple fields can fail at once (checkout, account, applications). Behavior:

    1. Validate all fields on submit.
    2. Build a summary with links to each failing field.
    3. Move the focus to the summary.

    This reduces scanning and gives users a clear plan. It also mirrors how many people naturally work through a list: top to bottom, one item at a time. When built with proper linking and focus, this supports WCAG 2.4.3 (Focus Order) and 3.3.1 (Error Identification).
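    As a minimal sketch of the summary itself, here is a builder that turns a list of errors into linked markup. The error shape (`fieldId`, `message`) is a hypothetical convention, not a standard API:

    ```javascript
    // Build an "errors on top" summary as an HTML string.
    // Each list item links to the failing field so users can jump
    // straight to it; the container is focusable for the post-submit
    // focus move.
    function buildErrorSummary(errors) {
      const items = errors
        .map((e) => `<li><a href="#${e.fieldId}">${e.message}</a></li>`)
        .join("");
      const count =
        errors.length === 1 ? "is 1 problem" : `are ${errors.length} problems`;
      return (
        `<div class="error-summary" role="alert" tabindex="-1">` +
        `<h2>There ${count} with your submission</h2>` +
        `<ul>${items}</ul>` +
        `</div>`
      );
    }
    ```

    After injecting this markup, call `.focus()` on the container so keyboard and screen reader users land on the list before anything else.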

    Pattern C: Inline Errors

    Best for keeping the problem and the fix in the same visual area. Behavior:

    • Show errors next to the relevant control.
    • Associate them programmatically with aria-describedby (or aria-errormessage) and mark invalid state.

    On its own, inline-only can be hard to scan on long forms. The sweet spot for accessible form validation is often:

    Summary + inline

    A summary for orientation, inline hints for precision.

    Make Errors Machine-Readable: State, Relationships, Announcements

    Recovery patterns only help if assistive technology can detect what changed and why. This pattern also matches key WCAG form requirements, which call for clear states, programmatic relationships, and perceivable status updates.

    1) State: Mark Invalid Fields

    Use aria-invalid="true" for failing controls so screen readers announce “invalid” on focus. This gives immediate feedback without extra navigation.

    2) Relationships: Connect Fields to Messages

    Use aria-describedby (or aria-errormessage) so the error text is read when the user reaches the field. If a field already has help text, append the error ID rather than overwriting it. This is a common regression point in component refactors.

    <input id="email"
          name="email"
          type="email"
          aria-describedby="emailHelp emailHint">

    This approach makes sure users hear both the help and the error instead of losing one when the other is added.
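    The "append, don't overwrite" rule can be enforced with a small helper that merges IDs (the name is illustrative):

    ```javascript
    // Merge an error message ID into an existing aria-describedby value
    // without clobbering help text that is already referenced.
    function appendDescribedBy(existing, errorId) {
      const ids = (existing || "").split(/\s+/).filter(Boolean);
      if (!ids.includes(errorId)) ids.push(errorId);
      return ids.join(" ");
    }
    ```

    Calling `field.setAttribute("aria-describedby", appendDescribedBy(field.getAttribute("aria-describedby"), "emailHint"))` keeps both the help text and the error attached, and is safe to call repeatedly.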

    3) Announcements: Form-Level Status

    Use a live region to announce that submission failed without forcing a focus jump just to discover that something went wrong:

    <div id="formStatus" aria-live="assertive"></div>

    Then, on submit, set text like: “Your form has errors. Please review the list of problems.”

    Someone using a screen reader does not have to guess whether the form was submitted, failed, or refreshed. They hear an immediate status update and can move to the summary or fields as needed.

    Use Client-Side Validation as a Precision Tool (Without Noise)

    Once semantics and recovery are solid, client-side validation can help users move faster—so long as it does not flood them with interruptions.

    Guidelines that tend to hold up in production:

    • Validate on submit as the baseline behavior.
    • Use live checks only when they prevent repeated failures (complex password rules, rate-limited or expensive server checks).
    • Prefer “on blur” or debounced validation instead of firing announcements on every keystroke.
    • Avoid live region chatter. If assistive tech is announcing updates continuously while someone types, the form is competing with the user.
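    One way to keep a live region quiet while someone types is to gate announcements behind a minimum interval. A sketch with an injectable clock (the names are illustrative, and the clock parameter exists only to make the behavior testable):

    ```javascript
    // Returns a function that reports whether it's okay to announce now,
    // enforcing a minimum gap between live region updates.
    function createAnnouncementGate(minGapMs, now = Date.now) {
      let last = -Infinity;
      return function canAnnounce() {
        const t = now();
        if (t - last >= minGapMs) {
          last = t;
          return true;
        }
        return false;
      };
    }
    ```

    Before writing to the live region from a blur or debounced input handler, check `canAnnounce()` and drop or coalesce updates that arrive too fast.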

    Handled this way, accessible form validation supports the person filling the form instead of adding cognitive load.

    Define “Done” Like You Mean It

    For high-stakes submissions (financial, legal, data-modifying), error recovery is not the whole job. Prevention and confirmation matter just as much:

    • Review steps before final commit.
    • Confirmation patterns where changes are hard to reverse.
    • Clear success states that confirm completion.

    Then keep a test plan that fits into your workflow:

    • Keyboard-only: complete → submit → land on errors → fix → resubmit.
    • Screen reader spot check: “invalid” is exposed, error text is read on focus, form-level status is announced.
    • Visual checks: no color-only errors, focus is visible, zoom does not break message association.
    • Regression rule: validation logic changes trigger recovery-flow retesting.

    Teams that fold these checks into their release process see fewer “the form just eats my data” support tickets and have a clearer path when regression bugs surface.

    Bringing WCAG Error Patterns Into Your Production Forms

    When teams treat error recovery as a first-class experience, forms stop feeling like traps. Users see what went wrong, reach the right control without hunting, and complete the process without unnecessary friction. That is what accessible form validation looks like when it is built for production conditions instead of only passing a demo.

    If your team needs clarity on where accessibility should live in your development process, or if responsibility is spread too thinly across roles, a structured strategy can bring confidence and sustainability to your efforts. At 216digital, we help organizations integrate WCAG 2.1 compliance into their development roadmap on terms that fit your goals and resources. Scheduling a complimentary ADA Strategy Briefing gives you a clear view of where responsibility sits today, where risk tends to build, and what it takes to move toward sustainable, development-led accessibility that your teams can maintain over time.

    Greg McNeil

    January 22, 2026
    How-to Guides, Web Design & Development
    Accessibility, forms, How-to, WCAG, Web Accessibility, web developers, web development, Website Accessibility
  • Can User-Generated Content Trigger an ADA Demand Letter?

    Reviews. Comments. Community posts. Q&A threads. Uploaded photos. Customer-submitted listings.

    If your website includes user-generated content (UGC), you already know one truth: your content changes faster than any single QA pass can keep up with. And that’s where a very real business concern kicks in:

    If someone else posts it, does it still fall on you? It’s the question behind a lot of ADA demand letters and accessibility complaints.

    Short answer: It can.

    A longer (and more useful) answer: UGC usually isn’t the only reason a business gets an ADA demand letter, but it can absolutely strengthen a complaint—especially when it blocks participation, creates barriers on high-traffic pages, or shows up inside key buying or support experiences.

    Before we go further: this article is informational, not legal advice. If you receive a demand letter or legal threat, it’s smart to involve qualified counsel.

    Now let’s unpack the real issue behind the question: the risk depends on where UGC appears, how it’s created, and what control your platform gives you.

    What User-Generated Content Is and Where It Shows Up

    User-generated content is anything your users create instead of your internal team. That includes reviews, comments, forum posts, Q&A answers, uploaded photos, listings, profiles, and all the little pieces people add to your site as they interact with it.

    But here’s the part that catches most teams off guard:

    UGC isn’t one thing. And it definitely doesn’t behave the same everywhere it appears.

    Some UGC sits inside clean templates and barely shifts. Some shows up as long, free-form posts with images and embeds. And some becomes full pages that search engines crawl and customers rely on. Each of those patterns carries a different level of accessibility exposure.

    Light UGC Explained

    Light UGC is the easiest to keep stable. Think short reviews, star ratings, or a simple comment thread. These usually live inside structured components, so the content itself doesn’t wander too far.

    What does wander?

    The widgets around it.

    A star-rating tool that doesn’t work with a keyboard, a comment form missing labels, or a “load more” button with no name—all of that can cause more trouble than the content itself ever would.

    Rich UGC and Accessibility

    Rich UGC gives users more room to shape the experience. That includes long posts, formatted text, uploaded images, embedded videos, and external links.

    Freedom is great for expression. It’s less great for predictability.

    One user uploads an image with text baked in. Another pastes an embed that traps keyboard focus. Someone else writes a long post using bold text as fake headings. Suddenly, your page has structure on the surface but not in the markup where assistive tech looks for it.

    Structural UGC and Exposure

    Structural UGC is where things move from “content” to “actual pages.” These are profiles, listings, job posts, directory entries, marketplace items—anything that functions like a standalone page and attracts search traffic.

    This is the kind of UGC that matters most for accessibility risk because it doesn’t sit quietly in a small section. It becomes part of the paths people use to make choices, complete tasks, or decide whether your product fits their needs.

    When structural user-generated content is inconsistent or hard to navigate, the impact shows up fast.

    Where User-Generated Content Lives (and Why Placement Matters)

    The biggest shift in risk isn’t the content—it’s where the content lands.

    A slightly messy review on a low-traffic page may not change much. But that same review sitting inside a product page with heavy purchase intent? Different story. And the same is true across the site.

    UGC becomes more consequential when it appears in places like:

    • Product pages with reviews, photos, or Q&A
    • Service pages with before/after uploads
    • Location or listing pages that customers rely on to compare options
    • Support threads and community answers that function as your “real” FAQ
    • Profiles or listings that act like landing pages and show up in search results

    All of these are places where people come to decide something. If the UGC in those areas is inaccessible—or if the tools that publish it create predictable failures—that can turn into a barrier for someone trying to participate or complete a task.

    Here’s the part most businesses miss:

    The risk isn’t just the content users post. It’s the system your platform uses to collect it, shape it, and display it.

    The templates, editors, upload flows, moderation tools, and UI patterns are where most preventable accessibility issues start. When those pieces aren’t designed with accessibility in mind, even simple UGC can become part of a complaint.

    And once UGC becomes part of the user journey, it becomes part of the accessibility equation—especially when an ADA demand letter points to barriers on real pages people depend on.

    How User-Generated Content Factors Into Website Accessibility Complaints

    At a high level, here’s what matters:

    ADA Title III focuses on equal access and non-discrimination for goods and services offered by businesses open to the public.

    Even though the ADA doesn’t spell out one single required web standard for private businesses, accessibility claims often point to Web Content Accessibility Guidelines (WCAG) as the measuring stick in practice. WCAG evaluates pages as they are delivered to users—meaning all visible content, including UGC, can contribute to non-conformance.

    And this is where UGC gets tricky.

    Responsibility for UGC Accessibility Issues

    If the content is on your domain, inside your customer journey, and presented as part of your experience, then it functions like part of the service you’re offering.

    That doesn’t mean every single user mistake equals an automatic lawsuit. But it does mean the experience can become a valid complaint when barriers prevent people from completing tasks or participating fully.

    How WCAG Evaluates UGC on Pages

    WCAG conformance is evaluated based on what’s actually on the page. That includes:

    • Content
    • UI components
    • Third-party widgets
    • User-generated content

    WCAG also recognizes the concept of partial conformance when certain issues are truly outside the author’s control. But partial conformance is not a shield you hide behind—it’s a disclosure approach, and a sign you should reduce what’s out of your control wherever possible.

    A useful comparison: the DOJ’s Title II website accessibility factsheet for public entities highlights that third-party content can still be part of a covered service when it’s baked into the experience. Title II and Title III are different, but the principle is instructive:

    If people rely on it to access the service, it needs to be accessible.

    So yes—UGC can increase risk. But it doesn’t do it randomly. It does it in predictable ways tied to control and foreseeability.

    How User-Generated Content Can Create Accessibility Barriers

    Let’s break this into a simple framework you can actually use.

    Barriers From Inaccessible UGC Tools

    This is the category that gets businesses into trouble fastest—because it’s not “user behavior.” It’s your platform UI.

    Examples:

    • Review/comment forms missing labels
    • Error messages that aren’t programmatically connected to fields
    • Star-rating widgets that can’t be used with a keyboard
    • Upload buttons with no accessible name (“button button”)
    • No status updates during upload (screen reader users stuck guessing)
    • Rich text editors that trap focus or don’t announce controls properly
    • Captcha or anti-spam tools that block assistive tech users

    If someone can’t post, submit, edit, or participate because the controls aren’t accessible, that’s a strong accessibility barrier. And it’s directly attributable to the business experience—not the user’s content.

    If your UGC system prevents participation, that can absolutely support an ADA demand letter.

    Foreseeable Accessibility Failures

    This is where many businesses accidentally create “accessibility debt” at scale.

    Examples of foreseeable failures from UGC:

    • Users upload images without providing any alt text.
    • Links labeled only as “click here,” offering no context.
    • Flyers or announcements as images with all the text baked in.
    • Users choose font colors or backgrounds that create contrast failures.
    • Visual formatting—like bold text—to imitate headings instead of using proper structure.
    • Using emojis as bullet points or even headings without adding a text equivalent.

    If a system consistently produces inaccessible pages, it’s hard to argue the issue is “random user behavior.”

    Platform defaults shape outcomes.

    And if the outcome repeatedly blocks access, that’s a foreseeable risk—exactly the kind of pattern that shows up in accessibility complaints.

    When User-Generated Content Stops Key Tasks

    Sometimes, the UGC isn’t “broken” in an obvious way. It’s just in the wrong place.

    When the content shows up in the same high-value areas—product pages, listings, community answers, support information—the stakes rise fast. These are the pages people rely on to make choices, compare options, understand requirements, and get support.

    Even if your official content is accessible, the user journey is what counts.

    If UGC becomes the deciding factor, someone needs to:

    • Choose a product
    • Confirm compatibility
    • Understand sizing or ingredients
    • Access instructions
    • Troubleshoot an issue
    • Get help without calling

    …and if that UGC is inaccessible, it can become part of the access barrier.

    How to Make User-Generated Content Accessible by Design

    This is where accessibility becomes realistic. Because the goal is not to make every user into an expert.

    The goal is to build a system where the easiest path is also the most accessible path.

    Build Accessible UGC Submission Tools

    Treat your UGC publishing experience like a product, not an afterthought.

    At minimum:

    • Every input has a clear label.
    • Keyboard-only users can complete the flow.
    • Focus order is logical.
    • Errors are clear, specific, and programmatically tied to fields.
    • Buttons announce what they do.
    • Upload state changes are announced (progress, success, failure).

    If your creation tools fail, the experience fails—no matter how good your main site is.

    Prompt Users for Necessary Details

    For image uploads, use an alt text prompt that’s friendly and short. For example:

    • A required/optional alt text field depending on context
    • A helper line like: “Describe what matters in this photo.”
    • A checkbox: “This image is decorative” (when appropriate)

    This single prompt eliminates a huge portion of predictable failures.

    And yes—this is your responsibility. Because you control the workflow.
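    A hedged sketch of what that prompt might look like inside the upload form (the markup, field names, and wording are illustrative):

    ```html
    <!-- Alt text prompt attached to an image upload -->
    <label for="photo">Add a photo</label>
    <input id="photo" name="photo" type="file" accept="image/*">

    <label for="photoAlt">Describe what matters in this photo</label>
    <input id="photoAlt" name="photoAlt" type="text"
           aria-describedby="photoAltHelp">
    <p id="photoAltHelp" class="hint">
      A sentence or two is enough. Skip text that repeats your post.
    </p>

    <label>
      <input type="checkbox" name="photoDecorative" value="yes">
      This image is decorative
    </label>
    ```

    On the server, a checked "decorative" box could map to empty alt text (`alt=""`); otherwise, require the description before publishing.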

    Limit Risky Formatting Options

    This one surprises people, but it’s important:

    If users can style content however they want, your site can become non-conforming instantly.

    Practical guardrails:

    • Limit text colors and backgrounds to approved combinations.
    • Block low-contrast combinations automatically.
    • Provide headings/lists as structured tools, not “fake formatting.”
    • Prevent users from creating “headings” by just increasing font size.

    If a page can be made inaccessible through user styling, that’s a platform design decision—not a user obligation.
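
    One way to block low-contrast combinations automatically is to compute the WCAG contrast ratio at submission time and reject anything below the AA threshold. A minimal sketch, using the relative-luminance and contrast-ratio formulas from WCAG 2.x (function names are illustrative):

    ```javascript
    // WCAG relative luminance for a "#rrggbb" hex color
    function luminance(hex) {
      const channel = (i) => {
        const c = parseInt(hex.slice(i, i + 2), 16) / 255;
        // sRGB linearization per WCAG 2.x
        return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
      };
      return 0.2126 * channel(1) + 0.7152 * channel(3) + 0.0722 * channel(5);
    }

    // Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranges from 1 to 21
    function contrastRatio(fg, bg) {
      const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
      return (hi + 0.05) / (lo + 0.05);
    }

    // Guardrail: reject user-chosen colors below WCAG AA for normal-size text
    function meetsAA(fg, bg) {
      return contrastRatio(fg, bg) >= 4.5;
    }
    ```

    For example, meetsAA("#777777", "#ffffff") is false (about 4.48:1), while #767676 on white passes at roughly 4.54:1. A check like this can run before a styled comment or review is saved.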

    Managing Rich Media Accessibility

    If users upload video or audio:

    • Prompt for captions and/or transcripts.
    • Offer a “pending accessibility” state.
    • Add a follow-up workflow to complete accessibility after posting.
    • Provide a clear way to edit and add accessibility later.

    Even a basic process here reduces risk dramatically.

    Maintaining Accessible User-Generated Content Over Time

    Even with solid guardrails, user-generated content keeps moving. New posts show up, trends change, and older content stays live in the places people rely on. So the goal isn’t “fix it once.” It’s keeping the system steady.

    Check the templates that carry the most weight

    Pick a small set of UGC-heavy templates—product pages, listings, support threads, community Q&A—and review them on a regular cadence. If a component update breaks keyboard flow, labels, or focus, you want to catch it before it spreads.

    Give your team a simple playbook.

    Moderators and content teams don’t need to learn WCAG. They just need a short list of patterns to flag, like missing alt text, image-based announcements, unclear link text, or embeds that don’t work with a keyboard.

    Make reporting and fixes easy.

    Add a straightforward way to report accessibility issues, and route those reports to someone who can act. When something needs remediation, start with the least disruptive fix—add text equivalents, correct formatting, or adjust the template so the issue doesn’t keep reappearing.

    At the end of the day, WCAG looks at what’s on the page as delivered. If UGC lives in the experience, it’s part of what users have to work with—so it needs ongoing care.

    Making User-Generated Content a Strength, Not a Liability

    User-generated content will always shift and surprise you a little, and that’s fine. What matters is knowing your site can handle those shifts without creating new barriers every time someone posts a review or uploads a photo. When the basics are solid—the tools, the guardrails, the way you spot issues—you don’t have to brace for impact. Things stay steady.

    If you want help looking at the parts of your site where UGC and accessibility meet, 216digital can walk through those areas with you and point out what will actually make a difference. When you’re ready, schedule an ADA briefing with 216digital and put a clear, manageable plan in place so accessibility stays reliable as your UGC grows.

    Greg McNeil

    January 21, 2026
    Legal Compliance
    Accessibility, Content Creators, Content Writing, Demand Letters, Legal compliance, User-Generated Content, WCAG, Website Accessibility
  • How Empty Buttons Break Accessibility and How to Fix Them

    You’ll run into this sooner or later in accessibility work: everything in a pull request looks fine. Layout sits correctly, icons load, hover states behave, keyboard focus moves. And then you turn on a screen reader, and it announces the same thing over and over:

    “Button. Button. Button.”

    That’s an empty button.

    And yes—teams ship them all the time, even experienced ones.

    They show up across small sites, large platforms, custom apps, and CMS-driven systems. Large-scale audits have found them on a significant share of homepages year after year, which tells you this isn’t a “carelessness” problem. Things look correct visually, so nobody questions whether the button actually exposes a name.

    Once you see how often that gap slips through, it becomes easier to break the issue apart—what an empty button is at the markup level, why teams run into them so frequently, how screen readers respond to them, and the patterns and fixes that prevent them from resurfacing in future work.

    What Empty Buttons Are and Why They Matter

    An empty button is a control that does not expose an accessible name. It might not be “empty” visually—the UI may show an icon, a custom SVG, or a shape styled entirely through CSS. It might animate and look fully functional. None of that changes the fact that assistive technology can’t understand what the control does. Visually complete or not, a button without a name is effectively invisible to anyone relying on that information.

    Common UI Patterns That Produce Empty Buttons

    You’ll see empty buttons come from a handful of predictable patterns:

    • Icon-only controls such as search, hamburger menus, close icons, and carousel arrows
    • Buttons that use CSS background images or pseudo-elements for their visible “content”
    • SVGs inside a button without a usable label or internal text
    • Buttons where text was added through CSS instead of being in the DOM
    • Submit or action inputs that never received a value

    These patterns all have the same underlying problem: nothing meaningful ends up in the accessibility tree. Even experienced teams get caught by this because none of these cases look broken during normal UI review. The button appears complete, so the missing name goes unnoticed.

    Buttons vs Links: Preventing Naming Failures

    A fair number of empty buttons start long before anyone looks at labels—they come from using the wrong element altogether. A control can look like a button in the UI, but under the hood it’s a styled link coming from a CMS. Or a real <button> gets wired up to handle navigation because it was the fastest way to hook into a script. Both choices create avoidable problems.

    The semantic rule is not complicated, and it holds up across every framework and CMS:

    • Links move the user to a new location.
    • Buttons trigger something in the current view.

    Assistive technology depends on that distinction. Screen readers keep separate lists for links and buttons. Keyboard users expect different activation behavior. When the markup doesn’t match the intent, the control becomes unpredictable—especially for anyone not relying on visuals to figure out what’s going on.

    If you’re working in a design system or component library, this is where discipline pays off. Define two primitives that share the same visual design but render the correct element under the hood:

    • A real <button> component for actions
    • A LinkButton (or equivalent) that renders an anchor for navigation

    You keep visual consistency, but the markup stays honest. That reduces confusion for assistive tech and cuts down on empty-button issues that come from mismatched roles or controls that never get a proper name.
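
    Whatever the framework, the markup those two primitives render should come out along these lines (the class name is illustrative):

    ```html
    <!-- Action: a real button, named by its visible text -->
    <button type="button" class="btn">Add to cart</button>

    <!-- Navigation: an anchor styled to match, never a button -->
    <a href="/cart" class="btn">View cart</a>
    ```

    Both share styling through the same class, but screen readers list one under buttons and the other under links, which matches what users expect each control to do.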

    How Screen Readers Handle Empty Buttons

    A screen reader can only announce what a button does if it has a real, programmatic name. When that name is missing, the control becomes indistinguishable from every other unlabeled button in the interface. At that point, the user isn’t navigating—they’re guessing.

    Screen readers follow a strict naming order. They don’t pull from styling, layout, or anything “visual.” They rely on one of these sources, in order of precedence:

    • aria-labelledby
    • aria-label
    • Visible text inside the <button> element
    • Alt text on images inside the button
    • The value on an <input type="submit">

    Anything outside this list won’t create a name.

    That includes:

    • CSS content
    • Pseudo-elements
    • Background images
    • Any text that isn’t actually in the DOM

    If the label isn’t provided through one of the recognized naming mechanisms, assistive technology has nothing to announce. A lot of empty-button issues come from assuming CSS or decorative SVGs can stand in for meaningful text—they can’t.

    When you need to confirm what a screen reader will announce, check the source the same way the browser does. In Chrome or Edge DevTools, open the Accessibility panel and look at Computed Name. If that field is empty, the button has no name for any user relying on a screen reader.
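
    That precedence can be sketched as a small helper. This is not a browser API—the input here is a hypothetical plain object standing in for what the browser extracts from the DOM—but it mirrors the order in which naming sources win (aria-labelledby first, then aria-label, then content):

    ```javascript
    // Sketch of accessible-name precedence for a button.
    // `sources` is a hypothetical plain object, not a real DOM interface.
    function computeName(sources) {
      const candidates = [
        sources.ariaLabelledbyText, // text of the element aria-labelledby points to
        sources.ariaLabel,          // aria-label attribute
        sources.visibleText,        // text content inside the <button>
        sources.imageAlt,           // alt text on an <img> inside the button
        sources.inputValue,         // value on <input type="submit">
      ];
      for (const c of candidates) {
        if (typeof c === "string" && c.trim() !== "") return c.trim();
      }
      return null; // no name: this is an empty button
    }
    ```

    Note that computeName({ visibleText: "  " }) returns null—whitespace alone never names a control, which is exactly the “Button. Button. Button.” failure.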

    How to Find Empty Buttons Before Release

    Catching empty buttons early is far easier than chasing them after a release. A reliable workflow keeps these issues visible long before they reach production, and it doesn’t require heavy tooling—just consistent layers of checks.

    Automated Checks for Empty Buttons

    Start with automated scans. They’re fast and predictable, and empty buttons show up as clear failures in most tools. They’re straightforward to fix and worth addressing immediately, rather than letting them roll into regressions.

    Page-Level Spot Checks in Key UI Areas

    After automation, move through the areas where empty buttons tend to cluster:

    • Header controls such as search, menu, account, and cart
    • Modal and drawer close buttons
    • Carousel next/previous controls
    • Cookie banners with accept or dismiss icons

    In the DOM, look for button elements with no text, empty aria-label values, and input buttons missing a value.

    Keyboard and Assistive Technology Checks

    Keyboard testing surfaces empty buttons quickly. If focus lands on a control and gives you no indication of its purpose, that’s a naming failure. A short screen reader pass will confirm whether the accessible name is missing.

    CMS and Plugin Sources of Empty Buttons

    Many empty buttons come from CMS templates and plugins where the markup isn’t easily changed. When you run into one, narrow it down to one of three paths:

    • Is there a setting that adds or overrides the label?
    • Can the template be patched safely?
    • Does the component need to be replaced altogether?

    This keeps the work focused on concrete fixes instead of guessing where the problem originated.

    Fixing Empty Buttons With Accessible Labels

    Most empty buttons come from repeatable markup patterns, so the fixes follow equally repeatable shapes.

    Adding Visible Text to Name Buttons

    <button type="button">Subscribe</button>

    Providing Labels for Icon-Only Buttons

    <button type="button" aria-label="Close dialog">
     <svg aria-hidden="true" focusable="false"></svg>
    </button>

    Using Screen-Reader-Only Text for Labels

    <button type="button">
     <svg aria-hidden="true" focusable="false"></svg>
     <span class="sr-only">Search</span>
    </button>

    Fixing Input Buttons Missing Values

    <input type="submit" value="Submit" />

    Using Alt Text Correctly on Image Buttons

    <button type="submit">
     <img src="email.svg" alt="Subscribe to newsletter">
    </button>

    All of these preserve a stable, accessible name, so the control keeps working even if styles or icons change later.

    WCAG Requirements Related to Empty Buttons

    When a button has no accessible name, it violates several parts of the Web Content Accessibility Guidelines (WCAG). These requirements ensure that every interactive control can be understood and operated by people using assistive technology. Empty buttons fall short because they provide no text, label, or programmatic name for a screen reader to announce.

    Empty buttons map to several WCAG criteria, most commonly:

    • 4.1.2 Name, Role, Value (A)
    • 1.1.1 Non-text Content (A)
    • 2.4.4 Link Purpose (In Context) (A)

    Tools may categorize them differently, but they’re all pointing at the same failure: the control doesn’t expose a usable name.

    How to Prevent Empty Buttons Long-Term

    Long-term prevention comes from a few small guardrails.

    Setting Standards for Icon and Action Buttons

    If your team builds icon-button components, require a label prop. That forces naming decisions into the place where they’re easiest to enforce.

    Adding Tests to Catch Unnamed Buttons Early

    Add tests that check for unnamed buttons in rendered output. Even a small number of targeted cases can stop regressions before they appear in the UI.

    Reviewing CMS Output for Labeling Support

    When reviewing new plugins or themes, inspect their output. If controls can’t be labeled cleanly, decide early whether they should be configured, patched, or replaced.

    Building Better Accessible Buttons

    Empty buttons are simple to overlook but deeply disruptive for users who rely on assistive technology. When a button can’t introduce itself, the interface stops feeling usable and starts feeling unpredictable.

    From a development standpoint, addressing this isn’t complicated—it just requires intention. Choose the correct semantic element. Provide a name that lives in the DOM. Confirm the accessible name using your tools. And treat CMS and design system choices as part of your front-end architecture, not an afterthought.

    If your team wants support refining button patterns, improving audits, or building accessibility practices that scale, 216digital can help. To map WCAG 2.1 requirements into your actual development workflow, schedule an ADA Strategy Briefing and get guidance tailored to your stack and long-term accessibility plans.

    Greg McNeil

    January 19, 2026
    How-to Guides
    Accessibility, empty buttons, How-to, WCAG, Web Accessibility, web developers, web development
  • How New U.S. Laws Could Change Accessibility Lawsuits

    Accessibility lawsuits often start the same way. Someone flags barriers on your site, a letter arrives, and your team is asked to respond fast. That moment is rarely tidy. You are dealing with legal exposure, technical facts, and a customer experience problem at the same time.

    Lawmakers are now proposing changes that could affect how these complaints move forward. Some ideas focus on requiring notice and a short remediation window. Others aim to define clearer federal website standards. States are also experimenting with ways to discourage filings they view as abusive. These proposals can change timing and paperwork, but they do not change what users face on the site today.

    Below, we’ll take a closer look at the proposals taking shape and what they may suggest for future enforcement.


    Why Lawmakers Are Pushing for Accessibility Reform

    Across the country, lawmakers are responding to concerns that show up again and again when teams talk about demand letters and claims. Some are about cost and volume. Others are about uncertainty and inconsistent expectations.

    The Pressure From High-Volume Filings

    One of the strongest drivers is the rise in high-volume filings that reuse the same allegations with only minor changes. These accessibility lawsuits regularly target small and mid-sized organizations that already have limited time and budget to respond. Even when a team wants to do the right thing, the first step is often paperwork, outside counsel, and internal coordination.

    Recent data shows how often the same organizations get pulled back in. In 2025, more than 5,000 digital accessibility cases were filed, and over 1,400 involved businesses that had already faced an ADA web claim. In federal court, about 46 percent of filings named repeat defendants.

    Why States Point to Missing Title III Web Standards

    Another driver is the long-running frustration with the Department of Justice’s lack of clear Title III web standards. States point to that gap when explaining why they are stepping in. Without federal regulations, expectations vary by jurisdiction. That creates uneven enforcement and room for conflicting court outcomes, even when the underlying barrier is similar.

    Balancing Litigation Reform and Civil Rights

    It is also important to recognize what private enforcement has done for access. Many of the improvements users rely on today came from individuals asserting their rights and pushing systems to change. Reform proposals often say they are trying to reduce opportunistic litigation without weakening civil rights. At the same time, some disability advocates warn that certain approaches can delay access if timelines stretch too far or if progress requirements stay vague.

    Lawmakers are moving in different directions to tackle these concerns. That brings us to the next question.

    What kinds of changes are actually being proposed?


    Three Legal Changes Shaping Accessibility Lawsuits

    Across federal and state discussions, most proposals about accessibility lawsuits fall into three categories. Each one could influence how demand letters work and how teams respond.

    Federal Notice and Remediation Window Proposals

    Some members of Congress have suggested adding a requirement that a notice be given before a lawsuit can proceed. Under these proposals, organizations would receive a written description of the alleged barrier and a short remediation window to show progress. One example is the ADA 30 Days to Comply Act. It outlines a written notice, a 30-day period to describe improvements, and an additional period tied to demonstrated progress.

    A key nuance matters here. The bill focuses on architectural barriers at existing public accommodations. People often discuss these proposals alongside digital claims, but the text is narrower than many headlines suggest. Even so, the structure signals interest in early notice paired with proof of meaningful action.

    Federal Website Accessibility Standards Proposals

    Alongside notice concepts, Congress is also considering action focused on digital accessibility standards. The Websites and Software Applications Accessibility Act of 2025 aims to set uniform expectations for websites and applications. It also directs federal agencies to define standards, update them over time, and clarify how digital access fits within existing civil rights protections.

    If a federal standard is established, organizations would have a clearer target to design and test against. That also means teams may have less room to argue that they were unsure what to follow. Day-to-day development, QA, and content workflows would matter more because compliance would depend on consistent results, not occasional one-time reviews.

    State Laws Targeting Abusive Website Accessibility Litigation

    Several states are exploring their own approaches. Kansas has already created a mechanism for determining whether website accessibility litigation is abusive. Courts can consider whether the business attempted to remediate issues within a set period and whether improvements occurred within a ninety-day window. Missouri has introduced similar bills built around notice, remediation timelines, and potential fee shifting for bad-faith claims.

    These laws do not remove the obligation to maintain accessible websites. They focus on how courts should evaluate filings that appear designed for settlement volume rather than user access.


    What May Change in Accessibility Lawsuits and What Will Not

    These proposals could affect the process around accessibility lawsuits, but they do not change the core expectation that users need to complete tasks without barriers. It helps to separate what may shift from what stays the same.

    What May Change

    Organizations may receive more detailed notices that cite specific pages, steps, or interactions. Response timelines may tighten if new regulations define how quickly a team must respond or document progress. Settlement leverage could shift in places where remediation windows, presumptions, or fee-shifting concepts affect how cases are evaluated.

    What Will Not Change

    Users still run into barriers today. A delayed filing does not remove the barrier for someone trying to complete a checkout, submit a form, access account settings, or read essential content. If issues remain unresolved or progress is not measurable, legal action can still move forward. A remediation window is not extra time. It is a countdown.


    Multi-State Website Compliance and Accessibility Risk

    If your website serves users across the country, state-level differences create practical challenges. Exposure does not depend only on where a business is located. It also depends on where users live and which courts may have jurisdiction over a claim.

    How State Approaches Differ

    Florida takes a registry-based approach. Organizations can file a remediation plan in a public registry, and courts can consider that plan when evaluating good-faith actions and potential attorney fees in Title III cases filed within the state.

    California has explored a small-business-focused approach, such as a 120-day window to fix issues before statutory damages or fees are available. These experiments show that states are testing different tools to encourage remediation and reduce rushed filings.

    Teams need a repeatable way to keep their sites usable across many jurisdictions.


    Remediation Windows and a 30-Day Response Plan

    A remediation window helps only when teams can move with structure and focus. Without a workflow, the pressure to fix issues quickly can lead to patch-level changes that create new problems. A clear process prevents that and keeps everyone aligned.

    Days 0 to 3

    Capture the notice, save screenshots, and list the URLs and user steps cited. Assign a single internal owner who can coordinate legal, product, and development.

    Days 4 to 10

    Reproduce the issues on the named flows. Test with keyboard and at least one screen reader. Trace the problems back to specific components, templates, or vendor scripts so you can fix the causes, not just page-level symptoms.

    Days 11 to 25

    Run a focused remediation sprint. Prioritize barriers that block task completion. Involve design and quality assurance so that fixes fit your system and avoid new regressions.

    Days 26 to 30

    Retest the affected flows. Capture what changed, when it shipped, and how it was verified. Add any related systemic issues to your backlog with clear owners and target dates.

    This type of workflow reveals the deeper tension behind many of these proposals. Reform can influence pacing, but the work of removing barriers remains the same.


    Legislative Reform and Real Access

    It is understandable that organizations want protection from high-volume filings that feel more like templates than tailored complaints. Responding takes time, budget, and focus, and many teams have little of any of those to spare.

    At the same time, disability advocates warn that lengthy remediation windows can delay access. If the standard for demonstrating progress is vague, people with disabilities may wait longer for functional experiences. What matters most is that barriers get fixed and stay fixed.

    This tension is unlikely to disappear. It will continue because expectations around digital access are rising.


    How to Make Website Accessibility Sustainable

    The most reliable way to reduce risk is to keep accessibility work steady and consistent. That includes defining a clear accessibility standard, often WCAG 2.1 AA in practice. It also means keeping a backlog that mirrors actual user journeys and testing flows, rather than focusing only on individual pages.

    Build Around High-Value User Journeys

    A backlog is most useful when it maps to tasks that support the business and the customer. That means prioritizing flows like navigation, product discovery, forms, authentication, and checkout, plus the templates and components that power them.

    Prevent Regressions Between Releases

    Development and content teams benefit from adding monitoring and release checks. This avoids regressions that might otherwise go unnoticed. Documenting testing steps, changes, and verification helps demonstrate good-faith progress if a notice arrives. For many organizations, reviewing vendor risk and third-party scripts is another important control point.

    Track How Regulations Are Evolving

    These practices are becoming more important as regulations solidify. The Department of Justice has already finalized its Title II rule for state and local governments. Although Title III remains unsettled, expectations around digital access are becoming more defined.

    If you’re deciding where to start, focus on the tasks that matter most to users. Improving key tasks protects both customers and teams.


    How Teams Can Stay Ready as Regulations Take Shape

    As lawmakers continue shaping how digital access is defined, businesses deserve guidance that reduces confusion rather than adding to it. Clear standards give teams room to plan, improve, and maintain their websites without fear of being caught off guard. They also help shift the conversation away from surprise claims and toward steady, predictable work that fits into normal development cycles.

    If your organization wants help building a reliable accessibility plan that supports long-term stability, 216digital is here for you. Schedule a complimentary ADA Strategy Briefing and let’s build a path that fits your team and your goals.

    Greg McNeil

    January 16, 2026
    Legal Compliance
    Accessibility, accessibility laws, Legal compliance, state accessibility laws, Web Accessibility, web accessibility lawsuits, Website Accessibility
  • ADA Demand Letter for Websites: What It Looks Like

    You open your inbox and see an email from a law office. Or a certified letter shows up at your door. It claims your website is inaccessible and says you may be in violation of the ADA. It is not a lawsuit, but it is also not nothing. An ADA demand letter can bring a wave of worry, yet it also gives you information you can use. When you understand how these letters work, you can read them with clarity, check what is accurate, and decide your next steps without fear.

    Two questions usually come up right away. Is the letter legitimate, or is it something else? And what should be in it if it is credible? This article walks through how to recognize the parts of a letter, what each part means, and which details matter when one lands in your inbox.

    A quick note. This is practical guidance, not legal advice. If a letter looks credible, involve counsel as soon as you can.

    First, let’s define what an ADA demand letter is and why the structure matters.

    What an ADA Demand Letter for a Website Is

    An ADA demand letter is a formal notice saying that parts of your website may not be accessible to people with disabilities and could violate the ADA. Letters like this usually outline the issues the sender says they found. Many use Web Content Accessibility Guidelines (WCAG) to describe those issues because it gives them a shared language for barriers such as missing alt text, keyboard traps, or unclear labels. Some letters also request remediation within a set timeframe.

    It helps to understand what an ADA demand letter is—and what it is not. While it is not a lawsuit, it can come before one. It is also not confirmation that the claims are correct, since most letters still require technical validation. And it is not always detailed: some letters are well prepared, while others are brief or contain errors.

    Once you understand the structure, it becomes much easier to read these letters calmly and with purpose.

    The Key Parts of an ADA Demand Letter

    Most website-focused ADA demand letters follow the same pattern.

    • Header and complainant information.
    • Statement of alleged violations.
    • Requested action.
    • Deadline and next steps.
    • Legal references and a signature at the end.

    This structure helps you spot what is strong, what is vague, and what needs validation. You are checking for accuracy and consistency. You are also looking for signals that the sender spent time reviewing your site instead of relying on a template.

    Let’s walk through each section.

    Header and Complainant Information

    The header identifies who is sending the letter and who they represent. It usually lists the attorney’s name, their contact information, the complainant, and the business they are writing to. You should see your organization’s name and your website’s domain written clearly.

    Capture these details right away.

    Compare the letter’s date to the date you received it.

    Note how it arrived, whether through email or postal mail.

    Look closely at the domain listed. Does it match your active site?

    Check for reference numbers or mention of specific pages.

    A few fast credibility checks can make a big difference. Does the letter spell your business name correctly? Does it give complete contact information? Is the letter signed? If the sender cannot get the name of the site right, it weakens the letter. Copy-and-paste errors also matter, especially if they reference parts of a site you do not have.

    Next comes the core of the letter.

    Statement of Alleged Violations in an ADA Demand Letter

    This section outlines the accessibility concerns the sender claims to have found. Some letters use short bullet points. Others include a short narrative explaining what action failed.

    Many reference common issues such as:

    • Missing alt text on images.
    • Videos with no captions.
    • Color contrast problems.
    • Navigation barriers for keyboard users.
    • Forms missing labels or error messages.

    The strongest letters include specific URLs, page names, or tasks that could not be completed. For example: “could not submit the contact form due to missing labels,” or “could not complete checkout because the keyboard could not reach the payment button.” These details make validation easier.

    Weaker letters may list generic issues with no URLs or no clear examples. That does not make them false. It simply means you will need a deeper technical review.

    As you read this section, capture the issue, the page or feature, and the impact on the user. Those details help you understand the scope.

    Requested Action in an ADA Demand Letter

    This is the part where the sender lists what they want changed. It usually includes updates to code or templates, adding missing alt text, adding captions to videos, improving keyboard navigation, or correcting form issues. Some letters also ask for an accessibility statement or a better contact method.

    Pay attention to how the request is phrased. Is the sender asking for fixes to a single part of the site or the entire site? Do they point to specific WCAG criteria or make broad references? Both are workable, but specifics help you establish a path for remediation.

    Some letters offer clear, testable actions. Others mix clear requests with broad language. Capture each clear and testable action so your team knows what to validate.

    Deadline and Next Steps

    Most ADA demand letters provide a deadline. It might be framed as a request for a written response or a request for remediation within a set timeframe. Many mention possible escalation if the timeline is ignored.

    Capture the deadline right away. Note whether they are asking for an acknowledgement or a full plan. Short deadlines create pressure, but they do not tell you how long it will take to fix the underlying issues. The timeline in a letter is not the full timeline for responsible remediation.

    Legal References and Signature

    This section usually includes ADA language along with WCAG references. Some letters cite specific success criteria. Others stay broad. WCAG criteria can help frame your validation work, but they are not always complete. Look at whether the issues described are specific enough to test.

    A legitimate letter is usually signed and dated. Formatting should align with the rest of the content.

    Is the Letter Real? A Quick Verification Checklist

    You can often gauge credibility with a short review.

    • Are your business name and website identified correctly in the letter?
    • Are the sender’s details complete so you know who issued it?
    • Is the deadline stated clearly and consistently?
    • Do the listed barriers match actual pages or features on your site?
    • Are there URLs or descriptions of the tasks that could not be completed?
    • Is the letter properly signed and dated?

    There are also green flags and red flags.

    Green flags include specific examples, correct domain information, consistent formatting, and issue descriptions you can validate.

    Red flags include wrong business names, mismatched domains, generic lists with no connection to your site, and pressure to pay right away.

    If a letter appears credible, take it seriously. Capture the details. Validate the sender. Bring in legal counsel and the right internal stakeholders so you can review the claims with care and accuracy.

    How to Move Ahead After an ADA Demand Letter Lands

    Receiving a demand letter can unsettle any team, even those who already understand accessibility and ADA risk. But once you know how to read these letters, the tone shifts. You start to see the structure for what it is. A set of claims to review. A list of pages to check. A timeline to manage. A reminder that accessibility should be cared for across the full lifecycle of your site, not only when a letter arrives.

    If you want support turning the findings from a letter into a clear plan, 216digital can help you integrate WCAG 2.1 compliance into your development roadmap in a way that fits how your team works. To explore what that looks like in practice, you can schedule a complimentary ADA Strategy Briefing and talk through your goals with our accessibility experts.

    Greg McNeil

    January 15, 2026
    Legal Compliance
    Accessibility, ADA Compliance, ADA Lawsuit, Demand Letters, Website Accessibility
  • WCAG 3.3.8: Rethinking Passwords, Codes, and CAPTCHAs

    The main login form usually isn’t the problem. It’s everything around it. The retry loop. The MFA branch that forces you to read a code on one device and type it into another. The recovery step that adds a challenge after you’re already stuck. That’s also where “hardening” changes tend to hide—paste blocked, autocomplete disabled, segmented OTP inputs that fight autofill.

    If you’ve ever been locked in a loop because you mistyped once, you already know how quickly “secure” turns into “unusable.” For some users that’s just irritating. For others—people dealing with memory limitations, dyslexia, ADHD, anxiety, or plain cognitive overload—it’s the point where access ends. WCAG 3.3.8 is essentially asking for one thing: don’t make recall or manual re-entry the only route through authentication.


    What WCAG 3.3.8 Actually Requires for Accessible Authentication

    WCAG 3.3.8 Accessible Authentication (Minimum) is easy to misread as “no passwords” or “no MFA.” It’s neither. It’s about whether the user has a path through authentication that does not depend on a cognitive function test. WCAG 3.3.8 focuses on removing authentication steps that rely on memory, transcription, or puzzle-solving when no accessible alternative exists. In practice, you cannot make “remember this” or “retype this” the gate unless you also provide a supported alternative or a mechanism that reduces the cognitive burden.

    What Counts as a Cognitive Function Test in Authentication

    A cognitive function test includes anything that requires the user to remember, transcribe, or solve something in order to log in. That includes remembering site-specific passwords, typing codes from one device into another, or solving distorted text in a CAPTCHA.

    Allowable Alternatives Under WCAG 3.3.8

    Under WCAG 3.3.8, a cognitive function test cannot be required at any step in an authentication process unless the page provides at least one of these options:

    • An alternative authentication method that does not rely on a cognitive function test
    • A mechanism that assists the user, such as password managers or copy and paste
    • A test based on object recognition
    • A test based on personal non-text content that the user previously provided

    Object recognition and personal content are exceptions at Level AA, yet they are still not ideal for many users with cognitive or perceptual disabilities. From an inclusion standpoint, it is better to avoid them when a simpler option exists, such as letting the browser fill in credentials or using passkeys.

    This applies to authenticating an existing account and to steps like multi-factor authentication and recovery. It does not formally cover sign-up, although the same patterns usually help there too.


    Cognitive Function Tests Hiding in Authentication Flows

    Most 3.3.8 issues don’t show up on the main login screen. They show up in the surrounding steps: the retry loop after a failed password, the MFA prompt, the recovery flow, or the extra verification that triggers when traffic looks unusual. When you walk through those paths end-to-end, you can see where memory or transcription slips back in.

    Memory-Based Authentication Pressure Points

    Asking users to recall a username, password, or passphrase without any assistive mechanism is a cognitive function test. Security questions like “What street did you grow up on?” or “What was your first pet’s name?” add even more recall pressure, often years after someone created the answers.

    Transcription-Based Authentication Pressure Points

    Many authentication flows expect people to read a one-time passcode from SMS or an authenticator app and then type it into a separate field. This becomes even harder when paste is blocked or when the code lives on a different device, and the user must move between them.

    Puzzle-Style Pressure Points and CAPTCHA

    Traditional CAPTCHAs that rely on distorted text, fine detail image selection, or audio that must be transcribed all require perception, memory, and focus under time pressure.

    If a CAPTCHA or extra test appears only after multiple failures or “suspicious” activity, it still has to comply with the success criterion.


    Fast WCAG 3.3.8 Wins With Password Managers and Paste

    Start with the stuff that breaks the widest range of users and is easiest to fix. If a password manager can’t reliably fill the form, or paste is blocked in password or code fields, the flow forces recall and transcription. That’s exactly what WCAG 3.3.8 is trying to remove.

    Implementation Details That Improve Accessible Authentication

    Allowing password managers to store and fill credentials removes the need for users to remember complex passwords. Allowing paste lets people move secure values from a password manager, secure notes, or another trusted source into the login form without retyping.

    Here’s what tends to matter in real implementations:

    • Use clear labels and proper input types so browsers and password managers can correctly identify login fields.
    • Avoid autocomplete="off" on username and password fields.
    • Do not attach scripts that block paste or interfere with autofill.

    A basic compliant login form can look like this:

    <form action="/login" method="post">
      <label for="username">Email</label>
      <input id="username" name="username" type="email"
             autocomplete="username" required>

      <label for="password">Password</label>
      <input id="password" name="password" type="password"
             autocomplete="current-password" required>

      <button type="submit">Log in</button>
      <a href="/forgot-password">Forgot password?</a>
    </form>

    A show password toggle is also helpful. It lets users check what they have typed without guessing, which reduces errors for people who struggle with working memory or fine motor control.

    From a security standpoint, allowing paste and password managers aligns with modern guidance. Strong, unique passwords managed by tooling are safer than short patterns that people try to remember across dozens of sites.


    Offering Authentication Paths That Reduce Cognitive Load

    Even with perfect autofill support, passwords are still a brittle dependency. WCAG 3.3.8 expects at least one route that doesn’t ask the user to remember or retype a secret. Passwordless options are the cleanest way to do that without playing whack-a-mole with edge cases.

    Magic Links by Email

    Users enter an email address and receive a time-limited, single-use link. Clicking that link completes authentication. Done well, this path removes passwords and codes entirely.

    Third-Party Sign In

    Signing in with an existing account from a trusted provider can also reduce cognitive load when the external account is already configured for accessible authentication. It shifts the cognitive work away from your login page, but you must still consider whether the rest of your flow remains usable.

    When you implement these methods, keep security fundamentals in place. Tokens should be single-use, expire after a reasonable window, and be protected by sensible rate limits. You can keep a strong security posture without making users memorize or transcribe extra values.


    Passkeys and WebAuthn as an Accessible Authentication Pattern

    Passkeys are one of the rare shifts where security and cognitive accessibility improve together. No remembered secrets. No code transcription. Authentication becomes a device interaction, which lines up cleanly with what WCAG 3.3.8 is trying to achieve.

    Why Passkeys Align Well With WCAG 3.3.8

    Passkeys based on WebAuthn use public key cryptography tied to the user’s device. Users confirm through a fingerprint, face recognition, device PIN, or a hardware key. They do not have to remember strings or retype codes, which removes a large source of cognitive effort.

    A simplified client example might look like this:

    // Ask the authenticator for an assertion using server-provided options.
    const cred = await navigator.credentials.get({ publicKey });

    // Modern browsers serialize PublicKeyCredential through its toJSON()
    // method; for older ones the ArrayBuffer fields must be base64url-encoded
    // by hand before JSON.stringify will preserve them.
    await fetch("/auth/webauthn/verify", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(cred),
    });

    Design your interface so people can choose the method that works best for them. Do not force a single modality. Some users will prefer biometrics, others a hardware key, others a platform prompt. Always keep an accessible fallback available in case a device method fails.


    Rethinking MFA Without Creating New WCAG 3.3.8 Barriers

    MFA is where a lot of otherwise compliant logins fail. The password step might be fine, then the second factor turns into a transcription test. If the only available MFA path is “read six digits and type them,” you don’t actually have a low cognitive route through authentication under WCAG 3.3.8.

    MFA Patterns That Avoid Cognitive Barriers

    • Push notifications that allow the user to approve a sign-in with a simple action.
    • Hardware security keys that require a button press instead of code entry.
    • Device prompts that rely on the operating system’s secure authentication methods.

    If OTP is staying, the bar is simple. Make it fillable and pasteable, and don’t punish slower entry.

    • Allow paste and platform autofill for OTP fields.
    • Avoid very short expiration windows that penalize slower users.
    • Be careful with multi-input digit fields and ensure they support pasting a full code.

    A basic single-field OTP input can look like this:

    <label for="otp">Verification code</label>
    <input id="otp" name="otp"
          inputmode="numeric"
          autocomplete="one-time-code">

    This keeps the security benefit of MFA without turning the second factor into a failure point.
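    If segmented digit fields must stay, the paste-handling core reduces to a small pure helper. The function name and the digits-only assumption are illustrative:

```javascript
// Normalize a pasted value and split it across n segmented inputs.
// Returns an array of single digits, or null when the paste is not a
// complete code — the UI should then leave the fields untouched rather
// than silently discarding part of the user's clipboard.
function splitPastedCode(pasted, n) {
  const digits = pasted.replace(/\D/g, ""); // strip spaces, dashes, etc.
  return digits.length === n ? digits.split("") : null;
}
```

    A paste handler would call this once, assign one digit per field, and move focus to the final input, instead of forcing the user to type each digit.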


    CAPTCHA and Bot Protection Without Cognitive Puzzles

    CAPTCHAs often get introduced after a login endpoint gets abused. The default implementations are usually cognitive tests, and they tend to appear when the user is already in a retry loop or being flagged as suspicious. That is a bad time to add a puzzle.

    Bot-Mitigation Patterns That Don’t Burden the User

    Object recognition and personal content challenges may technically meet Level AA, but they still exclude many users and should not be your first choice. A better strategy is to move bot checks out of the user’s direct path whenever possible.

    Prefer controls that don’t ask the user to prove they’re human:

    • Rate-limiting login attempts.
    • Device or geo-based risk checks.
    • Invisible CAPTCHA that runs in the background.
    • Honeypot inputs that automated scripts are likely to fill.

    For example, a simple honeypot field can look like this:

    <div style="position:absolute;left:-9999px" aria-hidden="true">
      <label for="website">Website (leave blank)</label>
      <input id="website" name="website" tabindex="-1" autocomplete="off">
    </div>

    If the backend treats any non-empty value as a bot signal, most automated scripts are filtered without showing users a challenge at all.
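    The corresponding server-side check can stay tiny. This sketch assumes the submitted fields arrive as a plain object and mirrors the hidden input's name attribute:

```javascript
// Treat any non-empty value in the honeypot field as a bot signal.
// Real users never see the "website" input, so legitimate submissions
// leave it empty or omit it entirely.
function looksLikeBot(fields) {
  const honeypot = fields["website"];
  return typeof honeypot === "string" && honeypot.trim().length > 0;
}
```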


    Testing Authentication Journeys Against WCAG 3.3.8

    You can’t validate WCAG 3.3.8 from markup alone. You need to run the flow the way users actually run it, including autofill, paste, and OS prompts. Then you need to intentionally trigger the “extra verification” paths because that’s where most failures live.

    Manual Tests That Matter for Accessible Authentication

    • Log in with a browser password manager and a popular third-party password manager.
    • Confirm that paste works in username, password, and OTP inputs.
    • Trigger retry flows, lockouts, and “suspicious” paths and check for hidden CAPTCHAs or extra steps.
    • Walk through every MFA route and confirm that at least one complete path avoids unsupported cognitive tests.

    Automated Checks for the Supporting Code

    Automation still helps as a tripwire, just not as the final verdict. Custom checks can flag:

    • Inputs with autocomplete="off" where credentials belong
    • Password and OTP fields that attach paste blocking handlers
    • Known CAPTCHA patterns that appear in authentication contexts
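    As a sketch of the first check, a custom rule could scan rendered markup for password fields that disable autofill. A regex pass like this is only a tripwire; a production rule would walk the parsed DOM instead:

```javascript
// Flag password inputs that carry autocomplete="off" in an HTML string.
function findAutofillBlockers(html) {
  const inputs = html.match(/<input\b[^>]*>/gi) || [];
  return inputs.filter(
    (tag) =>
      /type\s*=\s*["']password["']/i.test(tag) &&
      /autocomplete\s*=\s*["']off["']/i.test(tag)
  );
}
```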

    The target is not “no friction.” The target is “no cognitive gate without a supported way through it.”


    Improving Login Usability Through WCAG 3.3.8

    WCAG 3.3.8 is much easier to handle when you treat authentication as a system, not a single screen. Most barriers show up in the supporting paths, not the main form. Once those routes are mapped and cleaned up, keeping at least one low-cognitive path end to end stops feeling like extra work and starts feeling like a more stable design pattern. You still keep strong security, but you drop the steps that slow users down or lock them out.

    If you want help threading accessible authentication and broader WCAG 2.2 requirements into your existing roadmap, 216digital can support that process. To see what that could look like for your team, you can schedule a complimentary ADA Strategy Briefing.

    Greg McNeil

    January 14, 2026
    How-to Guides
    Accessibility, How-to, WCAG, WCAG 3.3.8, WCAG Compliance, WCAG conformance, web developers, web development, Website Accessibility
  • Why Accessibility Belongs in Your CI/CD Pipeline

    Teams that ship often know how a small change can ripple through an application. A refactor that seems harmless can shift focus, hide a label, or break a keyboard path in a dialog that once felt dependable. Users notice it later, or support does, and by then the code has moved on. Fixing that one change now touches several places and pulls attention away from the current work. Treating inclusion only at the end of a project makes this pattern more likely.

    Putting checks for accessibility inside your CI/CD pipeline keeps them close to the code decisions that cause these issues. The goal is not to slow teams down. It is to give steady feedback while changes are still small and easy to adjust, so regressions do not build up in the background.

    Why Accessibility Testing Belongs in the CI/CD Pipeline

    Modern web applications rarely stand still. Large codebases, shared components, and parallel feature work all raise the chances that a small update will affect behavior somewhere else. In many enterprise environments, a single UI component can be consumed by dozens of teams, which means a code-level issue can propagate quickly across products.

    Accessibility Challenges in Enterprise CI/CD Environments

    At scale, accessibility is hard to keep stable with occasional audits. Shared components carry most of the interaction logic used across applications, so when those components shift, the impact shows up in many places at once, including flows that teams did not touch directly.

    Expectations are also higher. Laws and standards such as the Americans with Disabilities Act (ADA), the European Accessibility Act, Section 508, and EN 301 549 establish that digital experiences are expected to work for people with disabilities. These requirements apply broadly, but scrutiny tends to increase as products gain traffic and visibility. When a core flow fails for keyboard or assistive technology users at that scale, the impact is harder to ignore.

    Enterprise environments add structural complexity as well. Large codebases, custom components, multi-step journeys, and frequent releases across distributed teams all create more chances for regressions to appear. Because these systems evolve continuously, complying with Web Content Accessibility Guidelines (WCAG) becomes an ongoing concern rather than a one-time remediation task.

    Taken together, that scale, visibility, and constant change push many companies toward code-level practices that support inclusion. Solving issues where you build and update components yields stronger, longer-lasting results than fixing them after they show up.

    Why the CI/CD Pipeline Is Critical for Enterprise Accessibility

    For enterprise teams, long-term inclusion depends on how interfaces are built at the code level. Semantics, keyboard behavior, focus handling, and ARIA logic form the structure that assistive technologies rely on. When these fundamentals are stable, the application behaves more predictably, and changes in one area are less likely to break interactions elsewhere.

    Code-level practices also match the way large systems are assembled. Shared component libraries, design systems, and multiple development streams all draw from the same patterns. When quality is built into those patterns, improvements reach every product that depends on them instead of being applied page by page. This helps teams control regressions and avoid fixing the same issue in different parts of the codebase.

    The CI/CD pipeline is the practical enforcement point for this work. Many organizations already use it to protect performance, security, and reliability. Adding checks that support inclusion into the same flow keeps them aligned with other quality signals developers already trust. WCAG highlights predictable sources of defects, such as missing semantics, inconsistent focus behavior, or insufficient role mapping, and those issues typically originate inside components rather than individual pages.

    Because every change passes through the CI/CD pipeline, it becomes a consistent checkpoint for catching regressions introduced by refactors, new features, or reuse in new contexts. This shifts inclusion from a periodic cleanup task to an ongoing engineering concern that is handled where code decisions are made.

    What Automation Can Reliably Catch

    Automation is most effective when it targets patterns that behave the same way across the codebase. A few areas consistently meet that bar.

    High-Coverage Scanning Across Large Codebases

    Automated checks handle large surfaces quickly. They scan templates, shared layouts, and common flows in minutes, which is useful when multiple teams ship updates across the same system. This level of coverage is difficult to achieve manually on every release.

    Identifying Common Issues Early in Development

    Many accessibility issues follow predictable patterns. Missing alternative text, low contrast, empty or incorrect labels, and unclear button names show up often in shared components and templates. Automation flags these reliably so they can be corrected before the same defect repeats across the application.

    Supporting Teams With Limited Review Capacity

    Manual testing cannot cover every change in a busy sprint. Automated scans provide a first pass that confirms whether the fundamentals are still intact. They surface simple regressions early, allowing human reviewers to focus on interaction quality and flow level behavior where judgment matters most.

    Fitting Into Established Engineering Workflows

    Automated checks fit cleanly into modern development practices. They run against components, routes, and preview builds inside the pipeline and appear next to other quality signals developers already track. Because findings map to rendered output, it is clear where issues originate and how to fix them.

    Strengthening Component Libraries Across the Stack

    Teams that rely on shared component libraries gain additional value from automation. Fixing a defect in one component updates every part of the application that uses it. This stabilizes patterns, reduces duplicated work, and lowers the chance of future regressions introduced through refactors or new feature development.

    Where Manual Accessibility Testing Is Still Essential

    Automated checks validate structure. Human reviewers validate whether the interaction holds up when someone relies on a keyboard or a screen reader. They notice when focus moves in ways the markup does not explain, when announcements come in an order that breaks the task, or when repeated text forces extra steps that slow the flow down.

    That gap is where automation stops. Meeting an individual standard does not guarantee the experience works in practice. Some decisions require interpretation. Reviewers can weigh design intent, compare two valid approaches, and choose the pattern that is clearer and more stable for users who depend on assistive technology.

    Human review also connects issues back to the systems that produced them. When a dialog, button, or error pattern behaves inconsistently, reviewers can trace the problem to the component, token, or workflow behind it. Fixing it there prevents the same defect from reappearing across teams and features.

    How to Add Accessibility Checks to Your CI/CD Pipeline

    Once you know what automation can handle and where human judgment is needed, you decide how to wire both into everyday delivery.

    Most teams start at the pull request level. Running checks on each PR surfaces issues while the change set is small and the context is still clear. Reports that point to specific components or selectors keep debugging time low and make it easier to fix problems before they spread.

    From there, checks can be layered inside the CI/CD pipeline without getting heavy. Lightweight linting catches obvious issues before code leaves the branch. Component-level checks validate shared patterns in isolation. Flow level scans cover high-impact routes such as sign-in, search, and checkout. Keeping each layer focused reduces noise and makes failures easier to act on.

    For teams with existing accessibility debt, a baseline approach helps. Builds fail only when new violations appear, while older issues are tracked separately. That stops regressions without forcing a full remediation project before anything can ship. Teams can then reduce the baseline over time as capacity allows.
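    The baseline mechanic itself is a simple diff: fail only on findings that are not already tracked. A sketch, with violation identifiers reduced to plain strings for illustration:

```javascript
// Return only the violations absent from the stored baseline.
// Known debt stays visible in reporting but does not fail the build.
function newViolations(current, baseline) {
  const known = new Set(baseline);
  return current.filter((id) => !known.has(id));
}
```

    A build gate then fails when this list is non-empty, while the baseline file shrinks as older issues get fixed.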

    Severity levels give teams room to tune enforcement. Blocking issues should stop a merge. Lower-impact items can start as warnings and become stricter as patterns stabilize. PR checks stay fast, while deeper scans run on a nightly or pre-release schedule, so feedback remains useful without slowing reviews.
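    Severity gating can likewise be a tiny policy function. Which impact levels block is a team decision; the two shown here are only an example:

```javascript
// Impacts that should stop a merge; lower impacts surface as warnings
// until the team is ready to tighten enforcement.
const BLOCKING_IMPACTS = new Set(["critical", "serious"]);

function shouldBlockMerge(findings) {
  return findings.some((f) => BLOCKING_IMPACTS.has(f.impact));
}
```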

    Monitoring Accessibility Regressions Across Releases

    Even with strong CI/CD pipeline coverage, changes outside the codebase can introduce issues. CMS updates, content shifts, feature flags, and third-party integrations all influence how users experience a page. Many teams run scheduled scans on critical flows for this reason, especially when those flows depend on dynamic or CMS driven content.

    A clear definition of done keeps expectations aligned across teams. Keyboard navigation works through core paths. Labels and messages are announced correctly. Focus is visible and follows a logical sequence. Automated checks pass or have a documented exception when they do not.

    Treat post-deployment signals like any other quality metric. Track regressions per release, watch trends in recurring violations, and measure time to fix. The goal is not perfect numbers. It is keeping patterns stable as the system continues to evolve.

    Making Accessibility a Standard Part of Your Release Process

    When teams treat inclusion like any other quality concern in the CI/CD pipeline, it becomes part of day-to-day engineering instead of a separate task. Releases stabilize. Regressions fall. Features ship without blocking users who rely on assistive technology.

    The starting point can be small. A team can choose a few essential routes, add targeted scans in the CI/CD pipeline, and agree on a baseline that prevents new issues from entering the codebase. As that workflow stabilizes, coverage can expand to additional routes and enforcement can become more precise.

    At 216digital, we help teams build a practical plan for integrating WCAG 2.1 compliance into their development workflow. If you want support shaping an approach that fits your stack, your release rhythm, and your long-term goals, you can schedule a complimentary ADA Strategy Briefing. It is a chance to talk through your current process and explore what a sustainable accessibility roadmap could look like.

    Greg McNeil

    January 12, 2026
    Testing & Remediation
    Accessibility, CI/CD Pipeline, web developers, web development, Website Accessibility

Find Out if Your Website is WCAG & ADA Compliant








    Our team is full of professionals in Web Accessibility Remediation, eCommerce Design & Development, and Marketing – ready to help you reach your goals and thrive in a competitive marketplace. 

    216 Digital, Inc. BBB Business Review

    Get in Touch

    2208 E Enterprise Pkwy
    Twinsburg, OH 44087
    216.505.4400
    info@216digital.com

    Support

    Support Desk
    Acceptable Use Policy
    Accessibility Policy
    Privacy Policy

    Web Accessibility

    Settlement & Risk Mitigation
    WCAG 2.1/2.2 AA Compliance
    Monitoring Service by a11y.Radar

    Development & Marketing

    eCommerce Development
    PPC Marketing
    Professional SEO

    About

    About Us
    Contact

    Copyright © 2026 216digital. All Rights Reserved.