Artificial Intelligence for Accessibility and Performance Auditing: Automated Findings with Human Judgment

Authors

  • Harshit Sunilkumar Vora

Keywords

Web Accessibility Auditing, WCAG Conformance Evaluation, Automated Detection Coverage, Large Language Model Augmentation, Continuous Auditing Pipeline

Abstract

Automated accessibility and performance auditing tools have become integral to modern web development pipelines, yet systematic evidence shows that treating their outputs as definitive conformance verdicts produces programs that are overconfident in coverage and underinvest in expert judgment. Deterministic rule engines reliably surface structural defects at scale but remain fundamentally limited when evaluating success criteria that require semantic interpretation, contextual reasoning, or natural language understanding. Established standards frameworks, most notably the Web Content Accessibility Guidelines (WCAG), are structured around the principles of perceivability, operability, understandability, and robustness, and provide the normative foundation against which both automated and human findings must be mapped if they are to remain institutionally credible and legally defensible. Performance auditing presents a structurally parallel set of challenges, in which threshold-based metrics require human disambiguation before remediation decisions can be made responsibly. The empirical boundaries of automated detection are quantified through mutation testing and coverage analysis, confirming that no single tool is sufficient and that tools are structurally complementary rather than interchangeable. Artificial intelligence augmentation extends automated coverage into semantically demanding criteria, achieving meaningful detection rates that conventional rule engines cannot approach, while introducing anchoring risks that demand carefully designed human-in-the-loop workflows. A continuous auditing pipeline with graded confidence tiers, separating high-confidence structural findings, medium-confidence semantic assessments, and low-confidence interaction-dependent evaluations, provides the operational architecture needed to allocate expert attention proportionally, measure program quality over time, and produce findings that are auditable, reproducible, and defensible across tool versions and evaluation cycles.
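The graded confidence tiers described above can be pictured as a simple triage step over audit findings. The TypeScript sketch below shows one possible shape under stated assumptions: the Finding record, the source labels (rule-engine, llm, interaction), and the routing policy are hypothetical illustrations, not the paper's implementation.

```typescript
// Minimal sketch of graded confidence-tier triage for audit findings.
// The Finding shape, source labels, and routing policy are illustrative
// assumptions, not taken from the paper.

type ConfidenceTier = "high" | "medium" | "low";

interface Finding {
  ruleId: string;                                // identifier of the check that fired
  wcagCriterion: string;                         // success criterion the finding maps to
  source: "rule-engine" | "llm" | "interaction"; // origin of the evidence
  evidence: string;                              // snippet or metric supporting the finding
}

// Structural rule-engine hits are treated as high confidence, LLM-derived
// semantic assessments as medium, and interaction-dependent evaluations as low.
function classify(finding: Finding): ConfidenceTier {
  if (finding.source === "rule-engine") return "high";
  if (finding.source === "llm") return "medium";
  return "low";
}

// Bucket findings by tier so expert attention can be allocated proportionally:
// high-confidence findings can flow straight to remediation, while medium- and
// low-confidence findings queue for human review.
function triage(findings: Finding[]): Record<ConfidenceTier, Finding[]> {
  const buckets: Record<ConfidenceTier, Finding[]> = { high: [], medium: [], low: [] };
  for (const f of findings) {
    buckets[classify(f)].push(f);
  }
  return buckets;
}
```

Keying the tier on the finding's provenance rather than on its severity is one way to keep the triage reproducible across tool versions, since provenance does not change when rule sets or models are updated.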

DOI: https://doi.org/10.17762/ijisae.v14i1s.8222

References

Tolu Adedoja, "Automated Evaluation of Detectable Accessibility Issues on U.S. State Government Homepages: A Baseline Assessment Ahead of the 2026–2027 ADA Title II Deadlines," Research Square, 2026. [Online]. Available: https://www.researchsquare.com/article/rs-8663556/v1

Mahan Tafreshipour et al., "Ma11y: A Mutation Framework for Web Accessibility Testing," ACM Digital Library, 2024. [Online]. Available: https://dl.acm.org/doi/pdf/10.1145/3650212.3652113

Ben Caldwell et al., "Web Content Accessibility Guidelines (WCAG) 2.0," W3C Recommendation, World Wide Web Consortium, Dec. 2008. [Online]. Available: https://www.w3.org/TR/WCAG20/

Fernando Alonso, "Requirements for a Method of Software Accessibility Conformity Assessment," Proc. Int. Conf. Computers for Handicapped Persons (ICCHP), Linz, Austria, 2008. [Online]. Available: https://oa.upm.es/2425/1/INVE_MEM_2008_55928.pdf

André Pimenta Freire et al., "Accessibility Inspections Using the Web Content Accessibility Guidelines by Novice Evaluators: An Experience Report," ACM Digital Library, 2024. [Online]. Available: https://doi.org/10.1145/3702038.3702040

Karol Król and Wojciech Sroka, "Internet in the Middle of Nowhere: Performance of Geoportals in Rural Areas According to Core Web Vitals," ISPRS International Journal of Geo-Information, 2023. [Online]. Available: https://doi.org/10.3390/ijgi12120484

Juho Vepsäläinen et al., "Overview of Web Application Performance Optimization Techniques," arXiv preprint, 2024. [Online]. Available: https://arxiv.org/pdf/2412.07892

Jonathan Robert Pool, "Accessibility Metatesting: Comparing Nine Testing Tools," Proc. 20th International Web for All Conference (W4A '23), 2023. [Online]. Available: https://doi.org/10.1145/3587281.3587282

Thomas Fischer et al., "Coverage of web accessibility guidelines provided by automated checking tools," Universal Access in the Information Society, 2025. [Online]. Available: https://link.springer.com/content/pdf/10.1007/s10209-025-01263-x.pdf

Heidilyn V. Gamido and Marlon V. Gamido, "Comparative review of the features of automated software testing tools," International Journal of Electrical and Computer Engineering, 2019. [Online]. Available: https://www.researchgate.net/profile/Heidilyn-Gamido/publication/335928031

Juan-Miguel López-Gil and Juanan Pereira, "Turning manual web accessibility success criteria into automatic: an LLM-based approach," Universal Access in the Information Society, vol. 24, pp. 837–852, Mar. 2025. [Online]. Available: https://doi.org/10.1007/s10209-024-01108-z

Ziyao He et al., "Enhancing web accessibility: Automated detection of issues with generative AI," Proceedings of the ACM on Software Engineering, 2025. [Online]. Available: https://doi.org/10.1145/3729371

Published

14.02.2026

How to Cite

Harshit Sunilkumar Vora. (2026). Artificial Intelligence for Accessibility and Performance Auditing: Automated Findings with Human Judgment. International Journal of Intelligent Systems and Applications in Engineering, 14(1s), 594–601. Retrieved from https://www.ijisae.org/index.php/IJISAE/article/view/8222

Section

Research Article