WCAG 2.0 and 2.1 techniques - screen reader compatibility
Shows how WCAG sufficient techniques and failures behave in commonly used screen readers.
The results include two types of test:
- Expected to work - these tests show support when accessibility features are used correctly
- Expected to fail - these tests show what happens when accessibility features are used incorrectly
WCAG sufficient techniques - reliability by user agent
Reliability of WCAG sufficient techniques in different screen reader / browser combinations. Expected failures (e.g. missing ALT on an IMG) are not included in the reliability graph.
The solid area in the graph shows percentage of tests that pass in all tested interaction modes. The cross hatched area shows partial passes that only work in some interaction modes.
An example of a partial pass is when form labels are read when tabbing, but ignored in browse mode.
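For instance, a partial pass can occur with an ordinary labelled text input. A minimal sketch (the id and label text are illustrative):

```html
<!-- In a partial pass, "Name" is announced when the user tabs to the
     field, but not when the field is reached in browse/reading mode -->
<label for="name">Name</label>
<input type="text" id="name">
```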
Combo | Versions | Test Changes
---|---|---
JAWS Chrome | JAWS 2024.2409.2 with Chrome 131 | 1 better
JAWS Edge | JAWS 2024.2409.2 with Edge 131 | 1 better
JAWS Firefox | JAWS 2024.2409.2 with FF 128 | 23 better
JAWS IE | JAWS 2019.1912.1 with IE11 | 21 better
NVDA Chrome | NVDA 2024.4 with Chrome 131 | 1 better, 1 worse
NVDA Edge | NVDA 2024.4 with Edge 131 | 4 better
NVDA Firefox | NVDA 2024.4 with FF 128 | 26 better
NVDA IE | NVDA 2019.2 with IE11 | 4 better
VoiceOver Mac | VoiceOver macOS 14.6 with Safari 17.6 | 21 better
VoiceOver iOS | VoiceOver iOS 17.7 with Safari iOS 17.7 | 11 better
WindowEyes IE | WindowEyes 9.2 with IE11 | 14 better, 1 worse
Dolphin IE | Dolphin SR 15.05 with IE11 |
SaToGo IE | SaToGo 3.4.96.0 with IE11 |
Average | Including older versions |
The average includes all versions, but some browser/AT combinations have tests for multiple versions (NVDA / JAWS / VoiceOver), while others only have tests for a single version (SaToGo and Dolphin).
WCAG sufficient techniques - reliability trend
This graph shows reliability over time for WCAG techniques in NVDA, JAWS and VoiceOver. Other screen readers don't have enough historical data yet to plot trends.
WCAG sufficient techniques - very reliable
These are WCAG sufficient techniques that work reliably across all tested screen readers, including older versions.
These work in 100% of the tested screen reader / browser combinations.
Tested combinations: NVDA (Edge, Firefox, Chrome), JAWS (Edge, Firefox, Chrome) and VoiceOver (macOS, iOS). Reliability when used correctly is 100% in every combination.
- ARIA16 - input type=text with aria-labelledby attribute
- H30 - Link containing img with alt
- H36 - input type=image with alt
- H37 - img with alt
- H44 - input type=text with label for
- H67 - img with null alt
- WCAG 3.1.1 - Page with lang set on the html and p elements
- WCAG 3.1.1 - text/html page with mismatching lang and xml:lang on the html element
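As a quick reference, here is a minimal markup sketch of each of these techniques. The element structure follows the technique descriptions above; the file names, ids, URLs and label text are illustrative assumptions, not taken from the test suite.

```html
<!-- ARIA16: text input named via aria-labelledby -->
<span id="search-label">Search</span>
<input type="text" aria-labelledby="search-label">

<!-- H30: link containing an img whose alt describes the destination -->
<a href="/"><img src="logo.png" alt="Home"></a>

<!-- H36: image used as a submit button, with alt text -->
<input type="image" src="go.png" alt="Submit search">

<!-- H37: informative image with alt text -->
<img src="chart.png" alt="Visitor numbers doubled in 2024">

<!-- H44: text input with an explicitly associated label -->
<label for="email">Email address</label>
<input type="text" id="email">

<!-- H67: purely decorative image with null (empty) alt -->
<img src="divider.png" alt="">

<!-- WCAG 3.1.1: lang on the html element, overridden on a p element -->
<html lang="en">
  <body>
    <p lang="fr">Bonjour tout le monde</p>
  </body>
</html>

<!-- WCAG 3.1.1: mismatching lang and xml:lang; in text/html
     the lang attribute takes precedence -->
<html lang="en" xml:lang="fr">
```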
WCAG sufficient techniques - reliable in recent versions
These are WCAG sufficient techniques that are expected to work; they work in the latest versions of screen readers, but not in older versions.
Each of these techniques is not accessibility supported (i.e. causes failures) in at least one of the tested screen reader / browser combinations. On average, each causes failures in 12% of the tested combinations.
WCAG sufficient techniques - poorly supported
These are WCAG sufficient techniques that are expected to work, but don't work in the latest versions of some screen readers. In some combinations, support got worse in the latest version (a regression).
Each of these techniques is not accessibility supported (i.e. causes failures) in at least one of the tested screen reader / browser combinations. On average, each causes failures in 43% of the tested combinations.
WCAG failures
These are WCAG failures, and are expected to fail.
Each of these fails (causes problems) in at least one of the tested screen reader / browser combinations. On average, each causes problems in 74% of the tested combinations.
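As a concrete illustration, the missing-alt example mentioned in the reliability notes above is the simplest of these authoring errors (this corresponds to WCAG failure technique F65; the file name is illustrative):

```html
<!-- Failure: img with no alt attribute, so screen readers typically
     fall back to announcing the file name, or skip the image -->
<img src="team-photo.jpg">
```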
Key
Tests expected to fail contain deliberate authoring errors.
- Works in 100% of tested screen readers
- Fails in 1% - 25% of tested screen readers
- Fails in 26% - 50% of tested screen readers
- Fails in 51% - 75% of tested screen readers
- Fails in 76% - 100% of tested screen readers
- Stable - works, or doesn't cause problems, in all versions of a specific combination of screen reader and browser
- Better - works, or doesn't cause problems, in the most recent version of a specific combination of screen reader and browser (improvement)
- Worse - causes problems in the most recent version of a specific combination of screen reader and browser, but used to work in older versions (regression)
- Broken - causes problems in all versions of a specific combination of screen reader and browser
Test notes
All tests were carried out with screen reader factory settings. JAWS in particular has a wide variety of settings controlling exactly what gets spoken.
Screen readers allow users to interact in different modes, and can produce very different results in each mode. The modes used in these tests are:
- Reading - content read using the “read next” command in a screen reader
- Tabbing - content read using the “tab” key in a screen reader
- Heading - content read using the “next heading” key in a screen reader
- Touch - content read when touching an area of the screen on a mobile device
In the “What the user hears” column:
- Commas represent short pauses in screen reader voicing
- Full stops represent places where voicing stops and the “read next”, “tab” or “next heading” command is pressed again
- An ellipsis (…) represents a long pause in voicing
- (Brackets) represent voicing that requires a keystroke to hear