Labelling forms, images and links: screen reader compatibility

Accessibility
Dec 14, 2025

Screen reader compatibility test results for labelling, showing how correct techniques and common failures behave in different screen reader / browser combinations.

The results include two types of test:

  • Expected to work - these tests show support when accessibility features are used correctly
  • Expected to fail - these tests show what happens when accessibility features are used incorrectly

Reliability by user agent

The solid area in the graph shows the percentage of tests that pass in all tested interaction modes. The cross-hatched area shows partial passes that only work in some interaction modes.

An example of a partial pass is when form labels are read when tabbing, but ignored in browse mode.
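To illustrate, the markup behind such a test is typically just a labelled control; the field name below is invented for the example, only the pattern ("input type=text with label for", tested below) comes from the test list.

    <!-- A conventionally labelled text input. In a partial pass, the screen
         reader announces "First name" when the field receives focus via Tab,
         but skips the label when the page is read linearly in browse mode. -->
    <label for="first-name">First name</label>
    <input type="text" id="first-name" name="first-name">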

For each combination, the versions tested and the number of test results that changed in the most recent version (see the Key below):

  • JAWS Chrome: JAWS 2025.2508.120 with Chrome 143 (1 better)
  • JAWS Edge: JAWS 2025.2508.120 with Edge 143
  • JAWS Firefox: JAWS 2025.2508.120 with FF 140 (10 better)
  • JAWS IE: JAWS 2019.1912.1 with IE11 (11 better)
  • NVDA Chrome: NVDA 2025.3 with Chrome 143 (2 better)
  • NVDA Edge: NVDA 2025.3 with Edge 143 (1 better)
  • NVDA Firefox: NVDA 2025.3 with FF 140 (11 better)
  • NVDA IE: NVDA 2019.2 with IE11 (3 better)
  • VoiceOver Mac: VoiceOver macOS 15.7 with Safari 26.0 (7 better)
  • VoiceOver iOS: VoiceOver iOS 18.6 with Safari iOS 18.6 (2 better)
  • WindowEyes IE: WindowEyes 9.2 with IE11 (11 better)
  • Dolphin IE: Dolphin SR 15.05 with IE11
  • SaToGo IE: SaToGo 3.4.96.0 with IE11
  • Average: including older versions

The average includes all versions, but some browser/AT combinations have tests for multiple versions (NVDA / JAWS / VoiceOver), while others only have tests for a single version (SaToGo and Dolphin).

Reliability trend

Reliability by year: 2014 82%, 2015 85%, 2016 85%, 2017 87%, 2018 90%, 2019 92%, 2020 95%, 2021 95%, 2022 95%, 2023 95%, 2024 96%, 2025 97%.

Works as expected

These tests use conformant HTML or WCAG sufficient techniques, and work in all tested browser / screen reader combinations.

Screen reader / browser combinations: NVDA (Edge, FF, Chrome), JAWS (Edge, FF, Chrome), VoiceOver (Mac, iOS).
100%  Link containing img with alt
100%  Link containing img with title
100%  button with title containing img with null alt
100%  img with alt
100%  img with title
100%  img with null alt
100%  input type=image with alt
100%  input type=text inside label with text before control
100%  input type=text with aria-labelledby attribute
100%  input type=text with label for
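For reference, a minimal sketch of several of the techniques listed above; the file names, ids and text values are illustrative, only the element and attribute patterns come from the test names.

    <!-- img with alt -->
    <img src="logo.png" alt="Acme Ltd logo">

    <!-- Link containing img with alt: the image's alt text names the link -->
    <a href="/"><img src="home.png" alt="Home"></a>

    <!-- input type=text with label for -->
    <label for="email">Email address</label>
    <input type="text" id="email" name="email">

    <!-- input type=text with aria-labelledby attribute -->
    <span id="phone-label">Phone number</span>
    <input type="text" aria-labelledby="phone-label">

    <!-- input type=image with alt -->
    <input type="image" src="search.png" alt="Search">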

Expected to work

These tests use conformant HTML or WCAG sufficient techniques, so they might be expected to work in screen readers. In practice they do not always do so.

Screen reader / browser combinations: NVDA (Edge, FF, Chrome), JAWS (Edge, FF, Chrome), VoiceOver (Mac, iOS).
 93%  Click Here link with aria-describedby attribute
 91%  Click Here link with title attribute
 85%  Interactive iframe with role=presentation and aria-label attribute
 85%  Interactive iframe with role=presentation and title attribute
 86%  Interactive iframe with role=presentation and no accessible name
 92%  Link text replaced by aria-label attribute
 95%  Link text replaced by aria-labelledby attribute
 95%  Yes/No radio buttons inside fieldset element
  4%  abbr with title
 81%  area and img with alt attributes
 98%  area with alt attribute and img with null alt
 78%  area with aria-label attribute
 60%  area with aria-labelledby attribute
 60%  area with title attribute
 96%  button containing img with alt
 96%  button containing img with aria-label
 81%  button containing img with aria-labelledby
 77%  button containing img with title attribute
 96%  button with aria-label containing img with null alt
 83%  fieldset containing links
 92%  iframe with title attribute
 87%  iframe with fallback content
 96%  img with aria-label
 90%  img with aria-labelledby
 67%  img with figcaption
 98%  input type=image with aria-label attribute
 92%  input type=image with aria-labelledby attribute
 99%  input type=image with title attribute
 99%  input type=text inside label with text after control
 95%  input type=text inside label with text before and after control
 94%  input type=text with aria-describedby attribute
 92%  input type=text with aria-label attribute
 93%  input type=text with title attribute
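A sketch of a few of the techniques above, again with illustrative text, ids and file names:

    <!-- Click Here link with aria-describedby attribute -->
    <a href="report.pdf" aria-describedby="report-desc">Click here</a>
    <span id="report-desc">to download the annual report (PDF)</span>

    <!-- button containing img with alt -->
    <button type="button"><img src="print.png" alt="Print"></button>

    <!-- Yes/No radio buttons inside fieldset element -->
    <fieldset>
      <legend>Subscribe to the newsletter?</legend>
      <label><input type="radio" name="subscribe" value="yes"> Yes</label>
      <label><input type="radio" name="subscribe" value="no"> No</label>
    </fieldset>

    <!-- iframe with title attribute -->
    <iframe src="map.html" title="Office location map"></iframe>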

Expected to fail

These tests use non-conformant HTML or WCAG failures and are expected to fail in screen readers.

Screen reader / browser combinations: NVDA (Edge, FF, Chrome), JAWS (Edge, FF, Chrome), VoiceOver (Mac, iOS).
 49%  Image map with no name attribute
 47%  Interactive iframe with role=presentation and negative tabindex and no accessible name
 53%  Interactive iframe with negative tabindex and no accessible name
100%  Link containing img with null alt
100%  Link containing img without alt
  7%  Link with aria-label containing img with no alt
 14%  Link with aria-labelledby containing img with no alt
  4%  Link with title containing img with no alt
 56%  Yes/No radio buttons without fieldset
 96%  acronym with title
100%  area with no alt
100%  area with null alt
100%  button containing img with no alt
100%  button containing img with null alt
  1%  button with aria-label containing img with no alt
  8%  button with title containing img with no alt
 22%  fieldset containing no controls
 34%  fieldset used to put border round text
100%  fieldset with blank legend
 48%  fieldset with no legend
 87%  iframe where src is a PNG image
 59%  iframe with title matching frame filename
 13%  iframe with blank title
 13%  iframe with no fallback content and no title
 67%  img with alt set to ASCII art smiley
100%  img with alt set to src filename
 87%  img with aria-describedby
100%  img with null alt and non-null title attributes
100%  img with null alt and non-null aria-label attributes
100%  img with null alt and non-null aria-labelledby attributes
100%  img with server side image map
100%  img without alt
100%  input type=image with no alt
100%  input type=image with null alt
100%  input type=text inside blank label
100%  input type=text with blank label for
  3%  input with aria-labelledby pointing to role=presentation element
 75%  label elements reference controls with duplicate ids
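For contrast, a sketch of a few of the failure cases above; the ids, names and file names are illustrative, only the patterns come from the test names.

    <!-- img without alt: no accessible name -->
    <img src="chart.png">

    <!-- Link containing img with null alt: the link ends up with no name -->
    <a href="next.html"><img src="next.png" alt=""></a>

    <!-- input type=text with blank label for: the label element is empty -->
    <label for="city"></label>
    <input type="text" id="city" name="city">

    <!-- fieldset with blank legend -->
    <fieldset>
      <legend></legend>
      <label><input type="checkbox" name="terms"> I agree to the terms</label>
    </fieldset>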

Key

  • Stable - works, or doesn't cause problems, in all versions of a specific combination of screen reader and browser
  • Better - works, or doesn't cause problems, in the most recent version of a specific combination of screen reader and browser (improvement)
  • Worse - causes problems in the most recent version of a specific combination of screen reader and browser, but used to work in older versions (regression)
  • Broken - causes problems in all versions of a specific combination of screen reader and browser

Test notes

All tests were carried out with screen reader factory settings. JAWS in particular has a wide variety of settings controlling exactly what gets spoken.

Screen readers allow users to interact in different modes, and can produce very different results in each mode. The modes used in these tests are:

  • Reading: content read using the “read next” command in a screen reader
  • Tabbing: content read using the “tab” key in a screen reader
  • Heading: content read using the “next heading” key in a screen reader
  • Touch: content read when touching an area of the screen on a mobile device

In the “What the user hears” column:

  • Commas represent short pauses in screen reader voicing
  • Full stops represent places where voicing stops and the “read next”, “tab” or “next heading” command is pressed again
  • An ellipsis (…) represents a long pause in voicing
  • (Brackets) represent voicing that requires a keystroke to hear