What Is the Accessibility Tree, Really?


Have you ever wondered how screen readers interact with browsers to provide accessible experiences? You may have heard terms like "accessibility APIs", "accessibility tree" or "accessible name computation". But what do they refer to, really?

In this talk, we will demystify the process by which browsers generate and update the accessibility tree. We will look into its key elements, and how HTML elements and ARIA attributes map into it. Lastly, we will explore how web developers can leverage it for effective debugging. Let's dive into the inner workings of screen reader and browser interactions!

This talk was presented at JSNation 2024; check out the latest edition of this JavaScript conference.

FAQ

The presenter is Mathilde, a front-end developer and accessibility professional.

The accessibility tree is a separate representation of the DOM that focuses on accessibility-related information, and it is used by browsers to pass information to platform accessibility APIs.

With GUIs, screen readers had to handle more complex information, such as text belonging to different windows, menus, buttons, and visual elements like icons, which made it difficult to maintain accuracy.

Elements that are not relevant to assistive technologies, such as those hidden by CSS properties (display:none, visibility:hidden), the hidden HTML attribute, and elements with roles like 'none' or 'presentation', are excluded from the accessibility tree.
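These exclusion rules can be sketched as a small predicate. This is a toy illustration only: the node shape (`{ style, hidden, role }`) and the function are hypothetical, not a real browser API.

```javascript
// Hypothetical sketch: which nodes get pruned from the accessibility tree.
function isExcludedFromA11yTree(node) {
  // Hidden by CSS: display:none and visibility:hidden are not exposed.
  if (node.style && (node.style.display === "none" || node.style.visibility === "hidden")) {
    return true;
  }
  // The `hidden` HTML attribute also removes the element.
  if (node.hidden) {
    return true;
  }
  // role="none" / role="presentation" strips the element's semantics
  // (note: in real browsers, its text content can still be exposed).
  if (node.role === "none" || node.role === "presentation") {
    return true;
  }
  return false;
}

console.log(isExcludedFromA11yTree({ style: { display: "none" } })); // true
console.log(isExcludedFromA11yTree({ role: "presentation" }));       // true
console.log(isExcludedFromA11yTree({ role: "button" }));             // false
```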

In Chrome DevTools, developers can inspect the accessibility tree by opening the DevTools, enabling the 'Full page accessibility tree' option in the accessibility tab, and then switching to the accessibility tree view in the elements tab.

ARIA (Accessible Rich Internet Applications) attributes are used to enhance the accessibility of web content by providing additional information about UI elements to assistive technologies, such as roles, properties, and states.
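As a rough illustration of the roles/properties/states distinction, here is a hypothetical mapping function; `mapAriaToA11yNode` and its object shapes are assumptions for this sketch, not any browser API.

```javascript
// Illustrative only: how ARIA attributes on an element roughly map to the
// role, properties, and states exposed to assistive technologies.
function mapAriaToA11yNode(attrs) {
  return {
    role: attrs.role ?? "generic",               // role: what the element is
    name: attrs["aria-label"] ?? null,           // property: its accessible name
    expanded: attrs["aria-expanded"] === "true", // state: its current condition
  };
}

const node = mapAriaToA11yNode({
  role: "button",
  "aria-label": "Open menu",
  "aria-expanded": "false",
});
console.log(node); // { role: "button", name: "Open menu", expanded: false }
```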

The topic of the presentation is 'What's the Accessibility Tree?'.

In text-based operating systems like MS-DOS, screen readers accessed the characters on the screen and converted them into speech.

Accessibility APIs were introduced in the late 1990s to allow programs and applications to describe UI elements as objects with names and properties, helping assistive technologies access these objects.

A screen reader keypad is a device developed by IBM in 1988 that was used with screen reader software to help visually impaired or blind people access computers.

Mathilde Buenerd
19 min
13 Jun, 2024

Video Summary and Transcription
This is a presentation on accessibility and screen readers. The speaker discusses the evolution of screen readers and how they adapted to graphical user interfaces. Accessibility APIs and the accessibility tree are introduced, allowing programs to construct a text database used by assistive technologies. The accessibility tree may vary across browsers and platforms, excluding elements that are not relevant to assistive technologies. The ARIA hidden state and element properties play a role in determining the accessibility of elements, and the accessible name can be derived from text content or specified using ARIA attributes.
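The accessible-name precedence the summary mentions can be sketched in a drastically simplified form. The real algorithm is the W3C accessible name computation, which has many more steps; the node shapes below are assumptions for illustration.

```javascript
// Drastically simplified accessible name computation: aria-labelledby wins,
// then aria-label, then the element's text content.
function accessibleName(node, byId = {}) {
  const labelledby = node["aria-labelledby"];
  if (labelledby && byId[labelledby]) {
    // Use the name of the referenced element.
    return accessibleName(byId[labelledby], byId);
  }
  if (node["aria-label"]) {
    return node["aria-label"];
  }
  return node.textContent ?? "";
}

const heading = { id: "billing", textContent: "Billing address" };
const input = { "aria-labelledby": "billing", textContent: "" };
console.log(accessibleName(input, { billing: heading })); // "Billing address"
```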

1. Introduction to Accessibility and Screen Readers

Short description:

This is a presentation discussing the topic of What's the Accessibility Tree? The speaker, Mathilde, shares some background on the importance of accessibility and the evolution of screen readers from text-based operating systems to graphical user interfaces. She explains how screen readers adapted to the complexity of graphical user interfaces by constructing a text database called an off-screen model.

Nice to see you all. I know it's 5 o'clock, I know it's late, we had a big day today, so I'm happy to see this room is full and that you all look awake. So thanks for being there.

So let's start. I made a mistake on this first slide. I hope it's not a bad sign for the rest of the presentation. The name of this presentation is What's the Accessibility Tree? Hi, my name is Mathilde. I'm a front-end developer and an accessibility professional. I left my job at Shopify a couple of weeks ago, so I don't work there anymore. You can hear from my accent that I'm originally from France, but I live in Madrid, in Spain.

So the topic of today is What's the Accessibility Tree? We'll cover that, but before, I'd like to go a bit back in time and give some context on why the accessibility tree is an interesting topic. So on this slide, we can see a picture of an 18-key square keypad with nine digit keys, the A, B, C, and D letters, and Help and Stop keys. And out of curiosity, can you raise your hand if you know what this device is? I see no hands. Actually, I'm not surprised. If I saw any hand up, I would have been pretty amazed. But this device is from 1988. It's a screen reader keypad that was developed by IBM, and this keypad was coupled with screen reader software as part of one of the first screen reading systems developed to allow people who are visually impaired or blind to access computers.

But did you ever wonder how screen readers worked back then? In a very simple way, in a text-based operating system like MS-DOS, it was kind of easy for screen readers to access the characters that were presented on the screen, and all they had to do was convert this text into speech. But as we know, computers evolved fast, and already by the end of the 80s, text-based operating systems were replaced by graphical user interfaces. And it became much more complicated for screen readers, because a graphical user interface doesn't just render characters. It renders pixels, and the information presented on the screen is just much more complicated than it was. For example, text can belong to different windows, but the screen reader should only read the text in the currently selected window. So you have different types of text. You have menus. You have items. You have buttons. And on top of that, you have elements that are purely visual, like icons. So how did screen readers adapt to that? Well, this is a picture of an article called Making the GUI Talk (making the Graphical User Interface talk), written in 1991. It explains a new approach they had been developing at IBM to make screen readers work with graphical user interfaces. The idea was to construct some kind of text database that models what's displayed on the screen. This database was called an off-screen model, and in this article, the author explains that with the off-screen model, assistive technologies had to make assumptions about the role of things based on how they are drawn on the screen. For example, if a text has a border or background, then it's probably selected.

2. Accessibility APIs and the Accessibility Tree

Short description:

Accessibility APIs were introduced in the late 90s, allowing programs and applications to construct the text database used by assistive technologies. Accessibility APIs provide a tree representation of the interface, with objects that describe UI elements, their properties, roles, states, and events. The accessibility tree is a separate representation of the DOM that focuses on accessibility-related information. Browsers generate the accessibility tree using the render tree, which is based on the DOM minus hidden elements. DevTools can be used to view the accessibility tree.

Or if there's a blinking insertion bar around it, the user can probably enter text. I'm pretty sure you can imagine how complex these systems were and how difficult this was to maintain, and there was a sort of ambiguity. On top of that, every time a new version of the user interface came out, screen readers had to ensure the off-screen model was still accurate. Drinking time, sorry. So what's a better solution? Well, already in the late 90s, accessibility APIs were introduced, and the role of accessibility APIs was to allow programs and applications to construct the text database that assistive technologies were previously building themselves. Concretely, they allow operating systems to describe UI elements as objects with names and properties, so that assistive technologies can access those objects. And I put accessibility APIs in the plural form because there are many; they are platform-specific. You have some for Mac, you have some for Windows, you have some for Android, and they're pretty much standards.

So you might wonder what an accessibility API looks like. In practice, it's a tree representation of the interface. For example, on the Mac, the root would be an application that contains two children: a menu bar, the thing that's at the top, and a window that contains the application content itself. Then the menu bar would also contain, as children, all the menu items, and the window would contain a left bar, a close button, a search input, et cetera. For each UI element, developers can typically set roles (is it a window? is it a menu?). They can set properties: what's the position of this window? What's its size? They can set states: is this menu item selected or not? And other things that are useful for developers, like events. Okay, so now we know all about screen readers and accessibility APIs, but what about the accessibility tree then? Well, the accessibility tree is a separate representation of the DOM that focuses on accessibility-related information. It is built by the browser, which then passes this information to the platform accessibility API. So in summary, the accessibility tree makes the link between your HTML, the platform accessibility API, and assistive technology.
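The platform tree described here could be modeled, very loosely, as nested objects. The names and shape below are illustrative only, not any platform's actual accessibility API.

```javascript
// A toy model of the tree an accessibility API exposes.
const application = {
  role: "application",
  children: [
    {
      role: "menubar", // the thing at the top
      children: [
        { role: "menuitem", properties: { name: "File" }, states: { selected: false } },
      ],
    },
    {
      role: "window",
      properties: { position: { x: 0, y: 0 }, size: { width: 800, height: 600 } },
      children: [{ role: "button", properties: { name: "Close" } }],
    },
  ],
};

// An assistive technology can walk this tree instead of guessing from pixels.
const topLevelRoles = application.children.map((child) => child.role);
console.log(topLevelRoles); // ["menubar", "window"]
```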

So now let's look at how browsers generate this tree. If you think of a typical page that's meant to be interacted with via a screen, a mouse, and a keyboard, a typical critical rendering path would look like this. The browser creates the DOM and the CSS object model, then it creates the render tree, which is basically the DOM minus some elements that are hidden by CSS. When the render tree is ready, the browser can determine the exact size and position of the elements, does the layout step, and then it paints the nodes on the screen one by one. After that, the user can interact with the page with their mouse, keyboard, et cetera. Now, looking at the experience from the point of view of a user who uses assistive technology, it's different. The layout and the paint steps aren't relevant here, because they're not something that assistive technologies will leverage. However, the browser will use the render tree, the tree without the hidden elements, to construct the accessibility tree. Then the accessibility tree will pass the information to the platform accessibility API, which can be queried by assistive technologies like screen readers. Okay, so let's look at what the tree looks like. The best way to get a grasp of it is to look at it using DevTools. I'm going to use Chrome DevTools in this presentation.
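The pipeline described in this section (DOM, minus hidden elements, into an accessibility tree) can be sketched like this. The node shapes and the function are assumptions for illustration, not real browser internals.

```javascript
// Sketch: DOM minus hidden elements gives the render tree, which is then
// turned into the accessibility tree passed to the platform API.
function buildA11yTree(domNode) {
  // Not rendered, so not in the accessibility tree.
  if (domNode.hidden || (domNode.style && domNode.style.display === "none")) {
    return null;
  }
  const children = (domNode.children ?? [])
    .map(buildA11yTree)
    .filter((child) => child !== null);
  return { role: domNode.role ?? "generic", name: domNode.name ?? "", children };
}

const dom = {
  role: "main",
  children: [
    { role: "button", name: "Save" },
    { role: "button", name: "Debug only", hidden: true },
  ],
};
const a11yTree = buildA11yTree(dom);
console.log(a11yTree.children.map((child) => child.name)); // ["Save"]
```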

Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career.

Modern Web Debugging
JSNation 2023
29 min
Top Content
This Talk discusses modern web debugging and the latest updates in Chrome DevTools. It highlights new features that help pinpoint issues quicker, improved file visibility and source mapping, and ignoring and configuring files. The Breakpoints panel in DevTools has been redesigned for easier access and management. The Talk also covers the challenges of debugging with source maps and the efforts to standardize the source map format. Lastly, it provides tips for improving productivity with DevTools and emphasizes the importance of reporting bugs and using source maps for debugging production code.
Debugging JS
React Summit 2023
24 min
Top Content
Debugging JavaScript is a crucial skill that is often overlooked in the industry. It is important to understand the problem, reproduce the issue, and identify the root cause. Having a variety of debugging tools and techniques, such as console methods and graphical debuggers, is beneficial. Replay is a time-traveling debugger for JavaScript that allows users to record and inspect bugs. It works with Redux, plain React, and even minified code with the help of source maps.
Accessibility at Discord
React Advanced 2021
22 min
This Talk discusses the accessibility efforts at Discord, focusing on keyboard navigation and the challenges faced with implementing focus rings and outlines. The speaker showcases a unified focus ring system and a saturation slider to address accessibility concerns. They also highlight the implementation of role colors and the use of CSS filters for accessibility improvements. The Talk concludes with insights on runtime accessibility checking and the development of a performant core runtime system for checking accessibility issues.
From Friction to Flow: Debugging With Chrome DevTools
JSNation 2024
32 min
The Talk discusses the importance of removing frictions in the debugging process and being aware of the tools available in Chrome DevTools. It highlights the use of the 'Emulate a Focus Page' feature for debugging disappearing elements and the improvement of debugging tools and workflow. The Talk also mentions enhancing error understanding, improving debugging efficiency and performance, and the continuous improvement of DevTools. It emphasizes the importance of staying updated with new features and providing feedback to request new features.
Configuring Axe Accessibility Tests
TestJS Summit 2021
30 min
Top Content
AXe is an accessibility engine for automated web UI testing that runs a set of rules to test for accessibility problems. It can be configured to disable or enable specific rules and run based on tags. Axe provides various options, but axe linter does not support all options. The importance of investing time and resources in accessibility is emphasized, as it benefits not only those with disabilities but improves the web for everyone. Manual testing is also highlighted as a necessary complement to automated tests for addressing accessibility issues.
Debugging with Chrome DevTools
JSNation Live 2021
11 min
Top Content
Here are some tips for better utilizing DevTools, including using the run command, customizing keyboard shortcuts, and emulating the focus effect. Learn how to inspect memory, use the network panel for more control over network requests, and take advantage of console utilities. Save frequently used code as snippets and use local overrides for easy editing. Optimize images by using a more optimized format like AVIF and track changes in the network panel to see the reduced data size.

Workshops on related topic

React Performance Debugging Masterclass
React Summit 2023
170 min
Top Content
Featured Workshop, Free
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
Tracing: Frontend Issues With Backend Solutions
React Summit US 2024
112 min
Featured Workshop, Free
Lazar Nikolov
Sarah Guthals
Frontend issues that affect your users are often triggered by backend problems. In this workshop, you’ll learn how to identify issues causing slow web pages and poor Core Web Vitals using tracing.
Then, try it for yourself by setting up Sentry in a ready-made Next.js project to discover performance issues including slow database queries in an interactive pair-programming session.
You’ll leave the workshop being able to:
- Find backend issues that might be slowing down your frontend apps
- Set up tracing with Sentry in a Next.js project
- Debug and fix poor performance issues using tracing
This will be a live 2-hour event where you’ll have the opportunity to code along with us and ask us questions.
React Performance Debugging
React Advanced 2023
148 min
Workshop
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
Web Accessibility for Ninjas: A Practical Approach for Creating Accessible Web Applications
React Summit 2023
109 min
Workshop
Asaf Shochet Avida
Eitan Noy
In this hands-on workshop, we’ll equip you with the tools and techniques you need to create accessible web applications. We’ll explore the principles of inclusive design and learn how to test our websites using assistive technology to ensure that they work for everyone.
We’ll cover topics such as semantic markup, ARIA roles, accessible forms, and navigation, and then dive into coding exercises where you’ll get to apply what you’ve learned. We’ll use automated testing tools to validate our work and ensure that we meet accessibility standards.
By the end of this workshop, you’ll be equipped with the knowledge and skills to create accessible websites that work for everyone, and you’ll have hands-on experience using the latest techniques and tools for inclusive design and testing. Join us for this awesome coding workshop and become a ninja in web accessibility and inclusive design!
Automated accessibility testing with jest-axe and Lighthouse CI
TestJS Summit 2021
85 min
Workshop
Bonnie Schulkin
Do your automated tests include a11y checks? This workshop will cover how to get started with jest-axe to detect code-based accessibility violations, and Lighthouse CI to validate the accessibility of fully rendered pages. No amount of automated tests can replace manual accessibility testing, but these checks will make sure that your manual testers aren't doing more work than they need to.
The Clinic.js Workshop
JSNation 2022
71 min
Workshop
Rafael Gonzaga
Learn the ways of the clinic suite of tools, which help you detect performance issues in your Node.js applications. This workshop walks you through a number of examples, and the knowledge required to do benchmarking and debug I/O and Event Loop issues.