Rich Screen Reader Experiences for Accessible Data Visualization

Computer Graphics Forum (Proc. EuroVis), 2022

Best Paper Honorable Mention


Abstract

Current web accessibility guidelines ask visualization designers to support screen readers via basic non-visual alternatives like textual descriptions and access to raw data tables. But charts do more than summarize data or reproduce tables; they afford interactive data exploration at varying levels of granularity — from fine-grained datum-by-datum reading to skimming and surfacing high-level trends. In response to the lack of comparable non-visual affordances, we present a set of rich screen reader experiences for accessible data visualization and exploration. Through an iterative co-design process, we identify three key design dimensions for expressive screen reader accessibility: structure, or how chart entities should be organized for a screen reader to traverse; navigation, or the structural, spatial, and targeted operations a user might perform to step through the structure; and, description, or the semantic content, composition, and verbosity of the screen reader’s narration. We operationalize these dimensions to prototype screen-reader-accessible visualizations that cover a diverse range of chart types and combinations of our design dimensions. We evaluate a subset of these prototypes in a mixed-methods study with 13 blind and visually impaired readers. Our findings demonstrate that these designs help users conceptualize data spatially, selectively attend to data of interest at different levels of granularity, and experience control and agency over their data analysis process. An accessible HTML version of this paper is available at: http://vis.csail.mit.edu/pubs/rich-screen-reader-vis-experiences.

This paper's prototypes have been implemented in Olli — an open source JavaScript library.

1 Introduction

Despite decades of visualization research and recent legal requirements to make web-based content accessible [59, 30], web-based visualizations remain largely inaccessible to people with visual disabilities. Charts in mainstream publications are often completely invisible to screen readers (an assistive technology that transforms text and visual media into speech) or are rendered as incomprehensible strings of “graphic graphic graphic” [54, 49]. Current accessibility guidelines ask visualization designers to provide textual descriptions of their graphics via alt text (short for alternative text) and link to underlying data tables [26, 60]. However, these recommendations do not provide modes of information-seeking comparable to what sighted readers enjoy with interactive visualizations. For instance, well-written alt text can provide a high-level takeaway of what the visualization shows, but it does not allow readers to drill down into the data to explore specific sections. While tables provide readers with the ability to home in on specific data points, reading data line-by-line quickly becomes tedious and makes it difficult to identify overall trends.

Developing rich non-visual screen reader experiences for data visualizations poses several unique challenges. Although visuomotor interactions (like hovering, pointing, clicking, and dragging) have been core to visualization research [22], screen readers redefine what interaction is for visualization. Rather than primarily manipulating aspects of the visualization or its backing data pipeline [65, 29, 22], screen readers make reading a visualization an interactive operation as well — users must intentionally perform actions with their input devices in order to cognize visualized elements. Moreover, as screen readers narrate elements one-at-a-time, they explicitly linearize reading a visualization. As a result, in contrast to sighted readers who can choose to selectively attend to specific elements and have access to the entire visualization during the reading process, screen reader users are limited to the linear steps made available by the visualization author and must remember (or note down) prior output conveyed by the screen reader. Despite these modality differences, studies have found that screen reader users share the same information-seeking goals as sighted readers: an initial holistic overview followed by comparing data points [54], akin to the information-seeking mantra of “overview first, zoom and filter, and details on demand” [57].

In this paper, we begin to bridge this divide by conducting an iterative co-design process (co-author Hajas is a blind researcher with relevant experience) prototyping rich and usable screen reader experiences for web-based visualizations. We identify three design dimensions for enabling an expressive space of experiences: structure, or how the different elements of a chart should be organized for a screen reader to traverse; navigation, which describes the operations a user may perform to move through this structure; and, description, which specifies the semantic content, composition, and verbosity of text conveyed at each step. We demonstrate how to operationalize these design dimensions through diverse accessible reading experiences across a variety of chart types.

To evaluate our contribution, we conduct an exploratory mixed-methods study with a subset of our prototypes and 13 blind or visually impaired screen reader users. We identify specific features that make visualizations more useful for screen reader users (e.g., hierarchical and segmented approaches to presenting data, cursors and roadmaps for spatial navigation) and identify behavior patterns that screen reader users follow as they read a visualization (e.g., constant hypothesis testing and validating their mental models).

2 Background and Related Work

Screen Reader Assistive Technology. A screen reader is an assistive technology that conveys digital text or images as synthesized speech or braille output. Screen readers are available as standalone third-party software or can be built-in features of desktop and mobile operating systems. A screen reader allows a user to navigate content linearly with input methods native to a given platform (e.g., touch on smartphones, mouse/keyboard input on desktop). Content authors must generate and attach alt text to their visual content like images or charts in order for them to be accessible to screen reader users. Functionality and user experience differ across platforms and screen readers. In this paper, however, we focus on interacting with web-based visualizations using the most widely used desktop screen readers (JAWS/NVDA for Windows, VoiceOver for Mac).

Web Accessibility Standards. In 2014, the World Wide Web Consortium (W3C) adopted the Web Accessibility Initiative’s Accessible Rich Internet Applications protocol (WAI-ARIA) which introduced a range of semantically-meaningful HTML attributes to allow screen readers to better parse HTML elements [44]. In particular, these attributes allow a screen reader to convey the state of dynamic widgets (e.g., autocomplete is available for text entry), alert users to live content updates, and identify common sections of a web page for rapid navigation (e.g., banners or the main content). In 2018, the W3C published the WAI-ARIA Graphics Module [58] with additional attributes to support marking up structured graphics such as charts, maps, and diagrams. These attributes allow designers to annotate individual and groups of graphical elements as well as surface data values and labels for a screen reader to read aloud.
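To make this concrete, the sketch below (our illustration, not a pattern prescribed by the standard) shows how Graphics Module roles and standard ARIA attributes might be attached to an SVG scatterplot's marks. The role names come from the published Graphics Module; the data shape, mark selection, and label wording are illustrative assumptions.

```typescript
// Sketch: applying WAI-ARIA Graphics Module roles and labels to an SVG chart.
interface Datum { x: number; y: number; category: string; }

function annotateScatterplot(svg: SVGSVGElement, data: Datum[]): void {
  svg.setAttribute("role", "graphics-document");
  svg.setAttribute("aria-label", "Scatterplot of y versus x, colored by category");

  // Assumes one <circle> mark per datum, in data order.
  svg.querySelectorAll("circle").forEach((mark, i) => {
    const d = data[i];
    mark.setAttribute("role", "graphics-symbol");
    mark.setAttribute("aria-roledescription", "data point");
    mark.setAttribute("aria-label", `x ${d.x}, y ${d.y}, category ${d.category}`);
  });
}
```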

Accessible Visualization Design. In a recent survey, Kim et al. [38] describe the rich body of work that has explored multi-sensory approaches to visualization for multiple disabilities [66, 28, 36, 6, 64, 41]. Here, we focus on screen reader output native to web-based interfaces for blind users (namely via speech). Sharif et al. [54] find that many web-based charts are intentionally designed to cause screen readers to skip over them. For charts that a screen reader does detect, blind or visually impaired users nevertheless experience significant difficulties: these users spend 211% more time interacting with the charts and are 61% less accurate in extracting information compared to non-screen-reader users [54]. Despite the availability of ARIA, alt text and data tables remain the most commonly used and recommended methods for making web-based charts accessible to screen readers [26, 60, 16]. However, each of these three approaches comes with its own limitations. Static alt text requires blind readers to accept the author’s interpretation of the data; by not affording exploratory and interactive modes, alt text robs readers of the necessary time and space to interpret the numbers for themselves [42]. Recent research also suggests that blind people have nuanced preferences for the kinds of visual semantic content conveyed via text [48, 42], and desire more interactive and exploratory representations of pictorial images [45]. Data tables, on the other hand, undo the benefits of abstraction that visualizations enable — they force readers to step sequentially through data values, making it difficult to identify larger-scale patterns or trends, and do not leverage the structure inherent to web-based grammars of graphics [10, 50]. Finally, ARIA labels are not a panacea. Applying them judiciously is a non-trivial task, and careless designs can cause screen readers to simply read out long sequences of numbers without any other identifiable information [49]. Even when used well, ARIA presents a fairly low expressive ceiling: the current specification does not afford rich and nuanced information-seeking opportunities equivalent to those available to sighted readers.

There has been some promising progress for improving support for accessibility within visualization toolkits, and vice-versa for improving native support for charts in screen reader technologies. For instance, Vega-Lite [50] and Highcharts [31] are beginning to provide ARIA support out-of-the-box. Apple’s VoiceOver Data Comprehension feature [20] affords more granular screen reader navigation within the chart, beyond textual summaries and data tables, via four categories of selectable interactions for charts appearing in Apple’s Stocks or Health apps. These interactions include Describe Chart, which describes properties of the chart’s construction, such as its encodings, axis labels, and ranges; Summarize Numerical Data, which reports min and max data values, and summary statistics like mean and standard deviation; Describe Data Series, which reports the rate-of-change/growth of a curve, trends, and outliers; and Play Audiograph, which plays a tonal representation of the graph’s ascending/descending trend over time [20]. While Apple’s features are presently limited to single-line charts, SAS’ Graphics Accelerator [1] supports a similar featureset (including sonification, textual descriptions, and data tables) but for a broader range of statistical charts including bar charts, box plots, contour plots, and scatter plot matrices. Our work follows in the spirit of these tools but focuses on web-based visualizations rather than standalone- or platform-integrated software. We go beyond what ARIA supports today to enable high-level and fine-grained screen reader interactions, and hope that our work will help inform ongoing discussions on improving web accessibility standards (e.g., via an Accessibility Object Model [11]).

3 Design Dimensions for Rich Screen Reader Experiences

Currently, the most common ways of making a visualization accessible to screen readers include adding a single high-level textual description (via alt text), providing access to low-level data via a table, or tagging visualization elements with ARIA labels to allow screen readers to step through them linearly (e.g., as with Highcharts [31]). While promising, these approaches do not afford rich information-seeking behaviors akin to what sighted readers enjoy with interactive visualizations. To support systematic thinking about accessible visualization design, we introduce three design dimensions that support rich, accessible reading experiences: structure, or how elements of the visualization should be organized for a screen reader to traverse; navigation, or the mechanisms by which a screen reader user can move from one element to another; and description, or what semantic content the screen reader conveys.

Methods. We began by studying the development of multi-sensory graphical systems, covering work in critical cartography [63, 39], blind education [2, 25], tactile graphics [24, 28, 21, 3, 14], and multi-sensory visualization [43, 17, 13, 5]. Drawing on conventions and literature on crip, reflective, and participatory design [27, 53, 19], all authors began an iterative co-design process with Hajas, who is a blind researcher with relevant expertise. Hajas is a screen reader user with a PhD in HCI and accessible science communication, but he is not an expert in visualization research. Co-design — particularly as encapsulated in the disability activism slogan, “Nothing about us, without us” [19] — is important because it can eliminate prototypes that replicate existing tools, solve imaginary problems (i.e., by creating disability dongles [34]) or unintentionally produce harmful technology [56]. To balance engaging disabled users while acknowledging academia’s traditionally extractive relationship with marginalized populations [18], we intentionally acknowledge Hajas as both co-designer and co-author. We believe that the distinction between co-designer — a phrase that often discounts lived experience as insufficiently academic — and researcher is minimal; technical, qualitative, and experiential expertise are all important components of this research. Hajas’ profile is a perfect example of the intersection between lived experience of existing challenges and solutions, academic experience of research procedures, and an interest in the science of visualization. While he does not represent all screen reader users, his academic expertise and lived experience uniquely qualify him to be both researcher and co-designer. Nevertheless, to incorporate a diverse range of perspectives, we recruited additional participants as part of an evaluative study (§ 5).

Our work unfolded over 6 months and yielded 15 prototypes. All authors met weekly for hour-long video conferences. In each session, we would discuss the structure and affordances of the prototypes, often by observing and recording Hajas’ screen as he worked through them. We would also use these meetings to reflect on how the prototypes had evolved, compare their similarities and differences, and whiteboard potential design dimensions to capture these insights. Following these meetings, Hajas wrote memos detailing the motivations for each prototype, tagging its most salient features, summarizing the types of interactions that were available, enumerating questions that the prototype raised, and finally providing high-level feedback about its usefulness and usability. In the following section, we liberally quote these memos to provide evidence and additional context for our design dimensions.

3.1 Structure

We define structure to mean an underlying representation of a visualization that organizes its data and visual elements into a format that can be traversed by a screen reader. Through our co-design process, we identified two components important to analyzing accessible structures: their form, or the shape they organize information into; and entities, or which parts of the visualization specification are used to translate a chart into a non-visual structure. Design decisions about form and entities are guided by considerations of information granularity, or how many levels comprise the range between a high-level overview and individual data values.

A graphic with two parts. Part A illustrates an accessible visualization structure for an example scatterplot, and its corresponding data and encoding entities: Chart Root, Encodings, Intervals/Categories, and Data points. Part B illustrates three different ways of navigating a visualization structure: Structural, Spatial, and Targeted navigation.
Figure 1: (a) An accessible visualization structure in the form of a tree and comprised of encoding entities. Solid magenta outlines indicate the location of the screen reader cursor. Solid blue arrows between labels indicate available next steps via keyboard navigability (up, down, left, right). (b) Three ways of navigating accessible visualization structures: structural, spatial, and targeted.

Form. Accessible structures organize information about the visualization into different forms, including lists, tables, and trees. Consider existing best practices and common approaches. A rasterized chart with alt text is represented to a screen reader as a single node. SVG-based visualizations can additionally be tagged with ARIA labels to describe the axes, legends, and individual data points. Despite SVG’s nesting, screen readers linearize these ARIA labels into a list structure so that the user can step through them sequentially. Data tables, on the other hand, provide a grid structure for screen readers to traverse. At each cell of the grid, the screen reader reads out a different textual description, allowing the user to explore a space by traversing the grid spatially (up, down, left, and right) instead of merely linearly. Accessible visualization research has begun to explore the use of tree structures for storing chart metadata [62], but they remain relatively rare in practice. Our prototypes primarily use trees as their branching and hierarchical organization allows users to browse different components of a visualization and traverse them at different levels of detail.
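As a rough sketch of how these forms differ as data structures (our own framing for exposition, not an implementation from this paper), the three shapes vary only in how narrated nodes reference one another:

```typescript
// Sketch: list, grid, and tree forms for an accessible structure.
interface AccessibleNode {
  description: string;          // text the screen reader narrates at this node
}

// List: a flat sequence the user steps through back and forth (e.g., linearized ARIA labels).
type ListStructure = AccessibleNode[];

// Table: a grid the user traverses up/down/left/right by row and column.
interface GridStructure {
  cells: AccessibleNode[][];    // indexed as cells[row][column]
}

// Tree: a hierarchy the user descends for finer granularity and ascends for overview.
interface TreeForm extends AccessibleNode {
  children: TreeForm[];
}
```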

Entities. Where form refers to how nodes in a structure are arranged, entities instead refers to what aspects of the visualization the nodes represent. These aspects can include:

  • Data, where nodes in the structure represent individual data values or different slices of the data cube (e.g., by field, bins, categories, or interval ranges). For example, in a data table, every node (i.e. cell) represents a data value designated by the row and column coordinates. Depending on the form, data entities can be presented at different levels of detail. For example, one prototype we explored represents a line chart as a binary tree structure (Fig. 2e): the root node represents the entire x-axis domain, and each left and right child node recursively splits the domain in half. Users can traverse the tree downward to binary search for specific values or understand the data distribution.

  • Encodings, where nodes in the structure correspond to visual channels (e.g., position, color, size) that data fields map to. For instance, consider Figure 1a which depicts the encoding structure of a Vega-Lite scatterplot. The visualization is specified as mappings from data fields to three visual encoding channels: x, y, and color. Thus, the encoding structure, which here takes the form of a tree, comprises a root node that represents the entire visualization and then branches for each encoding channel as well as the data rectangle (x-y grid). Descending into these branches yields nodes that select different categories or interval regions, determined by the visual affordances of the channel. For instance, descending into axis branches yields nodes for each interval between major ticks; x-y grid nodes represent cells in the data rectangle as determined by intersections of the axes gridlines; and legend nodes reflect the categories or intervals of the encoding channel (i.e., for nominal or quantitative data respectively). Finally, the leaves of these branches represent individual data values that fall within the selected interval or category. (A minimal sketch of how such a tree might be derived appears after this list.)

  • Annotations, where nodes in the structure represent the rhetorical devices a visualization author may use to shape a visual narrative or guide reader interpretation of data (e.g., by drawing attention to specific data points or visual regions). Surfacing annotations in the visualization structure allows screen reader users to also benefit from and be guided by the author’s narrative intent. For example, Figure 2d illustrates an annotation tree structure derived from an example line chart with two annotations highlighting intervals in the temporal x-axis. The root of the tree has two children representing the two annotated regions. These two annotation nodes each have a child node for each data point that is highlighted within the region of interest.
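The sketch below illustrates one way an encoding tree like Figure 1a could be derived programmatically. It is a minimal example under simplifying assumptions: quantitative channels only, intervals taken directly from major axis ticks, and illustrative field names and tick values (the exact derivation in our prototypes differs).

```typescript
// Sketch: deriving an encoding tree (root -> channel branches -> intervals -> data points).
interface TreeNode { description: string; children: TreeNode[]; }

// One branch per encoding channel; one child per interval between adjacent major ticks.
function axisBranch(channel: string, field: string, ticks: number[],
                    data: Record<string, number>[]): TreeNode {
  const intervals: TreeNode[] = [];
  for (let i = 0; i < ticks.length - 1; i++) {
    const [lo, hi] = [ticks[i], ticks[i + 1]];
    const members = data.filter(d => d[field] >= lo && d[field] < hi);
    intervals.push({
      description: `${field} from ${lo} to ${hi}, ${members.length} points`,
      children: members.map(d => ({ description: `${field} equals ${d[field]}`, children: [] })),
    });
  }
  return { description: `${channel} axis, field ${field}`, children: intervals };
}

// The root node signals the chart's existence; deeper levels add granularity.
function encodingTree(data: Record<string, number>[]): TreeNode {
  return {
    description: `Scatterplot with ${data.length} data points`,
    children: [
      axisBranch("x", "flipper_length", [170, 190, 210, 230], data),
      axisBranch("y", "body_mass", [2000, 3500, 5000, 6500], data),
    ],
  };
}
```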

Considerations: Information Granularity. When might users prefer nested structures (i.e. trees) over flat structures (i.e., lists and tables)? Like sighted users, screen reader users seek information by looking for an overview before identifying subsets to view in more detail [54]. Trees allow users to read summary information at the top of the structure, and traverse deeper into branches to acquire details-on-demand. Kim et al. use the term information granularity to refer to the different levels of detail at which an accessible visualization might reveal information [38]. They organize granularity into three levels: existence, overview, and detail. Existence includes information that a chart is present, but no information about underlying data. Overview includes summary information about data — e.g. axes, legends, and summary statistics like min, max, or mean — but not individual data points. Detail includes information about precise data values.

We use the root node to signal the existence of the tree, and deeper nodes in the tree reflect finer levels of granularity. Branch nodes give an overview summary about the data underneath, providing information scent [47], while leaf nodes map to individual data points. In his feedback about the prototype shown in Figure 1, Hajas wrote “considering how difficult reading a scatterplot with a screen reader is due to its sequential reading nature, the tree structure makes the huge number of data points fairly readable”.

Entities are not mutually exclusive, and a structure might opt to surface different entities in parallel branches. We prototyped a version of Figure 2d which placed an encoding tree and annotation tree as sibling branches under the root node. Users could descend down a given branch, and switch to the equivalent location in the other branch at will. These design decisions are motivated by findings in prior work: by placing encodings and annotations as co-equal branches, we produce a structure that preserves the agency of screen reader users either to start with the narrative arc of annotations, or follow it after having the chance to interpret the data for themselves [42]. As Hajas confirms: “Depending on my task, either the encoding or annotation tree could be more important. If my task involved checking population growth in the last 100 years, I would start with the encodings. If I were to look for sudden changes in population numbers, such as war-time mortality effects, I would start exploring the annotations, then tunnel back to the other tree.”

3.2 Navigation

Screen reader users need ways to traverse accessible structures to explore data or locate specific points. When browsing a webpage, screen readers provide a cursor that represents the current location in the page. Users use keyboard commands to step the cursor backward and forward in a sequential list of selectable items on the page, or jump to important locations such as headers and links. Through our prototyping process, we developed three ways of navigating through an accessible structure: structural navigation, spatial navigation, and targeted navigation (Fig. 1b). A key concern across these navigation schemes is reducing a user’s cognitive load by affording a sense of the boundaries of the structure.

Structural Navigation. Structural navigation refers to ways users move within the accessible structure. We identify two types of structural navigation. Local navigation refers to step-by-step movements between adjacent nodes in the structure. This includes moving up and down levels of a hierarchy, or moving side to side between sibling elements. Lateral navigation refers to movement between equivalent nodes in adjacent sub-structures. For example, Fig. 2a depicts a multi-view visualization with six facets. When the cursor is on a Y-axis interval for the first facet, directly moving to the same Y-axis interval on the second facet is a lateral move.

Spatial Navigation. Sometimes users want to traverse the visualization according to directions in the screen coordinate system. We refer to this as spatial navigation. For example, when traversing part of an encoding structure that represents the visualization’s X-Y grid, a downward structural navigation would go down a level into the currently selected cell of the grid, showing the data points inside the cell. A downward spatial navigation, in contrast, would move to the grid cell below the current one — i.e. towards the bottom of the Y-axis. Spatial navigation is also useful when navigating lists of data points, which may not be sorted by X or Y value in the encoding structure. Where a leftward structural navigation would move to the previous data point in the structure, a leftward spatial navigation would move to the point with the next lowest X value.

Targeted Navigation. Navigating structurally and spatially requires a user to maintain a mental map of where their cursor is relative to where they want to go. If the user has a specific target location in mind, maintaining this mental map in order to find the correct path in the structure to their target can create unnecessary cognitive load. We use targeted navigation to refer to methods that only require the user to specify a target location, without needing to specify a path to get there. For example, the user might open a list of locations in the structure and select one to jump directly there. Screen readers including JAWS and VoiceOver implement an analogous form of navigation within webpages. Instead of manually stepping through the page to find a specific piece of content, users can open a menu with a list of locations in the page. These locations are defined in HTML using ARIA landmark roles, which can designate parts of the DOM as distinct sections when read by a screen reader. When a screen reader user opens the list of landmarks and selects a landmark, their cursor moves directly to that element.
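As one concrete wiring of these schemes to input, the sketch below follows the key bindings used in our prototypes (arrow keys for local structural moves, Shift plus left/right for lateral moves, WASD for spatial moves, and a menu for targeted jumps); the cursor interface itself is an illustrative assumption rather than a fixed API.

```typescript
// Sketch: dispatching keyboard input to structural, lateral, and spatial moves.
type Direction = "up" | "down" | "left" | "right";

interface Cursor {
  moveStructural(dir: Direction): void;        // parent/child/sibling within the structure
  moveLateral(dir: "left" | "right"): void;    // equivalent node in an adjacent sub-structure
  moveSpatial(dir: Direction): void;           // adjacent position in screen coordinates
  jumpTo(targetId: string): void;              // targeted navigation, e.g. from a dropdown
}

function handleKey(event: KeyboardEvent, cursor: Cursor): void {
  const arrows: Record<string, Direction> = {
    ArrowUp: "up", ArrowDown: "down", ArrowLeft: "left", ArrowRight: "right",
  };
  const wasd: Record<string, Direction> = { w: "up", s: "down", a: "left", d: "right" };
  const key = event.key;

  if (key in arrows) {
    const dir = arrows[key];
    // Shift + left/right jumps to the same node in the adjacent facet (lateral move).
    if (event.shiftKey && (dir === "left" || dir === "right")) cursor.moveLateral(dir);
    else cursor.moveStructural(dir);
  } else if (key.toLowerCase() in wasd) {
    cursor.moveSpatial(wasd[key.toLowerCase()]);
  }
}
```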

Considerations: Boundaries & Cognitive Load. Screen reader users only read part of the visualization at a time, akin to a sighted user reading a map through a small tube [28]. How do they keep track of where they are? In our co-design process, we found it easiest for a user to remember their location relative to a known starting point, which is corroborated by literature on developing spatial awareness for blind people [63, 40, 17]. Hajas noted the prevalence of the Home and End shortcuts across applications for returning to a known position in a bounded space (e.g. the start/end of a line in a text editor). We also found that grouping data by category or interval was helpful for maintaining position. Hajas noted that exploring data within a bounded region was like entering a room in a house. In his analogy, a house with many smaller rooms with doors is better than a house with one big room and no doors. Bounded spaces alleviate cognitive load by allowing a user to maintain their position relative to entry points.

Comparing navigation techniques, Hajas noted that spatial felt “shallow but broad” while targeted felt “deep but narrow.” While he expressed a personal preference for deep-narrow structures, he nevertheless “would not give up [spatial navigation] because it makes me believe I’m actually interacting with a visualization.” This insight demonstrates the value of offering multiple complementary navigation techniques. Moreover, while targeted navigation facilitates quick searching and doesn’t require the user to maintain a mental map to find specific data points, structural and spatial exploration enable more open-ended data exploration. Spatial navigation also provides a mechanism for establishing common ground with sighted readers (e.g., allowing both blind and sighted readers to understand a line segment as being “above” or “higher” than another).

3.3 Description

When a user navigates to a node in a structure, the screen reader narrates a description associated with that node. For example, when navigating to the chart’s legend, the screen reader output might articulate visual properties of the chart’s encoding: “Category O has color encoding green; X has color encoding orange” (Figure 1). Or, if that visual semantic content isn’t relevant to understanding the data, it might ignore the color: “each datum belongs to either Category O or X.” The content, composition, and verbosity of the description can affect a user’s comprehension of the data. Designers must consider context & customization when describing charts.

Content. Semantic content is the meaningful information conveyed not only through natural language utterances, but also through the visualization (a graphical language [8]). Because graphics convey myriad different kinds of content, the challenge of natural language description is to convey information that is not only commensurate with what the chart expresses via graphical language, but also useful to its readers. Accessible chart description guidelines from WGBH [26], W3C [60], and others [35] offer prescriptions for conveying specific content for blind readers (such as the chart’s title, axis encodings, and noteworthy trends). Lundgard and Satyanarayan expand the scope of these guidelines with a more general conceptual model of four levels of semantic content: chart construction properties (e.g., axes, encodings, marks, title); statistical concepts and relations (e.g., outliers, correlations, descriptive statistics); perceptual and cognitive phenomena (e.g., complex trends, patterns); and domain-specific insights (e.g., socio-political context relevant to the data) [42].

Decoupling a chart’s semantic content from its visual representation helps us better understand what data representations afford for different readers. For instance, Lundgard and Satyanarayan find that what blind readers report as most useful in a chart description is not a straightforward translation of the visual data representation. Specifically, simply listing the chart’s encodings is much less useful to blind readers than conveying summary statistics and overall trends in the data [42]. As Hajas noted, “I want to see the global trend, which is why sighted people rely on visualization.” For instance, for a stock market chart the reader “might see the overview from first to last data points, and then zoom into an outlier in the middle.” These findings suggest opportunities to interleave different kinds of content at different levels of a hierarchical structure to yield richer, more useful screen reader navigation. For example, injecting summary statistics (say, the existence of outliers within a particular subcategory of the data) higher up in the chart’s tree structure (e.g., at the legend encoding node) might afford “scent” for “information foraging” [47], or further exploration down a particular branch (data subcategory) of the tree. Or, if navigating in a targeted fashion, the user might be afforded the option to directly navigate to outliers without traversing the tree.

Composition. The usefulness of a description depends not only on the content conveyed by its constituent sentences, but also on its composition: how those sentences are ordered in relation to each other. For example, during our co-design process, Hajas found that when navigating a chart’s tree structure, the screen reader output could quickly become redundant, affecting how quickly and efficiently he could pick out the meaningful information at each node. For instance, the utterance “Category: O, Point 3 of 15, x = 5, y = 12” and the utterance “x = 5, y = 12, Category: O, Point 3 of 15” afford significantly different experiences for a user who wishes to quickly scan through individual data points. In the first utterance, the reader immediately receives content that helps to situate them in a broader data context, namely data labeled as Category: O at the legend node. In the second utterance, the reader immediately receives datum-specific content that helps to rapidly explore the fine-grained details within that data context. Whether a reader prefers one compositional ordering to another will depend on the task they are attempting to accomplish. As Hajas noted, “I like the label at the beginning of the information, saying at which level of the tree I am at. It is important for knowing where I am. It is also great that this information is only spoken out when I change level, but not when I navigate laterally.” These compositional choices are highly consequential for readers’ experience, especially when they must repeatedly read nearly-identical utterances while navigating a structure.

Verbosity. Whereas composition refers to the ordering of content, verbosity refers to how much content the screen reader conveys. More content is not always better. As Hajas noted of Apple’s Data Comprehension feature [20]: “It can sometimes be too much information all at once, if it starts reading out all of the data. This is very difficult if you’re interested in some data points that are in the middle. It is very play-or-stop.” Depending on the screen reader software, a user may be afforded control over how much content is conveyed. For instance, JAWS offers high, medium, and low verbosity levels [52]. At higher verbosity the screen reader announces more structural, wayfinding content (e.g. the start and end of regions). For data tables, verbosity configurations can affect whether the table size is read as part of the description, and whether row and column labels are repeated for every cell. Descriptions of nodes in an encoding structure might analogously include information about the path from the root — for example, by reminding the user that they are reading Y-axis intervals. These repetitions can help users remember their location within a structure, but additional verbosity is less efficient for comprehending the data quickly.
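The sketch below shows one way composition and verbosity could be varied independently by assembling a node's narration from separate content tokens. The token names, verbosity tiers, and ordering here are illustrative assumptions rather than a fixed scheme from our prototypes.

```typescript
// Sketch: composing a node's narration so ordering (composition) and amount (verbosity) vary independently.
interface DescriptionTokens {
  level: string;      // wayfinding content, e.g. "Legend"
  context: string;    // situating content, e.g. "Category O, point 3 of 15"
  datum: string;      // fine-grained content, e.g. "x equals 5, y equals 12"
}

type Verbosity = "low" | "medium" | "high";

function narrate(tokens: DescriptionTokens, verbosity: Verbosity, changedLevel: boolean): string {
  const parts: string[] = [];
  // High verbosity always repeats the structural location; medium announces it
  // only when the user changes level (the behavior Hajas preferred); low omits it.
  if (verbosity === "high" || (verbosity === "medium" && changedLevel)) parts.push(tokens.level);
  parts.push(tokens.context);   // context-first ordering situates the reader before details
  parts.push(tokens.datum);
  return parts.join(". ");
}

// Lateral move at medium verbosity: the level label is not repeated.
narrate(
  { level: "Legend", context: "Category O, point 3 of 15", datum: "x equals 5, y equals 12" },
  "medium", false,
); // "Category O, point 3 of 15. x equals 5, y equals 12"
```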

Considerations: Context & Customization. Apart from its constituent parts (content, composition, verbosity), a description’s usefulness also depends on the context in which it is read: namely, the reader’s task or intent, and familiarity with the data interface. The same description might be useful in some situations, but relatively useless in others. A reader’s information needs are fundamentally context-sensitive. For example, as Hajas noted, when reading a news article, it may be satisfactory to accept a journalist’s description of the data on good faith. But, when reviewing scientific research, “I don’t necessarily want to just believe what is said in the text, I want to check and double-check the authors’ claims. Go down to the smallest numbers in the analysis. I want to be able to look at the confusion matrix and see if they made a mistake or not.” This targeted verification requires a description to afford users precise look-up capabilities, in contrast to descriptions that may be generated when browsing or exploring the data.

This context-sensitivity reveals an important aspect of usability: a user’s familiarity (or lack thereof) with the data interface. Wayfinding content (e.g., “Legend. Category: O.”) can help a user remember their location in a structure, and may be useful while they assemble a mental map of the visualization. But, as they become accustomed to the interface and visualization, such descriptions may prove cumbersome. Because user needs depend on their task, preferences, and familiarity, interfaces might afford personalization and customization to facilitate context-sensitive description.

4 Example Gallery

A graphic depicting five example structural and navigational schemes generated as part of our co-design process, and applied to diverse chart types.
Figure 2: Example structural and navigational schemes generated as part of our co-design process, and applied to diverse chart types.

5 Evaluation

To evaluate our contribution, we conducted 90-minute Zoom studies with 13 blind and visually impaired participants. Participants were asked to explore three prototype accessible screen reader experiences, shown one after another each with a different dataset. The goal of our evaluation was not to determine which particular combination of design elements was “best,” but rather to be exploratory — to compare the relative strengths and advantages of instantiations of our design dimensions, and understand how they afford different modes of accessible data exploration.

5.1 Evaluation Setup & Design

Following Frøkjær and Hornbæk’s Cooperative Usability Testing (CUT) method [23], Zong and Lee conducted each session by alternating between the role of guide (i.e., talking to the user and explaining the prototype) and logger (i.e., keeping track of potential usability problems, interpreting the data to prepare for becoming the guide). We began each session with a semi-structured interview to understand participants’ current experiences with data and the methods they use to make inaccessible forms of data representation usable (script included in supplementary material). The rest of the session focused on each of the three prototypes in turn, with each condition split into two phases: interaction and interpretation. In the interaction phase, Zong or Lee guided participants through the prototypes and asked participants to use them and comment on their process, in the style of Hutchinson et al.’s technology probes [33]. Then, the authors switched roles and began a cooperative interpretation phase, where the authors and participants engaged in a constructive dialogue to jointly interpret the usability problems and brainstorm possible alternatives to the current prototype. In this method, participants influence data interpretation, allowing for more rapid analysis than traditional think-aloud studies as some analysis is built into each evaluation session with instant feedback or explanation from participants [23].

Prototypes. The in-depth nature of our cooperative interpretation sessions required us to balance the total number of prototypes evaluated (so that participants would have time to thoroughly learn and interact with each one) with a time duration appropriate for a Zoom session (limited to 90 minutes to avoid exhausting participants). Accordingly, we selected the following three prototypes, each representing a different aspect of our design dimensions:

  • table: An accessible HTML data table with all rows and three columns from the classic Cars dataset, in order to compare our prototypes with existing accessibility best practice.

  • multi-view: Becker’s barley yield trellis display [7] as shown in Fig. 2a. This prototype features local and lateral structural navigation via the arrow keys and with the shift modifier respectively, as well as spatial navigation via WASD.

  • target: A single-view scatterplot, illustrated in Fig. 1, depicting the Palmer Penguins dataset [32]. In addition to structural and spatial navigation, targeted navigation is available via three dropdown menus corresponding to the structural levels.

table is our control condition, as it follows existing best practice for making data accessible to screen readers. multi-view enables us to study how users move between levels of detail, and whether they could navigate and compare small multiple charts. Finally, target allows us to compare how and when our participants use the three different styles of navigation (structural, spatial, and targeted). We presented the prototypes in this sequence to all participants to introduce new features incrementally.

Participants. We recruited 13 blind and visually impaired participants through our collaborators in the blind community and through a public call on Twitter. Each participant received $50 for a 90-minute Zoom session. We provide aggregate participant data following ethnographic practice to protect privacy and not reduce participants to their demographics [51]. Half of our participants were totally blind (n=7), while others were almost totally blind with some light perception (n=4) or low vision (n=2). Half of them have been blind since birth (n=7). Participants were split evenly between Windows/Chrome (n=7) and Mac/Safari (n=6). Windows users were also split evenly between the two major screen readers (JAWS, n=3; NVDA, n=4), while all Mac participants used Apple VoiceOver. These figures are consistent with recent surveys conducted by WebAIM which indicate that JAWS, NVDA, and VoiceOver are the three most commonly used screen readers [61]. Demographically, 70% of our participants use he/him pronouns (n=9) and the rest use she/her pronouns (n=4). One participant was based in the UK while the rest were spread across eight US states. Participants self-reported their ethnicities (Caucasian/white, Asian, and Black/African, Hispanic/Latinx), represented a diverse range of ages (20–50+) and had a variety of educational backgrounds (high school through to undergraduate, graduate, and trade school). Nine participants self-reported as slightly or moderately familiar with statistical concepts and data visualization methods, two as expertly familiar, and one as not at all familiar. Five participants described data analysis and visualization tools as an important component in their professional workflows, and 8 interacted with data or visualizations more than 1–2 times/week.

5.2 Quantitative Results

To supplement the cooperative interpretation sessions, participants rated each prototype using a series of Likert questions. We designed a questionnaire with six prompts measuring a subset of Brehmer and Munzner’s multi-level typology of abstract visualization tasks [12]. This framework, however, required some adaptation for non-visual modes of search. In particular, searching with a screen reader requires a sequential approach to data that is at odds with the “at-a-glance” approach sighted readers take to browsing and exploring data. As our prototypes focus on navigation through charts, we collapsed the location dimension of Brehmer and Munzner’s search decomposition resulting in two prompts that jointly measure lookup-locate and browse-explore. We formulated additional questions to measure Brehmer and Munzner’s discover and enjoy tasks as well as more traditional aspects of technology acceptance including ease-of-use and perceived usefulness [37]. Participants responded on a five point scale where 1 = Very Difficult/Unenjoyable and 5 = Very Easy/Enjoyable.

Table 1. Rating scores for each prototype (Table, Multi-view, Targeted) on a five-point Likert scale where 1 = Very Difficult (Very Unenjoyable) and 5 = Very Easy (Very Enjoyable). Each cell lists the median score, with the average in brackets and the standard deviation in parentheses.
Prompt: When using this prototype ... | Task | Table | Multi-view | Targeted
How enjoyable was it to interact with the data? | enjoy | 3 [3.31] (0.95) | 4 [3.77] (1.01) | 4 [3.54] (0.97)
How easy was it to generate and answer questions? | discover | 4 [3.15] (1.34) | 3 [3.00] (1.08) | 3 [3.23] (1.17)
If you already knew what information you were trying to find, how easy would it be to look up or locate those data? | lookup-locate | 3 [3.31] (1.32) | 4 [3.77] (1.17) | 4 [3.38] (1.19)
If you didn't already know which information you were trying to find, how easy would it be to browse or explore the data? | browse-explore | 2 [3.00] (1.68) | 2 [2.69] (1.11) | 3 [3.00] (1.29)
How easy was it to learn to use? | ease-of-use | 4 [4.15] (0.99) | 3 [2.69] (0.75) | 3 [3.15] (1.34)
How useful would it be to have access to this interaction style for engaging with data? | perceived usefulness | 4 [4.15] (0.80) | 4 [4.00] (0.82) | 4 [4.15] (1.07)

Table 1 displays the questionnaire prompts, their corresponding tasks, and statistics summarizing the participants’ ratings. A Friedman test found a significant rating difference for the ease-of-use of the prototypes (χ²(2, N = 13) = 15.05, p < 0.01), with a large effect size (Kendall’s W = 0.58). Follow-up Nemenyi tests revealed that multi-view was more difficult to use than table with statistical significance (p < 0.01), but target was not. Additional tests for the other prompts found neither statistically significant differences, nor large effect sizes, between the prototypes. However, median scores (which are more robust to outliers than means [45]) suggest that participants generally enjoyed interacting with multi-view and target more, and found it easier to look up or locate data with them. Moreover, target had the highest median score for affording browse or explore capabilities. Conversely, table was easiest to learn to use, and generally made it easy to discover, or ask and answer questions about the data. Notably, in response to the question How useful would it be to have access to this interaction style for engaging with data? participants on average ranked all prototypes as more-than-useful (med = 4, μ ≥ 4). These statistics provide only a partial picture of participants’ experiences with the prototypes [4]. Thus, we elucidate and contextualize reasons behind their scores through qualitative analysis.
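As a check on the reported effect size: for a Friedman test with N = 13 participants and k = 3 prototypes, Kendall's W follows directly from the test statistic by the standard identity

$$ W = \frac{\chi^2_F}{N(k - 1)} = \frac{15.05}{13 \times 2} \approx 0.58 $$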

5.3 Qualitative Results

After the interviews, we qualitatively coded the notes taken by the logger with a grounded theory approach [15]. We performed open coding in parallel with the interviews (i.e., coding Monday’s interviews after finishing Tuesday’s interviews). We then synthesized the codes into memos, from which we derived these themes.

Tables are familiar, tedious, but necessary. Every participant noted that tables were their primary way of accessing data and visualizations. While tables are an important accessible option, participants overwhelmingly reported the same problems: they are ill-suited for processing large amounts of data and impose high cognitive load as users must remember previous lines of the table in order to contextualize subsequent values. As P2 reported, “if I’m trying to get a general sense of the table, I’ll just scroll through and see what values there are. But there’s 393 rows, so I’ll never scroll through all of it…I can’t really get a snapshot.” P11 said that “Finding relationships can be tricky if you’re in a table, because you’ve got to either have a really good memory or just get really lucky. […] If I didn’t know what I was looking for, forget it.” At most, participants tabbed through 20–30 rows during our sessions, but did so only because of the questions we posed (e.g., “is there a relationship between horsepower and mileage?”) and noted that if they encountered this table outside of the study, they would tab past a few rows to check for summary statistics and then move on.

While it is not enjoyable to explore or build a mental model of data with static tables, participants still emphasized their necessity because of the format’s familiarity: “in terms of accessibility, tables are infinitely more useful because there is a standard way of navigating them in whatever your preferred screen reader is. With different representations, a blind person may not be trained to interpret it” (P2). This builds on prior literature [54] and echoes testimony from participants who had some difficulty with the new prototypes; they reported that they lacked expertise and therefore found it difficult to work with non-tabular data (P8, 10). In other words, to maximize accessibility, it is crucial to include a table view of the data in addition to other forms of novel interaction.

Prior exposure to data analysis and representations increases the efficacy of spatial representations. Participants who had experience conducting data analysis or reading tactile graphs/maps were able to easily develop a spatial understanding of how each prototype worked. Five participants (P2–4, 11, 13) made direct connections between the multi-view and target prototypes, and the tactile graphs they encountered in school. Three participants (P2, 11, 12) found their software engineering experience made it easier to understand and navigate the prototypes’ hierarchical structure. Previous literature on tactile mapping has also shown that developing tactile graphical literacy is crucial for building spatial knowledge, but it emphasizes that such literacy alone is not sufficient for conducting and understanding data analysis [28, 25]. Since our participants already had an existing spatial framework, it became easier to explain how a prototype might work using their prior experience as a benchmark, which has been corroborated by similar studies in tactile cartography [63, 2, 55]. Importantly, our participants were able to find specific origin points that they could return to in order to navigate the different branches of the tree, which would be further aided with help menus and mini-tutorials to understand the keyboard shortcuts (P2). Being able to shift between origin points is especially important for switching between graphs or between variables. By contrast, participants who had more difficulty with the prototypes (P8, 10) pointed to their lack of experience working with non-tabular data. P10 reported that being able to mentally visualize data points within a grid was a specific challenge. “I suspect that this might be understandable to someone who’s done this before,” he said, “I don’t do well with these charts unless they’re converted back into tables.”

Structure: Hierarchical representations make it possible to effectively convey insights with minimal cognitive load. While static tables are the most common accessible alternative to interactive visualizations, eight of our participants (P2–5, 7, 10, 11, 13) expressed a desire to filter and sort the data so that they could begin to explore possible trends without wading through the data line by line. Sorting and filtering a table is one way to look for trends but, to get a summary view of the data quickly, a system must provide snapshots in smaller intervals so that users can easily construct a larger picture or choose specific slices of the data to explore further (i.e., “details on demand”) [57, 38]. With multi-view and target, P4 said, “I always want more layers and details, but some charts had too much…This was a happy medium between having the information I wanted and presenting it in a way that I can keep up with.” P5 also noted that he liked “having the ability to scroll through at a higher level and then drill down deeper if that’s of interest.” By giving users a way to quickly skip through the data across specific axes, they are able to rapidly generate a broader mental image of each graph and drill down further to collect more details. “When I was working with the table, I [started building] a table in my head,” P2 shared. “I had a rough representation of it as a scatter plot. But here, I know how to drill down and up between different layers of data grids, so that I can get the overall picture… [It gives me] different ways of thinking.” Being able to control the parts of the data that were most important to them was also an issue of trust, as it also provided a way for users to reach conclusions for themselves rather than rely on the interpretation of others: “It’s hard to mix…doing your own analysis and be given a text description that you have to just trust” (P12). In their own workflows, these participants reported downloading static tables to further examine and manipulate with Excel, which they would use to create summary statistics or intervals to move more quickly through the data.

Navigation: Reading a visualization with a screen reader entails constant hypothesis testing and pattern-making. Since screen reader users parse data iteratively, nine of our participants (P1–5, 7, 8, 11, 13) described reading a visualization as a process of slowly building up a mental model and constantly testing it to see where the patterns may no longer hold. “I’m going row by row, not memorizing exact numbers but building a pattern in my head, and looking at the other rows to test my theory,” reported P3. In other words, our participants engaged in a continuous state of updating and validating [46] their mental images as new data challenged the existing patterns they had pieced together. multi-view and target accelerated this process, as participants were able to more rapidly identify specific components that they wanted to test. For example, P2 intentionally moved quickly across each level of the structure hoping to find its “edges,” or the minimum and maximum limits of each axis and grid. “Visually, it might look like I’m doing a lot of jumping around,” he said, “[but it’s] because I’m trying to build the picture in a way that makes sense for me.” Similarly, P5 started building his mental model of the visualization by drilling up and down the grid to create a spatial image of the data: “I’m thinking more in spatial terms just because [this] is a new method of navigating to me. […] I’m moving through the grid…I’m thinking of drilling down into that square to get more information.”

target made it especially easy for participants to test their hypotheses by giving them direct access to components that might break their hypotheses. P5 reported that it allowed him to “navigate to areas…that I’m interested in, skipping over stuff that’s not of interest,” and P4 likened it to “[being] able to go directly to what you want in a grocery inventory rather than going through each item one by one.” The ability to use structural, spatial, and targeted navigation across multi-view and target facilitated the hypothesis-testing and pattern-making behaviors that our participants were accustomed to with static tables, and gave them an additional mental model for working with the data. As P1 noted, these prototypes gave her a richer understanding of the data by helping her piece together “both the picture and the mathematical pattern,” whereas table afforded only the latter.

Description: Cursors and roadmaps are important for understanding where you are. Being able to capture a high-level overview of the information while preserving the ability to drill down into the data is a crucial component of accessing interactive visualizations [54]. To navigate between these two levels, however, our participants emphasized the importance of markers to help them understand where they could move. target addressed this with dropdown menus that allowed participants to navigate to any part of the visualization, explore, and then return to where they had started. In the words of P4, “[This] mode is freedom for the user. Being able to jump around and move in real time as you would with your hand gives you a new way of exploring the information.” multi-view approached this issue by allowing participants to move throughout the grid. “With the table, I was trying to hold the numbers in my head and I wasn’t trying to visualize it or anything,” said P3. “With [multi-view], I can sort of think about it more like a visualization since I can move up and down, left and right. Even though I can use the arrows in the table, it just doesn’t feel the same. I’m still feeling around and seeing what I can find.” Without these navigation tools, P7 noted that “It’s too easy to get lost …I don’t know how to backtrack.” To orient herself, P13 would first test to see if she was at the corner cells in the visualizations (e.g., “Am I in the upper left or the bottom right cell here?”) so that she could contextualize her position within the visualization and return to a point of origin. “I know that I must be at the bottom left cell here because I can’t go to the left,” P13 said, “but being able to know where that is beforehand would be very helpful.”

6 Discussion and Future Work

In this paper, we explore how structure, navigation, and description compose together to yield richer screen reader experiences for data visualizations than are possible via alt text, data tables, or the current ARIA specification. Our results suggest promising next steps about accessible interaction and representation for visualizations.

6.1 Enabling Richer Screen Reader Experiences Today

Although our design dimensions highlight a diverse landscape of screen reader experiences for data visualizations, our study participants attested to the value of following existing best practices. Namely, alt text and data tables provide a good baseline for making visualizations accessible. Thus, visualization authors should consider adopting our design dimensions to enable more granular information access patterns only after these initial pieces are in place.

Existing visualization authoring methods, however, are likely insufficient for instantiating our design dimensions or producing usable experiences for screen reader users. In particular, it currently falls entirely on visualization authors to handcraft appropriate structures, navigational techniques, and description schemes on a per-visualization basis. Besides being time-consuming, such idiosyncratic implementations can introduce friction to the reading process. For instance, per-visualization approaches might not account for an individual user’s preferences in terms of verbosity, speed, or order of narrated output — three properties which varied widely among our study participants in ways that did not correlate with education level or experience with data. Thus, to scale and standardize this process, some responsibility for making visualizations screen reader accessible must be shared by toolkits as well. For example, our prototypes suggest a strategy for translating visualization specifications into hierarchical encoding structures (i.e., encoding channels as individual branches, and using visual affordances such as axis ticks and grid lines to populate the hierarchy levels); the sketch below illustrates one way a toolkit might realize this. If toolkits provide default experiences out-of-the-box, visualization authors can instead focus on customizing them to be more meaningful for their specific visualization, and screen reader users have a stronger guarantee that the resultant experiences will be more usable and respectful of their individual preferences.
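
To make this toolkit-level strategy concrete, the following TypeScript sketch derives a traversable hierarchy from a simplified, Vega-Lite-like specification: each encoding channel becomes its own branch, quantitative axes are split into tick-like intervals, and individual data points form the leaves. The types and function names here are hypothetical illustrations, not Olli’s or Vega-Lite’s actual APIs.

```typescript
// A minimal sketch (hypothetical types and names, not Olli's or Vega-Lite's
// actual APIs) of deriving a traversable hierarchy from a chart specification.

interface Encoding { field: string; type: "quantitative" | "nominal"; }
interface Spec {
  mark: string;                               // e.g., "bar", "point", "line"
  data: Record<string, any>[];                // the backing data table
  encoding: { [channel: string]: Encoding };  // e.g., { x: ..., y: ... }
}

interface TreeNode {
  description: string;   // what a screen reader narrates at this level
  children: TreeNode[];  // the next, finer level of the structure
}

// Split a quantitative field into intervals, mimicking axis ticks.
function intervals(values: number[], bins = 4): [number, number][] {
  const lo = Math.min(...values);
  const step = (Math.max(...values) - lo) / bins;
  return Array.from({ length: bins }, (_, i) => [lo + i * step, lo + (i + 1) * step]);
}

function buildTree(spec: Spec): TreeNode {
  const root: TreeNode = {
    description: `${spec.mark} chart with ${spec.data.length} data points.`,
    children: [],
  };
  for (const [channel, enc] of Object.entries(spec.encoding)) {
    // Each encoding channel becomes its own branch of the hierarchy.
    const branch: TreeNode = {
      description: `${channel}-axis, encoding ${enc.field} (${enc.type}).`,
      children: [],
    };
    if (enc.type === "quantitative") {
      // Tick-like intervals form the intermediate level; data points are leaves.
      for (const [lo, hi] of intervals(spec.data.map((d) => d[enc.field]))) {
        const rows = spec.data.filter((d) => d[enc.field] >= lo && d[enc.field] <= hi);
        branch.children.push({
          description: `${enc.field} from ${lo.toFixed(1)} to ${hi.toFixed(1)}: ${rows.length} points.`,
          children: rows.map((d) => ({ description: JSON.stringify(d), children: [] })),
        });
      }
    } else {
      // Nominal fields: one child per category.
      for (const category of Array.from(new Set(spec.data.map((d) => d[enc.field])))) {
        const rows = spec.data.filter((d) => d[enc.field] === category);
        branch.children.push({
          description: `${enc.field} equals ${category}: ${rows.length} points.`,
          children: rows.map((d) => ({ description: JSON.stringify(d), children: [] })),
        });
      }
    }
    root.children.push(branch);
  }
  return root;
}
```

A navigation layer could then map key presses to parent, child, and sibling moves over these nodes, with each node’s description supplying the narration at the corresponding level of granularity; authors would customize descriptions rather than rebuild the structure from scratch.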

Current web accessibility standards also present limitations for realizing our design dimensions. For instance, there is no standard way to determine which element the screen reader cursor is selecting. Where ARIA has thus far focused on annotating documents with the semantics of a pre-defined palette of widgets, future web standards might instead express how elements respond to the interaction affordances of screen readers. For example, ARIA could offer explicit support for overview/detail hierarchies and different levels of description detail that can be progressively read according to user preferences.
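
As a concrete illustration of this gap, the TypeScript sketch below annotates a chart using the existing WAI-ARIA Graphics Module roles [58]. The element selector, data attributes, and label wording are hypothetical assumptions about a pre-rendered SVG bar chart, not any particular library’s output; the comments note what the current vocabulary cannot express.

```typescript
// A sketch of what authors can express with today's ARIA vocabulary, using
// the WAI-ARIA Graphics Module roles [58]. Selector, data-* attributes, and
// label wording are illustrative assumptions, not a specific library's output.

const svg = document.querySelector<SVGSVGElement>("#chart");
if (svg) {
  svg.setAttribute("role", "graphics-document"); // the chart as a whole
  svg.setAttribute("aria-label", "Bar chart of body mass by penguin species.");

  svg.querySelectorAll<SVGRectElement>("rect.bar").forEach((bar, i) => {
    // Each mark becomes a labeled graphics object a screen reader can reach...
    bar.setAttribute("role", "graphics-symbol");
    bar.setAttribute(
      "aria-label",
      `Bar ${i + 1}: ${bar.dataset.species}, ${bar.dataset.mass} grams.`
    );
    // ...but there is no attribute for declaring an overview/detail hierarchy
    // over these marks, for exposing which element the screen reader cursor
    // currently occupies, or for letting users choose how much of each label
    // is narrated; these are the gaps the standards changes above would address.
  });
}
```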

6.2 Studying and Refining the Design Dimensions

Our conversations with study participants also highlighted that design considerations can differ substantially for users who are totally blind compared to those who have low vision. For example, partially-sighted participants used screen magnifiers alongside screen readers. As a result, they preferred verbose written descriptions alongside more terse verbal narration. Magnifier users also wished for in situ tooltips, which would eliminate the need to scroll back and forth between points and axes to understand data values. However, promisingly, we found that using a screen reader and magnifier together affords unique benefits: “I would have missed this point visually if I solely relied on the magnifier because the point is hidden behind another point” (P12). Future work should more deeply explore how accommodations might complement or conflict with one another when designing for different kinds of visual disability.

Similarly, in scoping our focus to screen readers and, thus, text-to-speech narration, we refrained from considering multi-sensory modalities in our design dimensions. Yet, we found that most participants had previous experience with multi-sensory visualization, including sonification (P5, 7, 9, 13), tactile statistical charts (P2–4, 10, 11, 13), and haptic graphics (P3, 4, 11, 13). Some participants reported that a combination of modalities would further enhance their experience — for example, getting a sonic overview of a line chart before reading more detailed text descriptions. Other participants, however, cautioned that adding multiple modalities can create additional confusion. For example, P7 noted that “There’s often a lack of explanation about how to map between sound and text.” Based on this testimony, it is unlikely that “sensory modalities” are merely an additional, independent dimension within our framework. Rather, future work must unpack the affordances of individual modalities, how they interact with one another, and how they impact the design of structure, navigation, and description.

6.3 What are Accessible Interactions for Data Visualizations?

In visualization research, we typically distinguish between static and interactive visualizations, where the latter allows readers to actively manipulate visualized elements or the backing data pipeline [65, 29]. Screen readers, however, complicate this model: reading is no longer a process that occurs purely “in the head” but rather becomes an embodied and interactive experience, as screen reader users must intentionally perform actions with their input devices in order to step through the visualization structure. While some aspects of this dichotomy may still hold, it is unclear how to cleanly separate static reading from interactive manipulation in the context of screen reader accessible visualizations, if these notions are conceptually separable at all. For instance, Hajas likened the navigation our prototypes afforded to “shifting eye gaze, shifting focus of perceptual attention. When I navigate a visualization, naturally I would say ‘I’m looking at this figure’ and not that ‘I’m interacting with this figure’.” Analogously, recent results in graphical perception find that sighted readers do not simply “see” visualizations in a single glance but rather perform active visual filtering operations [9]. However, when using the binary tree prototype (Fig. 2e), Hajas noted a more distinct shift from reading to interacting. He said, “it gave me the impression that I’m not just looking selectively, but I focus and zoom into the data,” analogous to zoom interactions that change the viewport for sighted readers. Better characterizing the shift that occurs with this prototype, and exploring accessible manipulations of visualizations that allow screen reader users to meaningfully conduct data analysis, are compelling opportunities for future work.

Acknowledgements

We thank Matthew Blanco, Evan Peck, the NYU Digital Theory Lab, and the MIT Accessibility Office. This work was supported by NSF awards #1942659, #1122374, and #1941577.

References

  • [1] SAS Institute (2018) SAS Graphics Accelerator Customer Product Page. Link Cited by: §2.
  • [2] F. K. Aldrich and L. Sheppard (2001) Tactile Graphics In School Education: Perspectives From Pupils. British Journal of Visual Impairment 19 (2), pp. 69–73. ISSN 0264-6196, Link, Document Cited by: §3, §5.3.
  • [3] N. Amick and J. Corcoran (1997) Guidelines for Design of Tactile Graphics. Technical report American Printing House for the Blind. Link Cited by: §3.
  • [4] R. P. Bagozzi (2007) The Legacy of the Technology Acceptance Model and a Proposal for a Paradigm Shift. Journal of the Association for Information Systems 8 (4). ISSN 1536-9323, Link, Document Cited by: §5.2.
  • [5] C. M. Baker, L. R. Milne, R. Drapeau, J. Scofield, C. L. Bennett, and R. E. Ladner (2016) Tactile Graphics with a Voice. ACM Transactions on Accessible Computing (TACCESS) 8 (1), pp. 3:1–3:22. ISSN 1936-7228, Link, Document Cited by: §3.
  • [6] S. Barrass and G. Kramer (1999) Using Sonification. Multimedia Systems 7 (1), pp. 23–31 (en). ISSN 1432-1882, Link, Document Cited by: §2.
  • [7] R. A. Becker, W. S. Cleveland, and M. Shyu (1996) The Visual Design and Control of Trellis Display. Journal of Computational and Graphical Statistics 5 (2), pp. 123–155 (en). ISSN 1061-8600, 1537-2715, Link, Document Cited by: 2nd item.
  • [8] J. Bertin (1983) Semiology of Graphics. University of Wisconsin Press. , Link Cited by: §3.3.
  • [9] T. Boger, S. B. Most, and S. L. Franconeri (2021) Jurassic Mark: Inattentional Blindness for a Datasaurus Reveals that Visualizations are Explored, not Seen. In IEEE Transactions on Visualization & Computer Graphics (Proc. IEEE VIS), (en). Link, Document Cited by: §6.3.
  • [10] M. Bostock, V. Ogievetsky, and J. Heer (2011) D3: Data-Driven Documents. IEEE Trans. Visualization & Comp. Graphics (Proc. InfoVis). Link, Document Cited by: §2.
  • [11] A. Boxhall, J. Craig, D. Mazzoni, and A. Surkov (2022) The Accessibility Object Model (AOM). (en-US). Link Cited by: §2.
  • [12] M. Brehmer and T. Munzner (2013) A Multi-Level Typology of Abstract Visualization Tasks. In IEEE Transactions on Visualization & Computer Graphics (Proc. IEEE VIS), (en). Link, Document Cited by: §5.2.
  • [13] A. Brock, P. Truillet, B. Oriola, and C. Jouffrais (2010) Usage Of Multimodal Maps For Blind People: Why And How. In ACM International Conference on Interactive Tabletops and Surfaces (ISS), ITS ’10, New York, NY, USA, pp. 247–248. , Link, Document Cited by: §3.
  • [14] M. Butler, L. M. Holloway, S. Reinders, C. Goncu, and K. Marriott (2021) Technology Developments in Touch-Based Accessible Graphics: A Systematic Review of Research 2010-2020. In ACM Conference on Human Factors in Computing Systems (CHI), pp. 1–15. , Link Cited by: §3.
  • [15] K. Charmaz (2006) Constructing Grounded Theory. Sage Publications, London ; Thousand Oaks, Calif (en). , Link Cited by: §5.3.
  • [16] J. Choi, S. Jung, D. G. Park, J. Choo, and N. Elmqvist (2019-06) Visualizing for the Non‐Visual: Enabling the Visually Impaired to Use Visualization. Computer Graphics Forum (EuroVis) 38 (3), pp. 249–260 (en). ISSN 0167-7055, 1467-8659, Link, Document Cited by: §2.
  • [17] P. Chundury, B. Patnaik, Y. Reyazuddin, C. Tang, J. Lazar, and N. Elmqvist (2021) Towards Understanding Sensory Substitution for Accessible Visualization: An Interview Study. IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS), pp. 1–1 (en). ISSN 1077-2626, 1941-0506, 2160-9306, Link, Document Cited by: §3.2, §3.
  • [18] A. Cornwall and R. Jewkes (1995) What Is Participatory Research?. Social Science & Medicine 41 (12), pp. 1667–1676 (en). ISSN 0277-9536, Link, Document Cited by: §3.
  • [19] S. Costanza-Chock (2020) Design Justice: Towards an Intersectional Feminist Framework for Design Theory and Practice. MIT Press, Cambridge, MA (en). Link Cited by: §3.
  • [20] S. Davert and the AppleVis Editorial Team (2019) What’s New In iOS 13 Accessibility For Individuals Who Are Blind or Deaf-Blind. Link Cited by: §2, §3.3.
  • [21] L. de Greef, D. Moritz, and C. Bennett (2021) Interdependent Variables: Remotely Designing Tactile Graphics for an Accessible Workflow. In ACM Conference on Computers and Accessibility (SIGACCESS), ASSETS ’21, New York, NY, USA, pp. 1–6. , Link, Document Cited by: §3.
  • [22] E. Dimara and C. Perin (2020) What is Interaction for Data Visualization?. IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS) 26 (1), pp. 119 – 129. Link, Document Cited by: §1.
  • [23] E. Frøkjær and K. Hornbæk (2005) Cooperative Usability Testing: Complementing Usability Tests With User-Supported Interpretation Sessions. In ACM Extended Abstracts on Human Factors in Computing Systems (CHI), CHI EA ’05, New York, NY, USA, pp. 1383–1386. , Link, Document Cited by: §5.1.
  • [24] M. Fujiyoshi, A. Fujiyoshi, H. Tanaka, and T. Ishida (2018) Universal Design Tactile Graphics Production System BPLOT4 for Blind Teachers and Blind Staffs to Produce Tactile Graphics and Ink Print Graphics of High Quality. In Computers Helping People with Special Needs, K. Miesenberger and G. Kouroupetroglou (Eds.), Vol. 10897, pp. 167–176 (en). , Link Cited by: §3.
  • [25] A. J. R. Godfrey and M. T. Loots (2015) Advice From Blind Teachers on How to Teach Statistics to Blind Students. Journal of Statistics Education 23 (3). Link, Document Cited by: §3, §5.3.
  • [26] B. Gould, T. O’Connell, and G. Freed (2008) Effective Practices for Description of Science Content within Digital Talking Books. Technical report The WGBH National Center for Accessible Media (en). Link Cited by: §1, §2, §3.3.
  • [27] A. Hamraie (2013) Designing Collective Access: A Feminist Disability Theory of Universal Design. Disability Studies Quarterly 33 (4) (en). ISSN 2159-8371, 1041-5718, Link, Document Cited by: §3.
  • [28] L. Hasty, J. Milbury, I. Miller, A. O’Day, P. Acquinas, and D. Spence (2011) Guidelines and Standards for Tactile Graphics. Technical report Braille Authority of North America. Link Cited by: §2, §3.2, §3, §5.3.
  • [29] J. Heer and B. Shneiderman (2012) Interactive Dynamics for Visual Analysis. Communications of the ACM 55 (4), pp. 45–54 (en). ISSN 0001-0782, 1557-7317, Link, Document Cited by: §1, §6.3.
  • [30] T. Higgins (2019) Supreme Court Hands Victory To Blind Man Who Sued Domino’s Over Site Accessibility. CNBC (en). Link Cited by: §1.
  • [31] Highcharts (2021) Accessibility Module. (en). Link Cited by: §2, §3.
  • [32] A. M. Horst, A. P. Hill, and K. B. Gorman (2020) Palmerpenguins: Palmer Archipelago (Antarctica) Penguin Data. Link Cited by: 3rd item.
  • [33] H. Hutchinson, W. Mackay, B. Westerlund, B. B. Bederson, A. Druin, C. Plaisant, M. Beaudouin-Lafon, S. Conversy, H. Evans, H. Hansen, N. Roussel, and B. Eiderbäck (2003) Technology Probes: Inspiring Design for and with Families. In ACM Conference on Human Factors in Computing Systems (CHI), New York, NY, USA, pp. 17–24. , Link, Document Cited by: §5.1.
  • [34] L. Jackson (2019) Disability dongle: A well-intended, elegant, yet useless solution to a problem we never knew we had. Disability dongles are most often conceived of and created in design schools and at IDEO. Tweet. Link Cited by: §3.
  • [35] C. Jung, S. Mehta, A. Kulkarni, Y. Zhao, and Y. Kim (2021) Communicating Visualizations without Visuals: Investigation of Visualization Alternative Text for People with Visual Impairments. In IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS), Vol. PP (eng). Link, Document Cited by: §3.3.
  • [36] H.G. Kaper, E. Wiebel, and S. Tipei (1999) Data Sonification And Sound Visualization. Computing in Science Engineering 1 (4), pp. 48–58. ISSN 1558-366X, Document Cited by: §2.
  • [37] E. Karahanna and D. W. Straub (1999) The Psychological Origins of Perceived Usefulness and Ease-of-use. Information and Management 35 (4), pp. 237–250. ISSN 0378-7206, Link, Document Cited by: §5.2.
  • [38] N. W. Kim, S. C. Joyner, A. Riegelhuth, and Y. Kim (2021) Accessible Visualization: Design Space, Opportunities, and Challenges. Computer Graphics Forum (EuroVis) 40 (3), pp. 173–188 (en). ISSN 0167-7055, 1467-8659, Link, Document Cited by: §2, §3.1, §5.3.
  • [39] W. G. Koch (2012) State of the Art of Tactile Maps for Visually Impaired People. In True-3D in Cartography: Autostereoscopic and Solid Visualisation of Geodata, M. Buchroithner (Ed.), Lecture Notes in Geoinformation and Cartography, pp. 137–151 (en). , Link, Document Cited by: §3.
  • [40] J. Li, S. Kim, J. A. Miele, M. Agrawala, and S. Follmer (2019) Editing Spatial Layouts through Tactile Templates for People with Visual Impairments. In ACM Conference on Human Factors in Computing Systems (CHI), Glasgow, Scotland Uk, pp. 1–11 (en). , Link, Document Cited by: §3.2.
  • [41] A. Lundgard, C. Lee, and A. Satyanarayan (2019) Sociotechnical Considerations for Accessible Visualization Design. In IEEE Transactions on Visualization & Computer Graphics (Proc. IEEE VIS), Link, Document Cited by: §2.
  • [42] A. Lundgard and A. Satyanarayan (2021) Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content. In IEEE Transactions on Visualization & Computer Graphics (Proc. IEEE VIS), pp. 11. Link, Document Cited by: §2, §3.1, §3.3, §3.3.
  • [43] R. A. Martínez, M. R. Turró, and T. G. Saltiveri (2019-06) Accessible Statistical Charts For People With Low Vision And Colour Vision Deficiency. In ACM International Conference on Human Computer Interaction, Interacción ’19, New York, NY, USA, pp. 1–2. , Link, Document Cited by: §3.
  • [44] MDN Contributors (2021-11) ARIA - Accessibility. (en-US). Link Cited by: §2.
  • [45] M. R. Morris, J. Johnson, C. L. Bennett, and E. Cutrell (2018) Rich Representations of Visual Content for Screen Reader Users. In ACM Conference on Human Factors in Computing Systems (CHI), Montreal QC Canada, pp. 1–11 (en). , Link, Document Cited by: §2, §5.2.
  • [46] T. Munzner (2009) A Nested Model for Visualization Design and Validation. IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS) 15 (6), pp. 921–928. ISSN 1941-0506, Link, Document Cited by: §5.3.
  • [47] P. Pirolli and S. Card (1999) Information Foraging. Psychological Review 106 (4), pp. 643–675. ISSN 1939-1471(Electronic),0033-295X(Print), Document Cited by: §3.1, §3.3.
  • [48] V. Potluri, T. E. Grindeland, J. E. Froehlich, and J. Mankoff (2021) Examining Visual Semantic Understanding in Blind and Low-Vision Technology Users. In ACM Conference on Human Factors in Computing Systems (CHI), Link, Document Cited by: §2.
  • [49] S. L. Fossheim (2020) How (not) to make accessible data visualizations, illustrated by the US presidential election. (en). Link Cited by: §1, §2.
  • [50] A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer (2017) Vega-Lite: A Grammar of Interactive Graphics. In IEEE Transactions on Visualization & Computer Graphics (Proc. IEEE VIS), Link, Document Cited by: §2, §2, §4.
  • [51] B. Saunders, J. Kitzinger, and C. Kitzinger (2015) Anonymising Interview Data: Challenges And Compromise In Practice. Qualitative Research 15 (5), pp. 616–632 (en). ISSN 1468-7941, Link, Document Cited by: §5.1.
  • [52] Freedom Scientific (2021) JAWS Web Verbosity. Link Cited by: §3.3.
  • [53] P. Sengers, K. Boehner, S. David, and J. ’. Kaye (2005) Reflective Design. In ACM Conference on Critical Computing: Between Sense and Sensibility, CC ’05, New York, NY, USA, pp. 49–58. , Link, Document Cited by: §3.
  • [54] A. Sharif, S. S. Chintalapati, J. O. Wobbrock, and K. Reinecke (2021) Understanding Screen-Reader Users’ Experiences with Online Data Visualizations. In ACM Conference on Computers and Accessibility (SIGACCESS), ASSETS ’21, New York, NY, USA, pp. 1–16. , Link, Document Cited by: §1, §1, §2, §3.1, §5.3, §5.3.
  • [55] L. Sheppard and F. K. Aldrich (2001) Tactile Graphics In School Education: Perspectives From Pupils. British Journal of Visual Impairment 19 (3), pp. 93–97 (en). ISSN 0264-6196, Link, Document Cited by: §5.3.
  • [56] A. Shew (2020-03) Ableism, Technoableism, and Future AI. IEEE Technology and Society Magazine 39 (1), pp. 40–85. ISSN 1937-416X, Document Cited by: §3.
  • [57] B. Shneiderman (2003) The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. In The Craft of Information Visualization, B. B. Bederson and B. Shneiderman (Eds.), Interactive Technologies, pp. 364–371 (en). , Link, Document Cited by: §1, §5.3.
  • [58] W3C (2018) WAI-ARIA Graphics Module. Link Cited by: §2.
  • [59] W3C (2018) Web Accessibility Laws & Policies. (en). Link Cited by: §1.
  • [60] W3C (2019) WAI Web Accessibility Tutorials: Complex Images. Link Cited by: §1, §2, §3.3.
  • [61] WebAIM (2021) Screen Reader User Survey #9 Results. Link Cited by: §5.1.
  • [62] M. Weninger, G. Ortner, T. Hahn, O. Druemmer, and K. Miesenberger (2015) ASVG Accessible Scalable Vector Graphics: Intention Trees To Make Charts More Accessible And Usable. Journal of Assistive Technologies 9, pp. 239–246. Document Cited by: §3.1.
  • [63] J. W. Wiedel and P. A. Groves (1969) Tactual Mapping: Design, Reproduction, Reading and Interpretation. Technical Report D-2557-S, Department of Health, Education, and Welfare, Washington, D.C. (en). Link Cited by: §3.2, §3, §5.3.
  • [64] K. Wu, E. Petersen, T. Ahmad, D. Burlinson, S. Tanis, and D. A. Szafir (2021) Understanding Data Accessibility for People with Intellectual and Developmental Disabilities. In ACM Conference on Human Factors in Computing Systems (CHI), (en). Link, Document Cited by: §2.
  • [65] J. S. Yi, Y. A. Kang, J. Stasko, and J. A. Jacko (2007) Toward a Deeper Understanding of the Role of Interaction in Information Visualization. IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS) 13 (6), pp. 1224–1231. ISSN 1941-0506, Link, Document Cited by: §1, §6.3.
  • [66] W. Yu, R. Ramloll, and S. Brewster (2001) Haptic Graphs For Blind Computer Users. Lecture Notes in Computer Science 2058, pp. 41–51 (en). Link, Document Cited by: §2.