A11yNYC Jan 22 2019 – Time Traveler’s Guide to Accessibility Mechanics – Léonie Watson


>>So welcome, everyone, to our second Accessibility
New York City Meetup of 2019! So our typical schedule is to have our meetup
on the first Tuesday of every month. We are very excited to have Leonie Watson
today from Bristol, United Kingdom. So we are having our event on a different
schedule. That’s why we’re having two this month, because
we’re so excited to have her here. It’s been such a great opportunity. So we’re happy to see a lot of people here. As always at our events, we’re gonna have captioning available up on the screen with the presentation. We always have CART: Communication Access Realtime Translation. Today we have Mirabai Knight from White Coat Captioning, providing the captions for us. Thanks to her. MIRABAI: My pleasure! (applause) >>So we’ll always have that at our events. We also have Joly MacFie here doing our live
stream, so if you’re ever not able to come to our events, we always livestream the event,
thanks to Joly’s work with the… Help me out with the proper way to say the
new working group. The Accessibility Special Interest Group at
the Internet Society. So we always appreciate being able to have
this stream. Any of the talks, if you haven’t been here
before, or you’ve come back, we have all of the talks from the meetup archived on our
YouTube channel. My name is Thomas Logan. I don’t think I introduced myself. I’m one of the organizers here. If you haven’t met me, I would love to meet
you after the event. We also have all of our other organizers here
today as well. So we have Cameron, Shawn, and Tyson in the
back. And Tyson is also here — works for Thoughtbot,
which is the space that we’re in today. So we always say thank you to Thoughtbot for
being our event host. We’ve had a lot of great events here, and
continue to be appreciative of Thoughtbot for their sponsorship. We also have Level Access and Adobe as sponsors
now for our meetup event, and we thank both of those organizations for helping to make
sure that we can always put on an accessible and inclusive event. With that being said, I think we’re now ready
just to hand it over to Leonie and learn about the Time Traveler’s Guide to Accessibility
Mechanics. LEONIE: Thank you, Thomas! Hello! And thank you very much for turning out twice
in a month. I hadn’t realized this was the case. So it’s really very sweet of you all to come
along to this talk. My name is Leonie. Until very recently, I worked for the Paciello Group, known as TPG. Now I’m proud and somewhat terrified to tell
you that I’m the owner of a new group called TetraLogical, working in accessibility but
with a focus on emerging technologies like VR, voice assistant applications, as well
as some traditional consultancy in accessibility. Today I’d like to talk to you about accessibility
mechanics: the past, the present, and the future. And why it’s important for all of us, particularly those of us who code for the web, to be time travelers; to take the best of the past, the present, and
the future, to make things as accessible as we possibly can. So: browsers, or browser engines, or the organizations that are responsible for browser engines (at least in the case of Microsoft, for not very much longer…). But these will all be familiar to you. They’re the applications that are the popular
tools, certainly in the Western part of the world, for consuming content on the web. Many of you might also be familiar with the
assistive technologies known as screen readers, and the ones that run on desktop platforms. If you’re not, a screen reader is now available on all common desktop platforms, as well as mobile. On Windows, there’s Narrator, which is integrated. On macOS, there’s VoiceOver, also integrated. On Linux, you can sometimes get Orca integrated if you use the Ubuntu distribution; otherwise it’s available as a download. On Windows, you also have the free, open source NVDA screen reader and the proprietary JAWS screen reader. And a screen reader is a piece of technology
that converts what’s on screen visually into synthetic speech or refreshable Braille, so
someone like me who is blind and can’t see the screen can understand the content. So let’s look at the past and particularly
at the relationship between these two things: The browser and the screen reader. Because it turns out they actually have a
very, very interdependent relationship. What used to happen back in the ’90s, when
Windows was the only platform that had a screen reader worth its salt, the browser would take
an HTML document, and it would parse it, and it would display it visually. If you included form fields or other things you could interact with, it would provide the necessary interaction. And then the screen reader would come along
and do more or less exactly the same thing, completely independently. It would take the HTML and it would parse
it, but this time, instead of displaying it visually, it would create something that we
now know as the virtual buffer, an alternative version of the HTML document specifically
for the screen reader and the screen reader user to be able to interact with. On the one hand, this was quite useful, because
it let the browser do what browsers did best. It let the screen reader do what screen readers
did best, and in both cases, to do what they thought best for their particular audiences. It did have some fairly fundamental problems,
though. When you ask two people to go away and interpret
the same piece of information, almost invariably, they will disagree on how it should be done. The other problem was that with this particular
relationship at the time, if the browser crashed, it took your screen reader out with it, or
vice versa. If the screen reader crashed, it took your
browser out with it. Pretty much nothing but a reboot would cause
things to settle back into happy working mode. So it wasn’t a particularly brilliant way
for things to happen, but it did do the trick. And it enabled people who use screen reading
technology to consume web content in a reasonably useful way. >>Posted in blank (inaudible) on January
14th 2017, on the web for the (inaudible) to be highlighted visually but providing an
alternative for screen reader users has often involved something of a lack (inaudible) to
solve this problem. LEONIE: So in the ’90s, what screen readers
used to do for the most part was to read content top to bottom almost like today you would
read a text document. There wasn’t much in the way of useful navigation
available to screen reader users or anything very much other than an ability to read text. By the 2000s, things had started to change,
and the browsers had implemented support for things known as the platform accessibility
APIs. And screen readers had begun using these APIs
to ask the browser for accessibility information. So instead of going off and doing their own
thing separately, the browser now parsed the HTML document and produced the visual display,
the interaction, and also made accessibility information available to screen readers when
they asked for it. There are platform accessibility APIs on every
platform. Windows actually has three, currently. It has UIA, MSAA, and IAccessible2. The first two are its own. The third is what enables applications like Chrome and Firefox to be accessible. macOS has the AX API, and Linux also has its own
APIs. And these exist within the operating system
to let the screen reader ask for accessibility information about almost anything that’s on
screen. So it might see a button on screen and ask
the API: Give me all the information you’ve got about this object on the screen. It will get some useful information in return. When it comes to the browser and the HTML
elements that we know all about, there’s a lot of that information there. So, for example, if we use something like
the main element, the browser will expose a number of different pieces of information
about it. This is supposed to represent the main content
area of the page. The screen reader will ask the browser when
it finds this element: What is this thing? What is its role? Its purpose? The screen reader will say something like
this to the user. >>Main region. LEONIE: It just announces quite simply that they’ve hit the region of the page that represents the main content. Something similar happens with the nav element. The browser says to the screen reader: this element has a role of navigation, and the screen reader can pick that up and tell the user that’s what they’re dealing with. >>Navigation region.
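For reference, the two landmark elements she’s just described; minimal markup, with placeholder content:

    <nav>
      <a href="/">Home</a>
    </nav>

    <main>
      <h1>Page title</h1>
      <p>The main content of the page.</p>
    </main>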
LEONIE: And we can see more of this if we look at something like the OL element. A number of things happen here. The screen reader can find out from the browser
that the OL itself is a list. But the browser is a bit smarter. It counts up the number of list item elements
inside the list and it tells the screen reader how many of them there are. So a screen reader user gets an immediate
sense of the fact that they’re dealing with a list and that it’s got three items in it. >>List of three items. One, do this. Two, do that. Three, do something else. List end.
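The list in the demo would be marked up something like this; the item text is taken from the announcement:

    <ol>
      <li>Do this</li>
      <li>Do that</li>
      <li>Do something else</li>
    </ol>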
LEONIE: So you heard the one, two, three, because it’s an ordered list and the screen reader picks that up from the browser. What you might have missed is that it said
list of three items, so the screen reader user is immediately aware of the size and
quantity of the list they’re dealing with. Headings have the same thing. They have a role of heading that the screen reader can pick up from the browser, and it also picks up the number of the heading through another property that the browser makes available: in this case, the level of the heading in question.
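So, for example; the heading text and level here are assumed:

    <h2>Section title</h2>
    <!-- Exposed with a role of heading and a level of 2. -->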
It also works with form elements. So, for
example, we could take a checkbox: an input with a type of checkbox, and an associated label element using the for/id attribute pairing. The browser connects up all this information. It associates the label with the input and makes that information available to the screen reader. It uses the type attribute to tell the screen reader user that they’re dealing with a checkbox, as opposed to perhaps a radio button or other form of input. And of course, the screen reader can come along and use all that information to help the user. Maybe. Or not. Okay. What it can do, though, is to tell the user that they’re dealing with a checkbox, and as the checkbox is checked, the browser will also send a notification to the screen reader… >>Check box not checked. Checked. LEONIE: That the checkbox had been checked! There we go. Awesome.
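The for/id pairing she describes looks like this; the label text is assumed:

    <input type="checkbox" id="subscribe">
    <label for="subscribe">Subscribe to updates</label>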
So all that information keeps coming from the browser to the screen reader for the asking. Lastly, I’ll take the favorite of all accessibility
talks, the button element. And this unsurprisingly has a role of button. Something that’s a little bit different here is that the text inside the button element provides what’s known as the accessible name for the button. This is how the screen reader differentiates one button on the page from any other buttons that might be there. With a bit of luck, this will play. Hmm. Nope. Or it’s gonna pause and do what it did before. Hang on. Bear with me a second. Okay. That’s not gonna work. Never mind. >>Play button!
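The element in question, with its text content providing the accessible name:

    <button>Play</button>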
LEONIE: It’s gonna do this to me all the way through, isn’t it? Sorry. We come on now to the present. And things have moved on a bit from the days
where we could build or did build web technologies, websites, using only the elements that were
prescribed in HTML. For some time now, the best part of a decade
or more, we’ve been creating things that don’t exist naturally in HTML. We’ve been inventing controls that we use
very routinely in software applications, but we’ve been inventing them or coming up with
creative ways to implement them in the browser context. Tabs and tab panels are a notable example. We’ve been using mostly div and span elements
to do this. We’ve been using them as our building blocks
for the custom UI controls that we create. But there is a bit of a problem. div and span elements don’t have any really
useful accessibility information. When the browser sees one, it doesn’t convey
any information to the screen reader at all. So if we see something… >>Bold. Italic. LEONIE: Like this toolbar, all the screen
reader is aware of is the text inside the div and span elements. It might be styled to look like a toolbar. It might be JavaScripted to behave like one. But as far as the screen reader is concerned, all the browser has given it is just the two pieces of text on the page. The screen reader is utterly oblivious to the rest of it.
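Something along these lines; a hypothetical sketch, since the demo markup isn’t shown:

    <!-- Styled to look like a toolbar, scripted to behave like one,
         but as far as the browser's accessibility information goes,
         just two pieces of text. -->
    <div class="toolbar">
      <span class="toolbar-button">Bold</span>
      <span class="toolbar-button">Italic</span>
    </div>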
Fortunately, we can do something about that. We can use ARIA, Accessible Rich Internet Applications, to polyfill the rich accessibility information when we need it. ARIA 1.1 is the current recommendation from
the W3C. ARIA 1.2 is well on its way to heading down
the path to become an official standard. And it gives us a bunch of different attributes
that we can use to add accessibility information to things like div and span elements when
we need to fill it in. The role attribute has more than 70 different
values in the ARIA 1.2 specification. And with the role attribute, we can take something
like a div or a span, use the role attribute, and depending on the value we give it, we
can fool the browser into thinking that it’s a completely different sort of a thing. And there are roles for almost every kind
of UI control you can think of. For buttons and radio buttons, for tabs and
tab lists, for progress bars and tool bars, for tables, cells, and the list goes on. As I say, pretty much every kind of control
and interaction point that you can think of… By now, there is a role in ARIA to let you
reproduce that accessibility information. So if you’re using a div and a span that doesn’t
have a role of its own, you can use the role attribute with one of these values to make
sure it has the right accessibility role information.
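For example, faking the tabs she mentions; a sketch of the roles only, not complete tab behavior:

    <div role="tablist">
      <div role="tab">Overview</div>
      <div role="tab">Details</div>
    </div>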
There are also 45 or more aria- prefixed attributes, and these can be used to add different kinds of accessibility information or properties
or characteristics. Things like aria-pressed for buttons; aria-expanded for things that expand or collapse; aria-required for things that must be filled in, in a form; aria-invalid for those that have been filled in with incorrect information. And so these too we can use when we’re creating
custom components to fill in the accessibility information that otherwise just wouldn’t be there.
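A few of those attributes in place; hypothetical values, just to show the shapes:

    <div role="button" aria-pressed="false">Mute</div>
    <div role="button" aria-expanded="false">Show details</div>
    <input type="text" aria-required="true" aria-invalid="false">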
So we can take… I will say this… a truly terrible example. Please don’t ever do this. It does happen in the wild, though: a span element pretending to be a button. As it stands at the moment, we take a span
element with the word “play” in the middle to simulate perhaps a play button on a video
player, but because there’s no accessibility information in the browser, the screen reader’s
got nothing to go on except for the word “play”, so we have to set about fixing this particular
problem, and we start with the role attribute, and in this case, a value of button. If we add that to our span, the browser in
its accessibility tree will now present this piece of HTML as though it were a button. The accessibility tree is the structure that
the browser creates to hold all the accessibility information about the elements on a page. It’s completely separate from the document object model, the DOM, that we use for interacting with things like scripting. So it’s only in
the accessibility sense that this thing is now pretending or being presented as a button. In every other respect, it’s still just plain
old span. We can add in the tabindex attribute, because one of the other things that a span doesn’t do is any form of interaction. You can’t focus on it with a keyboard. If you click on it with a mouse, nothing much happens either. So we start off using tabindex with a value of 0, to make sure that somebody using the Tab key can focus on the button, which is obviously pretty important if you want to be able to interact with it. We then have to provide the interaction. Developers are often quick to put in the mouse
interaction, the mouse functionality, but it’s important to add in the keyboard functionality as well. The expected behaviors of a button, of course, are to be able to press it with the Space or Enter key. So the JavaScript that we use to create functionality for the button has to listen for both of those things, for when someone presses the Space key or the Enter key, and it has to trigger the functionality when either is detected.
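Put together, the faked button looks something like this; a minimal sketch, assuming the styling and the real play behavior live elsewhere:

    <span id="play" role="button" tabindex="0">Play</span>

    <script>
      const playControl = document.getElementById('play');

      function activate() {
        // Whatever the control actually does; a placeholder here.
        console.log('Play activated');
      }

      // Mouse users.
      playControl.addEventListener('click', activate);

      // Keyboard users: a real button responds to both Enter and Space.
      playControl.addEventListener('keydown', (event) => {
        if (event.key === 'Enter' || event.key === ' ') {
          event.preventDefault(); // stop Space from scrolling the page
          activate();
        }
      });
    </script>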
And so now we have another demo, which is almost certainly not gonna work, but if we do all of this, our span should actually behave something like this… >>Play button. LEONIE: There we go! So we had a play button. Which is pretty much what it says on the tin. It’s a button with play as its label. And when activated, it looks like it’s pressed. Except… You may have noticed that the screen reader
didn’t communicate that piece of information. It said there was a button there, but it didn’t
acknowledge that it was either pressed or unpressed or anything of the sort. The solution in this case comes with one of those aria- prefixed attributes, the aria-pressed attribute. If we add that to the button with a value of false initially, the screen reader will now determine that it’s a slightly different kind of button. It’s a toggle button, i.e., one that can be toggled on or off. If someone hits it with the Space key, or clicks it, or taps on it, we toggle it to aria-pressed equals true, and the browser will let the screen reader know that it’s currently in the pressed state. We can tie the functionality of the button into that attribute. So we can make sure that as someone interacts with the button, the value of the aria-pressed attribute is changed appropriately. Toggles on, toggles off; toggles to true, toggles to false.
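In the sketch from before, that means the activate function flips the attribute; the event wiring is the same as in the previous sketch:

    <span id="play" role="button" tabindex="0" aria-pressed="false">Play</span>

    <script>
      const playControl = document.getElementById('play');

      function activate() {
        // Read the current state and flip it.
        const pressed = playControl.getAttribute('aria-pressed') === 'true';
        playControl.setAttribute('aria-pressed', String(!pressed));
      }
    </script>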
Something nice you can do if you’re building something like this is to attach that functionality and the CSS together. So if you make the CSS change the visual appearance
based on the value that the ARIA pressed attribute has, you get quite a neat effect. It’s particularly useful if you are wanting
to bugfix your code. A lot of accessibility, particularly ARIA and the accessibility that we use to help screen reader users, is invisible while you’re developing. Unless you’re testing it with a screen reader,
you don’t notice all the accessibility things that are going on with the browser and the
screen reader. But if you tie the visual appearance into
the ARIA state, if you press your button and it doesn’t look like it’s been pressed, the
first place you need to go looking is your accessibility information to see if the ARIA
pressed attribute is in the state it should be. So it’s a useful way to hook things together into a visual paradigm as well as an accessibility one.
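The hook she’s describing is a CSS attribute selector keyed off the ARIA state; the actual styles here are assumed:

    /* The visual appearance follows the accessibility state, so a button
       that doesn't look pressed points straight at a wrong aria-pressed
       value. */
    [role="button"][aria-pressed="true"] {
      background-color: #333;
      color: #fff;
    }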
What we do with this now is we get a slightly different experience for the screen reader user. Fingers crossed. >>Play toggle button not pressed. Pressed. Not pressed. LEONIE: So now it’s announced as a toggle
button, as I mentioned, but also the current state of the button is communicated to the
screen reader user, whether pressed or not pressed, so now we’ve got a much more complete
experience, compared to the visual experience or the mouse user’s experience. One other thing I will note, because it’s
just the way things are: rather than doing all that tabindexing and JavaScripting around, we would all be much better off if we just used the aria-pressed attribute on the button
element. If we use the button element, we get the keyboard
focus for free. We only have to provide the JavaScript for
the mouse, because the browser will hook it into the keyboard interaction for you automatically. We don’t have to worry about role equals button either. The only thing we have to add in is aria-pressed
to get that state change information. So fewer moving parts. Much less prone to breaking. So using the button element is always much better than trying to fake it with a span.
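For comparison, the same control on a native button; only the state flip has to be written, and what the control actually does is assumed:

    <button id="play" aria-pressed="false">Play</button>

    <script>
      const playButton = document.getElementById('play');

      // Enter and Space already fire click events on a real button,
      // and it's focusable by default, so this is all the wiring needed.
      playButton.addEventListener('click', () => {
        const pressed = playButton.getAttribute('aria-pressed') === 'true';
        playButton.setAttribute('aria-pressed', String(!pressed));
      });
    </script>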
We also, in the present day, are moving into an era where web components and custom elements are becoming more and more popular. One of the leading energy suppliers, British
Gas, in the United Kingdom, has just released a brand-new version of its homepage that’s
almost entirely populated with web components and custom elements. So this is technology that is now increasingly
hitting production mainstream. If you’re not familiar with it, a custom element
is an element that does something that doesn’t exist natively in HTML. So we can create elements that do things that
we haven’t previously been able to do, unless it was by using div and span elements and
a whole bunch of scripting and ARIA to make things work. So we start off and we extend an element. In this case, we’re going to extend HTMLElement to create something called a toggle-button. And we can put that together. There’s a point in the code where you attach
something known as the shadow root. That may not be a familiar term to you. But for those of you who are developers, you’ll
be more familiar with the concept than you might think. When you use something like the audio or video
element, we just put open video element, give it a source, close the video element. But when someone uses it in the browser, the
browser has obligingly populated the page with all the user controls you need to play,
pause, change the volume, change the incremental timer, all of those things. We don’t have to do that as developers. The browser takes care of it. And that’s the basic concept of a shadow root. What we put in the HTML actually has much
more going on under the hood that isn’t immediately obvious from the HTML that we write. And making a custom element is very, very
similar. When we come to write the HTML code for our
element, we’ll just put a very simple one line of HTML in. We’ll see in a minute. But there’s a lot more going on under the
hood that won’t ever be visible to the end user or even to someone having a look at the
code of the website. So we can just see the constructor there, and how it’s put together, and where we’ve got the shadow root being attached. This is a very simplified example, because you can’t squish too much code on a slide, unfortunately, without it getting unreadable.
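The slide’s exact code isn’t in the transcript, so this is a sketch along the same lines, with the details assumed:

    class ToggleButton extends HTMLElement {
      constructor() {
        super();
        // The hidden, encapsulated part of the element.
        const shadow = this.attachShadow({ mode: 'open' });
        shadow.innerHTML = '<slot></slot>';
      }

      connectedCallback() {
        // The same ARIA wiring as the hand-rolled version. (Setting
        // these on the host is exactly what makes the attributes show
        // up in the page's HTML, as discussed below.)
        this.setAttribute('role', 'button');
        this.setAttribute('tabindex', '0');
        this.setAttribute('aria-pressed', 'false');
        this.addEventListener('click', () => {
          const pressed = this.getAttribute('aria-pressed') === 'true';
          this.setAttribute('aria-pressed', String(!pressed));
        });
        // Enter/Space key handling omitted for brevity; a real version
        // needs it, as in the earlier sketch.
      }
    }

    customElements.define('toggle-button', ToggleButton);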
We can take the code from our previous custom toggle button example, so we can put exactly the same JavaScript-driven functionality in
to use the aria-pressed attribute on our toggle button, and remember, all this is happening
tucked away, out of sight of all of the code of the web page. So it’s a much neater way, from a coding point
of view, to use HTML with custom elements than to have to put all this into the HTML
page itself or its associated documents. So that’s what we would put in our HTML. We would just open a toggle-button element, put the word “play” in the middle, pretty much the same as we did with the button element.
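That is, the entire page-level markup for the control:

    <toggle-button>Play</toggle-button>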
With custom elements, you have to have a hyphen, or a dash, in the name somewhere. It’s the easy way to distinguish them from,
quote, “real” HTML elements. But from a user’s point of view or from a developer’s point of view, this is all the code we would need to put in our HTML page to produce
our toggle button. So all of the JavaScript is hidden out of
the way, as is most of the rest of the code. What would happen when a user comes along
and uses this with a screen reader is something like this.>>Play toggle. LEONIE: It’s not feeling like it. Essentially it’s much like the custom button
from before. It gets announced as a toggle button and its
pressed state is announced. >>Button not pressed. Pressed. Not… LEONIE: Okay. Let’s go for it. From the implementation point of view, as a developer,
we’ve got much simpler code to write, and of course, we can reuse these elements as
much as we want across the website. All you need to do is just write the toggle
button code in the HTML. There is a problem, though. All of the code that we wrote inside our custom
element is hidden out of the way, much like it is for an audio or video element, except
for the ARIA. If you go and run this in your browser and then inspect the code, you’ll still see the aria-pressed attribute present on the toggle button, even though it was never actually applied by you in the HTML at all. And this is what’s known as leaking HTML attributes. In this respect, it’s perhaps not a major
problem. But it does make for kind of untidiness, and
developers, if nothing else, like tidy code. We like things that are nice and clean and
orderly, with everything in its place and everything having a place. And so this leakage of accessibility information
when we write custom elements is a bit irksome from a development point of view. And so we move on to the future. And work that is going on to try and solve
the problem of ARIA leakage when we write custom elements. And also, a potential solution to making it
easier and perhaps more interesting for developers to be able to deal with the browser more directly
in terms of the accessibility information. And it comes in the form of something known
as the accessibility object model. It’s an experimental JavaScript API that’s
being developed by people from Apple, Google, and Mozilla. The GitHub URL is up on the screen. And I’m sorry. I don’t recall it to read it out. But it’s being developed very much in the
open, and I want you to bear that in mind, as we go on through this, because commenting
on standards like this, which are developed at the W3C, is incredibly important for all of us who ever have occasion to think we might want to use these standards. It can’t really be emphasized enough how useful
it is for specification editors to hear from people who will ultimately be using the standards
that they create, to make sure they’re heading in the right direction. Not doing anything crazy or unhelpful. So there are lots of phases to what’s happening
with the AOM. The first one is that you’ll be able to use ARIA attribute and element reflection. This is also being featured in the ARIA 1.2 specification. So it’s the thing we’re most likely going to see get some browser support in the very near future. And it’s quite a simple change, but it really
simplifies the way we can set accessibility information using JavaScript. At the moment, what you have to do is write the kind of quite clumsy code: something like button.setAttribute, and then give it the name of the attribute and the value of the attribute as arguments. Now, it’s not a big problem. It’s not too unwieldy. But again, developers like things neat and tidy, and what this phase of the AOM will let us do is use a more conventional, common JavaScript mode of doing things. So you’ll be able to do something like button.role equals button. There’s another example up on screen, which is: button, as in the thing we want to apply the accessibility setting to, dot ariaPressed equals true. You’ll notice, if you’re familiar with ARIA and recall the aria- prefix that I was mentioning earlier, that in the AOM all the dashes in the aria-something attributes have vanished. So aria-pressed becomes just ariaPressed, one word.
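Side by side, a sketch of the two styles, using aria-pressed as the example:

    // Today: string-based attribute plumbing.
    button.setAttribute('role', 'button');
    button.setAttribute('aria-pressed', 'true');

    // With ARIA attribute reflection: ordinary properties.
    button.role = 'button';
    button.ariaPressed = 'true';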
That’s a slight oddity in the change of syntax. But this in theory will make it easier to write JavaScript that sets accessibility settings and has them recognized by the browser. So far, this is looking like a positive thing. Phase 1b will let authors, particularly when they’re
writing things like custom elements, set some accessibility characteristics at the time
the custom element is created. And these are immutable. They’re not gonna be changeable. Once the element has actually shown up in the browser, you won’t be able to do anything to change the values that are set at this time. So this is much like the way the browser already provides all that accessibility information we saw at the start. Things like the role and other bits and pieces.
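As a rough sketch of the idea, this is the shape it eventually took in the ElementInternals API that grew out of this work; it is not the phase 1b syntax as proposed at the time:

    class ToggleButton extends HTMLElement {
      constructor() {
        super();
        // Default semantics, set when the element is created, without
        // sprouting aria-* attributes into the page's HTML.
        const internals = this.attachInternals();
        internals.role = 'button';
        internals.ariaPressed = 'false';
      }
    }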
It will also be possible to change some other accessibility characteristics in response to user interactions. So aria-pressed is another good example of
this. When the user presses a button, we want to
be able to change the aria-pressed state, and that’s gonna become a lot easier if this
phase of the AOM goes ahead. Phase II. There will be the opportunity to respond to
new user events: things like increment and decrement, page scroll up, page scroll down. These are being added not so much for desktop use but for mobile use. Being able to increment something or decrement
something is an extremely common pattern on a touchscreen device, as is scrolling pageup,
pagedown. But there are no specific events to deal with this. We have key events and mouse click events and all sorts of other events now, but these were missing. So it’s going to introduce some events we can respond to in our JavaScript.
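Hypothetically, that might look something like this. These event names come from the proposal and have never shipped in any browser:

    const slider = document.querySelector('[role="slider"]');
    let value = 50;

    // Proposed AOM input events; illustrative only.
    slider.addEventListener('increment', () => {
      value = Math.min(100, value + 10);
      slider.setAttribute('aria-valuenow', String(value));
    });

    slider.addEventListener('decrement', () => {
      value = Math.max(0, value - 10);
      slider.setAttribute('aria-valuenow', String(value));
    });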
Phase III is where things really start to get interesting. I say interesting both in quotes and out of
quotes. Interesting in the sense of quite exciting. It will let us add virtual accessibility information
to the accessibility tree in the browser. Up until this point, the browser is the only thing that has been able to get its hands on the accessibility tree. This for the first time will give us as developers
JavaScript access to that information. And the ability to add new information into
the accessibility tree. So it will be possible to add something into
the accessibility tree that otherwise doesn’t exist on the page itself. And this is where the “interesting” with quote
marks comes in. On the one hand, this could be a horror story
waiting to happen. We could very well end up with a time where
there are two completely separate versions of a piece of content. One in the accessibility sense, and one in
the… What everybody else gets sense. My personal thought is that if you’ve gone
to the trouble of learning all your ARIA, all your accessibility, how all these mechanics work, you’re probably not going to be the sort of person that’s going to go down the
route of developing two completely different and independent implementations of the same
thing. I do think it might well have some uses, though. One of the things I really dislike but still
find necessary is the need to use CSS to hide some information offscreen, usually, for the
benefit of screen reader users. It doesn’t happen so often these days, but
there are still use cases for doing that.
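The offscreen-hiding technique in question is usually some variant of this pattern; the details vary from site to site:

    /* Visually hidden, but still exposed to screen readers. */
    .visually-hidden {
      position: absolute;
      width: 1px;
      height: 1px;
      overflow: hidden;
      clip: rect(0 0 0 0);
      white-space: nowrap;
    }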
The problem, of course, is that if you are someone who doesn’t use the stylesheets provided by the website, i.e., doesn’t have the stylesheets
that hide that text out of everybody’s view, you’re gonna get all these peculiar little
messages that don’t mean anything to you, because you’re a low vision person who prefers
a different set of stylesheets. So it seems a little bit hacky, a little bit
dirty. If these come into being, we could use it
to provide those little snippets of information that are aimed precisely at screen reader
users, without risking them getting in the way of anybody else’s experience. And that has some merit. Phase IV is the final phase of the AOM, and
it will really just tie things up. One of the most interesting things is that, through JavaScript, it will let us walk the entire accessibility tree. We can do this with the DOM already; in fact, we do it to find out how many elements of X variety there are on the page, and we do some interaction based around that. This will enable us to do something very similar. So it would be possible to create quite screen
reader-like functionality as part of the scripting for a web page. So if you wanted to add in some particular
functionality — I’ll use heading navigation as an existing example. By walking the accessibility tree, a screen
reader is able to tell how many headings there are on the page and provide the user with
the functionality to jump quickly from one to the next using a shortcut. With this ability to walk the accessibility tree using JavaScript, developers will effectively be able to create interactions, shortcuts, bits and pieces like that for themselves.
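For comparison, the DOM version of that idea as it works today; a rough sketch, with the shortcut key assumed:

    // Collect headings the way a screen reader shortcut does,
    // but via the DOM rather than the accessibility tree.
    const headings = document.querySelectorAll('h1, h2, h3, h4, h5, h6');
    let current = -1;

    document.addEventListener('keydown', (event) => {
      if (event.key !== 'h' || headings.length === 0) return;
      current = (current + 1) % headings.length;
      headings[current].setAttribute('tabindex', '-1'); // make it focusable
      headings[current].focus();
    });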
And again, that could go either way. It could get really complicated or it could be really useful. The truth will be: if we get this far, it
will probably be a little from column A and a little from column B. Before I just finish
up with some final thoughts on why we need to be a time traveler, there is one important
thing I want to say about the AOM. Actually two. One is that as it stands at the moment, there
isn’t any browser support for any of this. Not even behind a flag. As I said, it’s very likely we will see browser
support for the first, the attribute reflection, probably within the next few months, I should
think. As for the rest of it, it’s still very much
a work in progress, and there are a lot of questions that the editors and the community
working with them need to answer. And that’s where input from people like yourselves
becomes incredibly important. One of the biggest questions from my point
of view is that of privacy. Because there’s a very interesting dilemma
coming our way. Well, coming my way, certainly. And that’s that: Once we allow developers
access to the accessibility tree, it gives them the ability to identify that the person
using the website not only has a disability but very specifically is a screen reader user. And that’s because the browser doesn’t create
the accessibility tree unless it sees an assistive technology running. And now, that might be any assistive technology. A screen reader, screen magnifier, or speech
recognition tool. But the only assistive technology that really
supports ARIA in any fashion is a screen reader. And that’s what the accessibility object model
is all about. It’s a different way to use ARIA to change
the accessibility tree. So we have a big question around how much
of our privacy are we willing to sacrifice in order to be given an accessible experience. I’m pretty clear on that. I’m not willing to give away that much of
my private information. I’ll quite happily stand here and talk to
you about being a blind person, but that’s my choice to do that in the time and place
I see fit. I don’t want every advertising person that
has JavaScript and accessibility chops to be able to find out amongst goodness knows
what else they already seem to know about me that I’m blind and I use a screen reader. That really, really makes me unhappy as a
prospect. The editors of the AOM are very well aware
of this, and they’re trying very hard to find solutions, but it’s a difficult problem. And this is where input from as many people
as possible becomes incredibly important to try and answer these very big and really quite
difficult questions. But back to the talk. And why it’s important to be a time traveler. Quite simply, we have a lot of good accessibility
things we can call on. We have the native accessibility of HTML in
the original sense. We have the ability to add in accessibility
now with ARIA in many different versions. And as we move into an era where we get to
be able to play a little bit more directly with the accessibility relationship between
the browser, the website, and the screen reader, we have even more exciting possibilities to
call on. But if we don’t take the best that we’ve got
from the past and the present and the future, if we don’t time travel a little bit with
our accessibility, then we’re really missing a trick. Because none of these solutions in and of
themselves — none of them in their own right — enables us to make the technologies that
we want to create today as accessible as we need them to be. So thank you very much. (applause)>>We’ll now open it up for questions. If anyone has a question, just raise your
hand, and I’ll bring the microphone over to you.>>Shawn here. So in your presentation, you started off with
HTML and how that worked and how screen readers worked and kind of each level of technology
added on… But what ended up kind of happening (inaudible)
so you have HTML, and then we get the ARIA on top. Now you have HTML and ARIA. In strange, sometimes wonderful ways. Now you have the AOM which seems like it could
add another layer. Now you have HTML with ARIA and AOM. So how is that being taken into account, the
design of AOM? LEONIE: Sorry, with the design of…>>AOM? LEONIE: Is the layering being taken into account?>>The interplays of each of them, as developers
learn and unlearn the right ways of doing these things. LEONIE: That’s a very good question. The editors do include people like Alex Boxell
from Google, who are very well versed in accessibility. So they are thinking about the complexity. Like a lot of people who are very focused
on trying to find a solution to a problem, it’s perhaps true that they don’t necessarily
consider all of the angles or haven’t so far, and that again brings me back to the point
as to why it’s important to hear from the likes of us, who have to go away and use these
things eventually. But I think the layering thing is interesting. I think it’s almost like a transfer of power
or a transfer of responsibilities. You know, back at the beginning, the screen
reader did its thing. The browser did its thing. Now what we’re getting with AOM is that we’re
throwing the developer into that relationship as well. So we used to have to rely on the browser
and the screen reader to do it. Now with the browser and the screen reader
— we can kind of inject ourselves into that process. And what’s the phrase? With great power comes great responsibility. That remains to be seen. And as I said, I’m pretty sure we’ll see a
lot of good coming out of something like this, if we can fix the big questions like privacy,
but the complexity will almost certainly mean we end up with some pretty disastrous outcomes
as well, I expect. That’s what happened with ARIA. So… It’s… Yeah.>>Next question? THOMAS: This is Thomas. I had a question on just the custom element
concept, which currently exists in HTML. So if you built a custom element, right now
the solution is you would add the ARIA attributes on to the sort of top level element, and that
is removing the tidiness, but is still something possible. Or, when you get into complex custom elements, is there not a way to actually make them accessible today? LEONIE: No, it’s more that when you are writing
your custom element, when you’re actually creating the hidden code part of it, if you
use ARIA anywhere in that shadow element, shadow part of the custom element, it will
show up in the rendered HTML of the page. So you didn’t have to apply the ARIA directly to your toggle button, as it was in my example, when you wrote toggle-button into your code. Just the fact that the ARIA attribute was used in the shadow DOM, the hidden-away part of that element, means for the moment the browser will cough it up and add it to the toggle-button element in the HTML whether you like it or not. And that’s where the messiness gets in. You’ve tried to contain everything else inside
your custom elements. The stylesheets are contained, the behavior
is contained. Nothing else escapes outside of what you’re
trying to create. But the accessibility does leak outside, and
that’s what they’re trying to solve. THOMAS: And as far as us as a community, you
mentioned contributing, or having comments on the GitHub sort of working group. That’s a place to add comments. Are there any other things, as far as evangelizing to the browser manufacturers or letting them know… I mean… Recommendations on that? LEONIE: So the GitHub repo is definitely the
place to put your comments in. That’s where they’re paying attention, people
working on the spec. As I said, it’s members of Mozilla, Apple,
and Google who are working on this. Because it solves this problem quite neatly,
and Google is quite invested in custom elements with its Polymer web component library, so
they really want to solve this. To that extent, the browsers are pretty well
involved already. What I don’t think we have enough involvement
from is our community. As I say, people who are ultimately gonna
be using this, when it becomes supported. So yeah, the GitHub repo is definitely the
place to ask questions. Though they’re a really friendly bunch, the
editors who work on this. Ask questions. They will try to help. THOMAS: Thank you. Got a question up front.>>I’ve got a silly question. LEONIE: I like those.>>Last time I had my code linted for accessibility,
I got hit for having an emoticon in my code. It said that’s not accessible. Remove it. So what are your thoughts on emoticons or
emoji? More or less a happy face or sad face in place of words. I had some linting tools that would catch
all the low hanging fruit and it was like… Don’t use it. That one just bummed me out. LEONIE: That’s an interesting one. Actually, the Unicode emoji now are reasonably
well supported, certainly by different screen readers. Only the very newest — generation 5 or 6,
I think — aren’t terribly well supported. So now as a rule if you use one of those emoji
in your code the screen reader at least will recognize it. I guess what it was being failed on was probably
no accessible name, something like that. If you want to take a very robust approach
to it, the way to do it is a little bit… hacky. But it does the trick. Wrap your emoji in a span, give the span a role of img, and then use aria-label to give it an alternative text. That should do the trick.
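Something like this; the emoji and the label text are just examples:

    <span role="img" aria-label="Thumbs up">👍</span>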
If you come and find me after the talk, I can point you at an article about it; I blogged about this a little while ago. But yep, that should do it. >>Thank you! >>Hi. Chuck. We’ve been doing a lot of
accessibility development, coming across issues with reCAPTCHA. We’re working with a company called A360,
who is doing our audits, and they’re like… Well, it’s the best thing out there. But it’s still not… It’s still not great. So especially as a blind user, what’s the
frustration factor with reCAPTCHA? Is there anything out there that you’ve noticed
that is kind of better for you as a user but still provides the security level needed for
spam? LEONIE: So version three of reCAPTCHA, if
you’re not familiar with it, doesn’t require any user interaction at all. It’s not too bad as a first attempt. If you trip it, that’s when it gets complicated,
no matter what form, because then it will start asking you to recognize pictures of
things and other bits and pieces. And I saw a great tweet a couple of days ago,
which is… am I the only one that finds it worrying that Google are developing self-driving cars, and they’re also the ones asking me to identify traffic lights by CAPTCHA? Genius of the web in less than 160 characters. So yeah. reCAPTCHA v3, providing you don’t trip it, is
actually pretty unintrusive. It’s intrusive in the sense that I still don’t
trust anything that recognizes I’m using a keyboard and possibly other bits and pieces. But perhaps that’s just my sense of privacy. Going back to reCAPTCHA v2, which is the one
that asks you to check a box to confirm you’re not a robot, I don’t find those too problematic
providing I know they’re there. That’s the biggest catch for me. I tend to find I’ve submitted a form and only
when it doesn’t work do I notice a little message that says: convince me you’re not a robot. I’m not a typical screen reader user, though,
so I wouldn’t necessarily take my word for the average experience. It’s with the old school proper Captchas that
of course you run into trouble, generally speaking. I don’t know about anybody here on the visual
ones, but I can never solve the audio ones. I can either understand what they’re saying
but I can’t get to the edit box fast enough to type them, or I can get to the edit box
fast enough to type them, but I can’t understand a word of what’s being spoken to me. So it’s kind of degrees of awful. I actually am still yet to be convinced that
Captchas are necessary at all, but sadly a lot of people believe they’re necessary to
stop bots. I think things like checking for frequency
of form submissions, so if someone is trying to bash an account registration or a login,
until they think they’ve found a genuine email address or something like that… You can protect against that by checking for
the frequency that the server is being pinged for a download of the page. So I think there are better ways we can solve
this problem. But… I’m not an expert in that field. And clearly a lot of people think otherwise. So… Yeah. reCAPTCHA v3 is probably your best bet at the
moment if CAPTCHA is absolutely necessary.>>I just want to follow up on the CAPTCHA,
because I do work with low vision and blind users, working producing digital talking books
and Braille and large print, et cetera. And I have an experience I would love to share. This happened last week. I had a user trying to use CAPTCHA, they’re
a low vision user, tried to scale up on the screen, everything got blurred, so they went
to the audio version. And unfortunately this user speaks multiple
languages. When you go to the audio version, you would
hear multiple languages. This person could understand all of them,
except for: Listen to the English side of the audio and fill in with what was available. It timed out. They would hear Arabic in the background,
French in the background, and the user spoke these languages. So it created more confusion and more of a
barrier to access. The other one I remember from a couple of
years back was when Google would begin autofilling user name and password fields, this user would
go to, say, Gmail, try to login, the field was already filled, but was never told that
the user name or the password field had already been filled in. So they were trying to type in their password
on top of having their password already filled in, and they were being denied access. So I just want to share that experience. LEONIE: Yeah, good examples.>>So my name is Jake. I have a question kind of more about… So we’re looking at this from the approach
of the markup, the HTML that we’re producing to make things accessible. And I’m kind of curious what screen readers
are doing to make what’s being read out loud more predictable. And I’ll give a very concrete example of something. I work with forms a lot, and a very frustrating
user experience I saw was: The combination I think was JAWS and Chrome or some combination
of a screen reader and a browser, and there was a required field that was blank, and when
you were using the screen reader and you toggled into the required field, it would rattle off the label, “required”, and then say “invalid”. Which is really jarring. Because it’s not invalid. You just haven’t proceeded to engage and enter
something. And then if you switch over to a different
screen reader, it didn’t have that problem. And I’m kind of wondering… Are the screen reader companies trying to
come together on some spec, on what should be read out loud, when you engage different
elements? Similar to how CSS, they have really strict
specs when they talk about… As esoteric as a bulleted list followed by
a paragraph, should it have margins — and they get crazy on those specs. I feel like that’s the other side of this. What are they doing to improve the standard? Which… I feel like there isn’t a standard. LEONIE: They’re not is the short answer. ARIA has a spec which I suspect may be what’s
causing the problem that you just described, in which whatever screen reader this was… I see this as quite a common pattern, actually. People will put aria-invalid=true on an empty
form field before anyone has gotten as far as filling anything in, and it’s extremely
jarring. So it might be if you pick that up, you might
solve the problem in the short term, if you remove that until it’s needed.
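In code terms, that fix means applying aria-invalid only after validation has actually failed; a minimal sketch, with the validation check itself assumed:

    <label for="email">Email</label>
    <input type="email" id="email" aria-required="true">

    <script>
      const email = document.getElementById('email');

      // Don't set aria-invalid up front; add it only once the user has
      // had a chance to fill the field in and got it wrong.
      email.addEventListener('blur', () => {
        const valid = email.value.includes('@'); // placeholder check
        email.setAttribute('aria-invalid', String(!valid));
      });
    </script>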
Two larger questions, though: no, the screen readers aren’t working together to find any commonality. What I will say is that most users don’t switch
from one screen reader to another. So they don’t know about these differences. It’s a bit like back in the days when we had
a lot of browser differences in CSS, as you mentioned. I remember working with clients… but the logo is 5 pixels to the right in IE whatever-it-is, compared to Firefox. And people would get incredibly stressed by
this, and you had to sort of point out… Yeah, but most IE users aren’t using Firefox
and most Firefox users aren’t using IE. Provided it looks okay in both, everybody
should be happy. It’s the same with screen readers. If you test with a lot of different screen
readers and browsers it’s possible to get very worried by discrepancies. Possibly like the one you just described they’re
more of a problem, but users are oblivious to these differences. So they’re just used to the experience they
get that they used most of the time. The other problem is it’s not just the screen
reader. It’s so fundamentally related to what’s wrapped
up with the browser. So trying to find some consistency is pretty
hard, unfortunately.>>I want to… Because that’s what I was taking away. I was looking at these quirks, for lack of
a better word, and I was in the mindset that someone just gets used to that quirk the first
time they go to a form. It blurts out invalid on an empty field and
they move on. LEONIE: Eventually. To varying degrees you do learn a degree of
obliviousness to it. Same with pronunciation. I get people saying… How should I write content that’s accessible
to screen readers? Write content that’s readable to everybody
else and leave us to deal with the peculiarities that screen readers chuck our way. JAWS has an open issue tracker now on GitHub. If you come find me afterwards and give me
a way of contacting you, I’ll drop you a link. Or you should be able to Google it. NVDA has a tracker too. It sounds like you picked up a JAWS Chrome
bug there. It might be worthwhile seeing if you can get
it fixed.>>This is Cameron. The conversation about screen readers and
their nuances and quirks and inefficiencies, et cetera… I mean, it kind of dovetails with a lot of
developments in voice assistants that we talked about earlier, and also with being able to
augment content with the accessibility object model. Do you see these changes impacting the usage
of screen readers as they stand? Do screen readers feel like a satisfactory
solution to the problems that we’re describing? And how do we do better? Maybe in the context of voice assistants or
maybe otherwise. LEONIE: That’s an interesting one. I don’t think that voice assistants are going
to replace screen readers on mobiles or laptops or whatever any time soon. For the simple reason that voice interaction
is very much in the here and now. You ask it a question. You get an answer. You maybe get given two or three options that
you can choose from. And you can remember two or three options. And you choose one and it goes on down the
conversational path. That’s great for very simple, one-track interactions. But a lot of the stuff that we do on the web,
on our desktops and laptops, is a lot more complex. It involves being aware of maybe two or three
pieces of information at different places on the same web page. If you’re searching for a flight, for example,
you might be filtering based on price or class of ticket or something like that. Trying to do that successfully at the moment
through voice interaction for anyone is incredibly difficult, because the mode of conversation
isn’t actually particularly well suited to more complicated tasks. Certainly not at the moment. That might change. I’m not sure we can do much to change how
conversation works and how capable humans are at conversing, though. Conversations generally are quite simple,
one-track things. As to whether screen readers are the right
solution… That’s a good question. I often hear people say… You know, if we were to do this all again
today, we probably wouldn’t create screen readers, but I don’t think I’ve ever heard
anybody say what they would create if they didn’t. Whether that’s because it’s really hard to
think of something… Because screen readers are just so big and
ubiquitous and so much there… It’s like sometimes when you’re staring at
a wall you’ve painted blue and trying to think what it would look like if it was red, it’s
just really difficult. So I don’t know is the short answer to that. I suspect we could come up with an easier
solution. Screen readers are difficult. But again, don’t ask me. I really couldn’t tell you what that might
look like.>>Okay, thank you.>>So I worked at Opera on what we would create
instead of screen readers. And that’s a fairly convincing reason why
we don’t. But the general thing of… What should we tell people about when not
to interfere with what screen readers do? And the same thing with… Again, an example from Opera. Opera had navigation like screen readers now
have, when screen readers didn’t have it, and it was fantastic. But not everyone has that. So how do we deal with that piece of… Some people are actually still living in the
past. With technology that’s not quite up to date. And how do we deal with the… Yeah, we have a great way of adding things
with the accessibility model. Should we add them? And are they actually gonna help? Or are they gonna get in the way? LEONIE: They may well get in the way, for
all the reasons that I said in the talk. The interesting thing about the accessibility
model is that it actually… From the consumer point of view, from the
screen reader’s point of view, it changes nothing. It still gets the information from the browser
in exactly the way it has done for the past 15 years and will more or less do exactly
the same thing it has always done with it. So that part of the relationship won’t change
at all. So in terms of catching up from what technology
you’ve got, what browser you’ve got, or whatever, your browser is probably the only really important
piece of that puzzle. The screen reader, providing it’s set up to
ask the browser for information, will have some success in doing it. There was a very early version of the AOM
that did have some support in browsers, and certainly the screen readers I tested it in
(Mac VoiceOver, NVDA, and JAWS) picked up the information without batting an eye. So the hope is that the technology won’t be
quite as much of a problem in this case. As for the rest of it… All hell might break loose quite rapidly. So we’ll see.>>Hey, my name is Nick. For the past ten years, we’ve been working
with this ideology of “mobile first”, and not only does it affect the way we think,
but it affects our workflow as well. Do you envision AOM being a new workflow,
where it’s kind of accessibility first? Is that something that we can work on first,
and then work on these other layers that we talk about the complexity of these multiple
layers? LEONIE: I think it has to work in tandem with
whichever “first” happens to be flavor of the month at the time. So you can’t separate the presentation of
the content from its accessibility. So if you’re thinking mobile first and you
need to create something custom, you’ve got to make sure that’s accessible. If the AOM gives you a way to do that, you’ve
got to tie the two together from the very beginning. Same if you take a back step and people are
still doing kind of desktop whatever first. If I have to put a first, actually, I’d make
it content-first, and then hook everything else around it. But yeah. We have a number of whatever-first choices
to make, and from a code point of view, mobile first is still incredibly popular. But you can’t separate it from the accessibility,
really. If one happens without the other, you’re gonna
run into problems.>>Any other questions? All right. Well, I think we’d just like to say thank
you again very much to Leonie. This is an awesome talk. (applause) LEONIE: Thank you very much! If you have any other questions, come and
find me afterwards, or my contact info is up on the last screen, and I’ll share the
slides on Twitter in a moment. Please ask me questions any time this evening
or ahead.>>And I’d like to just plug her website. Tink.uk. I’ve been reading Leonie’s posts for many,
many years, and it’s always on some of the latest technologies that are coming out. There’s really interesting thoughts there,
much like we had tonight. But I would say there’s a whole history of
awesome articles on that site. So definitely check that out. Again, I want to thank everyone for coming
tonight. Thank our event sponsors at Thoughtbot. The Internet Society of New York, and the
Accessibility Special Interest Group, Mirabai Knight from White Coat Captioning, Level Access,
and Adobe. We really appreciate being able to have this
event every month and have it accessible. As always, if you’re interested in communicating
about another event or another accessibility-related piece of information, if you have job opportunities
or other events: we have Tanya now, in the back here, who is our volunteer. But we want to start putting those more out
into the community, and helping people that are part of our community learn about other
events that are happening in the city. So feel free to get her contact information. We’ll be also posting that on our Meetup site. With that being said, also we’re looking for
presentations. Also reach out to us. We’re always looking to get scheduled ahead. We won’t be having an event in February, due
to having two in January. But our next event will be in March, and Cameron?>>We have a new email address. It’s [email protected] So if you need to reach us, you can email
that email address, and it will go out to the organizers, and you’ll be able to act
on that.>>Is that dot com?>>[email protected] And for the job posts, ideas, events, you’ll
be able to message us there and we’ll get back to you.>>Thank you. Goodnight! (applause) We have some time in the space, so if you
want to have conversations…
