Written by Joshue O Connor Thursday, 31 March 2011 13:34
I recently attended the 26th International Technology & Persons with Disabilities Conference in San Diego, CA. CSUN (or
#csun11 if you do Twitter) has long been one of the world's biggest and best Assistive Technology conferences, attracting many diverse groups with interests spanning the Web and AT, who come together to share research findings.
Many hot topics like HTML5 were discussed, new development tools were discovered, old friends were met and new friends were made!
Apart from being held in a fantastic location, going to CSUN is a great opportunity to meet with your peers in an atmosphere conducive to inspiration and collaboration. I met face to face with many people with an interest and professional involvement in accessibility, whom I felt I had known for years from various email lists or via following them on Twitter.
I was at CSUN presenting a paper on CFIT's work as part of the VICON project, which is all about virtualising the user testing process. For more on the VICON project, see the VICON website.
I have listed some of the interesting sessions that I attended below and have included some comments and notes about them.
Global Public Inclusive Infrastructure (GPII)
The first presentation I saw was about the Global Public Inclusive Infrastructure (GPII). The idea behind the GPII is to ensure that everyone who faces accessibility barriers due to disability, literacy or aging, regardless of economic resources, can access and use the Internet and all its information, communities, and services for education, employment, daily living, civic participation, health, and safety.
The idea is to create an infrastructure that supports both commercial and non-commercial efforts to:
- make accessibility more affordable - to users, industry, government, and other stakeholders
- reach more of the people who need it than the estimated 3 - 15% we reach now
- make it all simpler
- serve disabilities and aging groups we don't now serve or serve well
The GPII aims to build access that will work with the new technologies that are coming (which won't work with many of our current access strategies) and, finally, to build something that can be replicated locally in other countries, including those that don't have good access technologies or infrastructure today, allowing them to provide access to their citizens and visitors as well.
GPII is a part of the ‘Raising the Floor’ initiative.
During the presentation, while I was impressed with the scope of the vision of both initiatives (though I will comment solely on the GPII), I was unsure about its conceptual focus. As an idea, the GPII seemed to break down at a certain point, because it over-emphasised itself as an Assistive Technology framework. From what I saw, I don't think it is one: the initiative seemed to be more about creating a ubiquitous framework for customisation and personalisation of user interfaces.
While this is an excellent idea (taking existing UI implementations and tweaking them via personal profiling to provide larger text sizes or improved colour contrast as needed), I felt that it has little to do with Assistive Technology and more to do with Universal Access via user profiling. It is important to get these things right because, in my opinion, at a certain point the abstractions and language break down.
There was unfortunately no time for questions, and I came away from the presentation wondering: Will the framework provide 'push' Assistive Technology and/or personalisation over the wire? How would this model really work? Most users of AT already have what they need (though it is often expensive); they (or the government and health services) have usually paid a lot of money for these Assistive Technologies. How will the GPII framework compete with, or enhance, established business models?
One area where I can see the GPII being very successful is in so-called 'developing countries'. If 'push' (I won't call it AT anymore) personalised Universal Access could be made available over the wire in poorer countries, without high cost, this would be fantastic. It would be an initiative that I would wholeheartedly support.
We shall see how the GPII works out over the next couple of years, and I don't mean to be critical of the venture, but I do think that in order to achieve its goals it shouldn't align itself too closely with being an AT service. If it presented itself instead as a universal access framework, it might gain more traction, as it could be something that everyone can benefit from, not just people with disabilities.
Comparison Study of PDF Accessibility Checking Tools
This presentation covered the findings of a comparison study of PDF accessibility evaluation tools: how these tools compare with manual PDF accessibility testing, and which performs best. It was presented by Christy Blew (University of Illinois at Urbana-Champaign) and Jon Gunderson (University of Illinois).
This was a very interesting presentation as it highlighted both what is good and bad about the current batch of available PDF accessibility testing tools. Some of the main tools that were mentioned were:
- Adobe Acrobat 9/Acrobat Pro 9 and 10
- EGovMon Web tool
- PDF A11y Checker – PAC
- CommonLook by NetCentric
The test files were built in Word 2003/2010 and exported to tagged PDF; they were then retagged and compared with an untagged version of the same PDF.
Some interesting observations were:
- Using ‘Print to PDF’ strips out accessibility information such as structured content and alternate text.
- Title element was added to PDF.
- Detection of lists was hit and miss; at best, a tool would only say that content was tagged as a list.
- The tools report what they can't find, not what they do find.
- Tables – the tools won't report the number of tables or column/row spans, and Acrobat Pro would pass tables with no TH elements.
So how do PDF checking tools stack up?
The findings from this research were interesting:
- Very basic testing is fine, but anything beyond that isn’t.
- The Quick check feature isn’t great.
- No tools can truly test the accessibility of a PDF.
- All require manual follow up or testing with a screen reader.
- There is little consistency between tools.
- Each tool can report false positives for an element.
- Acrobat Pro offers a batch testing option, which is very helpful for testing a large number of PDF files at any one time.
- Acrobat and CommonLook seemed to be the best, for both accessibility checking and fixing/repair.
New Accessibility Testing tool for Script Heavy Applications
Some of the more unusual things that FireEyes, a new accessibility testing tool aimed at script-heavy applications, can do are:
- Simplify error reproduction by recording error scripts in a single location that show the exact sequence of steps required to trigger the problem, and let you share the report with others so they can see exactly how it happened.
- It automatically builds a single report that covers multiple pages or entire use cases, and provides a comprehensive list of issues to fix by giving you a transcript of the entire session that you view as a single, drillable report.
- It includes custom rules for evaluating dynamic content and WAI-ARIA compliance. For example, FireEyes can detect content insertions, track focus changes, and inform the developer of situations that may warrant the use of ARIA-live, ARIA roles, keyboard accessibility, and focus management.
- It includes a reading-order analysis with and without CSS, DOM mutation tracking, report filtering, interactive issue remediation, issue retesting, report exporting and script recording and playback. I haven't fully checked it out just yet, but from what I have seen and heard there is a buzz about it and it looks very useful.
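To illustrate the kind of situation FireEyes flags, the snippet below is a minimal sketch (my own illustration, not taken from FireEyes itself) of an ARIA live region: content inserted into it dynamically is announced by screen readers because of the aria-live attribute.

```html
<!-- A status area whose updates are announced by screen readers.
     aria-live="polite" waits until the user is idle;
     "assertive" would interrupt immediately. -->
<div id="status" role="status" aria-live="polite">
  Ready.
</div>

<script>
  // Because the container is a live region, inserting content
  // like this notifies AT users without moving their focus.
  document.getElementById('status').textContent = 'Upload complete.';
</script>
```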
HTML 5 Accessibility session
There was an interesting panel discussion on HTML5 Accessibility with Rich Schwerdtfeger (IBM), John Foliot (Stanford University), Steve Faulkner (TPG) and Cynthia Shelly (Microsoft). Many aspects of HTML accessibility were discussed, such as video in HTML5 and WAI-ARIA.
I have written about WAI-ARIA (or just ARIA) before on the CFIT website, so I won't go into it in too much detail here; suffice to say that it is a fantastically useful addition to the web developer's toolkit, allowing them to build accessible Rich Internet Applications. More about WAI-ARIA can be found here.
What I really enjoyed from the CSUN demonstration was John Foliot's talk on HTML5 video, in which he discussed caption and subtitle formats. It was very interesting to get a good overview of how captioning and subtitling will work in HTML5. Even though I am a member of the HTML5 working group, it is impossible to be involved in all aspects of the spec, so it's great to get a chance to sit down and have it all explained to you by someone who has expertise in the area of video in HTML5.
John explained some early video formats and their limitations. Also of interest was the battle over the various formats that can be used for video on the web. Firstly, there is the codec issue (the codec is important as it is what encodes and/or decodes a digital data stream or signal).
The elephant in the room is H.264, a common standard for video compression used in high-definition video. It is used for Blu-ray movies, Apple has a lot invested in it, and the Flash player supports it too. However, H.264 requires a hefty licensing fee for vendors to use, so for browsers like Mozilla's Firefox and Opera this wasn't something they were going to support, preferring as they do the more open formats of Ogg Theora and VP8 (WebM).
IE9 supports H.264 out of the box (with recent support for VP8 (WebM) via a plugin), as do Safari and Chrome. Chrome is the only browser that supports all three of the major codecs (Ogg Theora, H.264 and VP8 (WebM)), although support for H.264 is to be removed from Chrome, as the browser maker claims it wishes to support only “open innovation”.
Then there is the burning issue of formats for captioning and audio description of video. First there was WebSRT, which does not support accessibility very well. Other formats came along, such as WebVTT (Web Video Text Tracks, .vtt) from the WHATWG, which according to John is still evolving and has some positive things going for it from an accessibility perspective.
Then there are other possible formats, such as Timed Text Markup Language (.xml, .dfxp). People who make movies, such as the Society of Motion Picture and Television Engineers, have their own preference: SMPTE-TT.
As for where the 'accessibility captioning format wars' (what a mouthful!) currently stand: browsers are not supporting these formats natively, but .srt/.vtt can be supported with plugins and the like.
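For reference, a WebVTT file is just plain text with timed cues; this minimal example (contents invented for illustration) shows the general shape of the format:

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Hello, and welcome to this demonstration.

00:00:04.500 --> 00:00:08.000
<v Narrator>Cues can also identify the speaker.
```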
So formats aside, how will video work in HTML5? How will authors make their content accessible? No doubt there will be authoring tools that make the process easier in time, but let's have a look at some of the code. The video element has a child element called track, which is used to point to external timed text files such as captions or subtitles. Both the @kind attribute (which states what sort of timed text the file contains) and the @srclang attribute (which states its language) are also very important.
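To make this concrete, here is a hedged sketch of captioned HTML5 video markup (the file names are placeholders of my own, not examples from the talk):

```html
<!-- Multiple sources let each browser pick a codec it supports,
     working around the codec stand-off described above. -->
<video controls width="640" height="360">
  <source src="lecture.webm" type="video/webm">
  <source src="lecture.mp4"  type="video/mp4">

  <!-- The track element references external timed text files.
       @kind says what the text is for; @srclang gives its language. -->
  <track src="lecture-en.vtt" kind="captions"  srclang="en" label="English">
  <track src="lecture-fr.vtt" kind="subtitles" srclang="fr" label="Français">

  <!-- Fallback for browsers with no HTML5 video support. -->
  <p>Your browser does not support HTML5 video.
     <a href="lecture.mp4">Download the video</a>.</p>
</video>
```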
Accessible JQuery Widgets Presentation
Another highlight for me from CSUN was a presentation given by Hans Hillen of The Paciello Group (TPG), who showed us a fantastic array of accessible jQuery widgets from his work with the AEGIS project. These were demonstrated using the JAWS and NVDA screen readers, and were impressive because they showed how it is possible to create widgets and controls, such as the jQuery slider, that have great functionality for both sighted and non-sighted users of AT.
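As a rough sketch of the general technique (this is my own illustration, not Hans Hillen's actual code), a custom slider is made perceivable and operable for AT users by combining the ARIA slider role, value attributes, and keyboard handling:

```html
<!-- A custom slider exposed to AT via the ARIA slider role.
     Screen readers announce the role, range and current value. -->
<div id="volume" role="slider" tabindex="0"
     aria-label="Volume"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="50">
</div>

<script>
  // Keyboard support: arrow keys change the value, and
  // aria-valuenow is kept in sync so AT announces the change.
  var slider = document.getElementById('volume');
  slider.addEventListener('keydown', function (e) {
    var value = parseInt(slider.getAttribute('aria-valuenow'), 10);
    if (e.keyCode === 38 || e.keyCode === 39) { value = Math.min(100, value + 5); } // Up/Right
    if (e.keyCode === 37 || e.keyCode === 40) { value = Math.max(0, value - 5); }   // Left/Down
    slider.setAttribute('aria-valuenow', value);
  }, false);
</script>
```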
Another interesting presentation I saw at the conference was 'Unifying Video and Audio Descriptions', given by IBM Research (Japan). Their research showed that audio descriptions (with TTS) could increase comprehension by up to 30%. They have also developed an authoring tool, called Script Editor, in which both captions and audio descriptions are supported in a single view. Version two of the tool should be out soon.
IE9 Accessibility and Performance
So what had Microsoft to say about IE9? To find out, I went to a presentation on accessibility and the newly launched browser. It looks like a fast, impressive browser with good support for open standards. IE9 is a hardware-accelerated browser, using the GPU/CPU, which greatly increases performance and puts it up there with some of the Web's fastest browsers, such as Opera.
IE9 also supports fallback content in the canvas element, which is a hot topic at the moment. How this will work is still being developed, but it's great that Microsoft made this move in the first place.
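For context, fallback content in canvas simply means the markup placed between the canvas tags, which browsers without canvas support render and which assistive technology can read; a minimal sketch (with invented data):

```html
<!-- The content inside the canvas element is the fallback:
     it describes the drawing for AT users and older browsers. -->
<canvas id="chart" width="400" height="300">
  <p>Bar chart of monthly sales: January 40, February 55, March 72.</p>
</canvas>
```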
Another announcement at the Microsoft event came from GW Micro, whose Window-Eyes screen reader (version 7.7) now supports UI Automation, a more powerful accessibility API than MSAA, which is much older but still in use.
The Web Accessibility Game Plan
And finally, it was great fun to have an open session about the 'Accessibility Gameplan' that was hosted by Jared Smith of WebAIM. Jared has long been a skilled advocate of all things related to Web Accessibility and Inclusive Design. He has continually helped to develop a global sense of community through the excellent WebAIM mailing list, and to provide practical advice for authors and developers through the fine tutorials available on WebAIM. So it was great to attend a session where Jared was chairing an engaging panel including Sandi Wassmer, John Foliot, and Jennison Asuncion.
It was great fun; as Jared said later in his blog post about the 'Accessibility Gameplan' session, Twitter was on fire with over 300 tweets during the session!
So there are some highlights from my experience at CSUN. At the end of the conference I took part in the HTML Accessibility Task Force face-to-face meeting, where we discussed issues such as video in HTML and ARIA lexical processing, and we ate expensive sandwiches.
I have to say I had a great time, met up with some great people (you know who you are) and it was totally worthwhile traversing the planet just to see y'all!