1.0 has progressed to schema.org (see below). This is cause for celebration.
As we reach this milestone, Gerardo gave a bit of history on the inception of the project. It was funded by Gates through the end of September 2013, as a follow-on to their LRMI work, since accessibility was deemed out of scope in LRMI. There are some remaining funds, which Gates is allowing to be used past the end of the project. These funds will be used in three ways:
- Getting all the materials up to date for 1.0 on the W3C site and a11ymetadata.org. Matt Garrish will be back as the editor for this, working on the specification, best practices, and other content.
- Resolving the remaining items (primarily accessMode, the search-engine-friendly proposals, and the ongoing issues there) as we move towards a 1.1 release. Chuck Myers will be driving this part of things.
- Updating reference implementations (Martin and Adam from Benetech will be driving these):
- Bookshare content with the new property names
- Learning Registry integration, with an update of the JSON-LD data from Bookshare and Khan Academy accessibility content. This content can be seen through the Free.ed.gov UI to the Learning Registry data
- Update of the Google Custom Search engine that has been used with this reference content
- Work with WordPress developers to bring accessibility metadata to one or more WordPress schema.org microdata plugins
We'll make as much progress as we can until funding runs out, but then we'll need to find a structure for sustainability. Three proposals for the future beyond Benetech's drive were discussed, none of them mutually exclusive:
- Running the process through the W3C Semantic Web Interest Group (SWIG) (which is to say, under the same auspices that we've been doing this work in w3.org wikis to date). Note that the SWIG is open to all, not just W3C members.
- Through Access for All (this is a good group to include, but the specification work should be done in a publicly available forum)
- Just running as a maintenance group with the Google group, bringing material to the public vocab list as appropriate.
Since #3 is really the union of #1 and #2 (the AfA folks can join this open activity), we'll continue down that path. The only real difference is that the work will not move back to being solely an AfA effort, as it is now an open schema.org specification. Activity will take place only on the a11ymetadata Google group as we develop proposals, which we will then bring to the public vocabs list when they are concrete. In other words, we won't be bothering the public vocabs list until there is a proposal for 1.1.
There was discussion of submitting content as Notes to the Semantic Web activity, but this would only be needed if there were contention in the group. We decided that we would continue to do updates on the wiki and run discussion through the existing a11ymetadata Google group. New proposals and issue tracking will be written up on the W3C public vocabs wiki.
The a11ymetadata.org evangelism and marketing site will continue to be run by Benetech.
- The following is a review of events as the current properties moved into schema.org (V1.0) over the past month
- We recommended to move forward with accessibilityFeature, accessibilityHazard, accessibilityAPI and accessibilityControl, which went live on schema.org on December 4.
- We decided to eliminate the distinction between app and content; all the proposed properties will be part of CreativeWork.
- We are not going to include ATCompatible in 1.0 and have dropped it from further consideration. At one level it was too detailed in its meaning, and at another too restrictive in being just a boolean. Even though it conveyed value, it was unlikely to be understood and used, so we dropped it.
- We are going to wait for the soon-to-be-published BibEx workExample and exampleOfWork properties instead of using hasAdaptation and isAdaptationOf. See the notes on liaison later in next steps.
- We are going to hold on accessMode. We will explore using the encodingFormat property instead. For example, a book may have encodingFormat values of text/html and image instead of accessModes of textual and visual. We still need to think through other HTML visual/non-textual elements, such as canvas.
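To make the 1.0 outcome concrete, here is a sketch of what a record using the four live properties might look like, with encodingFormat standing in for accessMode as discussed. All of the values (title, formats, feature terms) are illustrative assumptions, not settled decisions; the record is expressed as JSON-LD built from a Python dict:

```python
import json

# Hypothetical metadata for a CreativeWork, using the four properties that
# went live on schema.org on December 4, plus encodingFormat as a stand-in
# for accessMode (an idea under exploration, not a settled decision).
record = {
    "@context": "http://schema.org",
    "@type": "Book",
    "name": "Example Title",  # placeholder, not a real catalog entry
    "accessibilityFeature": ["alternativeText", "longDescription"],
    "accessibilityHazard": ["noFlashingHazard"],
    "accessibilityAPI": ["ARIA"],
    "accessibilityControl": ["fullKeyboardControl"],
    # encodingFormat values of text/html and an image type, instead of
    # accessModes of "textual" and "visual"
    "encodingFormat": ["text/html", "image/png"],
}

print(json.dumps(record, indent=2))
```

The same data could equally be emitted as microdata attributes in HTML; JSON-LD is used here only because it matches the Bookshare / Learning Registry work above.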
- Finalizing any wiki cleanup
- Matt and Madeleine still have some concerns regarding the noX terms on accessibilityHazard, so we should discuss how to resolve them (the proposal is to add Hazard to the end of the terms)
- Matt's discussion on the utility of the various transform properties and extensions (clarifications that came up as he did more writing on this)
- Cleaning up the issue list and removal/archive of items before 1.0
- Establishing a process for maintaining, discussing, and governing the terms on the wiki
- We will use the vocabs mailing list for maintaining terms and publish stable versions as Interest Group Notes in the Semantic Web Interest Group (you may need to click on a high-level menu first) and on the home page of the SWIG.
- We will use the WebSchemas/Accessibility wiki for maintaining terms
- We need to clean up the crosswalk between ONIX and schema.org. In this process, we identified that we may need an accessibilityFeature term of mediaOverlays for ONIX accessibility item 20, and that ONIX accessibility items 11 and 13 combined would map to structuralNavigation. Matt will reach out to Graham Bell for his input. Note that some of this has been done, but it needs another set of eyes.
- accessMode, augmentedAccessMode, and the various proposals for the available access modes, starting the process with use cases
- is/hasAdaptation, the referral of this to the BibEx group, and how best to handle the various relationships. We should give them guidance on this.
As we move forward on accessMode and its properties, it's useful to have all of the use cases identified so that we can evaluate the proposals against them. There are six use cases below, and more are requested. There will be a use case page on the W3C wiki, where each use case should be made more specific: one should propose a set of data, a type of query that would be run, and the expected results. The use cases below will be turned into something more detailed on the site.
- a person looking for a specific media type (video) with an accessibilityFeature (caption) - media type + accessibilityFeature does the job well
- a person driving in a car who can only work with audio content or content that can be expressed in audio - needs augmentedAccessMode to easily determine useful content, but may still have a bias to content that fits by accessMode ahead of augmentedAccessMode
- a teacher who is leading a class where some of the students have disabilities. They would like to find resources for the class, ideally the best resources that can be used by the most students - another case for augmentedAccessMode for search hits, but then seeing the accessMode and accessibilityFeature values so that they can make the best decision
- When a book is made available, it may be available with different accessModes and accessibilityFeatures (the current Bookshare is a very good example: a book can be available in braille (BRF), DAISY with images, DAISY without images, and DAISY audio, all as adaptations of the source book). In this case, the various accessibilityFeatures may be delivered in specific packages, rather than being available on the source file.
- When a book is available on O'Reilly or Amazon, it may be available in a few formats from the offering page (paper, epub, mobi, pdf), as hardbound, paperback, audio CD, or an audio service (Audible). Sometimes these are available for access directly from the page (O'Reilly); in other cases (Amazon), there are links over to separate pages.
- accessModes for epub3 files, which can contain multiple renditions. For example (see draft epub3 content below), a single epub file can contain multiple renditions: a graphic novel, a text novel, and a dramatic audiobook reading. These need to be made available to the user by accessMode.
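The driver and teacher cases above both want "usable via this mode" matching that prefers direct accessMode fits over augmentedAccessMode fallbacks. A minimal sketch of that ranking follows; the records, mode names, and the augmentedAccessMode property itself are all hypothetical, since that proposal is still under discussion:

```python
# Hypothetical resource records. accessMode and augmentedAccessMode are
# proposals under discussion, not shipped schema.org properties.
resources = [
    {"name": "Lecture audio", "accessMode": ["auditory"],
     "augmentedAccessMode": []},
    {"name": "Captioned video", "accessMode": ["auditory", "visual"],
     "augmentedAccessMode": ["textual"]},
    {"name": "Plain article", "accessMode": ["textual"],
     "augmentedAccessMode": ["auditory"]},  # e.g. via text-to-speech
]

def rank_for(mode, items):
    """Return items usable via `mode`, direct accessMode matches first."""
    direct = [r for r in items if mode in r["accessMode"]]
    augmented = [r for r in items
                 if mode not in r["accessMode"]
                 and mode in r["augmentedAccessMode"]]
    return direct + augmented

# The driver-in-a-car case: content usable as audio, with a bias toward
# content that fits by accessMode ahead of augmentedAccessMode.
for r in rank_for("auditory", resources):
    print(r["name"])
```

For the teacher case, the same ranking would run once per student need, with the accessMode and accessibilityFeature values surfaced in the results so the teacher can make the final call.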
Also, note that we may consider giving some media types a default accessMode. For example, a video would default to auditory and visual, and an audio file, such as an MP3, would default to just auditory. A small set of these defaults would be known, covering the most common media types.
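That default idea amounts to a small lookup from media type to assumed accessMode. A sketch follows; which media types get defaults, and what those defaults should be, is still an open question, so the table entries are illustrative guesses only:

```python
# Illustrative defaults only: the actual list of media types and their
# assumed accessModes has not been agreed.
DEFAULT_ACCESS_MODES = {
    "video/mp4": ["auditory", "visual"],
    "audio/mpeg": ["auditory"],   # e.g. MP3
    "image/png": ["visual"],
    "text/html": ["textual"],     # ignoring embedded media for simplicity
}

def default_access_modes(mime_type):
    """Look up the assumed accessMode for a media type, if one is known."""
    return DEFAULT_ACCESS_MODES.get(mime_type, [])

print(default_access_modes("audio/mpeg"))  # -> ['auditory']
```

A record that declares accessMode explicitly would override these defaults; the lookup only fills the gap for content that carries no accessibility metadata at all.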