The Core Accessibility API Mappings Recommendation describes how user agents should expose semantics of web content languages to accessibility APIs. This helps users with disabilities to obtain and interact with information using assistive technologies. Documenting these mappings promotes interoperable exposure of roles, states, properties, and events implemented by accessibility APIs and helps to ensure that this information appears in a manner consistent with author intent.
The purpose of these tests is to help ensure that user agents support the requirements of the Recommendation.
The general approach for this testing is to enable both manual and automated testing, with a preference for automation.
To run these tests in an automated fashion, you will need a special Assistive Technology Test Adapter (ATTA) for the platform under test. A list of ATTAs for popular platforms will be provided here as they become available.
The ATTA will monitor the window under test via the platform's accessibility layer, forwarding information about the accessibility tree to the running test so that it can evaluate support for the various features under test.
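The forwarding step can be pictured as serializing nodes of the platform accessibility tree into JSON for the running test to inspect. The sketch below is a minimal illustration, assuming a hypothetical node structure (`role`, `states`, `children`); it is not the actual ATTA wire format, which varies by platform and harness.

```python
import json

def serialize_node(node):
    """Convert a (hypothetical) platform accessibility node into a
    plain dict suitable for forwarding to the test as JSON."""
    return {
        "role": node["role"],
        # Sort states so the payload is deterministic and easy to compare.
        "states": sorted(node["states"]),
        "children": [serialize_node(child) for child in node.get("children", [])],
    }

# Example: a checkbox as it might be exposed by the accessibility layer.
tree = {
    "role": "checkbox",
    "states": {"focusable", "checkable"},
    "children": [],
}

payload = json.dumps(serialize_node(tree))
print(payload)
```

A running test would compare such a payload against the roles, states, and properties the mapping tables require for the element under test.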
The workflow for running these tests is as follows:
1. Start the ATTA for the platform under test.
2. Start the test driver window, select the core-aam tests to be run, and click "Start".
3. A window pops up showing a test, whose description tells the tester what is being tested. An automated test proceeds without user intervention; a manual test may require user input or verification.
4. The test runs. Success or failure is determined and reported to the test driver window, which then cycles to the next test in the sequence.
5. Repeat steps 2-4 until done.
6. Download the JSON-format report of test results, which can then be inspected visually, reported on using various tools, or submitted to W3C via GitHub for evaluation and collection in the Implementation Report.
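A quick local inspection of the downloaded report might look like the sketch below. The structure assumed here (`{"results": [{"test": ..., "status": ...}]}`) mirrors the general shape of wpt result logs, but the exact schema may differ; the test paths are illustrative.

```python
import json
from collections import Counter

# Hypothetical excerpt of a downloaded JSON results report.
report = json.loads("""
{
  "results": [
    {"test": "/core-aam/role-checkbox.html", "status": "PASS"},
    {"test": "/core-aam/role-switch.html", "status": "FAIL"},
    {"test": "/core-aam/state-checked.html", "status": "PASS"}
  ]
}
""")

# Tally statuses to get a quick pass/fail summary before deeper review.
tally = Counter(entry["status"] for entry in report["results"])
print(dict(tally))  # e.g. {'PASS': 2, 'FAIL': 1}
```

For formal reporting, the same JSON would be passed on to W3C rather than summarized locally.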
Remember that while these tests are written to exercise implementations, their other important purpose is to increase confidence that interoperable implementations exist. So implementers are the audience, but these tests are not meant to be a comprehensive test suite for a client implementing the Recommendation.
As tests are run against implementations, if the test results are submitted to test-results, they will automatically be included in documents generated by wptreport. The same tool can be used locally to view reports of recorded results.