Volume II Number 3, August 1995

Conference Report: Access to GUIs

Doug Wakefield
Access to GUIs
Vallombrosa Conference Center
Menlo Park, California
May 23 - May 26, 1995

SETTING ACCESSIBILITY STANDARDS FOR COMPUTER SYSTEMS

Is it possible to establish objective performance standards to assess a computer system's ability to provide access to graphics-based applications for people who are blind? This was one of the main issues debated at the Access to GUIs conference held in Menlo Park, California, in May of this year. The focus of the conference was access to graphical user interfaces (GUIs) by people who are blind or visually impaired. Sponsored by the Western Blind Rehabilitation Center of the Department of Veterans Affairs, Stanford University's Project Archimedes, and the Sensory Access Foundation, the three-day series of meetings was attended by vendors of access equipment as well as representatives from universities and corporations. This report discusses the action taken by the Center for Information Technology Accommodation (CITA) at the General Services Administration in the area of testing the accessibility of computer systems purchased by the U.S. government.

The topic of standards for accessible systems is nothing new. In the past, however, these discussions were often quite narrow in scope, usually focusing exclusively on access software rather than on the accessibility of complete systems. The degree to which a computer system is accessible depends on the hardware platform, the applications being run, and the type of adaptive system being used to provide access. As the computing community moves increasingly toward graphical systems, access for people who are blind is becoming more dependent on the total design of computer systems.

The U.S. government purchases computer systems that are intended to perform very specific tasks. When an agency, such as the Social Security Administration or the IRS, wants to purchase new equipment, it distributes a request for proposals (RFP), and contractors propose various systems to meet the requirements set forth therein. Often, RFPs contain references to accessibility by people with disabilities. To meet these requirements, large contractors often collaborate with smaller accessibility vendors. In the past, problems have arisen because many contracts are awarded without the accessibility of a system being tested or proven. As a result, agencies often purchase systems that are inaccessible and fail to live up to expectations. Both the agency's administration and the employee with a disability suffer when this occurs.

This lack of testing is the main motivation for CITA to begin spearheading performance benchmarks that can be applied to all government purchases. The proposal was presented at the GUI conference. Below is a summary of the Center's approach and of the reactions at the conference.

SETTING BENCHMARKS FOR TESTING

CITA is proposing that government purchasing decisions be based on performance testing results from an independent facility. This testing process would be used to measure a system's ability to meet the specific requirements set forth in an agency's RFP, as well as for small purchases from the non-mandatory contract schedules. Rather than being concerned with access alone, the benchmarking process would look at the entire system.

Hardware Considerations

When a computer system is equipped with hardware or software to make it useful for people with disabilities, the adaptation always involves the addition of input or output devices that must operate concurrently with all other computer functions while remaining "transparent" to those functions. For this reason, adaptive technologies operate most efficiently on systems that "tolerate" a wide variety of add-ons. Because adaptive systems replace or enhance the basic input or output schemes of the computer, they utilize computing resources such as hardware interrupts, I/O addresses, and extra memory. The resources utilized by the normal operations of the computer and those used by an adaptive system often conflict, leaving the system either inoperative or inaccessible. For these reasons, a system's hardware can be tested and judged as to its accessibility on considerations ranging from the number of ISA slots to the use of proprietary plugs and connectors.
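
As a simple illustration of the kind of conflict described above, the following C fragment is a minimal sketch, not drawn from the conference itself, of how two ISA-era devices might be described by the interrupt line and I/O port range they claim, and how a testing tool could flag an overlap. The device names and resource values are invented for illustration only.

#include <stdio.h>

/* The resources an ISA-era add-on card typically claims. */
struct device {
    const char *name;
    int irq;                 /* hardware interrupt line */
    unsigned short io_base;  /* first I/O port address  */
    unsigned short io_len;   /* number of ports claimed */
};

/* Two devices conflict if they share an interrupt line
   or if their I/O port ranges overlap. */
int conflicts(const struct device *a, const struct device *b)
{
    int same_irq   = (a->irq == b->irq);
    int io_overlap = (a->io_base < b->io_base + b->io_len) &&
                     (b->io_base < a->io_base + a->io_len);
    return same_irq || io_overlap;
}

int main(void)
{
    /* Illustrative values only: a sound card and an add-on speech
       synthesizer card that happen to be jumpered to the same IRQ. */
    struct device sound  = { "sound card",         5, 0x220, 16 };
    struct device speech = { "speech synthesizer", 5, 0x300,  8 };

    if (conflicts(&sound, &speech))
        printf("%s and %s conflict; the adaptive device may fail.\n",
               sound.name, speech.name);
    return 0;
}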

Software Considerations

When testing a system's ability to be made "accessible," it is necessary to take into consideration both the operating system software and the specific applications required by a purchaser. Over the last decade, the manufacturers of adaptive equipment have concentrated on designing systems to work in the DOS environment. A few companies have also developed access systems for the Macintosh. With the growing popularity of Microsoft Windows, some of these same manufacturers have ported their technology to the Windows environment. Almost no research or development has taken place to make UNIX systems accommodate adaptive systems, but there are exceptions. A project based at Georgia Tech has been exploring ways to make X-Windows under UNIX usable by people who are blind or visually impaired. A group of companies including Sun, IBM, and DEC has formed a consortium to develop software standards for applications running under X-Windows so that they can be made accessible. Because of these variations in operating systems' accessibility, any testing program must ensure that functioning access systems exist for the operating system under consideration.

Software Applications

The accessibility of any system depends a great deal on the applications being run. As with hardware components, software programs that take a unique approach to such things as screen writes or keyboard handling often run the risk of being inaccessible. The reason is quite simple: the producers of access systems have to predict how software programs will operate. The access packages for Windows assume that applications follow the programming conventions of Windows. When an application employs a unique approach to the way it interacts with the Windows environment or the computer, there is no way for access programmers to predict what the application is attempting to do.
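
To make the point concrete, the following C fragment is a minimal sketch, not taken from the conference, of two ways a Windows application of this era might put the word "Save" on the screen. The function names and values are invented for illustration; the underlying point is that an access package which intercepts standard GDI text calls can capture the string passed to a routine such as TextOut, while text that reaches the screen only as copied pixels leaves nothing for a screen reader to recover.

#include <windows.h>

/* Conventional approach: the label travels through GDI as a string,
   so an access package that hooks TextOut can capture the word "Save". */
void PaintLabelConventional(HDC hdc)
{
    TextOutA(hdc, 10, 10, "Save", 4);
}

/* Unconventional approach: the application pre-renders the label into a
   bitmap and copies the pixels to the screen. BitBlt carries no string,
   so the screen reader never "sees" any text at all. */
void PaintLabelAsPixels(HDC hdc, HBITMAP hbmLabel, int width, int height)
{
    HDC hdcMem = CreateCompatibleDC(hdc);
    HBITMAP hbmOld = (HBITMAP)SelectObject(hdcMem, hbmLabel);
    BitBlt(hdc, 10, 10, width, height, hdcMem, 0, 0, SRCCOPY);
    SelectObject(hdcMem, hbmOld);
    DeleteDC(hdcMem);
}

In the first case, a screen reader's off-screen model receives both the text and its position; in the second, it receives only a rectangle of pixels.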

An application's accessibility for people who have a visual disability also depends on how well its programmers have provided alternatives to the graphical interface so that information is presented in a fashion that can be accessed by a screen reading software package. Currently, both Microsoft and the companies working on X-Windows applications are developing accessibility guidelines for others to follow. These guidelines will be used as a measure of a system's accessibility.

Access Applications Considerations

A distinction should be drawn between evaluating an access package's ability to provide access and evaluating its user interface. User interfaces are often very subjective in their appeal. While some users may prefer an approach that allows all functions to be initiated from the keyboard, other users may prefer separate keypads or other devices so there is no chance of keyboard conflicts between the access program and the applications being accessed. When evaluating an access product's performance, the emphasis should be on what information the product can provide rather than on how it provides that information.

Conference Reactions

Many concerns about this proposal were raised at the GUI conference. In general, there was widespread agreement among the participants that some type of standard needs to be established. The concerns expressed, however, included the following:

Who is going to establish the accessibility standards for adaptive equipment?

How will the testing of systems be carried out? For example, will real users be involved?

Is there any mechanism that can ensure that either vendors or agencies will insist that benchmark testing be done?

How do you take the subjective aspects of access out of the testing process?

Finally, who will pay for the testing: the purchaser or the seller?

CITA is moving ahead on this project with assistance from conference attendees. The National Software Testing Laboratory is one avenue for continued discussion on this issue. The laboratory already conducts similar performance benchmarks, exclusive of accessibility, for the Canadian government, and discussions with CITA's Canadian counterparts about enhancing this existing program to address access are likely.

What benefit will all this have for the non-government employee? The government is the single largest purchaser of computing systems in the country. Government buyers today, at all levels, are easily persuaded of the requirements and benefits of technology that accommodates all users; no one wants to be responsible for a failure in this area. They are looking for ways to buy technology with greater confidence that their investments will be accessible to all users. If developers know that the accessibility performance of their products will be published in BYTE magazine and be readily available worldwide on the Web to all buyers, including federal, state, and local government buyers, then the overall level of accessibility awareness should rise among all companies, and all consumers will benefit.

Wakefield, D. (1995). Conference report: Access to GUIs. Information Technology and Disabilities E-Journal, 2(3).