Assistive Interfaces For The Visually Impaired Using Force Feedback Devices And Distance Transforms
ABSTRACT
Over the last few decades, the natural evolution of user interface models has popularized a standard model based almost exclusively on visual metaphors. This process has left visually impaired users unable to use computers and to access new technologies. Some efforts have been made to reverse this scenario, most of them based on adapting existing models instead of creating specific solutions for the visually impaired community. The development of applications for such users requires the use of new technologies, tools, and communication media.
This article proposes the use of force feedback devices in the planning and implementation of assistive user interfaces, which would help blind users perform simple 2D interaction tasks. By exploring the sense of touch, such devices can be used to improve the efficiency of communication between the user and the interface. This work also investigates the use of distance transforms as a powerful mechanism to support many 2D interaction tasks.
1 INTRODUCTION
It is clear that, due to technological advances in our society, computers are essential tools nowadays. Before the appearance of graphical user interfaces (GUIs) and the popularization of the Windows operating system in the 1980s and 1990s, both computer applications and the operating system itself (DOS) were based solely on text interfaces. Those interfaces did not represent barriers to the visually impaired (NCD, 1996). However, since the advent of graphical interfaces, applications have become more and more graphical and interactive (Myers & Rosson, 1992).
Unfortunately, the appearance of graphical interfaces has brought several difficulties to the visually impaired, causing a severe crisis in that community (Boyd et al, 1990). Graphical interfaces, which are based almost entirely on visual metaphors and direct manipulation of objects, have left the visually impaired community behind in the evolution of computing. During the 1980s migration from DOS to Windows systems in offices and companies, many visually impaired people were dismissed from their jobs (NCD, 1996). It is therefore extremely important to invest in efforts that allow visually impaired people to access computers as efficiently as possible.
1.1 Interface and Interaction Styles
According to Moran (1981), "the user interface must be understood as part of a computational system with which a person comes into contact physically, perceptually, and conceptually". In computer science, the term interface usually denotes the hardware and software components that allow users to interact with the computational system. Interaction style is a generic term that refers to the way users communicate or interact with the computational system (Preece et al, 1998). The most common styles found in interfaces are menus, direct manipulation, form-fills, and natural languages (Preece et al, 1998; Shneiderman, 1992). Interfaces based on direct manipulation are characterized by interpreting user actions (like move, select, drag and drop, etc.) on visual representations of interface objects (e.g., icons), using an input device, typically a mouse (Shneiderman, 1983).
Besides the well-defined interaction styles mentioned above, it is possible to include the WIMP (Windows, Icons, Menus, Pointers) style. This term is usually related to graphical user interfaces (GUIs). These interfaces use visual representations of widgets like windows, buttons, and icons. Users execute actions on these representations using an input device. WIMP is not a single interaction style; it employs several of the basic styles mentioned above, especially menus, direct manipulation, form-fills, and natural languages.
WIMP interfaces are easy to learn and use. Also, most applications have a similar appearance, which helps beginners work with them. However, there are critical issues, especially related to accessibility for the visually impaired.
1.2 The Visually Impaired and the Interface
Unfortunately, many advances brought by the WIMP style did not take into account the particular necessities of the visually impaired (Sjöstrom, 2002). Most computer applications were developed with sighted people in mind (Kamel & Landay, 2000).
Researchers have tackled the accessibility issues of WIMP interfaces using techniques mainly based on voice synthesis and other audible feedback (Dosvox, 1998; James, 1998; Kamel et al, 2001; Kennel, 1996; Mynatt, 1997; Mynatt & Edwards, 1992; Nomad, 1999; Pitt & Edwards, 1996; Raman, 1996; Savidis, 1996; Vanderheiden, 1996; Zajicek et al, 2000). These technologies were quite efficient before the advent of GUIs, due to the simplicity of translating a textual interface (usually sequential) into voice. However, this process is not so efficient in a graphical context (Christian, 2000). Also, voice-based interfaces are not adequate for tasks that involve spatial localization of objects (Kamel & Landay, 2000), despite some efforts to map the spatial structure of graphical objects and interface widgets into non-verbal sound feedback (Kamel et al, 2001; Mynatt, 1997; Mynatt & Edwards, 1992).
A possible solution to this critical problem may involve using other technologies, especially haptic technologies, because they are particularly useful to the visually impaired. However, until now there has been a lack of research on interfaces for the blind (McLaughlin et al, 2002). Also, most interfaces for the visually impaired are designed for low vision users, not for those who are blind (Fraser & Gutwin, 2000; Jacko, 1998; Jacko et al, 2000; Zajicek et al, 1999; Zajicek, 2000; Zajicek et al, 2000).
Some interfaces employ other types of feedback, for instance tactile feedback, but these usually act as a complement to the visual feedback (Roth et al, 2000). Most such works are based on a multimodal interface (haptic and visual) but put too much information on the visual channel and, consequently, reduce the importance of the haptic channel (McLaughlin et al, 2002). In this case, it is nearly impossible for a blind user to access such interfaces.
Indeed, there is a lack of tools specifically designed for the blind, tools based mainly on haptic perception. Haptic perception, combined with hearing, is a very important perceptual mechanism for the blind and must be intensively explored (Yu et al, 2000).
2 BACKGROUND
An individual is considered visually impaired when, despite any optical (glasses, contact lenses) or surgical corrections, there is a severe loss of visual acuity (capacity to perceive details in an image) or loss of visual field (capacity to see simultaneously in several directions). The degree of visual impairment varies from total blindness (absence of light perception) to low vision (an individual with some useful remaining vision).
Recently there has been a tendency toward developing products to be usable by as broad a group of people as possible (Vanderheiden, 2000). In this article, however, the focus is on user interfaces for blind people.
2.1 Mental Model
Despite not having the visual channel, visually impaired people acquire information about the environment by using other senses, especially touch. By using touch, a visually impaired person can perceive shape, size, texture, spatial position, etc. This process, although hard to accomplish, inefficient, and slow (when compared to visual exploration), allows visually impaired people to obtain a good understanding of the environment (Smith, 2001).
The evidence above leads psychologists to discuss the existence of mental models, that is, spatial structures linked to a set of possible operations that can be used by individuals (sighted and blind) to record information about the environment.
Preece et al (1998) state that people build mental models to support their external actions and also believe that mental models represent, among other information, the spatial arrangement of a set of objects in the world. In other words, mental models are closely related to images, but there is a difference: an image is a static representation of a given scene, while mental models offer a dynamic representation.
2.2 Haptic Perception
Haptic means directly related to the sense of touch. In humans, this sense has two independent components: cutaneous and kinetic (Klatzky & Lederman, 2000; Oakley et al, 2000). This rich set of sensorial mechanisms allows people to assess an object's dynamic and material properties, verify and monitor activities in progress, build mental models for invisible parts of a system, etc. Many experiments have been done to discover the strengths and weaknesses of the human haptic system (Klatzky et al, 1985; Klatzky et al, 1993; Lederman & Campbell, 1982; Reed, 1996).
2.3 Exploratory Procedures
There is a set of well-defined hand and finger movements intuitively used by people to perceive many physical properties that are identifiable by touch (Lederman & Klatzky, 1987). These patterns, called exploratory procedures, are classified as lateral motion, pressure, static contact, unsupported holding, enclosure, and contour following (see Figure 1).
Figure 1: Exploratory Procedures (Lederman & Klatzky, 1987): (a) lateral motion; (b) pressure; (c) static contact; (d) unsupported holding; (e) enclosure; (f) contour following
Sighted and blind people use the above patterns in a very similar fashion. The graphical interface for blind people proposed in this article is based on the exploratory procedures mentioned above, especially those related to the perception of shape (contour following), texture (lateral motion), and hardness (pressure).
3 HAPTIC TECHNOLOGIES
A very significant part of our capacity to establish cognitive models of objects in the physical world is closely related to the sense of touch (Massie, 1998). This is the only sense that simultaneously allows input and output, that is, the interaction is bi-directional. Ordinary interfaces use only one direction when interacting with users (Figure 2a), but haptic interfaces can make good use of the bi-directional feature of the human touch, increasing the bandwidth in transferring information between interface and user (Figure 2b).
Experiments have shown that, when touch is used, the error ratio decreases while using an interface (Oakley et al, 2000) and performance increases in poor vision conditions (Fraser & Gutwin, 2000). These results clearly show that, when the user has some kind of visual impairment, the efficiency of an interface can be improved by using other senses.
In contrast with common interactive devices, like mice, keyboards, and joysticks, haptic devices are able to function as an input device (giving the spatial cursor position) and as an output device, applying forces and vibration to some part of the user's body. This feature allows a user to receive information from and interact with an application at the same time.
Figure 2: Types of Interaction (Mensvoort, 2002): (a) unidirectional interaction; (b) bidirectional interaction (force feedback)
Forces and vibrations can be computed by using physically-based models and sent to mechanical actuators on haptic devices, giving users the illusion of touching virtual objects. Depending on the type of actuators, haptic devices can be classified as force feedback or tactile devices.
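As a rough, device-agnostic illustration of this idea, the sketch below shows the typical update loop of a force feedback application: the device position is read, a physically based model is evaluated, and the resulting force is written back to the actuators at a high update rate. The HapticDevice wrapper and its readPosition/applyForce methods are hypothetical stand-ins, not the API of any particular device.

```cpp
// Schematic force feedback update loop (hypothetical device wrapper; the
// readPosition()/applyForce() stubs stand in for real driver calls).
#include <chrono>
#include <thread>

struct Vec2  { float x, y; };
struct Force { float fx, fy; };

// Hypothetical device wrapper: input (position) and output (force) in one object.
struct HapticDevice {
    Vec2 readPosition() { return {0.0f, 0.0f}; }   // stub; a driver call in practice
    void applyForce(const Force&) {}               // stub; a driver call in practice
};

// Any physically based model could go here: spring, viscosity, texture, etc.
Force evaluateModel(const Vec2& p) {
    return {-0.1f * p.x, -0.1f * p.y};             // e.g., a simple centering spring
}

int main() {
    HapticDevice device;
    for (int i = 0; i < 1000; ++i) {               // haptic loops typically run at high rates
        Vec2 p = device.readPosition();            // input: where the user has moved
        Force f = evaluateModel(p);                // reaction force from the model
        device.applyForce(f);                      // output: push back on the user
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    return 0;
}
```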
Force feedback devices have mechanical actuators that apply forces to the user's body, especially the hands or fingers. Such devices can block the user's movements, that is, they can control the cursor position while interacting with the user. Also, force feedback devices can simulate many effects and physical properties, like turbulence, viscosity, hardness, impact, and so on. Such devices are closely related to the kinetic component of the human haptic system. There are many force feedback devices: the SideWinder Joystick (Chen & Marcus, 1998), WingMan Mouse (Wingman, 1999), PHANToM (Massie, 1993), CyberForce (CyberForce, 2002), PenCat/Pro (Hayward, 2001), Pantograph (Ramstein & Hayward, 1994), and The Moose (O'Modhrain & Gillespie, 1997).
Tactile feedback devices have mechanical actuators that apply vibrations to the user's body. Such devices can simulate tactile sensations, especially textures, but cannot constrain cursor movement on the computer screen. Tactile feedback devices are closely related to the cutaneous component of the human haptic system. Some examples of such devices are: the Tangible Mouse (TM, 2002), SmartFinger (Ando et al, 2002), GUIDE (GUIB, 1998), Braille displays (NIST, 2002a; NIST, 2002b; Ramstein, 1996; RNIB, 2002; Roberts et al, 2000), and tactile graphical displays (Fricke & Bäring, 1992; Kowalik & Postawka, 1994).
3.1 Limitations of Haptic Technologies
There are many serious problems with 2D haptic representation of more complex geometric models, such as images and spatial maps (Lederman & Campbell, 1982; Lederman et al, 1985; Eriksson, 1999). Such problems are basically due to the limitations of haptic devices and interfaces.
In general, haptic devices can only produce a force vector at a single point. Therefore, tactile sensations are applied to a single point on the user's body at a time. However, humans are able to process touch sensations at multiple points simultaneously. Thus, the overall perception of any haptic representation is dramatically reduced because tactile exploration is sequential and rather slow (Lederman & Campbell, 1982). This is particularly true when important perceptual properties, like shape and texture, are poorly represented or even omitted due to technological restrictions (Klatzky et al, 1985). Braille cells, for instance, are quite efficient at representing text characters but are inadequate for representing geometric and spatial properties. Even for text reading, the Braille method is about three or four times slower than the traditional method used by sighted people (Ramstein, 1996). Also, Braille is not widely used by blind people (Yu et al, 2000).
On the other hand, when grooves or ridges are used to represent boundaries, they are sometimes difficult to perceive by touch. Such representations rely almost exclusively on perception acquired by cutaneous sensors located at the tip of user's fingers. Lederman & Campbell (1982) showed that the pure translation of visual information into groove/ridge representation is not sufficient in some cases. Translating simple geometric objects (lines, polygons, circles, etc.) into visual-tactile representation seems to be quite adequate for the visually impaired. However, more complex shapes (like images and spatial maps) cannot be directly translated into groove/ridge representations of their boundaries.
Boundary representation gives little information to the cutaneous (small changes in 2D pressure) and kinetic (small 2D translations) components of human touch. Two-dimensional haptic interfaces transmit to users only restricted touch sensations. Because of that, achieving perception of spatial properties becomes a slow and complex task (Lederman & Klatzky, 1987). Thus, boundary representation, by itself, can be rather inefficient in modeling structurally complex objects (Klatzky et al, 1985).
4 APPLICATIONS OF HAPTIC TECHNOLOGIES
There are many examples of general haptic applications: surgery simulation (Machado et al, 2000; Salisbury & Srinivasan, 1997), medical training (Aviles & Ranta, 1999; Brewster, 2001; Reinig et al, 1996; Reinkensmeyer et al, 2000), scientific visualization (Branco & Encarnação, 2000; Durbeck et al, 1998), and geometric modeling (Avila & Sobierajski, 1996; Sensable, 2002).
In the particular case of applications for the visually impaired, haptic technologies seem to be very attractive for developing educational (Brewster, 2001; McLaughlin et al, 2001; Williams II & Seaton, 2000) and leisure programs (Johansson & Linde, 1999; Sjöström & Rassmus-Gröhn, 1999; Sjöström, 2001), drawing systems (Kamel & Landay, 2000; Kurze, 1996), representations of geometric objects (Ramloll et al, 2000; Sjöström, 2002; Van Scoy et al, 2000; Yu et al, 2000), and systems for orientation and mobility (Kawai & Tomita, 1996; May, 2000; Morris & Joshi, 2002; Ross & Blasch, 2000).
Some of the works mentioned above are similar to what will be proposed here, in particular the MultiVis project (Yu et al, 2000; Ramloll et al, 2000) and the experiments done by Sjöström (2002) with haptic devices (essentially the PHANToM). However, there are two important differences:
- Most of the scientific works in haptic interfaces use a PHANToM to represent haptic properties. There is little research on 2D force feedback devices; and
- Until now, no works have employed distance transforms to compute reactive forces. Usually, this is done by computing the cursor's depth inside virtual objects, i.e., applying Hooke's law.
5 THE PROPOSED INTERACTION MODEL
This work proposes the utilization of non-visual, touch-based assistive technologies to establish a more powerful communication channel between the user and the computational interface. In other words, it proposes substituting touch for vision in every interaction task with the user. In general, the following main ideas will be proposed in this work:
- Specification of a low-level interface model that supports the perception of as many haptic properties as possible given the current technological limitations; and
- Validation of the techniques implemented in this work regarding their usability by the visually impaired.
The following properties will be taken into account in the assistive interface model proposed here: texture (material properties), exact shape (geometric properties), and global shape (spatial properties). It is expected that this interface could be used by the visually impaired to haptically interact with virtual graphic objects. As touch is used as the primary feedback mechanism, this interface should be fully usable by blind people. A more detailed discussion of the proposed interaction model can be found in Carneiro (2003).
5.1 Interacting with Textures
Texture is a surface property that can be easily identified by touch. Representing surfaces by haptic devices, especially force feedback devices, is a complex task (McGee et al, 2001). The complexity occurs because force-based haptic devices are closely related to the kinetic component of human touch, while texture perception is related to the cutaneous component.
To stimulate the perception of virtual material properties, this work proposes the simulation of textures based on the representation of roughness, one of the most common surface properties. Although not very accurate, it is possible to represent surface roughness using force feedback devices (McGee et al, 2001).
Figure 3 shows an overview of the haptic texture model implemented in this work. A virtual object can have different textures for its border and its interior. When a user handles a virtual textured object, the computational interface signals three different events (enter, move, and leave). These events are then managed by the corresponding texture model, which provides the correct feedback to the user. By using this simple event-driven model, it is possible to implement almost any kind of texture, depending on the capabilities of the haptic device currently in use.
Figure 3: Generic Texture Model
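To make the event-driven texture model of Figure 3 concrete, the following minimal C++ sketch shows how a texture could expose enter, move, and leave handlers that the interface dispatches as the cursor crosses a region of a virtual object. The class and handler names are hypothetical illustrations, not the prototype's actual API.

```cpp
// Minimal sketch of the event-driven texture model (hypothetical names).
#include <cstdio>

struct Cursor { float x, y; };      // current haptic cursor position
struct Force  { float fx, fy; };    // force to be sent to the device

// A texture reacts to the three events the interface raises for a region
// (border or interior) of a virtual object.
class Texture {
public:
    virtual ~Texture() = default;
    virtual void  onEnter(const Cursor&) {}                  // cursor entered the region
    virtual Force onMove(const Cursor&) { return {0, 0}; }   // cursor moving inside it
    virtual void  onLeave(const Cursor&) {}                  // cursor left the region
};

// Example texture: a crude square-wave vibration while the cursor stays inside.
class VibrationTexture : public Texture {
public:
    Force onMove(const Cursor&) override {
        phase_ = !phase_;
        float f = phase_ ? 0.2f : -0.2f;           // normalized force amplitude
        return {f, f};
    }
private:
    bool phase_ = false;
};

int main() {
    VibrationTexture interior;                     // texture assigned to an object's interior
    Cursor c{0.5f, 0.5f};
    interior.onEnter(c);
    Force f = interior.onMove(c);                  // in the prototype this would go to the device
    std::printf("force = (%.2f, %.2f)\n", f.fx, f.fy);
    interior.onLeave(c);
    return 0;
}
```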
5.2 Interacting with Geometric Objects
Haptic representation of virtual geometric models is limited by the currently available haptic technologies. The most used technique for interacting with geometric objects is based on boundary representation (Ramloll et al, 2000). This technique is derived from the contour-following exploratory procedure.
To stimulate geometric perception (that is, exact shape) during a haptic interaction, attracting forces will be employed to convey shape information to the kinetic component of the human touch system. This helps users identify the object's shape while keeping the cursor near the object's border. Also, texture forces will be employed to convey material information to the cutaneous component of the human touch system. This helps users identify whether they are touching the interior, the exterior, or the object's border (Figure 4).
Figure 4: Reactive Forces
The authors believe that the two mechanisms described above increase the user's perception of virtual geometric properties. This happens because attracting and texture forces feed the human haptic system with orthogonal information.
The use of forces to help users follow the object's border is a straightforward approach because humans (sighted or blind) do this naturally to figure out the shape of a real object. Vertices, edges, and faces are the most important tangible properties; once they are identified, it is possible to easily recognize shapes without the aid of vision. The authors chose attracting forces for this project, rather than repelling forces. Attracting forces stimulate users to explore the object's contour. In contrast, repelling forces stimulate users to explore the object's interior (Sjöström et al, 2000). This is not desired because the main goal is to identify the exact shape of an object.
Figure 5 shows how the attracting forces are computed. In the figure, the object (a polyline) is represented by its border. Up to a certain maximum distance (D, the shaded region), the cursor, at position p, is pulled to p′, the nearest point on the object's border. The magnitude of the attracting force (F) is proportional to the distance from the cursor to the object's border (d). The values of d and p′ are computed using the distance transform of the considered object.
In fact, this approach resembles the behavior of an ordinary spring in the sense that, while interacting with a virtual object, it gives the user the impression of attracting forces, as if there were a virtual spring connecting the object's border and the cursor's position. In other words, if the user moves away from the object's border, the attracting force increases up to a defined limit (at distance D from the border). If the user tries to move even further, the attracting force stops acting. In this case, it is assumed that the user did not want to follow the object's contour. The cursor then moves freely through the empty space until it is pulled to another object (when the distance from the cursor to that object is less than or equal to D).
Figure 5: Border Attraction Reactive Model
The authors believe that the most adequate mathematical model to support a spatial haptic interface, and particularly to implement the attraction model presented here, is the distance transform. This model makes the computation of forces easier and also offers a means of balancing the spatial resolution of the actual haptic device with that of the corresponding 2D image that represents the virtual objects.
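A minimal sketch of how a distance transform could support the attraction model of Figure 5 follows. For clarity, the distance field and the nearest-point field are computed by brute force against the polyline's segments; the actual prototype uses Mauch's closest-point transform instead. The grid size, stiffness constant, and all names are illustrative assumptions.

```cpp
// Brute-force distance transform of a polyline plus the spring-like border
// attraction of Figure 5. Illustrative only: the prototype uses Mauch's
// closest-point transform, which is much faster than this direct scan.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

// Closest point on segment ab to point p.
static Vec2 closestOnSegment(Vec2 a, Vec2 b, Vec2 p) {
    float abx = b.x - a.x, aby = b.y - a.y;
    float len2 = abx * abx + aby * aby;
    if (len2 == 0.0f) return a;                         // degenerate segment
    float t = ((p.x - a.x) * abx + (p.y - a.y) * aby) / len2;
    t = std::fmax(0.0f, std::fmin(1.0f, t));
    return {a.x + t * abx, a.y + t * aby};
}

struct DistanceField {
    int w, h;
    std::vector<float> dist;     // d: distance to the nearest border point
    std::vector<Vec2>  nearest;  // p': the nearest border point itself
};

DistanceField buildField(int w, int h, const std::vector<Vec2>& border) {
    DistanceField f{w, h, std::vector<float>(w * h), std::vector<Vec2>(w * h)};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            Vec2 p{float(x), float(y)};
            float best = 1e30f;
            Vec2 bestQ{0.0f, 0.0f};
            for (size_t i = 0; i + 1 < border.size(); ++i) {
                Vec2 q = closestOnSegment(border[i], border[i + 1], p);
                float d = std::hypot(q.x - p.x, q.y - p.y);
                if (d < best) { best = d; bestQ = q; }
            }
            f.dist[y * w + x] = best;
            f.nearest[y * w + x] = bestQ;
        }
    return f;
}

// Attraction force: pulls the cursor at p toward p' with magnitude k*d,
// but only while d <= D; beyond D the cursor moves freely.
Vec2 attractionForce(const DistanceField& f, Vec2 p, float D, float k) {
    int idx = int(p.y) * f.w + int(p.x);
    float d = f.dist[idx];
    if (d <= 0.0f || d > D) return {0.0f, 0.0f};
    Vec2 q = f.nearest[idx];
    return {k * (q.x - p.x), k * (q.y - p.y)};          // (q - p) has length d
}

int main() {
    // A 16x16-pixel square on a 32x32 grid stands in for a virtual object.
    std::vector<Vec2> square{{8, 8}, {24, 8}, {24, 24}, {8, 24}, {8, 8}};
    DistanceField f = buildField(32, 32, square);
    Vec2 force = attractionForce(f, {12.0f, 5.0f}, 6.0f, 0.05f);
    std::printf("force at (12,5) = (%.3f, %.3f)\n", force.x, force.y);
    return 0;
}
```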
5.3 Perception of Spatial Properties
In this proposed interface, spatial properties are related to the perception of the relative placement of objects (to the left, to the right, in front of, behind) and the perception of size (length, area, or volume).
Regarding the perception of relative placement, it is expected that active manipulation, that is, the user's movements applied to the haptic device, will be sufficient to convey the notion of relative placement. In this case, however, it is necessary to operate the haptic device in an absolute coordinate system. Input devices based on a relative coordinate system (especially a mouse or trackball) are of no use to the visually impaired, because it is impossible for them to track the cursor position on the computer's screen (Fraser & Gutwin, 2000).
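As a small illustration of this requirement, the sketch below maps an absolute joystick axis range directly onto screen coordinates, so that the same stick position always corresponds to the same screen position. The axis ranges and screen resolution are illustrative values, not the prototype's actual configuration.

```cpp
// Mapping an absolute joystick axis range onto screen coordinates, so the
// cursor position depends only on the stick position (illustrative values).
#include <cstdio>

struct ScreenPos { int x, y; };

ScreenPos toScreen(int axisX, int axisY,            // raw device axes
                   int axisMin, int axisMax,        // device axis range
                   int screenW, int screenH) {
    float nx = float(axisX - axisMin) / float(axisMax - axisMin);
    float ny = float(axisY - axisMin) / float(axisMax - axisMin);
    return {int(nx * (screenW - 1)), int(ny * (screenH - 1))};
}

int main() {
    // The same stick position always maps to the same screen position,
    // unlike a mouse, whose position is accumulated from relative motions.
    ScreenPos p = toScreen(0, 0, -1000, 1000, 1024, 768);
    std::printf("centre stick -> (%d, %d)\n", p.x, p.y);
    return 0;
}
```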
The perception of size is closely related to the enclosure exploratory procedure, which cannot be directly simulated by using the existing point-force haptic devices. However, the authors expect that the contour-following exploratory procedure, fully implemented in this work, will be sufficient to convey the perception of size.
6 THE PROTOTYPE OF AN ASSISTIVE INTERFACE
Based on the general ideas presented in the last section, a prototype of an assistive interface for the visually impaired was implemented. This prototype simulates several kinds of non-visual properties related to material, geometric, and spatial characteristics of virtual objects. Microsoft's SideWinder Force Feedback 2 joystick was chosen as the haptic device to implement the ideas of this work. The authors believe that this device will be able to convey texture, shape, and spatial information to the visually impaired.
Forces were computed with the aid of the distance transform, using Mauch's algorithm (Mauch, 2000). This method is efficient for piecewise-linear geometric objects. Three simple objects (square, circle, and triangle) and three different textures were implemented. The textures, illustrated by the sketch after this list, are:
- Bump. Simulates (by using vibration) a surface with bumps distributed regularly;
- Friction. Simulates inertial movement of the cursor, as if it were attached to an object of some mass;
- Vibration. Vibrates continuously while the cursor is inside this texture.
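The sketch below shows one plausible way these three textures could be expressed as device-agnostic force outputs: a position-dependent ripple for bumps, a velocity-opposing drag for friction, and a continuous oscillation for vibration. The constants and function names are illustrative assumptions; the prototype delivers its effects to the joystick through DirectInput, which is not shown here.

```cpp
// Illustrative force profiles for the three textures (hypothetical constants).
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Bump: regular ripples tied to the cursor position, felt as small jolts.
Vec2 bumpForce(Vec2 pos) {
    float ripple = 0.15f * std::sin(pos.x * 0.8f) * std::sin(pos.y * 0.8f);
    return {ripple, ripple};
}

// Friction: opposes the cursor velocity, as if the cursor dragged a small mass.
Vec2 frictionForce(Vec2 velocity) {
    const float drag = 0.3f;
    return {-drag * velocity.x, -drag * velocity.y};
}

// Vibration: continuous oscillation for as long as the cursor stays inside.
Vec2 vibrationForce(float timeSeconds) {
    float s = 0.2f * std::sin(2.0f * 3.14159f * 40.0f * timeSeconds);  // ~40 Hz tone
    return {s, s};
}

int main() {
    Vec2 b = bumpForce({3.0f, 7.0f});
    Vec2 f = frictionForce({0.5f, -0.2f});
    Vec2 v = vibrationForce(0.01f);
    std::printf("bump (%.2f, %.2f)  friction (%.2f, %.2f)  vibration (%.2f, %.2f)\n",
                b.x, b.y, f.x, f.y, v.x, v.y);
    return 0;
}
```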
The implementation of this prototype was done in C++ and uses Microsoft's DirectX/DirectInput library (DirectX, 2002). The TeCGraf IUP/LED interface system (Levy et al, 1996) was also used. The implementation follows an interaction model based on direct manipulation of objects in a graphics canvas (Carneiro et al, 1997). This model offers an abstraction layer for 2D interaction tasks, which helps the implementation of the non-visual techniques proposed here. This prototype was used to test the ideas of this article, as discussed in the next section.
7 FIRST EXPERIMENTS
To validate the interaction techniques presented here, some simple experiments were done with users from the Benjamin Constant Institute (IBC), a center maintained by the Brazilian federal government, located in Rio de Janeiro, which specializes in the education of the visually impaired.
7.1 User Selection
A group of 15 users, with ages ranging from 15 to 38 years, was chosen to take part in the tests. Intentionally, most of them were blind or almost blind. The chosen group included people blind from birth, people who had become blind only a few years before, and people who had been blind for several years.
Users were classified into three different categories: students, athletes, and rehabilitants. Students are enrolled in the primary school at the IBC. Athletes play some kind of sport at the IBC, like soccer or swimming, usually for exercise or fun. Rehabilitants are adapting themselves to blindness; usually they take part in rehabilitation courses, such as mobility and spatial orientation.
7.2 Geometric Tests
These tests evaluate whether the proposed interface allows users to identify simple geometric shapes, like squares, circles, and triangles. The user is given three sheets of Styrofoam containing grooved line drawings of those objects. After the user feels each object, the prototype program presents the same shapes to the user. Then, the user has to indicate the corresponding sheet of Styrofoam.
7.3 Spatial Tests
These tests evaluate whether the proposed interface allows users to identify the position of one object in relation to another and to identify relative sizes. First, a circle is drawn at the screen's center and a square is drawn around the circle (close to the screen's border). After feeling the two objects, the user has to tell the position of the square in relation to the circle.
In another situation, a circle is drawn on the left and a square on the right. The circle is bigger than the square. After feeling the two objects, the user has to tell which one is bigger.
7.4 Texture Tests
These tests evaluate whether the proposed interface allows users to identify different textures. First, the computer's screen is filled with a virtual texture. Then, the user feels it for some time. After that, four different textures (including the one felt previously and a blank texture) are placed one at each corner of the computer's screen. The user has to choose which corner holds the first texture. The same process is repeated with two other textures.
In these tests, the authors decided to measure not only the user's capacity to discriminate textures, but also the limitations of the haptic device considered in this work. To do that, two similar textures were chosen.
7.5 Applying the Tests
Initially, the tests were applied to two sighted (but blindfolded) users simply to verify that the instructions and duration of the tests were acceptable. Later, two pilot-tests were performed with blind people at the same place where the final tests would be applied. It was necessary to make small, but important, modifications in the original tests to correct some language mistakes, avoid redundancies, reorder some questions, etc.
The tests took place in a reserved room at Benjamin Constant Institute. In this room, there was a table, two chairs, a notebook computer (running the testing program), the force-feedback joystick, a microphone, and a video camera. Only the user and the evaluator were present in the room.
The tests were divided into three parts. Initially, the users listened to a recording, which gave general instructions about the tests. After that, the user had some time to try the interface and discover how it works. Then, the test was applied. After the test, there was a quick interview with the user.
During the tests, all of the user's actions on the computer's screen were captured using a screen capture program (with audio). Some parts of the tests were also recorded using a video camera (with the user's permission). The complete test session was supposed to take one hour, at most.
In total, twelve simple tests were applied: four geometric tests (G1 to G4), five spatial tests (S1 to S5), and three texture tests (T1 to T3), as shown below:
- G1: Identify a square shape;
- G2: Same for circle;
- G3: Same as G1;
- G4: Same for triangle;
- S1: Identify position of square to the right of circle;
- S2: Same for square in front of circle;
- S3: Same for square to the left of circle;
- S4: Same for square to the right of and behind the circle;
- S5: Identify the bigger object (circle or square);
- T1: Identify bump texture;
- T2: Same for friction texture;
- T3: Same for vibrating texture.
8 RESULTS
Figure 6 shows the percentage ratio of correct answers in the geometric, spatial, and texture tests applied to the group of fifteen selected users.
Figure 7 shows the percentage ratio of correct answers in the geometric tests G1 to G4. Figure 8 shows the percentage ratio of correct answers in the spatial tests S1 to S5. Figure 9 shows the percentage ratio of correct answers in the texture tests T1 to T3.
Figure 6: Overall test results
Figure 7: Geometric tests
Figure 8: Spatial tests
Figure 9: Texture tests
9 DISCUSSION
As shown in Figure 6, the average ratio of correct answers across all tests was 71%. Because of this favorable result, the authors believe that the users correctly understood, in general, how the non-visual interaction mechanisms (perception of geometric, spatial, and texture properties) work.
In the geometric tests, 13 users followed all objects' borders correctly. There were problems in only 4 of the 60 geometric tests (6.7% of the total, involving two different users). In those cases, all answers given were wrong. This signals that, if the object's contour is not followed correctly, the object's shape cannot be identified, as shown by Lederman & Klatzky (1987).
During the interviews, it was possible to note that 73% of the users could identify the objects' vertices by means of the joystick's resistance. The objects' edges were identified by means of force feedback or the joystick's vibration. Therefore, it is assumed that the interface gave users enough feedback to allow them to follow the objects' contours, because almost all users identified the objects' main features, that is, vertices, edges, and shape.
The authors conclude that the non-visual interaction technique used to identify geometric shape was efficient in the cases studied here, which involved interaction with simple geometric objects (squares, circles, and triangles).
In the spatial tests, it was noted that users had more trouble in S2 and S4 (see Figure 8). This may be due to ergonomics: the horizontal position of the user's arm makes left/right movements easier (the user's elbow and forearm do not move). In S5, the high number of right answers (73%) shows that there was little difficulty in determining size. In 65 of the 75 spatial tests (87% of the total), users appeared to move the cursor between the objects easily. Most of them probably understood how to navigate between the objects.
These results signal that this prototype interface could offer users a better level of feedback. It would be interesting to investigate other modalities of feedback, for instance, sound feedback. This would probably improve the user's spatial perception.
In the texture tests, there was some confusion in perceiving textures individually: T1 had 47% right answers, while T3 had 93%. The order of the tests probably played an important role in this result (users had more time to feel T3). A new test, changing the order of the textures, would be necessary to verify this hypothesis. From the interviews, it was possible to note that T1 was often mixed up with T3.
Also, it was noted that, despite all the problems with textures, most users could grasp how virtual textures work, because they could describe the textures with reasonable precision. The most common descriptions were:
- T1: artificial grass, carpet, rough terrain, full of bumps, asphalt, paving stone;
- T2: swamp, thick, soft, sand, high grass, mass, greasy glass;
- T3: sandpaper, vibrant, shaking.
10 CONCLUSION
This project has shown that the use of haptic software and hardware technologies is a very important issue for the visually impaired community. Until now, this research field has not been explored enough.
In general, this work demonstrates non-visual interaction techniques and a low-level assistive interface model that virtually simulates tangible properties. The research presented here draws on results from cognitive psychology, computer graphics techniques, user interface models, and applied mathematics. By using such concepts, it was possible to implement a prototype of an assistive interface for blind people. This interface is based on unique features of the human touch system and supports different (and complementary) levels of perception mechanisms. Typical perception mechanisms used by the visually impaired were also taken into account, for example, the building of mental models based on the spatial structure of the objects being handled.
Preliminary tests with the prototype show that, despite using low-cost conventional technologies, it is possible to convey to blind users the virtual perception of many non-visual properties like shape, spatial arrangement, and texture. These results may contribute to reducing the gap between the visually impaired and current graphical user interfaces.
11 ACKNOWLEDGMENTS
This work was developed at Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Pure and Applied Mathematics Institute (IMPA), and State University of Rio de Janeiro (UERJ). The authors thank the active participation of many teachers, staff, and students from Benjamin Constant Institute in this research.
REFERENCES
Ando, H., Miki, T., Inami, M., & Maeda, T. (2002). SmartFinger: Nail-Mounted Tactile Display. ACM SIGGRAPH, 2002. Retrieved November, 13, 2004, from http://www.star.t.u-tokyo.ac.jp/projects/smartfinger/.
Avila, R. S., & Sobierajski, L. M. (1996). A Haptic Interaction Method for Volume Visualization. Visualization `96 Proceedings, pp. 197-204, IEEE CS Press, 1996.
Aviles, W., & Ranta, J. (1999). A Brief Presentation on the VRDTS - Virtual Reality Dental Training System. Proc. Fourth PHANToM Users Group Workshop, MIT, 1999.
Boyd, L. H., Boyd, W. L, & Vanderheiden, G. C. (1990). The Graphical User Interface: Crisis, Danger, and Opportunity. Journal of Visual Impairment and Blindness, 1990 (December): p. 496-502.
Branco, P., & Encarnação, M. (2000). Volume Exploration Guided by Haptic Sensing. The International Network of Institutions for advanced education, training and R&D in Computer Graphics technology, systems and applications (INI-GraphicsNet), Computer Graphik topics, Issue 4, 2000.
Brewster, S. (2001). The Impact of Haptic ´Touching´ Technology on Cultural Applications. Proc. of EVA2001. (Glasgow, UK), Vasari UK, s28 pp1-14. Retrieved November, 13, 2004, from http://www.dcs.gla.ac.uk/~stephen/papers/ EVA2001.pdf.
Carneiro, M. M., Gattass, M., Levy, C. H., & Russo, E. M. R. (1997). Interact: um modelo de interação para interfaces 2D por manipulação direta. SIBGRAPI'97, X Simpósio Brasileiro de Computação Gráfica e Processamento de Imagens, 1997 (in Portuguese). Retrieved November, 13, 2004, from http://www.tecgraf.puc-rio.br/~mmc/ART35.ps.gz.
Carneiro, M. M. (2003). Interfaces Assistidas para Deficientes Visuais utilizando Dispositivos Reativos e Transformadas de Distância. PhD Thesis, Informatics Department, Pontifical Catholic University of Rio de Janeiro (PUC-Rio), 2003 (in Portuguese). Retrieved November, 13, 2004, from http://www.tecgraf.puc-rio.br/~mmc/tese/.
Chen, E., & Marcus, B. (1998). Push All the Right Buttons with the Force Feedback Pro, Microsoft Interactive Developer, 1998. Retrieved November, 13, 2004, from http://www.microsoft.com/mind/0698/forcefeedback.htm.
Christian, K. (2000). Design of Haptic and Tactile Interfaces for Blind Users. Department of Computer Science, University of Maryland. Retrieved November, 13, 2004, from http://www.otal.umd.edu/UUGuide/kevin/.
Cyberforce (2002). World's first "desktop" whole-hand and arm force feedback device. Immersion Corp., 2002. Retrieved November, 13, 2004, from http://www.immersion.com/3d/products/cyber_force.php.
DirectX (2002). Microsoft DirectX. Microsoft Corp., 2002. Retrieved November, 13, 2004, from http://www.microsoft.com/ windows/directx/default.asp.
Dosvox (1998). Projeto DOSVOX. Grupo de Computação Eletrônica, Universidade Federal do Rio de Janeiro, 1998 (in Portuguese). Retrieved November, 13, 2004, from http://intervox.nce.ufrj.br/dosvox/.
Durbeck, L. J. K., Macias, N. J., Weinstein, D. M., Johnson, C. R., & Hollerbach, J. M. (1998). SCIRun haptic display for scientific visualization. Phantom Users Group Meeting, Dedham, MA, September 1998. Retrieved November, 13, 2004, from http://www.sci.utah.edu/publications/ldurbeck/pug98.pdf.
Eriksson, Y. (1999). How to make tactile pictures understandable to the blind reader. 65th IFLA Council and General Conference, Bangkok, Thailand, August 20 - August 28, 1999. Retrieved November, 13, 2004, from http://www.ifla.org/IV/ifla65/65ye-e.htm.
Fraser, J., & Gutwin, C. (2000). A framework of assistive pointers for low vision users, Proc. ACM Conference on Assistive Technologies (ASSETS), 2000. Retrieved November, 13, 2004, from http://hci.usask.ca/publications/2000/assistive-assets00/assistive-assets00.pdf.
Fricke, J., & Bäring, H. (1992). A Graphic Input/Output Table for Blind Computer Users. Proc. of 3rd Int. Conference on Computers for Handicapped Persons, pp.172-179, 1992.
Guib (1998). The GUIB project: Graphical User Interfaces for Blind People. Project Homepage, 1998. Retrieved November, 13, 2004, from http://phoenix.herts.ac.uk/SDRU/GUIB/guib.html.
Hayward, V. (2001). Survey of Haptic Interface Research at McGill University. Proc. Workshop in Interactive Multimodal Telepresence Systems. TUM, Munich, Germany, 2001, p. 91-98.
Jacko, J. A. (1998). Designing Interfaces for an Overlooked User Group: Considering the Visual Profiles of Partially Sighted Users. Proc. ACM Conference on Assistive Technologies (ASSETS), 1998.
Jacko, J. A., Barreto, A. B., Marmet, G. J., Chu, J. Y. M., Bautsch, H. S., Scott, I. U., & Rosa Jr, R. H. (2000). Low vision: the role of visual acuity in the efficiency of cursor movement. Proc. ACM Conference on Assistive Technologies (ASSETS), 2000.
James, F. (1998). Lessons from Developing Audio HTML Interfaces. Proc. ACM Conference on Assistive Technologies (ASSETS), 1998.
Johansson, A., & Linde, J. (1999). Using Simple Force Feedback Mechanisms as Haptic Visualization Tools. Second Swedish Symposium of Multimodal Communications, 1999. Retrieved November, 13, 2004, from http://www.tde.lth.se/home/ajn/publications/IMTC99.pdf.
Kamel, H., & Landay, J. (2000). A Study of Blind Drawing Practice: Creating Graphical Information Without the Visual Channel. Proc. ACM Conference on Assistive Technologies (ASSETS), 2000. Retrieved November, 13, 2004, from http://guir.berkeley.edu/projects/ic2d/pubs/ic2d-assets.pdf.
Kamel, H. M., Roth, P., & Sinha, R. R. (2001). Graphics and User´s Exploration via Simple Sonics (GUESS): Providing Interrelational Representation of Objects in a Non-visual Environment. Proceedings of the 2001 International Conference on Auditory Display, Espoo, Finland, July 29-August 1, 2001. Retrieved November, 13, 2004, from http://www.acoustics.hut.fi/icad2001/proceedings/papers/kamel.pdf.
Kawai, Y., & Tomita, F. (1996). Interactive Tactile Display System: A Support for the Visually Disabled to Recognize 3D Objects. Proc. ACM Conference on Assistive Technologies (ASSETS), 1996. Retrieved November, 13, 2004, from http://staff.aist.go.jp/y.kawai/Paper/assets1996.pdf.
Kennel, A. R. (1996). Audiograf: A diagram reader for the blind. Proc. ACM Conference on Assistive Technologies (ASSETS), 1996.
Klatzky, R. L., Lederman, S. J., & Metzger, V. (1985). Identifying objects by touch: An "expert system". Perception & Psychophysics, 37(4), 299-302.
Klatzky, R. L., Lederman, S. J.,& Matula, D. E. (1993). Haptic exploration in the presence of vision. Journal of Experimental Psychology: Human Perception and Performance, 19, 726-743, 1993.
Klatzky, R. L., & Lederman, S. J. (2000). Modality specificity in cognition: The case of touch. In H.L. Roediger, J.S. Nairne, I. Neath, and A.M. Suprenant (Eds.). The Nature of Remembering: Essays in Honor of Robert G. Crowder. Washington, D.C.: American Psychological Association Press.
Kowalik, R., & Postawka, I. (1994). The concept of a full screen tactile display (FSTD) driven by electrochemical reactions. Proceedings of the 4th international conference on Computers for handicapped persons, p.455-460, September 1994, Vienna, Austria.
Kurze, M. (1996). TDraw: A Computer-based Tactile Drawing Tool for Blind People. Proc. ACM Conference on Assistive Technologies (ASSETS), 1996.
Lederman, S. J., & Campbell, J. I. (1982). Tangible graphs for the blind. Human Factors, 24(1), 85-100.
Lederman, S. J., Klatzky, R. L., & Barber, P. O. (1985). Spatial and movement-based heuristics for encoding pattern information through touch. Journal of Experimental Psychology: General, 114, 33-49.
Lederman, S. J., & Klatzky, R. L. (1987). Hand movements: A window into haptic object recognition. Cognitive Psychology, 19(3), 342-368.
Levy, C. H., De Figueiredo, L. H., Gattass, M., Lucena, C., & Cowan, D. (1996). IUP/LED: a portable user interface development tool. Software: Practice & Experience 26 #7 (1996) 737-762. Retrieved November, 13, 2004, from http://www.tecgraf.puc-rio.br/iup/en/iup.ps.gz.
Machado, L. S., Moraes, R. M., & Zuffo, M. K. (2000). A Fuzzy Rule-Based Evaluation for a Haptic and Stereo Simulator for Bone Marrow Harvest for Transplant. Proceedings of Phantom Users Group Workshop, Aspen/CO - USA.
Massie, T. H. (1993). Design of a Three Degree of Freedom Force-Reflecting Haptic Interface. SB Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, May, 1993.
Massie, T. H. (1998). Physical Interaction: The Nuts and Bolts of Using Touch Interfaces. SIGGRAPH'98 Course Notes, 1998.
Mauch, S. (2000). A Fast Algorithm for Computing the Closest Point and Distance Transform. Submitted for publication in the Journal of SIAM SISC. Retrieved November, 13, 2004, from http://www.acm.caltech.edu/~seanm/projects/cpt/cpt.pdf.
May, M. (2000). Accessible GPS Navigation and Digital Map Information for Blind Consumers. 13th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GPS), 2000. Retrieved November, 13, 2004, from http://www.senderogroup.com/icwc2000.htm.
McGee, M. R., Gray, P., & Brewster, S. (2001). Feeling Rough: Multimodal Perception of Virtual Roughness. In Proceedings of Eurohaptics Workshop, 2001, University of Birmingham. Retrieved November, 13, 2004, from http://www.dcs.gla.ac.uk/~stephen/papers/Eurohaptics2001_mcgee.pdf.
McLaughlin, M. et al. (2001). Haptic Museum. Integrated Media Systems Center (IMSC), University of Southern California (USC). Retrieved November, 13, 2004, from http://imsc.usc.edu/research/NSF_year_five/HapticMuseum.pdf.
McLaughlin, M. L., Hespanha, J., & Sukhatme, G. (2002). Introduction to haptics. In McLaughlin, M. L., Hespanha, J., & Sukhatme, G. (Eds.). Touch in virtual environments: Haptics and the design of interactive systems. Prentice-Hall, 2002. Retrieved November, 13, 2004, from http://vig.pearsoned.com/samplechapter/0130650978.pdf.
Mensvoort, K. Van (2002). What you see is what you feel: Exploiting the dominance of the visual over the haptic domain to simulate force-feedback with cursor displacements. Proceedings Designing Interactive Systems 2002, ACM Press, 345-348.
Moran, T. (1981). The Command Language Grammars: a representation for the user interface of interactive computer systems. International Journal of Man-Machine Studies 15:3-50, Academic Press, 1981.
Morris, D., & Joshi, N. (2002). Alternative sensory representations of the visual world. CS223b Final Report, Winter 2002, Stanford University.
Myers, B. A., & Rosson, M. B. (1992). Survey on User Interface Programming. Proc. ACM Conference on Human Factors and Computing Systems (CHI), 1992.
Mynatt, E., & Edwards, W. K. (1992). Mapping GUIs to Auditory Interfaces. The Fifth Annual Symposium on User Interface Software and Conference Proceedings (UIST´92), November, 1992. Retrieved November, 13, 2004, from http://www2.parc.com/csl/members/kedwards/pubs/merc-uist92.pdf.
Mynatt, E. (1997). Transforming graphical interfaces into auditory interfaces for blind users. Human-Computer Interaction, Vol. 12, Issue 1-2, p. 7-45, 1997.
NCD (1996). Guidance from the Graphical User Interface (GUI) experience: What GUI Teaches about Technology Access. National Council on Disability, electronic publication, 1996. Retrieved November, 13, 2004, from http://www.ncd.gov/newsroom/publications/pdf/gui.pdf.
NIST (2002a). The NIST Rotating-Wheel Based Refreshable Braille Display. Retrieved November, 13, 2004, from http://www.itl.nist.gov/div895/isis/projects/brailleproject.html.
NIST (2002b). NIST 'Pins' Down Imaging System for the Blind. National Institute of Standards and Technology. Gaithersburg, USA. Sep. 13, 2002. Retrieved November, 13, 2004, from http://www.nist.gov/public_affairs/ factsheet/visualdisplay.htm.
Nomad Mentor (1999). Nomad Mentor Home Page. Quantum Technology, 1999. Retrieved November, 13, 2004, from http://www.quantech.com.au/products/quantum_products/tactile/nomad.htm.
Oakley, I., McGee, M., Brewster, S. A., & Gray, P. D. (2000). Putting the feel in look and feel. Proc. ACM Conference on Human Factors and Computing Systems (CHI), 2000.
O'Modhrain, M. S., & Gillespie, R. B. (1997). The Moose: A Haptic User Interface for Blind Persons. Proceedings of the Third WWW6 Conference, Santa Clara, California, 1997. Retrieved November, 13, 2004, from http://ccrma-www.stanford.edu/~sile/abstracts/www6.html.
Pitt, I. J., & Edwards, A. D. N. (1996). Improving the usability of speech-based interfaces for blind users. Proc. ACM Conference on Assistive Technologies (ASSETS), 1996.
Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1998). Human-computer interaction. Addison-Wesley, 1998.
Raman, T. V. (1996). Emacspeak - Direct Speech Access. Proc. ACM Conference on Assistive Technologies (ASSETS), 1996.
Ramloll, R., Yu, W., Brewster, S., Riedel, B., Burton, M., & Dimigen, G. (2000). Constructing Sonified Haptic Line Graphs for the Blind Student: First Steps. ACM Conference on Assistive Technologies (ASSETS), 2000. Retrieved November, 13, 2004, from http://www.dcs.gla.ac.uk/~stephen/papers/Assets2000.pdf.
Ramstein, C., & Hayward, V. (1994). The pantograph: a large workspace haptic device for multimodal human computer interaction. Proceedings of the CHI '94 conference companion on Human factors in computing systems, 1994, Boston, Massachusetts, United States.
Ramstein, C. (1996). Combining Haptic and Braille Technologies: Design Issues and Pilot Study. ACM Conference on Assistive Technologies (ASSETS), 1996.
Reed, C. M. (1996). The implications of the Tadoma method of speechreading for spoken language processing. Proc. of The Fourth International Conference on Spoken Language Processing (ICSLP), 1996. Retrieved November, 13, 2004, from http://www.asel.udel.edu/icslp/cdrom/vol3/1002/a1002.pdf.
Reinig, K. D., Rush, C. G., Pelster, H. L., Spitzer, V. M., & Heath, J. A. (1996). Real-time Visually and Haptically Accurate Surgical Simulation. Medicine Meets, Virtual Reality 4, IOS-Press, 1996. Retrieved November, 13, 2004, from http://www.uchsc.edu/sm/chs/research/research.html.
Reinkensmeyer, D., Painter, C., Yang, S., Abbey, E., & Kaino, B. (2000). An Internet-Based, Force-Feedback Rehabilitation System for Arm Movement after Brain Injury. Proceedings of CSUN's 15th Annual International Conference: Technology and Persons with Disabilities, Los Angeles, CA March 2000. Retrieved November, 13, 2004, from http://www.csun.edu/cod/conf/2000/proceedings/0080Reinkensmeyer.htm.
RNIB (2002). Electronic Braille Displays. Royal National Institute of the Blind (RNIB), 2002. Retrieved November, 13, 2004, from http://www.rnib.org.uk/xpedio/groups/public/documents/PublicWebsite/public_rnib002927.hcsp.
Roberts, J., Slattery, O., & Kardos, D. (2000). P-49.2: Rotating-Wheel Braille Display for Continuous Refreshable Braille. Information Technology Laboratory, National Institute of Standards and Technology (NIST), Gaithersburg, USA, 2000. Retrieved November, 13, 2004, from http://www.itl.nist.gov/div895/docs/ roberts_rotating_wheel_braille_display.pdf.
Ross, D. A., & Blasch, B. B. (2000). Wearable Interfaces for Orientation and Wayfinding. Conference on Assistive Technologies (ASSETS), 2000.
Roth, P., Petrucci, L. S., & Pun, T. (2000). "From Dots to Shapes": an auditory haptic game platform for teaching geometry to blind pupils. ICCHP 2000 Proceedings, July 2000, Karlsruhe, pp. 603-610.
Salisbury, J. K., & Srinivasan, M. A. (1997). PHANToM-Based Haptic Interaction with Virtual Objects. IEEE Computer Graphics and Applications, September/October, 1997.
Savidis, A., Stephanidis, C., Korte, A., Crispien, K., & Fellbaum, K. (1996). A generic direct manipulation 3D auditory environment for hierarchical navigation in non-visual interaction. ACM Conference on Assistive Technologies (ASSETS), 1996.
Sensable (2002). PHANToM, GHOST SDK, Free Form Modeling System. Sensable Technologies, 2002. Retrieved November, 13, 2004, from http://www.sensable.com.
Shneiderman, B. (1983). Direct Manipulation: A step beyond programming languages. IEEE Computer, Vol. 16, No. 8 (August 1983), pp. 57-69.
Shneiderman, B. (1992). Designing the user interface: strategies for effective human-computer interaction. 2nd Edition, Addison-Wesley, 1992.
Sjöström, C., & Rassmus-Gröhn, K. (1999). The sense of touch provides new computer interaction techniques for disabled people. Technology & Disability (IOS Press). Volume 10, Number 1, 1999.
Sjöström, C. (2001). Designing Haptic Computer Interfaces for Blind People. Sixth International Symposium on Signal Processing and its Applications (ISSPA) 2001, Kuala Lumpur, Malaysia, August 13 - 16, 2001. Retrieved November, 13, 2004, from http://www.certec.lth.se/doc/designinghaptic/.
Sjöström, C., Danielsson, H., Magnusson, C., & Rassmus-Gröhn, K. (2002). Haptic Representations of 2D Graphics for Blind Persons. Electronic Journal of Haptics Research (Haptics-E), 2002.
Smith, C. M. (2001). Human Factors in Haptic Interfaces. ACM, 2001. Retrieved November, 13, 2004, from http://www.acm.org/crossroads/xrds3-3/haptic.html.
TM (2002). Tangible Mouse. Fuji Xerox Co. Ltd., 2002 (in Japanese). Retrieved November, 13, 2004, from http://www.fujixerox.co.jp/tangible_mouse/.
Van Scoy, F., Kawai, T., Darrah, M., & Rash, C. (2000). Haptic Display of Mathematical Functions for Teaching Mathematics to Students with Vision Disabilities: Design and Proof of Concept. In Haptic Human-Computer Interaction, Proc. First International Workshop, Glasgow, UK, August/September 2000, S. Brewster, R. Murray-Smith (Eds.), Springer-Verlag, Berlin. Retrieved November, 13, 2004, from http://www.dcs.gla.ac.uk/~stephen/workshops/ haptic/papers/vanscoy.pdf.
Vanderheiden, G. C. (1996). Use of audio-haptic interface techniques to allow nonvisual access to touchscreen appliances. Human Factors and Ergonomics Society Annual Conference, 1996. Retrieved November, 13, 2004, from http://trace.wisc.edu/docs/touchscreen/chi_conf.htm.
Vanderheiden, G. C. (2000). Fundamental Principles and Priority Setting for Universal Usability. Proceedings of the 1st ACM Conference on Universal Usability (CUU 2000), Arlington, VA. Retrieved November, 13, 2004, from http:/trace.wisc.edu/docs/ fundamental_princ_and_priority_acmcuu2000/.
Williams II, R., & Seaton, J. (2000). Haptics-Augmented High School Physics Tutorials. International Journal of Virtual Reality, October, 2000. Retrieved November, 13, 2004, from http://www.ent.ohiou.edu/~bobw/PDF/IJVR2001.pdf.
Wingman (1999). Logitech's WingMan Force Feedback Mouse Now Shipping (Press release). Immersion Corp. Retrieved November, 13, 2004, from http://www.logitech.com/index.cfm/about/pressroom/information/RU/ EN,contentid=1094,year=1999.
Yu, W., Ramloll, R., & Brewster, S. (2000). Haptic graphs for blind computer users. Proceedings of the First Workshop on Haptic Human-Computer Interaction, 2000, p. 102-107. Retrieved November, 13, 2004, from http://www.dcs.gla.ac.uk/~stephen/ papers/HHCI-ray.pdf.
Zajicek, M., Powell, C., & Reeves, C. (1999). Evaluation of a World Wide Web scanning interface for blind and visually impaired users. Proc. HCI International '99, Munich, 1999. Retrieved November, 13, 2004, from http://www.brookes.ac.uk/schools/cms/research/speech/publications/65_hciin.htm.
Zajicek, M. (2000). Increased accessibility to standard Web browsing software for visually impaired users. International Conference on Computers for Handicapped Persons (ICCHP) 2000, Karlesruhe. Retrieved November, 13, 2004, from http://www.brookes.ac.uk/schools/cms/research/speech/publications/79_icchp.htm.
Zajicek, M., Venetsanopoulos, I., Morrissey, W. (2000). Web Access for Visually Impaired People Using Active Accessibility. Proc International Ergonomics Association/HFES 2000, San Diego. Retrieved November, 13, 2004, from http://www.brookes.ac.uk/schools/cms/research/speech/publications/72_iea00.htm.