Modality (human–computer interaction)
In the context of human–computer interaction, a modality is the classification of a single independent channel of sensory input/output between a computer and a human.[1] A system is designated unimodal if it has only one modality implemented, and multimodal if it has more than one.[1] When multiple modalities are available for some tasks or aspects of a task, those modalities are said to overlap; when multiple modalities are available for the same task, the system is said to have redundant modalities. Multiple modalities can be used in combination to provide complementary methods that may be redundant but convey information more effectively.[2] Modalities are generally grouped into two forms: human–computer and computer–human modalities.
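The distinction between unimodal and multimodal output, and the role of redundant modalities, can be shown with a minimal sketch. The `Notification` class and `deliver` function below are hypothetical names invented for the illustration, not drawn from the cited sources:

```python
from dataclasses import dataclass, field

@dataclass
class Notification:
    """A hypothetical alert that a system may deliver over one or more output modalities."""
    text: str
    modalities: set[str] = field(default_factory=lambda: {"visual"})

def deliver(note: Notification) -> list[str]:
    """Render the same information on every configured modality."""
    renderings = []
    if "visual" in note.modalities:
        renderings.append(f"[screen] {note.text}")
    if "auditory" in note.modalities:
        renderings.append(f"[speech] {note.text}")
    if "tactile" in note.modalities:
        renderings.append("[vibration] short pulse")
    return renderings

# Delivering over one channel is unimodal; adding "auditory" and "tactile"
# makes the alert multimodal, with the visual and auditory channels redundant
# because they carry the same text.
print(deliver(Notification("Battery low", {"visual", "auditory", "tactile"})))
```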
Computer–Human Modalities
Any human sense can be used as a computer-to-human modality. The following are examples of modalities and their implementations through which a computer could send information to a human:
- Vision (text, graphics, and video presented on a display)
- Audition (speech and other sounds played through speakers)
- Tactition (vibrations and other haptic feedback)
- Uncommon modalities
- Gustation (taste)
- Olfaction (smell)
- Thermoception (heat)
- Nociception (pain)
- Equilibrioception (balance)
The modalities of seeing and hearing are the most commonly employed, since they are capable of transmitting more information at a higher speed than other modalities: 250 to 300 words per minute for reading[3] and 150 to 160 words per minute for listening to speech[4]. Though not commonly implemented as a computer–human modality, tactition can achieve an average of 125 words per minute[5] through the use of a refreshable braille display.
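As a rough illustration of what these transmission rates mean in practice, the short sketch below computes how long a fixed-length passage would take over each channel; the 1,000-word passage and the midpoint rates are assumptions made for the example:

```python
# Approximate time (in minutes) to convey a 1,000-word passage at the rates
# cited above; midpoints of the cited ranges are used for vision and audition.
rates_wpm = {"vision (reading)": 275, "audition (speech)": 155, "tactition (braille)": 125}
words = 1000
for modality, wpm in rates_wpm.items():
    print(f"{modality}: {words / wpm:.1f} minutes")
```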
Human–Computer Modalities
The computer can be equipped with various types of input devices and sensors that allow it to receive information from the human. Common input devices are often interchangeable if they have a standardized method of communicating with the computer and afford practical adjustments to the user, as sketched after the list below.
- Simple modalities (e.g., keyboard, pointing device, touchscreen)
- Complex modalities (e.g., computer vision, speech recognition)
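The interchangeability of input devices behind a standardized method of communication can be sketched as a shared interface. The `InputDevice` protocol and the device classes below are hypothetical illustrations, not part of any cited standard:

```python
from typing import Protocol

class InputDevice(Protocol):
    """Hypothetical standardized contract: every pointing device reports a 2-D position."""
    def read_position(self) -> tuple[float, float]: ...

class Mouse:
    def read_position(self) -> tuple[float, float]:
        return (120.0, 80.0)      # in practice this would come from the mouse driver

class Touchscreen:
    def read_position(self) -> tuple[float, float]:
        return (118.5, 79.0)      # in practice this would come from the touch controller

def move_cursor(device: InputDevice) -> tuple[float, float]:
    """Application code stays the same regardless of which device supplies the input."""
    return device.read_position()

# A mouse and a touchscreen are interchangeable simple modalities here
# because both satisfy the same standardized interface.
print(move_cursor(Mouse()), move_cursor(Touchscreen()))
```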
Benefits of Multimodal Systems
Multimodal systems offer users more affordances and can contribute to a more robust system. They also allow for greater accessibility for users who work more effectively with certain modalities.
There are six types of relations between modalities, which help define how a combination or fusion of modalities cooperates to convey information more effectively:[6]
- Equivalence: information is presented in multiple ways and can be interpreted as the same information
- Specialization: when a specific kind of information is always processed through the same modality
- Redundancy: multiple modalities process the same information
- Complementarity: multiple modalities take in separate information and merge it
- Transfer: a modality produces information that another modality consumes
- Concurrency: multiple modalities take in separate information that is not merged
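As an illustrative sketch of how these six relations might be represented in software, the enumeration below names them, and the `fuse` function shows complementarity as the merging of a spoken command with a pointing gesture; the names `ModalityRelation` and `fuse` are assumptions made for the example, not taken from the cited source:

```python
from enum import Enum, auto

class ModalityRelation(Enum):
    """The six relations between modalities listed above (naming is illustrative)."""
    EQUIVALENCE = auto()      # the same information, presented in multiple ways
    SPECIALIZATION = auto()   # one kind of information always uses one modality
    REDUNDANCY = auto()       # several modalities carry the same information
    COMPLEMENTARITY = auto()  # separate pieces of information are merged
    TRANSFER = auto()         # one modality's output becomes another's input
    CONCURRENCY = auto()      # separate information, processed but never merged

def fuse(speech: str, pointing: tuple[float, float]) -> dict:
    """Complementarity: merge a spoken command with a pointing gesture
    into a single interpretation (a command plus a screen location)."""
    return {"command": speech, "target": pointing,
            "relation": ModalityRelation.COMPLEMENTARITY}

print(fuse("delete", (42.0, 17.0)))
```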
See also
- Multimodal interaction
- User interface
- Multisensory integration
- NCCR IM2, a Swiss project on multimodal interaction
- ^ a b Karray, Fakhreddine; Alemzadeh, Milad; Saleh, Jamil Abou; Arab, Mo Nours (March 2008). "Human-Computer Interaction: Overview on State of the Art" (PDF). International Journal on Smart Sensing and Intelligent Systems. 1 (1). Retrieved April 21, 2015.
- ^ Palanque, Philippe; Paterno, Fabio (2001). Interactive Systems. Design, Specification, and Verification. Springer Science & Business Media. p. 43. ISBN 9783540416630.
- ^ Ziefle, M. (December 1998). "Effects of display resolution on visual performance". Human Factors. 40 (4): 554–68. PMID 9974229.
- ^ Williams, J. R. (1998). "Guidelines for the use of multimedia in instruction". Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting. pp. 1447–1451.
- ^ "Braille". ACB. American Council of the Blind. Retrieved 21 April 2015.
- ^ Grifoni, Patrizia (2009). Multimodal Human Computer Interaction and Pervasive Services. IGI Global. p. 37. ISBN 9781605663876.