User:Nik2608/Human–computer interaction


Article Draft


Introduction


Over the past decade, there have been significant advances in human-computer interaction (HCI) technologies. While these technologies are becoming more accessible to the public, not everyone can afford HCI devices. The complexity of human involvement in interacting with machines is often invisible, but accounting for it is essential if an HCI design is to be useful. Existing interfaces differ in their complexity, functionality, usability, and cost. In the design of HCI, therefore, the degree of activity that interaction demands of the user should be thoroughly thought out. User activity has three levels: physical, cognitive, and affective. The focus here is on the physical aspect of interaction and on how different methods of interaction can be combined (Multi-Modal Interaction) and improved in performance (Intelligent Interaction) to provide a better and easier interface for the user.

The existing physical technologies for HCI can be categorized according to the human sense for which the device is designed. These devices rely on three human senses: vision, audition, and touch. Recent methods and technologies in HCI attempt to combine earlier methods of interaction with advancing technologies such as networking and animation. These new advances can be categorized into three sections: wearable devices, wireless devices, and virtual devices. Some of these new devices upgrade and integrate previous methods of interaction.

The current trend in HCI research is directed towards designing intelligent and adaptive interfaces that provide a more natural and satisfying experience for users. Intelligence, or being smart, is not precisely defined in this context, but it can be recognized in the apparent growth and improvement in the functionality and usability of new devices on the market.

To achieve this goal, interfaces are becoming more natural to use every day. For example, note-taking tools have evolved from typewriters to keyboards, and now to touch-screen tablet PCs that can recognize and transcribe handwriting or speech. One important factor in the new generation of interfaces is the differentiation between using intelligence in the creation of the interface (Intelligent HCI) and using it in the way the interface interacts with users (Adaptive HCI).

Intelligent HCI designs incorporate some form of intelligence in perceiving and responding to users, such as speech-enabled interfaces or devices that track users' movements or gaze and respond accordingly. Adaptive HCI designs, on the other hand, may not use intelligence in the creation of the interface but use it in the way they interact with users. An adaptive HCI might be a website that recognizes the user, keeps a memory of their searches and purchases, and suggests products on sale that it predicts the user might need.
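
A minimal sketch, in Python, of the adaptive behaviour just described; the catalogue, category labels, and function names are hypothetical, invented only to illustrate suggesting products from a remembered purchase history:

    # Adaptive HCI sketch: suggest on-sale items related to a user's past purchases.
    # The catalogue, categories, and history below are illustrative, not a real API.
    CATALOGUE = {
        "usb_keyboard": {"category": "input", "on_sale": True},
        "hdmi_cable": {"category": "video", "on_sale": True},
        "stylus_pen": {"category": "input", "on_sale": False},
    }

    def suggest(history):
        """Return on-sale products whose category matches the user's history."""
        seen = {CATALOGUE[item]["category"] for item in history if item in CATALOGUE}
        return [name for name, info in CATALOGUE.items()
                if info["on_sale"] and info["category"] in seen and name not in history]

    # A user who previously bought a stylus is shown the discounted keyboard.
    print(suggest(["stylus_pen"]))  # ['usb_keyboard']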

Most of these kinds of adaptations deal with the cognitive and affective levels of user activity. A PDA (Personal Digital Assistant) or a tablet PC that has handwriting recognition ability and adapts to the handwriting of the logged-in user is an example of an interface that uses both intelligent and adaptive features to improve its performance.
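
Similarly, a minimal sketch of an interface combining both features: the base recognizer here is a hypothetical placeholder standing in for a real handwriting model (the intelligent part), while the per-user correction store illustrates the adaptive part:

    # Sketch of combined intelligent + adaptive behaviour: a recognizer whose
    # output is corrected by a per-user dictionary learned from past fixes.
    # The base recognizer below is a placeholder, not a real handwriting model.
    class AdaptiveRecognizer:
        def __init__(self):
            self.user_corrections = {}  # user -> {recognized: corrected}

        def recognize(self, user, strokes):
            word = self._base_recognize(strokes)  # intelligent part (placeholder)
            # Adaptive part: apply what this user has corrected before.
            return self.user_corrections.get(user, {}).get(word, word)

        def learn(self, user, recognized, corrected):
            self.user_corrections.setdefault(user, {})[recognized] = corrected

        def _base_recognize(self, strokes):
            return strokes  # stands in for a real handwriting model

    r = AdaptiveRecognizer()
    r.learn("alice", "doud", "cloud")    # Alice once corrected "doud" to "cloud"
    print(r.recognize("alice", "doud"))  # cloud
    print(r.recognize("bob", "doud"))    # doud (no adaptation for Bob yet)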

Finally, it is worth noting that most non-intelligent HCI designs are passive in nature; that is, they respond only when invoked by the user, whereas intelligent and adaptive interfaces tend to be active. Examples include smart billboards or advertisements that present themselves according to users' tastes. The combination of different HCI methods, discussed in the following sections, can help create intelligent, adaptive, natural interfaces.


Types of HCI

HCI methods can be divided into three types, according to the human channel through which the interaction takes place:

  1. Visual-Based
  2. Audio-Based
  3. Sensor-Based

The following sections discuss each of these types.

Visual-Based HCI


Visual-based HCI is concerned with information acquired through vision and includes the following main research areas:

  1. Facial Expression Analysis: This area focuses on visually recognizing and analyzing emotions through facial expressions.
  2. Body Movement Tracking (Large-scale): Researchers in this area concentrate on tracking and analyzing large-scale body movements.
  3. Gesture Recognition: Gesture recognition involves identifying and interpreting gestures made by users, often used for direct interaction with computers in command and action scenarios.
  4. Gaze Detection (Eye-Movement Tracking): Gaze detection involves tracking the movement of a user's eyes and is primarily used to better understand the user's attention, intent, or focus in context-sensitive situations (a minimal fixation-detection sketch follows this list).
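
As an illustration of gaze detection, the following is a minimal Python sketch of velocity-threshold identification (I-VT), a common way of splitting eye-tracker samples into fixations and saccades; the sample format, pixel units, and threshold value are assumptions made for the example:

    # Gaze-detection sketch: velocity-threshold identification (I-VT).
    # Samples are (x, y, t) tuples; units (pixels, seconds) and the
    # threshold value are assumptions for illustration.
    import math

    THRESHOLD = 1000.0  # px/s; velocities below this count as fixation

    def classify(samples):
        """Label each consecutive sample pair as 'fixation' or 'saccade'."""
        labels = []
        for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
            velocity = math.dist((x0, y0), (x1, y1)) / (t1 - t0)
            labels.append("fixation" if velocity < THRESHOLD else "saccade")
        return labels

    # Slow drift, then a rapid jump to a new point of regard, then stillness.
    gaze = [(100, 100, 0.00), (102, 101, 0.01), (400, 300, 0.02), (401, 300, 0.03)]
    print(classify(gaze))  # ['fixation', 'saccade', 'fixation']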


While the specific goals of each area vary with the application, they collectively contribute to enhancing human-computer interaction. Notably, visual approaches have been explored as alternatives or aids to other types of interaction, such as audio- and sensor-based methods. For example, lip reading or lip-movement tracking has proved useful in correcting speech-recognition errors.

Audio-Based HCI


Audio-based interaction in human-computer interaction (HCI) is a crucial field focused on processing information acquired through various audio signals. While the nature of audio signals may be less diverse than that of visual signals, the information they provide can be highly reliable, valuable, and sometimes uniquely informative. The research areas within this domain include:

  1. Speech Recognition: This area centers on the recognition and interpretation of spoken language.
  2. Speaker Recognition: Researchers in this area concentrate on identifying and distinguishing different speakers.
  3. Auditory Emotion Analysis: Efforts have been made to incorporate human emotions into intelligent human-computer interaction by analyzing emotional cues in audio signals.
  4. Human-Made Noise/Sign Detections: This involves recognizing typical human auditory signs such as sighs, gasps, laughs, and cries, which contribute to emotion analysis and the design of more intelligent HCI systems (a simple energy-based detection sketch follows this list).
  5. Musical Interaction: A relatively new area in HCI, it involves generating and interacting with music, with applications in the art industry. This field is studied in both audio- and visual-based HCI systems.
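
To illustrate the kind of processing behind items 1 and 4, the following is a minimal Python sketch that flags high-energy frames in an audio signal, a common first step for locating speech or other human-made sounds; the frame length, threshold, and synthetic signal are assumptions made for the example:

    # Audio-based sketch: short-time energy detection of active frames.
    # Frame length, threshold, and the synthetic signal are illustrative.
    FRAME = 160  # samples per frame (10 ms at a 16 kHz sampling rate)

    def active_frames(signal, threshold=0.01):
        """Return indices of frames whose mean energy exceeds the threshold."""
        frames = [signal[i:i + FRAME] for i in range(0, len(signal) - FRAME + 1, FRAME)]
        return [i for i, f in enumerate(frames)
                if sum(s * s for s in f) / len(f) > threshold]

    # Silence, then a burst (e.g., a laugh or a spoken word), then silence.
    signal = [0.0] * 320 + [0.5, -0.5] * 160 + [0.0] * 320
    print(active_frames(signal))  # [2, 3]: only the burst frames are active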

Sensor-Based HCI


This section encompasses a diverse range of areas with broad applications, all of which involve the use of physical sensors to facilitate interaction between users and machines. These sensors can range from basic to highly sophisticated. The specific areas include:

  1. Pen-Based Interaction: Particularly relevant in mobile devices, focusing on pen gestures and handwriting recognition (a minimal stroke-classification sketch follows this section).
  2. Mouse & Keyboard: Well-established input devices commonly used in computing.
  3. Joysticks: Another established input device for interactive control, commonly used in gaming and simulations.
  4. Motion Tracking Sensors and Digitizers: Cutting-edge technology that has revolutionized industries such as film, animation, art, and gaming. These sensors, in forms such as wearable garments or joint sensors, enable more immersive interactions between computers and reality.
  5. Haptic Sensors: Particularly significant in applications related to robotics and virtual reality, providing feedback based on touch. They play a crucial role in enhancing sensitivity and awareness in humanoid robots, as well as in medical surgery applications.
  6. Pressure Sensors: Also important in robotics, virtual reality, and medical applications, providing information based on pressure exerted on a surface.
  7. Taste/Smell Sensors: Although less popular compared to other areas, research has been conducted in the field of sensors for taste and smell.

These sensors vary in their level of maturity, with some being well-established and others representing cutting-edge technologies.
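
As a concrete illustration of pen-based interaction (item 1 above), the following minimal Python sketch classifies a sampled pen stroke as a horizontal or a vertical swipe; the point format and the two-gesture vocabulary are assumptions made for the example:

    # Sensor-based sketch: classify a pen stroke from sampled (x, y) points.
    # The two-gesture vocabulary and point format are illustrative only.
    def classify_stroke(points):
        """Label a stroke 'swipe_horizontal' or 'swipe_vertical' by its extent."""
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        dx, dy = max(xs) - min(xs), max(ys) - min(ys)
        return "swipe_horizontal" if dx >= dy else "swipe_vertical"

    # A mostly left-to-right stroke sampled from a digitizer or touch screen.
    stroke = [(10, 50), (60, 52), (120, 49), (180, 53)]
    print(classify_stroke(stroke))  # swipe_horizontal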

References

  • Karray, F., Alemzadeh, M., Abou Saleh, J., & Arab, M. N. (2008). Human-computer interaction: Overview on state of the art. International Journal on Smart Sensing and Intelligent Systems, 1(1), 137.