User:ScotXW/Graphical control element

A graphical control element
- is an element of interaction, such as a button or a scroll bar, that is part of the graphical user interface (GUI) of a program. A graphical control element is written as software and then rendered by other software. User interface libraries, such as GTK+ or Cocoa, contain a collection of graphical control elements and the logic to render them.
- is a subroutine written in some programming language such as C, C++ or Objective-C. It has to be compiled to become executable. There also has to be a rendering engine, which generates the graphical representation. The graphical representation can be hard-coded or themable.
Utilization
Graphical control elements are collected in libraries such as GTK+, Qt, Cocoa, etc. Graphical user interface builders, such as Glade Interface Designer or GNOME Builder, are programs that facilitate the creation of a GUI.
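GUI builders such as Glade typically do not emit source code at all: they save a declarative UI description that the library instantiates at runtime (in GTK+'s case, GtkBuilder XML). A minimal fragment might look like the following; the `id` values are arbitrary examples chosen here.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<interface>
  <object class="GtkWindow" id="main_window">
    <property name="title">Example</property>
    <child>
      <object class="GtkButton" id="ok_button">
        <property name="label">OK</property>
      </object>
    </child>
  </object>
</interface>
```

The program then loads this file with GtkBuilder and looks widgets up by their `id`, keeping the layout editable without recompiling the program.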
GUI design
When implementing the WIMP paradigm, the GUI of a program consists of numerous graphical control elements. The primary factor dictating the design of the GUI of any program is its "workflow", as researched by cognitive ergonomics or as reported by an active user population. To be effective, any GUI also has to be tailored to the available input and output hardware:
A very common and misleading visualization of the human–machine interaction: the user "sits on top" of the GUI.
A correct visualization of the human–machine interaction: the user "sits below" the hardware. The user interacts directly with the hardware, not with the GUI! Any input and output traverses the kernel and is computed on the CPU, but the human user interacts directly only with the available peripheral hardware.
A complete and correct visualization of the human–machine interaction: the human brain is on the far left, the GUI on the far right, and a couple of layers sit in between. The GUI design has to take into consideration the cognitive abilities of the human brain, to enable an efficient workflow for any interaction, as well as the constraints of the peripheral hardware being employed. The peripheral hardware is usually tailored to the features of the human body (10 fingers, etc.). Output: in case a visual device is used at all, the display has a certain size, a certain resolution (raster graphics) and refresh rate. Input: it may be a combination of a computer keyboard and some pointing device, or solely a touchscreen; there could also be gesture recognition or voice command recognition. Employment of a screen (display/monitor) does not automatically result in "graphics": it could display merely a text-based user interface (TUI, cf. ncurses). A TUI can have symbols other than just letters, cf. character encoding. Latency is the delay between input and the resulting output; in case this I/O latency becomes too large, it disrupts the workflow.
Visualization of the human–machine interaction. Any input and output traverses the kernel.
WIMP with X.