Portal:Computer programming
The Computer Programming Portal
Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.
Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process. (Full article...)
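As a small illustration of the contrast drawn above between high-level source code and machine code, the following sketch (in C++, one of many possible languages) shows a complete program a human can read directly; a compiler translates it into the machine code that the central processing unit actually executes.

```cpp
#include <iostream>

// High-level source code: readable by humans. The CPU runs only the machine
// code that a compiler (or interpreter/JIT) produces from text like this.
int main() {
    std::cout << "Hello, world!\n";
    return 0;
}
```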
Selected articles
-
Image 1
A Blender screenshot displaying Suzanne, a 3D test model
Computer graphics deals with generating images and art with the aid of computers. Computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.
Some topics in computer graphics include user interface design, sprite graphics, raster graphics, rendering, ray tracing, geometry processing, computer animation, vector graphics, 3D modeling, shaders, GPU design, implicit surfaces, visualization, scientific computing, image processing, computational photography, scientific visualization, computational geometry and computer vision, among others. The overall methodology depends heavily on the underlying sciences of geometry, optics, physics, and perception.
Computer graphics is responsible for displaying art and image data effectively and meaningfully to the consumer. It is also used for processing image data received from the physical world, such as photo and video content. Computer graphics development has had a significant impact on many types of media and has revolutionized animation, movies, advertising, and video games, in general. (Full article...) -
Image 2
In the C++ programming language, decltype is a keyword used to query the type of an expression. Introduced in C++11, its primary intended use is in generic programming, where it is often difficult, or even impossible, to express types that depend on template parameters.
As generic programming techniques became increasingly popular throughout the 1990s, the need for a type-deduction mechanism was recognized. Many compiler vendors implemented their own versions of the operator, typically called typeof, and some portable implementations with limited functionality, based on existing language features, were developed. In 2002, Bjarne Stroustrup proposed that a standardized version of the operator be added to the C++ language, and suggested the name "decltype", to reflect that the operator would yield the "declared type" of an expression.
decltype's semantics were designed to cater to both generic library writers and novice programmers. In general, the deduced type matches the type of the object or function exactly as declared in the source code. Like the sizeof operator, decltype's operand is not evaluated. (Full article...)
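A minimal sketch of the use case described above, assuming a C++11 or later compiler: the return type of a generic add function depends on its template parameters, so it is expressed with decltype, and decltype also reuses a variable's declared type without evaluating the operand. The function and variable names are illustrative.

```cpp
#include <type_traits>

// The return type of add() depends on the template parameters A and B,
// so it is written with decltype, as in generic library code.
template <typename A, typename B>
auto add(const A& a, const B& b) -> decltype(a + b) {
    return a + b;
}

int main() {
    int i = 42;
    double d = 3.14;

    // decltype yields the declared type of its operand; the operand is not evaluated.
    decltype(i) j = 0;                 // j has type int
    decltype(i + d) sum = add(i, d);   // int + double deduces double

    static_assert(std::is_same<decltype(j), int>::value, "j is int");
    static_assert(std::is_same<decltype(sum), double>::value, "sum is double");
    return 0;
}
```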
Image 3
PDP-11 CPU board
Computer hardware includes the physical parts of a computer, such as the central processing unit (CPU), random-access memory (RAM), motherboard, computer data storage, graphics card, sound card, and computer case. It includes external devices such as a monitor, mouse, keyboard, and speakers.
By contrast, software is a set of written instructions that can be stored and run by hardware. Hardware derived its name from the fact it is hard or rigid with respect to changes, whereas software is soft because it is easy to change.
Hardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware. (Full article...) -
Image 4
W3sDesign Interpreter Design Pattern UML
In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution:
- Parse the source code and perform its behavior directly;
- Translate source code into some efficient intermediate representation or object code and immediately execute that;
- Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter's virtual machine.
Early versions of the Lisp programming language and minicomputer and microcomputer BASIC dialects would be examples of the first type. Perl, Raku, Python, MATLAB, and Ruby are examples of the second, while UCSD Pascal is an example of the third type. Source programs are compiled ahead of time and stored as machine-independent code, which is then linked at run-time and executed by an interpreter and/or compiler (for JIT systems). Some systems, such as Smalltalk and contemporary versions of BASIC and Java, may also combine types two and three. Interpreters of various types have also been constructed for many languages traditionally associated with compilation, such as Algol, Fortran, Cobol, C and C++. (Full article...)
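A minimal sketch of the first strategy listed above, written in C++ for consistency with the other examples on this page: the source text is parsed and its behavior performed directly, with no intermediate representation. The grammar and the TinyInterpreter name are illustrative assumptions, not taken from any particular interpreter.

```cpp
#include <cctype>
#include <cstddef>
#include <iostream>
#include <string>

// Direct interpretation of a tiny expression language:
//   expr := term ('+' term)* ;  term := number ('*' number)*
struct TinyInterpreter {
    std::string src;
    std::size_t pos;

    double number() {                       // read a run of digits
        double value = 0;
        while (pos < src.size() && std::isdigit(static_cast<unsigned char>(src[pos])))
            value = value * 10 + (src[pos++] - '0');
        return value;
    }
    double term() {                         // handle '*'
        double value = number();
        while (pos < src.size() && src[pos] == '*') { ++pos; value *= number(); }
        return value;
    }
    double expr() {                         // handle '+'
        double value = term();
        while (pos < src.size() && src[pos] == '+') { ++pos; value += term(); }
        return value;
    }
};

int main() {
    TinyInterpreter interp{"2*3+4", 0};
    std::cout << interp.expr() << '\n';     // evaluates directly to 10
    return 0;
}
```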
Image 5
The Greek lowercase omega (ω) character is used to represent Null in database theory.
In SQL, null or NULL is a special marker used to indicate that a data value does not exist in the database. Introduced by the creator of the relational database model, E. F. Codd, SQL null serves to fulfill the requirement that all true relational database management systems (RDBMS) support a representation of "missing information and inapplicable information". Codd also introduced the use of the lowercase Greek omega (ω) symbol to represent null in database theory. In SQL, NULL is a reserved word used to identify this marker.
A null should not be confused with a value of 0. A null indicates a lack of a value, which is not the same as a zero value. For example, consider the question "How many books does Adam own?" The answer may be "zero" (we know that he owns none) or "null" (we do not know how many he owns). In a database table, the column reporting this answer would start with no value (marked by null), and it would not be updated with the value zero until it is ascertained that Adam owns no books.
In SQL, null is a marker, not a value. This usage is quite different from most programming languages, where a null value of a reference means it is not pointing to any object. (Full article...)
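No SQL code is shown here; as a loose analogy only, in C++17 for consistency with the other sketches on this page, std::optional can illustrate the distinction the article draws between an absent value and a genuine value of zero. The variable names are illustrative.

```cpp
#include <iostream>
#include <optional>

// Analogy only, not SQL: an empty optional stands in for "unknown" (NULL),
// while an optional holding 0 is a real, ascertained value of zero.
int main() {
    std::optional<int> books_adam_owns;        // unknown, comparable to SQL NULL
    std::optional<int> books_after_check = 0;  // ascertained: zero books

    if (!books_adam_owns.has_value())
        std::cout << "We do not know how many books Adam owns.\n";
    if (books_after_check == 0)
        std::cout << "We know Adam owns zero books.\n";
    return 0;
}
```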
Image 6
Julia is a dynamic general-purpose programming language. Distinctive aspects of its high-level design include a type system with parametric polymorphism, the use of multiple dispatch as a core programming paradigm, just-in-time (JIT) compilation, and a parallel garbage collection implementation. Notably, Julia does not support classes with encapsulated methods; instead, the types of all of a function's arguments determine which method is called.
By default, Julia runs much like a scripting language, using its runtime and allowing interactive use, but Julia programs can also optionally be packaged into a single ready-to-install/run file, which can be built quickly and does not require anything to be preinstalled.
Julia programs can reuse libraries from other languages, and Julia libraries can in turn be reused from other languages. Julia has a special no-boilerplate keyword for calling, e.g., C, Fortran, or Rust libraries, which packages such as PythonCall.jl use indirectly, and Julia (and its libraries) can also be called from other languages, e.g. Python and R; several Julia packages have been made easily available from those languages in the form of corresponding Python and R wrapper libraries. Calling in either direction has been implemented for many languages, not just these and C++. (Full article...) -
Image 7
Node.js is a cross-platform, open-source JavaScript runtime environment that can run on Windows, Linux, Unix, macOS, and more. Node.js runs on the V8 JavaScript engine, and executes JavaScript code outside a web browser.
Node.js lets developers use JavaScript to write command line tools and for server-side scripting. The ability to run JavaScript code on the server is often used to generate dynamic web page content before the page is sent to the user's web browser. Consequently, Node.js represents a "JavaScript everywhere" paradigm, unifying web-application development around a single programming language, as opposed to using different languages for the server- versus client-side programming.
Node.js has an event-driven architecture capable of asynchronous I/O. These design choices aim to optimize throughput and scalability in web applications with many input/output operations, as well as for real-time Web applications (e.g., real-time communication programs and browser games). (Full article...) -
Image 8
Laboratory Virtual Instrument Engineering Workbench (LabVIEW) is a graphical system design and development platform produced and distributed by National Instruments, based on a programming environment that uses a visual programming language. It is widely used for data acquisition, instrument control, and industrial automation. It provides tools for designing and deploying complex test and measurement systems.
The visual (aka graphical) programming language is called "G" (not to be confused with G-code). It is a dataflow language originally developed by National Instruments. LabVIEW is supported on a variety of operating systems (OSs), including macOS and other versions of Unix and Linux, as well as Microsoft Windows.
The latest versions of LabVIEW are LabVIEW 2024 Q3 (released in July 2024) and LabVIEW NXG 5.1 (released in January 2021). National Instruments released the free for non-commercial use LabVIEW and LabVIEW NXG Community editions on April 28, 2020. (Full article...) -
Image 9
Large supercomputers such as IBM's Blue Gene/P are designed to heavily exploit parallelism.
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
In computer science, parallelism and concurrency are two different things: a parallel program uses multiple CPU cores, each core performing a task independently. On the other hand, concurrency enables a program to deal with multiple tasks even on a single CPU core; the core switches between tasks (i.e. threads) without necessarily completing each one. A program can have both, neither or a combination of parallelism and concurrency characteristics.
Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors, for accelerating specific tasks. (Full article...)
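A minimal sketch of the parallelism described above, assuming C++11 std::thread and a threads-enabled toolchain (e.g. -pthread on many systems): two halves of a sum are computed on separate threads, which the operating system can schedule on different CPU cores at the same time.

```cpp
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Task parallelism: each thread sums half of the data independently;
// on a multi-core machine the two halves can run simultaneously.
int main() {
    std::vector<int> data(1000000, 1);
    long long left = 0, right = 0;
    auto mid = data.begin() + data.size() / 2;

    std::thread t1([&] { left  = std::accumulate(data.begin(), mid, 0LL); });
    std::thread t2([&] { right = std::accumulate(mid, data.end(), 0LL); });
    t1.join();
    t2.join();

    std::cout << "sum = " << (left + right) << '\n';  // prints: sum = 1000000
    return 0;
}
```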
Image 10
The source code for a computer program in C. The gray lines are comments that explain the program to humans. When compiled and run, it will give the output "Hello, world!".
A programming language is a system of notation for writing source code, which is used to produce a computer program. A language allows a programmer to write human-readable content that a computer can act on only after an automated translation process makes the source code executable. Historically, a compiler translates source code into machine code that is directly runnable by a computer, while an interpreter executes source code without converting it to machine code. Today, hybrid approaches also exist, such as compiling to an intermediate form (such as bytecode) that is later interpreted or just-in-time compiled to machine code before running.
Computer architecture has strongly influenced the design of programming languages, with the most common type (imperative languages) developed to perform well on the popular von Neumann architecture. While early programming languages were closely tied to the hardware, modern languages hide hardware details via abstraction in an effort to enable better software with less effort. (Full article...) -
Image 11
Source code for a computer program written in the JavaScript language. It demonstrates the appendChild method. The method adds a new child node to an existing parent node. It is commonly used to dynamically modify the structure of an HTML document.
A computer program is a sequence or set of instructions in a programming language for a computer to execute. It is one component of software, which also includes documentation and other intangible components.
A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using a compiler written for the language. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within an interpreter written for the language.
If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction. (Full article...) -
Image 12
Kotlin (/ˈkɒtlɪn/) is a cross-platform, statically typed, general-purpose high-level programming language with type inference. Kotlin is designed to interoperate fully with Java, and the JVM version of Kotlin's standard library depends on the Java Class Library, but type inference allows its syntax to be more concise. Kotlin mainly targets the JVM, but also compiles to JavaScript (e.g., for frontend web applications using React) or native code via LLVM (e.g., for native iOS apps sharing business logic with Android apps). Language development costs are borne by JetBrains, while the Kotlin Foundation protects the Kotlin trademark.
On 7 May 2019, Google announced that the Kotlin programming language had become its preferred language for Android app developers. Since the release of Android Studio 3.0 in October 2017, Kotlin has been included as an alternative to the standard Java compiler. The Android Kotlin compiler emits Java 8 bytecode by default (which runs in any later JVM) but can also target Java 9 through 20 to enable optimizations or additional language features; bidirectional interoperability with JVM record classes, introduced in Java 16, has been considered stable since Kotlin 1.5.
Kotlin has support for the web with Kotlin/JS, through an intermediate representation-based backend which has been declared stable since version 1.8, released December 2022. Kotlin/Native (for e.g. Apple silicon support) has been declared stable since version 1.9.20, released November 2023. (Full article...) -
Image 13
Logotype used on the cover of the first edition of The C Programming Language
C is a general-purpose programming language. It was created in the 1970s by Dennis Ritchie and remains widely used and influential. By design, C gives the programmer relatively direct access to the features of the typical CPU architecture, customized for the target instruction set. It has been and continues to be used to implement operating systems (especially kernels), device drivers, and protocol stacks, but its use in application software has been decreasing. C is used on computers that range from the largest supercomputers to the smallest microcontrollers and embedded systems.
A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system. During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages, with C compilers available for practically all modern computer architectures and operating systems. The book The C Programming Language, co-authored by the original language designer, served for many years as the de facto standard for the language. C has been standardized since 1989 by the American National Standards Institute (ANSI) and, subsequently, jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
C is an imperative procedural language, supporting structured programming, lexical variable scope, and recursion, with a static type system. It was designed to be compiled to provide low-level access to memory and language constructs that map efficiently to machine instructions, all with minimal runtime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program written with portability in mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code. (Full article...)
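A minimal sketch of the low-level constructs mentioned above. The snippet is written so that it compiles both as C and as C++ (the language used for the other examples on this page), and the variable names are illustrative.

```cpp
#include <stdio.h>

/* Pointers expose memory addresses directly, and pointer arithmetic maps
   closely onto machine addressing: *(p + i) reads the i-th int after the
   address stored in p. */
int main(void) {
    int values[4] = {10, 20, 30, 40};
    int *p = values;                /* p holds the address of values[0] */
    int sum = 0;

    for (int i = 0; i < 4; ++i)
        sum += *(p + i);            /* equivalent to values[i] */

    printf("sum = %d\n", sum);      /* prints: sum = 100 */
    return 0;
}
```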
Image 14
tcsh and sh shell windows on a Mac OS X Leopard desktop
A Unix shell is a shell that provides a command-line user interface for a Unix-like operating system. A Unix shell provides a command language that can be used either interactively or for writing a shell script. A user typically interacts with a Unix shell via a terminal emulator; however, direct access via serial hardware connections or Secure Shell is common for server systems. Although use of a Unix shell is popular with some users, others prefer to use a windowing system such as a desktop Linux distribution or macOS instead of a command-line interface.
A user may have access to multiple Unix shells with one configured to run by default when the user logs in interactively. The default selection is typically stored in a user's profile; for example, in the local passwd file or in a distributed configuration system such as NIS or LDAP. A user may use other shells nested inside the default shell.
A Unix shell may provide many features, including: variable definition and substitution, command substitution, filename wildcarding, stream piping, control flow structures (condition-testing and iteration), working directory context, and here documents. (Full article...)
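The feature list above mentions stream piping. As a rough sketch of what a shell might do under the hood for a pipeline such as `ls | wc -l`, the following program (C++ with POSIX headers assumed; error handling omitted) connects the standard output of one child process to the standard input of another.

```cpp
#include <sys/wait.h>
#include <unistd.h>

// Two-stage pipeline, roughly `ls | wc -l`: one pipe, two child processes,
// the first child's stdout wired to the second child's stdin.
int main() {
    int fd[2];
    pipe(fd);                          // fd[0]: read end, fd[1]: write end

    if (fork() == 0) {                 // first child runs `ls`
        dup2(fd[1], STDOUT_FILENO);    // its stdout becomes the pipe's write end
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)nullptr);
        _exit(1);                      // only reached if exec fails
    }
    if (fork() == 0) {                 // second child runs `wc -l`
        dup2(fd[0], STDIN_FILENO);     // its stdin becomes the pipe's read end
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)nullptr);
        _exit(1);
    }
    close(fd[0]); close(fd[1]);        // parent keeps neither end open
    wait(nullptr); wait(nullptr);      // reap both children
    return 0;
}
```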
Image 15
Forth is a stack-oriented programming language and interactive integrated development environment designed by Charles H. "Chuck" Moore and first used by other programmers in 1970. Although not an acronym, the language's name in its early years was often spelled in all capital letters as FORTH. The FORTH-79 and FORTH-83 implementations, which were not written by Moore, became de facto standards, and an official technical standard of the language was published in 1994 as ANS Forth. A wide range of Forth derivatives existed before and after ANS Forth. The free and open-source software Gforth implementation is actively maintained, as are several commercially supported systems.
Forth typically combines a compiler with an integrated command shell, where the user interacts via subroutines called words. Words can be defined, tested, redefined, and debugged without recompiling or restarting the whole program. All syntactic elements, including variables, operators, and control flow, are defined as words. A stack is used to pass parameters between words, leading to a Reverse Polish notation style.
For much of Forth's existence, the standard technique was to compile to threaded code, which can be interpreted faster than bytecode. One of the early benefits of Forth was size: an entire development environment, including compiler, editor, and user programs, could fit in memory on an 8-bit or similarly limited system. No longer constrained by space, modern implementations generate optimized machine code like other language compilers. (Full article...)
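A minimal sketch of the stack-and-words idea described above, written in C++ (like the other examples on this page) rather than Forth: operands are pushed on a stack and each operator word pops its parameters and pushes a result, in Reverse Polish order. This is an illustration of the evaluation model, not a Forth implementation.

```cpp
#include <iostream>
#include <sstream>
#include <stack>
#include <string>

// Reverse Polish evaluation: numbers are pushed; the "words" + and * pop
// their two parameters from the stack and push the result back.
int main() {
    std::stack<long> stack;
    std::istringstream program("2 3 4 * +");   // Forth-style: 2 + (3 * 4) = 14
    std::string word;

    while (program >> word) {
        if (word == "+" || word == "*") {
            long b = stack.top(); stack.pop();
            long a = stack.top(); stack.pop();
            stack.push(word == "+" ? a + b : a * b);
        } else {
            stack.push(std::stol(word));       // anything else is a number
        }
    }
    std::cout << stack.top() << '\n';          // prints 14
    return 0;
}
```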
Selected images
- Image 1: This image (when viewed in full size, 1000 pixels wide) contains 1 million pixels, each of a different color.
- Image 2: Ada Lovelace was an English mathematician and writer, chiefly known for her work on Charles Babbage's proposed mechanical general-purpose computer, the Analytical Engine. She was the first to recognize that the machine had applications beyond pure calculation, and to have published the first algorithm intended to be carried out by such a machine. As a result, she is often regarded as the first computer programmer.
- Image 3: Stephen Wolfram is a British-American computer scientist, physicist, and businessman. He is known for his work in computer science, mathematics, and theoretical physics.
- Image 5: Output from a (linearised) shallow water equation model of water in a bathtub. The water experiences 5 splashes which generate surface gravity waves that propagate away from the splash locations and reflect off the bathtub walls.
- Image 6: Margaret Hamilton standing next to the navigation software that she and her MIT team produced for the Apollo Project.
- Image 7: A view of the GNU nano text editor, version 6.0.
- Image 9: A lone house. An image made using Blender 3D.
- Image 10: An IBM Port-A-Punch punched card.
- Image 11: Partial map of the Internet based on the January 15, 2005 data found on opte.org. Each line is drawn between two nodes, representing two IP addresses. The length of the lines is indicative of the delay between those two nodes. This graph represents less than 30% of the Class C networks reachable by the data collection program in early 2005.
- Image 13: GNOME Shell, GNOME Clocks, Evince, gThumb and GNOME Files at version 3.30, in a dark theme.
- Image 15: Grace Hopper at the UNIVAC keyboard, c. 1960. Grace Brewster Murray Hopper was an American mathematician and rear admiral in the U.S. Navy who was a pioneer in developing computer technology, helping to devise UNIVAC I, the first commercial electronic computer, and naval applications for COBOL (common-business-oriented language).
- Image 16: A head crash on a modern hard disk drive.
- Image 17: Deep Blue was a chess-playing expert system run on a unique purpose-built IBM supercomputer. It was the first computer to win a game, and the first to win a match, against a reigning world champion under regular time controls. Photo taken at the Computer History Museum.
- Image 18: Partial view of the Mandelbrot set. Step 1 of a zoom sequence: the gap between the "head" and the "body", also called the "seahorse valley".
Did you know?

- ... that the study of selection algorithms has been traced to an 1883 work of Lewis Carroll on how to award second place in single-elimination tournaments?
- ... that the programming language Acorn System BASIC was so non-standard that one commenter suggested that using it on the BBC Micro would be a disaster?
- ... that Phil Fletcher as Hacker T. Dog caused Lauren Layfield to make the "most famous snort" in the United Kingdom in 2016?
- ... that the 2024 psychological horror game Mouthwashing utilises non-diegetic scene transitions that mimic glitches and crashes?
- ... that both Thackeray and Longfellow bought paintings by Fanny Steers?
- ... that a pink skin for Mercy in the video game Overwatch helped raise more than $12 million for breast cancer research?
Subcategories
WikiProjects
- There are many users interested in computer programming; join them.
Computer programming news
- 23 May 2025 –
- Authorities from Canada, Denmark, France, Germany, the Netherlands, the United Kingdom, and the U.S. announce a joint operation to crack down on malware around the world, which took down over 300 servers, neutralized 650 domains, and seized over €3.5 million (US$3.9 million) of cryptocurrency. (DW)
Topics
Related portals
Associated Wikimedia
The following Wikimedia Foundation sister projects provide more on this subject:
- Commons: Free media repository
- Wikibooks: Free textbooks and manuals
- Wikidata: Free knowledge base
- Wikinews: Free-content news
- Wikiquote: Collection of quotations
- Wikisource: Free-content library
- Wikiversity: Free learning tools
- Wiktionary: Dictionary and thesaurus