
Talk:Reduced instruction set computer

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by KenShirriff (talk | contribs) at 21:22, 11 December 2023 (Merge proposal: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
Reduced instruction set computer is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
Article milestones
Date | Process | Result
December 15, 2003 | Featured article candidate | Promoted
January 8, 2005 | Featured article review | Demoted
Current status: Former featured article
WikiProject Computing (C-class, High-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as C-class on Wikipedia's content assessment scale.
This article has been rated as High-importance on the project's importance scale.
This article is supported by Computer hardware task force (assessed as High-importance).

Requested move 10 May 2017

The following is a closed discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review. No further edits should be made to this section.

The result of the move request was: Moved. Granted as a non-controversial request. (non-admin closure) Winged Blades Godric 05:56, 19 May 2017 (UTC)[reply]



Reduced instruction set computing → Reduced instruction set computer – This article was moved here from its previous title at Reduced instruction set computer on 20 April 2010 without any prior discussion to seek consensus. The only rationale was given in the edit summary: "intruductory [sic] paragraph's wording is awkward and more easily addresses RISC as an architecture ("… computing") than an instance of it's use ("… computer")."

I contend that this is incorrect. Whatever compositional problems the lead had, the solution cannot be to represent this topic as being called "reduced instruction set computing" when it is not. To do so would be to misrepresent the topic and what the topic is commonly called, thus introducing factual inaccuracies. The term "RISC" was introduced in David Patterson and David R. Ditzel's "The case for the reduced instruction set computer" (ACM SIGARCH Computer Architecture News, V. 8, No. 6, October 1980). Since then, that is what RISCs have been called. The idea that using "computing" instead of "computer" creates a distinction between "architecture" and an instance of its use is incorrect. The use of "computer" instead of "computing" in RISC is no different to that in terms such as "stored-program computer". One does not see instances of "stored-program computing". 50504F (talk) 07:23, 10 May 2017 (UTC)[reply]


The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page or in a move review. No further edits should be made to this section.

Problems with the lead

The lead presently says:

Reduced instruction set computing, or RISC (pronounced 'risk', /ɹɪsk/), is a CPU design strategy based on the insight that a simplified instruction set provides higher performance when combined with a microprocessor architecture capable of executing those instructions using fewer microprocessor cycles per instruction. A computer based on this strategy is a reduced instruction set computer, also called RISC. The opposing architecture is called complex instruction set computing (CISC).

There are a few problems here besides the issue raised at Talk:Reduced instruction set computer#Requested move 10 May 2017:

  • To say that RISC is a CPU design strategy could be misunderstood by laypeople as self-contradictory given that a CPU is understood to be a part of a computer, yet the "C" in "RISC" means "computer". A person familiar with the topic would understand that whilst some sources describe RISC as such, it's because of the inadequacies of the language in reconciling how the idea was originally framed and how it is framed today. RISC is better described as a type of computer.
  • "Microprocessor architecture" implies that RISC is intrinsically linked to microprocessors. Whilst that's the popular narrative, it's wrong. The first RISC was the IBM 801, and it wasn't a microprocessor.
  • "Microprocessor architecture" is linked to "microarchitecture"; "microarchitecture" is not a contraction of "microprocessor architecture".
  • The use of cycles per instruction (CPI) to mean instruction latency is completely wrong. To speak very generally, CPI is an average of all measured instruction latencies.
  • "Microprocessor cycles per instruction" is meaningless; there's no need to qualify "CPI" with "microprocessor".
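To make the CPI point in the list above concrete: CPI is a workload-weighted average over all executed instructions, not the latency of any single instruction. A minimal sketch in Python, where the instruction categories, frequencies, and cycle counts are entirely invented for illustration:

```python
# Hypothetical instruction mix for some workload. Every number here is
# invented for illustration only; no real CPU is being described.
mix = {
    "alu":    {"fraction": 0.50, "cycles": 1},
    "load":   {"fraction": 0.20, "cycles": 2},
    "store":  {"fraction": 0.10, "cycles": 2},
    "branch": {"fraction": 0.20, "cycles": 3},
}

# CPI is the frequency-weighted average of per-instruction cycle counts,
# so it describes a workload on a machine, not one instruction's latency.
cpi = sum(v["fraction"] * v["cycles"] for v in mix.values())
print(cpi)  # 1.7
```

No single instruction in this mix takes 1.7 cycles, which is exactly why using CPI to mean instruction latency is wrong.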

I've improved the lead as best I can, but given the complexity of the topic, it's probably still inadequate. 50504F (talk) 07:35, 26 May 2017 (UTC)[reply]

50504F, Thank you, those improvements made the lead much better. I agree that RISC is not just a "design strategy", but a more-or-less objective type of hardware that can be distinguished from other types of hardware no matter what "design strategy" was used to develop it.
I'd like to suggest 2 further improvements:
* Alas, the current lead claims RISC has something to do "with a small ... set of instructions," a common misunderstanding that is specifically called out in the reduced instruction set computer#Instruction set philosophy section and the reduced instruction set computer#Comparison to other architectures section.
* I feel that RISC is better described as a type of CPU, rather than a type of computer. In my view, the things that make RISC different than the alternatives (TTA, CISC, DSP, etc.) only affect the CPU and have little or no effect on the other parts of computer architecture -- main memory, I/O, caching, etc. -- or other parts of a physical computer -- the form factor, the power supply, whether it has a single-chip CPU (microprocessor) or a discrete-transistor CPU or something else, etc.
--DavidCary (talk) 22:06, 12 January 2021 (UTC)[reply]
I think the lead is in reasonable shape now and I am removing the {{Technical}} tag. ~Kvng (talk) 16:12, 15 May 2023 (UTC)[reply]

What about PowerMacs using PowerPC processors from IBM for many years, G3, G4, G5...

It seems the article is not mentioning probably the biggest user of RISC processors, Apple. In the 90s all Macs were powered by IBM RISC PowerPC processors, then the transition to Intel happened in the mid-2000s... — Preceding unsigned comment added by 193.105.48.90 (talk) 13:37, 6 June 2017 (UTC)[reply]

Article implies performance parity with x86 throughout without providing data to back up that claim

This article repeatedly makes the reader believe there is performance parity between RISC and CISC computers, forcing itself to take a very ignorant look at computing as a whole in order to accomplish that goal. For example...

"The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced—at most a single data memory cycle—compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction.[24]"

Despite the complexity of CISC instructions, modern CISC computers can execute 3–4 instructions per cycle (IPC). Source: https://lemire.me/blog/2019/12/05/instructions-per-cycle-amd-versus-intel/

If we look at the manual for a common RISC CPU like the SiFive E21, we can see that..... “The pipeline has a peak execution rate of one instruction per clock cycle.” Source: https://sifive.cdn.prismic.io/sifive%2Fc93c6f29-5129-4e19-820f-8621300eca53_e21-core-complex-manual-v19.05.pdf

So despite having less complex instructions, RISC processors in practice still can't execute more instructions than their CISC counterparts. Additionally, CISC computers don't have to "translate" or "emulate" 90% of the code in existence. So they have the home-field advantage over RISC which must waste clock cycles to emulate the x86 instruction set. The emulation of x86 software is not a value-added feature of CISC to the user. It is a non-value-added feature that only exists to enable the CPU to do work most existing CISC computers can do natively.

Please modify the article to at least acknowledge the substantial performance difference between CISC and RISC. Also, please refrain from seeking to marginalize the substantial technological benefits of using CISC technology. I understand this is a RISC article, but it's disingenuous to represent RISC as an apples-to-apples comparison to CISC. It clearly isn't. You have the entire "energy efficiency" soap-box to stand on but you choose not to use it. Instead you stand on the performance soap-box trying to sell CISC-like capabilities under the RISC flag while somehow ignoring the fact that CISC still exists, and is actually still more performant than RISC. Source: https://images.idgesg.net/images/article/2020/12/m1_cinenbench_r20_nt-100870777-orig.jpg — Preceding unsigned comment added by 2603:3005:3C77:4000:5DA2:1427:DC1C:5DE5 (talk) 19:28, 1 January 2021 (UTC)[reply]

So what about other RISC CPUs such as the POWER9 or the Apple M1? The execution unit of the SiFive E21 is, as the manual you cite says, "a single-issue, in-order pipeline"; you're not going to get maximum raw performance from that. The POWER9, however, is a superscalar out-of-order processor; the IEEE Micro paper on it says "Variants of the core support completion of up to 128 (SMT4 core) or 256 (SMT8 core) instructions in every cycle." I don't know whether any articles on the M1 are out yet, but this Anandtech article is looking at the Apple A14 as an example of what Apple's ARM chips' microarchitectures are like these days, and the A14 is another superscalar out-of-order processor.
"RISC vs. CISC" is a somewhat bogus comparison. Any modern RISC processor should be able to outrun that boring old single-issue, in-order 80386 CISC processor, but that's like saying a modern Ford could outperform a Model T - not very interesting. Comparing an x86 processor intended for desktop/notebook or server use with a 32-bit embedded processor is also not very interesting. If you want to compare Intel and SiFive, try comparing a Xeon with, say, a U84, which is a 64-bit superscalar out-of-order processor, just like current Xeons.
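The disagreement above can be framed with the classic performance equation, time = instructions / (IPC × clock rate): peak IPC alone doesn't determine which processor finishes first, because the same program compiles to different instruction counts on different instruction sets, and because sustained IPC depends on the microarchitecture, not on RISC vs. CISC as such. A sketch with entirely invented numbers:

```python
def run_time_seconds(instructions, ipc, clock_hz):
    # "Iron law" of processor performance:
    # time = instruction count / (IPC * clock rate).
    return instructions / (ipc * clock_hz)

# All figures below are invented for illustration, not measurements.
# Assume the RISC build of the same program needs 1.3x as many
# (simpler) instructions as the CISC build.
cisc_wide   = run_time_seconds(1.0e9, ipc=3.5, clock_hz=4.0e9)  # wide OOO CISC core
risc_simple = run_time_seconds(1.3e9, ipc=1.0, clock_hz=3.0e9)  # single-issue in-order core
risc_wide   = run_time_seconds(1.3e9, ipc=4.0, clock_hz=3.0e9)  # superscalar OOO RISC core

# The single-issue core loses badly, the wide RISC core is competitive:
# the microarchitecture, not the instruction-set style, dominates.
print(f"{cisc_wide:.4f}s {risc_simple:.4f}s {risc_wide:.4f}s")
```

This is why comparing a server-class x86 part with a single-issue embedded core tells you little about RISC or CISC in general.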
And, "Additionally, CISC computers don't have to "translate" or "emulate" 90% of the code in existence." notwithstanding, I can think of at least one CISC processor that would have to translate or emulate x86 code. Presumably what you meant is "x86-based computers don't have to translate or emulate 90% of the code in existence", so that part isn't comparing CISC with RISC, it's comparing x86 with various RISC instruction sets.
The extent to which that's a "home-field advantage" for a particular market depends on the effort involved in moving code from x86 to another instruction set. If we look at current CISC architectures, we have x86 and System/3x0, where I'm including x86-64 as part of x86 and z/Architecture as part of System/3x0.
For x86, it's currently mainly used in Windows desktops, notebooks, and servers, in Linux servers, and in Apple desktops and notebooks. For the latter, Apple have a porting guide, where the main issues they mention are:
  1. Apple's processors not having a strong memory-ordering model, so you have to be more careful when sharing data between threads;
  2. "Cheating the compiler" by having a function defined with a fixed argument list and declared as having a variable argument list (and an Objective-C equivalent);
  3. some cases where various named constants have different values on different instruction sets;
  4. using assembler-language code or platform-dependent vector instructions;
  5. making an unsupported assumption about the return value of mach_absolute_time();
  6. some differences in float-to-int conversions;
  7. software that generates machine code on the fly.
A number of vendors have already fixed their code and are offering both x86-64 and ARM64 binaries (or, rather, binaries that include code for both instruction sets). I can testify, from working in the Core OS group at Apple when the PowerPC -> x86 switch was done, that the code I dealt with didn't need much work (but then I already knew about byte order from my days at Sun, with the big-endian 68K-based and SPARC-based machines and the little-endian Sun386i; that was one of the bigger issues in that transition, but isn't an issue going to little-endian ARM).
For Linux, a lot of software is part of what the OS vendor provides in packages, and that's largely reasonably portable code in C or C++, or code in various scripting languages where the interpreter *itself* is portable.
I can't speak for the Windows world, but at least some of that software may require little if any effort to port to ARM64.
As for translating or emulating, at least for some software, Rosetta 2 seems to do a decent job of translating.
So the home-field advantage might not be as large as you'd like it to be.
(For System/3x0, either the code is running on an IBM operating system for S/3x0, in which case the OS hasn't been ported to any other platform, so you can't move it, even to another CISC platform, too easily, or it's running on Linux, in which case see above - typical code moves there are probably from x86-64 to z/Architecture.)
As for the Cinebench results, if we look at some Cinebench R23 single-core results, the M1 doesn't do too badly. For the multi-core results, note that only four of the M1 cores are high-performance cores; it's interesting that it's in the same range as a group of 6-core Intel and AMD processors, where all the cores are presumably identical, so if the low-power cores count as half a high-performance core, that'd make an 8-core M1 the equivalent of a 6-core all-high-performance version. It will be interesting to see if Apple comes up with all-high-performance-core versions for desktop machines, and how well they do.
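The core-equivalence arithmetic in the paragraph above, as a hedged sketch (the 0.5 weighting is the comment's own rough guess, not a measured figure):

```python
# Rough equivalence model: treat each low-power core as worth some
# fraction of a high-performance core. The 0.5 weight is an assumption
# made for illustration, not a measurement.
def equivalent_big_cores(perf_cores, efficiency_cores, weight=0.5):
    return perf_cores + weight * efficiency_cores

# M1: 4 high-performance + 4 low-power cores.
m1_equiv = equivalent_big_cores(perf_cores=4, efficiency_cores=4)
print(m1_equiv)  # 6.0 -> roughly comparable to a 6-core all-big-core chip
```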
Bottom line:
  1. There's CISC and there's x86, which is one example of a CISC processor, but isn't the only one that's ever existed or even the only one that currently exists.
  2. Assumptions about RISC or CISC processors that may have been true in the early days of RISC don't necessarily apply now; for example, the P6 microarchitecture showed how to make at least some CISC processors do a good job of superscalar out-of-order processing ("some" doesn't just mean x86 - newer z/Architecture processors apparently do the same "break instructions up into micro-operations and throw them at the superscalar OOO execution unit" stuff), and high-performance RISC and CISC chips both have a ton of transistors.
  3. The article should probably be updated to reflect current reality - where "current reality" not only includes the now 25-year-old superscalar OOO micro-operation processor work, but also various current ARM processors, which are currently the only RISC processors that I know of that cover as wide a range of applications as x86 processors (they both go from laptops to servers and supercomputers, with ARM going below that to smartphones).
  4. The article shouldn't be an advocate for either type of instruction set. Guy Harris (talk) 22:00, 1 January 2021 (UTC)[reply]
Oh, and 5. CPU performance isn't the only contribution to system performance.
If we look at the November 2020 TOP500 supercomputer list, the top four machines have RISC CPUs - an ARM64 machine at the top, with two Power ISA machines below it, and a Sunway SW26010 machine below that, with the fifth using AMD Epyc processors. However, the two Power ISA machines and the Epyc machine have Nvidia GPUs as accelerators, so how much of the difference is due to CPU differences rather than GPU differences - or interconnect differences - is another matter. Guy Harris (talk) 22:39, 1 January 2021 (UTC)[reply]

Point of "Use of RISC architectures" section?

This seems like a grab-bag of different RISC architectures with arbitrary categories. Why are gaming systems considered low-end? — Preceding unsigned comment added by Indolering (talkcontribs) 03:38, 22 March 2021 (UTC)[reply]

I guess the theory is to indicate where RISC is being used, for the benefit of people who think x86 rules the world, but it is, indeed, a not-well-organized grab-bag. It's not particularly up-to-date, with several of the examples no longer applying, and some items just mention instruction sets without giving current examples where it's used.
At this point, the currently relevant RISC architectures, as I see it, are:
  • ARM, obviously, from microcontrollers to supercomputers;
  • SPARC, which is still being sold in servers;
  • Power ISA, which is still being sold in servers;
  • perhaps others used in embedded applications. Guy Harris (talk) 06:15, 22 March 2021 (UTC)[reply]

Removed ACE bits

It is a common game to claim "computer X is the first RISC" based on some simplified definition of "what is RISC?". The book making the claim that ACE is RISC is visible on Google Books here (for me at least, YMMV).

The (relatively short) article in question defines RISC in a somewhat hand-waving manner, saying "no one at the time would agree with this definition", which is (top of page 199) essentially that "microcode slows execution, and long pipelines are slow and have interlocks". That is a rough description I could find much to agree with.

Then he attempts to link the two with the arguments that in ACE, "ease of programming had knowingly been sacrificed to speed", followed by sections noting it lacked microprogramming, that it could be simplified using interpreters, and then concluding "We can indeed conclude that the ACE is a RISC machine in the sense of having an architecture heavily influenced by the design of the computer".

None of these statements are part of the definition he posts. This is not surprising, as none of the definitional ideas even existed and would take the better part of a decade to emerge. The fact that it didn't have them is akin to claiming that horse carriages are really ICE automobiles because engines didn't exist at the time and the designers were all interested in speed.

If I sound dismissive, I am. Regardless of Turing's original desires, ACE emerged as a pretty bog-standard drum machine. That is by no means a denigration - it's bog standard because everyone used the concepts he helped develop. But the claim that it is a proto-RISC fails by the author's own definition as no machines of the era had the very features he quotes as definitional.

I'm not averse to new claims for first, but I am rather averse to the sort of hand-waving, wooly-headed argument presented in this article and the claim demands much better support in order to deserve being included here. Maury Markowitz (talk) 14:30, 4 January 2022 (UTC)[reply]

Merge proposal

I've been thinking that it would make sense to merge the CISC page into the RISC page. The problem is that the RISC and CISC pages have a lot of overlap and mostly cover the same history and information, so they are largely redundant (when they aren't contradictory). As WP:OVERLAP says, "Remember, that Wikipedia is not a dictionary; there does not need to be a separate entry for every concept. For example, "flammable" and "non-flammable" can both be explained in an article on flammability." This is the case with RISC and CISC.

I'm not saying that CISC is unimportant, of course. But since CISC is essentially defined in opposition to RISC, you can't really discuss one without the other. There isn't a lot to say about CISC independent of RISC. I think that combining the pages would improve both of them. The CISC page is weak on citations so I was looking into improving it, but I realized that I would end up duplicating most of the RISC material and combining the pages would make more sense. KenShirriff (talk) 21:22, 11 December 2023 (UTC)[reply]