Talk:Computer architecture/Archive 1
This is an archive of past discussions about Computer architecture. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Techextreme.com
I pulled the link to techextreme.com. All it has are advertisements and an offer to buy the domain. -- Bill
- Bill, it's usually best to put new comments at the bottom of each "talk" page rather than at the top. Also, you can easily sign your name in a handy Wikilinked format and add an automatic timestamp to your signature simply by typing four tildes (~~~~) at the end of your posting.
Any reference books
It would be great if you could suggest some books on this topic in the references/see-also section. - Bose 203.200.30.3 11:53, 10 September 2005 (UTC)
- C. Gordon Bell (and Bill Strecker?) wrote a famous one; I'll look on my bookshelf tomorrow.
different types of speed: latency, throughput, etc.
The paragraph starts by saying there are different types of speed but then mentions only one, latency. At a minimum there should be a mention of the classic dichotomy between latency and throughput. Ideogram 14:52, 31 May 2006 (UTC)
Virtual Memory & Reconfigurable Computing
Although these two topics are somewhat related to computer architecture, they do not embody it. That is, there are dozens of other topics in computer architecture that are just as important (if not more so) that are not mentioned. I believe the article should be kept more general, perhaps adding those items to a separate "see also" list at the bottom.
- Quanticles 09:13, 28 January 2007 (UTC)
- I also agree that these two items should not receive the attention that they do on this page. The fact that many processors today (embedded) do not make use of virtual memory, and that reconfigurable computing is as limited as it is, shows that these two topics do not embody computer architecture and should only be listed as related issues.
- - Some anonymous guy.
Ahmdal's Law
I cannot find any mention of "Ahmdal's Law" in Wikipedia. Does it belong in this article? Should it have its own article? Where is the primary source one must seek to find the origin of this "Ahmdal's Law" that I only hear about second hand?--User:William Morgan 04:38, 27 August 2006 (UTC)
- There is an article actually, at Amdahl's law. However, when I searched for Amdahl I was taken straight to a page about some company, so I have added an "other uses" link to that article. AlyM 09:14, 27 August 2006 (UTC)
IMHO, it's such a common-sense principle that it does not really need its own article or even name. —Preceding unsigned comment added by 165.123.166.155 (talk) 03:19, 3 April 2008 (UTC)
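For reference, the principle this thread is about is conventionally stated as follows (a standard textbook formulation, not quoted from any comment above): if a fraction p of a workload can be sped up by a factor s, the overall speedup is

    S = 1 / ((1 - p) + p/s)

So, for example, speeding up 90% of a program by an arbitrarily large factor still yields less than a 10x overall speedup, since S approaches 1/(1 - 0.9) = 10.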
Abstraction Layer question:
Regarding the color image with text: "A typical vision of a computer architecture as a series of abstraction layers:...", I don't understand how "assembler" can be a layer within a computer's architecture. I might agree if the term 'machine language' (or similar) were used here, but to me an 'assembler' is a running program which turns human readable (ASCII characters, e.g.) machine instruction input into machine readable code. Obviously, an assembler is used by programmers when writing (at least part of) an OS "kernel," but you also need software (such as an assembler) to create "firmware code" that can be understood by the "hardware," yet we see no "assembler" layer between "hardware" and "firmware" in the illustration. I'm merely an assembly hacker and technician, but still, either 'assembly' (perhaps, as a required operation) or 'Instruction Set reader(?)' (as a reference to how the machine's CPU can execute the "kernel" code) would make more sense to me as an "abstraction layer" than "assembler," but I'm certainly willing to learn. Daniel B. Sedory 00:18, 22 March 2007 (UTC)
I've already read the article here, but still see no connection between its discussion of computer architecture and the term "assembler"; the only time it appears on the whole page is a link under the diagram, which jumps to "Assembly Language"; where you'll find a definition similar to what I stated above:
"An assembly language program is translated into the target computer's machine code by a utility program called an assembler. The assembler performs a (more or less) one-to-one (isomorphic) translation from mnemonic statements into machine instructions and data."
So, is this diagram in error, or can someone please explain to me why the term "assembler" should appear in it? Daniel B. Sedory 21:33, 12 April 2007 (UTC)
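To illustrate the (more or less) one-to-one translation the quoted definition describes, here is a minimal sketch of a toy assembler in Python; the instruction set, mnemonics, and opcodes are all invented for illustration and do not correspond to any real machine.

```python
# A toy assembler: each mnemonic statement maps one-for-one onto machine
# code, mirroring the "isomorphic translation" described above. The
# instruction set and opcodes here are entirely made up for illustration.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(source):
    """Translate mnemonic statements into machine-code bytes, one statement
    per instruction."""
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, *operands = line.split()
        machine_code.append(OPCODES[mnemonic])           # one opcode byte
        machine_code.extend(int(op) for op in operands)  # operand bytes
    return bytes(machine_code)

program = """
LOAD 10
ADD 20
STORE 30
HALT
"""
print(assemble(program).hex(" "))  # -> 01 0a 02 14 03 1e ff
```

A real assembler additionally handles labels, addressing modes, and variable-length encodings, but the statement-to-instruction mapping stays essentially one-to-one.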
- Hi Daniel, I saw the comment on my talk page; unfortunately I didn't see this discussion previously. Yes, I added that image based on Tanenbaum's '79 book and the classes I took at university. Tanenbaum defines an abstraction level as composed of a set of resources/services and a language to access them; so it is formally more correct to say that "assembly" is the language and "assembler" the theoretically-conceived abstraction level. (This article, like all the ones on related topics, is still at a very early stage of development and gives no explanation.)--BMF81 11:11, 13 April 2007 (UTC)
- I guess Herr Tanenbaum is mistaken. If not, why not add ALL programming languages and scripts below the application level, to "make the world complete"? The terms "assembly" and "assembler" mean the same thing - an A-level language (a human-understandable language close to that of the machine). It was meant "to assemble code"; that's why calling it "assembler" as a tool or "assembly" as a process means exactly the same thing. Furthermore, almost everyone who has programmed in or written books about this language has referred to it as "assembler". The dispute over these two terms is of an academic nature, and arose much later, well after the language was invented and in use; that's why it is purely abstract. To make it short: hardware does NOT understand assembly code. Only after compiling it (even if very little needs to be done, it is still a complete transformation) does the hardware get the right "food". Now, replacing "assembler" with "machine code" would bring the diagram closer to reality, but it should still be known that there can be (and usually are) two or more pieces of hardware, interconnected on one central board, that use completely different code! Compare CPU and GPU programming! Based on that, "assembler" should be replaced with "binary code". 213.196.248.84 (talk) 05:59, 24 April 2008 (UTC)
- Yes, it would definitely have been helpful to me if the article contained a statement to that effect: that "assembler" in regards to abstraction layers has this special meaning of a "theoretically-conceived abstraction level" and not one of many existing software assemblers! Does this mean some of the other terms in the diagram are being used in a far less than common meaning as well? As I said above, if there's an "assembler level" between "firmware" and the "kernel," why isn't there something similar between "hardware" and "firmware"? Or is that conveniently left as something outside the realm of computer architecture; something only for 'hardware vendors' to deal with? Daniel B. Sedory 11:32, 20 April 2007 (UTC)
- Yes, exactly. "Hardware" there just means "out of the scope of interest of computer scientists". But as engineers and physicists know, it can be further layered.--BMF81 04:31, 22 April 2007 (UTC)
assembler is not an abstraction layer
it just does its work and goes away, it's not a layer, the image is wrong. 196.218.92.201 17:03, 24 September 2007 (UTC)
- If you read my questions and comments above, you'd see I was of the same opinion when I first read this article. However, I was told this is how a number of Computer Science professors have taught the subject. Do you have any proof to the contrary? Especially, can you provide some quotes from any textbook on the subject? (Would be good if you'd sign your name too.) Daniel B. Sedory 02:32, 26 September 2007 (UTC)
These last two sections can't be serious. Let these "computer science professors" come forward and try to publish their "theoretically-conceived abstraction level" ideas. Still ROTF over the term itself. What a claptrap. I hope no good money was paid to attend classes where this nonsense was spouted. Vyyjpkmi 03:48, 26 September 2007 (UTC)
- Assembly is NOT a part of PC architecture; this is ridiculous!! It's an A-level programming language, just like any other language, but close to the hardware. It sticks to the hardware - which means coding for x86 and 68K is a bit different - but it still NEEDS a compiler and linker to work as machine code! OK, add a "Machine Code" layer instead of "Assembly", but even that is wrong, because all apps, including firmware, the kernel and applications, run (are injected) as machine code, with the minor exception of scripts, managed code, etc. - all those things that get interpreted at runtime.
Dear "professors", please do NOT confuse newbies. The PC is real and functioning; theories may NOT be. So please DO some practice for each theory, or at least disasm some stuff, before writing and lobbing such bulls#!t (excuse me!) 213.196.248.84 (talk) 05:38, 24 April 2008 (UTC)
This diagram is too misleading to leave even as a placeholder for a better one. Brock (talk) 14:45, 17 October 2008 (UTC)
Merge Hardware architecture into this article
The Hardware architecture article in fact discusses "computer architecture", often assuming "hardware architecture + software architecture = system architecture". Its lead section goes to great effort to explain that a computer is not the only thing that runs software. It gives examples of an airplane, the Apollo spacecraft, and a modern car as pieces of hardware that are architected to run software, too. I think a car architect rarely calls himself a hardware architect, and in fact rarely designs the embedded systems (= computers) that actually run the car's software. He needs a computer architect for that.
If there is in fact a term "hardware architecture" in common use, I doubt it means "architecture of machines that can run software". After the merge, a new stub could be created with a proper, sourced definition.
Wiktionary defines hardware as:
- Metal implements.
- Firearm.
- Electronic equipment.
- The part of a computer that is fixed.
- Fixtures, equipment, tools and devices used for general purpose construction.
--Kubanczyk 22:14, 28 October 2007 (UTC)
Hardware architecture is indeed a term in common use within the field of computer science. I feel that redirecting to an article titled Computer Architecture would be confusing to many people. --Rickpock (talk) 18:52, 14 February 2008 (UTC)
- I disagree; PC hardware is very specific. You cannot mix the hardware architecture of a toaster and of a PC. But you can inter-link them :) 213.196.248.84 (talk) 05:41, 24 April 2008 (UTC)
dalvir singh —Preceding unsigned comment added by 202.164.44.59 (talk) 09:46, 22 October 2009 (UTC)
Computer architecture: CA is the design of computers, including their instruction set, hardware components and system organization. CA deals with the design of computers and of computer systems.
CA = instruction set architecture + machine organisation —Preceding unsigned comment added by 124.253.229.168 (talk) 02:54, 23 September 2010 (UTC)
Open Architecture
I think this article needs to include at least a definition of (and probably a link to) "open architecture", but I'm not quite sure exactly where to put it. Does anyone feel competent? Darkman101 (talk) 19:25, 5 October 2011 (UTC)
Performance
The article reads: Computer performance can also be measured with the amount of cache a processor has. If the speed, MHz or GHz, were to be a car then the cache is like a traffic light. No matter how fast the car goes, it still will not be stopped by a green traffic light. The higher the speed, and the greater the cache, the faster a processor runs.
Am I the only one who: 1) has no idea what the traffic light example is talking about 2) is pretty sure that very big caches are not a good idea (then they would not be faster than main memory), and that a computer's performance cannot be measured (well) by its cache size? —Preceding unsigned comment added by 165.123.166.155 (talk) 03:28, 3 April 2008 (UTC)
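One way to make the second point concrete is the standard average memory access time (AMAT) model, hit time + miss rate x miss penalty: a larger cache usually lowers the miss rate but also tends to raise the hit time, so cache size alone does not determine performance. A minimal sketch in Python, with all numbers invented for illustration:

```python
# Average memory access time (AMAT) = hit time + miss rate * miss penalty.
# A bigger cache usually lowers the miss rate but tends to raise the hit
# time, so "more cache" can help or hurt. All numbers here are invented.

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

small = amat(hit_time_ns=1.0, miss_rate=0.05, miss_penalty_ns=100.0)  # 6.0 ns
large = amat(hit_time_ns=2.5, miss_rate=0.02, miss_penalty_ns=100.0)  # 4.5 ns

print(f"small cache: {small:.1f} ns, large cache: {large:.1f} ns")
# Here the larger cache wins, but with a slower hit time (say 5.0 ns) the
# same larger cache would lose: 5.0 + 0.02 * 100.0 = 7.0 ns.
```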
The car example is suicidal at best. Please don't follow it in real life, and do be careful when you come across a stale green light. Getting back to the discussion, a better analogy for a cache would be rear spoilers, which provide some enhancement to speed but have to be kept clean after use, otherwise they might degrade performance. Just like spoilers, caches are useful in certain specialised operations like mathematical functions etc. 0police (talk) 17:09, 4 January 2012 (UTC)
CPU design in 2006
"The performance race in microprocessors, in which they typically compete by increasing the clock frequency and adding more gates, is over," said Alf-Egil Bogen http://pldesignline.com/news/showArticle.jhtml?articleId=177105335
"Designers quietly tap async practices" http://pldesignline.com/news/174402071
"ARM-based clockless processor targets real-time chip designs" Marty Gold, 2006-02-09 http://www.eeproductcenter.com/micro/brief/showArticle.jhtml?articleID=179102696
"antenna-in-package (AiP) system" http://eet.com/news/latest/showArticle.jhtml?articleID=179103090
"Data Forwarding Logic, Zero-Cycle Branches, High Code Density Increase Throughput Per Cycle" http://www.atmel.com/dyn/corporate/view_detail.asp?FileName=AVR32bit_long_2_14.html
Does this article (or its sub-articles) adequately explain the terminology in the above reports?
Please sign your comments with four tildes (~~~~) so that others can know how old your suggestions are. Also, why should this article include these reports? Please give a summary suggestion. 0police (talk) 17:24, 4 January 2012 (UTC)
Energy v. Power
The top of the article says that simulators calculate "energy in watts". Watt is a unit of power, not energy. I assume power is the correct term here. Rdv (talk) 01:02, 18 March 2012 (UTC)
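For reference, the relationship is energy = power x time: a watt is a joule per second, so a simulator can report average power in watts or energy in joules (watt-seconds), but "energy in watts" conflates the two.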
Computer architecture versus organization
I am currently taking a computer organization class and we went over the difference between computer organization and architecture for almost an hour. Why does a search for 'computer organization' redirect to this page? RyanEberhart 18:48, 15 March 2006 (UTC)
- At first blush, to my ear, "computer organization" and "computer architecture" sound like a distinction without a difference. But I'd appreciate it if you'd tell us what your/your instructor's point of view was on the difference.
- In general, that which is visible to an assembly programmer is organization, whereas that which is invisible is architecture. For example, the number/size of registers would be organization, but stuff like pipelining, branch prediction, on-the-fly swapping of instruction execution, etc. is architecture. We have two separate courses, one on each topic.
- -RyanEberhart 02:58, 16 March 2006 (UTC)
- Urgh... Speaking as a practitioner of computer architecture for 20+ years (and the possible inventor of the terms MacroArchitecture and UISA), that which is visible to an assembly language programmer is ISA (Instruction Set Architecture), or more specifically Assembly Language Instruction Set Architecture.
Stuff like pipelining, etc. is microarchitecture. "Architecture" refers to an abstraction that is supposed to be maintained across generations.
- -Andy Glew, —Preceding unsigned comment added by 134.134.136.34 (talk) 00:47, 5 April 2008 (UTC)
Okay, after a lot of thought and frequent reference to my Third Edition Hennessy and Patterson (Computer Architecture, A Quantitative Approach), it seems to me that we might be able to find consensus. H&P are themselves cross-bay academic rivals, and they don't always agree on everything. That, plus their amazing attention to scientific detail, is part of what makes their book probably the most authoritative in the field. From p. 10 of 3rd ed. hardback:
- In this book, the word architecture is intended to cover all three aspects of computer design---instruction set architecture, organization, and hardware.
The book clearly uses the disambiguated term "instruction set architecture" (ISA) to define what the Wikipedia "computer organization" article, for example, had been more loosely referring to simply as "computer architecture." ISA is everything connected to the instruction set, including (as above) the number/size of registers. Organization would include pipelining, branch prediction, etc. Architecture, on its own, can include all of these things.
With that in mind, I've been actively working to bring the writeups within the articles "computer architecture" and "computer organization" into a more cosmic alignment.
(PS Ryan, it appears to me that Rochester actually uses H&P 3rd ed. as its textbook for ECE201/401 Advanced Computer Architecture (http://www.ece.rochester.edu/courses/ECE201/)...is it possible your instructor was a little bit confused...?)
- Su-steve 23:08, 19 February 2007 (UTC)
Basing myself upon: (a) Hennessy and Patterson's 4th edition of Computer Architecture - A Quantitative Approach (pages 8, 12)
(b) John P. Hayes's 2nd edition of "Computer Architecture and Organization" (p.47)
(c) A moderation of the meaning given by Hayes, regarding the distinction between architecture and implementation, with the meaning given by Hennessy and Patterson, which integrates implementation into architecture; the latter two depart from past interpretations along the lines of the one expressed by Hayes in (b)
(d) Expansion of the interpretation of Computer Architecture given at http://en.wikipedia.org/wiki/Microarchitecture
...I would like to propose that Computer Architecture may be graphically described as shown in the figure. The relationship diagram may be expanded by addition of further sub-classifications.
Edepa (talk) 21:14, 30 December 2012 (UTC)
Origin of term
The article should cover the origin of the term "computer architecture", which began with Blaauw and Brooks in the 1960s. The first general usage of the term was in reference to the IBM System/360 series, which was the first line of computers designed from the outset to offer a wide range of models all conforming to a common architectural specification (IBM System/360 Principles of Operation). --Brouhaha 21:44, 23 Jan 2005 (UTC)
Comment on the term:
Although architecture (of buildings) and computer architecture have a rather complicated shared history (in theory, to say the least), it may be worth mentioning that the etymology of the word "architecture" goes back to "architect", from the Greek arkhitekton, "master builder". [1]
Aalomaim (talk) 08:47, 12 April 2013 (UTC)
I also feel that the configurable computing section should be yanked. It is a small part of computer architecture and should be linked rather than having this much dedicated space in the main computer architecture article.
Architecture versus Architecture (Introduction text)
I edited a sentence in the intro where it mentions the "architecture of buildings". The old sentence implied that building architecture is not logical; it also gave the impression that only computer architecture is able to define "systems to serve particular purposes" (too general - building architecture can be said to do that as well; please let me know if I must elaborate). I tried my best to find a way to differentiate the two kinds of architecture (if that was the point), and I found the easiest way is to differentiate them by "discipline", but this might be redundant. Overall I think there is no need to mention the "architecture of buildings" in the intro; a link to architecture is sufficient (except perhaps in the "Origin of the term" section or its etymology - see my comment above under "Origin of term").
Aalomaim (talk) 08:47, 12 April 2013 (UTC)
ESCAPE
The link to the ESCAPE cpu simulation doesn't seem to lead anywhere. I did a moderate google search for another link to the software, but I can't seem to find anything. Anyone know where it might be? —Preceding unsigned comment added by 24.236.97.56 (talk) 08:06, 7 December 2008 (UTC)
I see that every mention of the ESCAPE simulation has been removed from this article. [1] I put a link to the current location of the ESCAPE cpu simulation source code into the computer architecture simulator article. To comply with the WP:PRESERVE policy, should this article also link to that source code? --DavidCary (talk) 17:56, 27 January 2014 (UTC)
Archiving talk page
I think it might be useful to archive conversations on this talk page. I've seen other talk pages that use MiszaBot and I might try to configure it for this page unless someone objects. I realize that the talk page is not very active, but I think that it would be easier to work with if old conversations were archived. Gabriel Southern (talk) 20:05, 9 March 2014 (UTC)
Can someone double check the grammar and syntax of the article?
I think I caught all the errors, but it would help if someone went over it again, just to make sure there are no major errors and that, for the most part, the article follows this. BestKogNA (talk) 14:16, 11 May 2017 (UTC)
subarchitecture
Should there be, somewhere in Wikipedia, discussion of subarchitecture? Right now, I am thinking of it in the context of Sun4, Sun4c, Sun4m, Sun4u, but many architectures over the years have had subarchitectures worth noting. In the Sun4 case, as far as I know, it is mostly differences in MMU design, and so important for the OS, much less so for users, but it still should go somewhere. Gah4 (talk) 19:48, 10 June 2019 (UTC)
- The Sun architectures were system architectures. Sun-1 was 68000-based, with a Sun MMU; Sun-1U/Sun-2 were 68010-based, with a Sun MMU; Sun-3 was 68020-based, with a Sun MMU; Sun-3x was 68030-based, with the on-chip MMU; Sun-4 was based on various SPARC processors, with a Sun MMU and a VMEbus (as earlier Suns did); Sun-4c was based on an LSI Logic SPARC processor, with a Sun-style MMU (as I remember) and an SBus; Sun-4e had the same CPU and MMU, but a VMEbus; Sun-4m was based either on SuperSPARC or hyperSPARC processors, with the on-chip in-memory page-table based Sun Reference MMU, and using the MBus for multi-processor systems; Sun-4d was similar, but used the XDBus for multi-processor systems; etc..
- So the differences affected the ISA in some cases (68010 -> 68020, SPARC v7 -> v8 -> v9), affected the MMU in some cases (Sun MMU -> on-chip MMUs of various sorts), and affected only system buses in other cases (VME -> SBus, MBus -> XDBus).
- Different Sun subarchitectures fall into different subcategories in Computer architecture#Subcategories:
- some involve the ISA even if you don't include the MMU (68010 -> 68020, SPARC v7 -> v8 -> v9);
- some involve the ISA if you include the MMU;
- some involve only system design (Sun-4m -> Sun-4d, which both used SuperSPARC);
- and they may involve microarchitecture if one subarchitecture uses one set of CPUs/MMU designs and another uses a non-overlapping set, but some subarchitectures used multiple CPUs with different microarchitectures.
- So I'm not sure where this would fit. Guy Harris (talk) 21:22, 10 June 2019 (UTC)
- Seems like two choices are an article of its own, or a section in this article. Gah4 (talk) 23:21, 10 June 2019 (UTC)
- Computer architecture seems largely to be talking about CPU architecture. It gives two meanings of "computer architecture". For the first meaning, it speaks of "describing the capabilities and programming model of a computer", and, for the second meaning, it speaks of "instruction set architecture design, microarchitecture design, logic design, and implementation". Both of those sound CPU-centric; they don't mention, for example, I/O buses, which are at least part of the Sun-4 subarchitectures (VMEbus vs. SBus vs. various flavors of PCI for peripherals, MBus vs. XDBus vs. whatever for multiprocessor systems).
- If we were to include I/O buses as part of the System/360 architecture, the architecture would be specified by at least two documents - the Principles of Operation and IBM System/360 I/O Interface Channel to Control Unit Original Equipment Manufacturers' Information plus whatever additional internal specifications they have. In that case, perhaps Bus and Tag vs. ESCON vs. FICON could be thought of as distinguishing subarchitectures of S/370, just as the I/O bus is one item distinguishing Sun-4 from Sun-4c from Sun-4e.
- In addition, most if not all commodity microprocessors have, at the CPU architecture level, very little in the way of initialization specified - typically, when the CPU is reset, it clears a bunch of CPU state, including MMU state, and jumps to a given location in physical memory. The rest is up to whatever's at that location, which would typically be some firmware in non-volatile memory. There may be system architectural specifications, either explicit or implicit, that govern the behavior of the firmware.
- For x86 processors, one such specification is "compatibility with the original PC BIOS plus whatever other stuff has been added on in various specifications such as Advanced Power Management, the MultiProcessor Specification, Plug and Play, and the Advanced Configuration and Power Interface". Another is the EFI specification, possibly with other industry specifications.
- For SPARC processors, Sun had their original boot firmware; I don't remember whether any specification was ever published for it. They replaced that with Open Firmware; I'm not sure whether Oracle are still using that on SPARC servers or if they've adopted EFI.
- For Alpha processors, the way the standard firmware worked was originally documented only in manuals available only inside DEC (DEC Semiconductor offered their own documented firmware to customers of Alpha chips). Eventually it was documented; some functions performed by hardware on some other processors, such as page-table walks, were implemented as traps to the PALcode part of the firmware on Alpha.
- For MIPS processors, there was at one point the Advanced RISC Computing specification.
- So there are CPU architecture specifications and system architecture specifications, with some system details relegated to the latter. Most user-mode programming only depends on the CPU architecture; OS development, and peripheral development, depends on the latter as well.
- So would subarchitecture specifications be system architecture specifications, based on higher-level CPU architecture (and perhaps partial system architecture) specifications, where the subarchitecture specification standardizes some aspects of the system not covered by the higher-level specifications? And would a given subarchitecture include all machines designed to conform to that subarchitecture's specification? If so, are there examples other than the ones for SPARC-based systems (and Sun's 68k-based systems)? Guy Harris (talk) 01:15, 12 June 2019 (UTC)
- I hadn't thought about the I/O bus, but in the Sun case that comes in when the initializing OS tries to find out which I/O devices are attached. There are sysgen options, including which device drivers to include in the kernel. Having actually done Sun sysgens some years ago, I am not so sure that is related to what Sun calls subarchitecture. For example, within the same subarchitecture there are systems based on the VME bus and ones that are SBus. Even more: because we used to use systems that did it, there was an SBus-to-VME adapter, to connect VME devices to SBus hosts. As I remember, the differences come in the /usr/kvm directory, which has symbolic links from the appropriate /usr/bin or /usr/sbin directory. In the case of diskless hosts, you NFS-mount /usr and the appropriate /usr/kvm from the server. One server can serve more than one subarchitecture, or even more than one architecture. (Years ago, I had a 4/110 running off a 3/260 NFS server.) I am not against including I/O systems in subarchitecture, but I believe it mostly doesn't apply to Suns. For IBM, there is XA/370 and then ESA/370, which, in addition to 31-bit addressing, have a completely different I/O system. There are some systems that can IMPL different microcode to run S/370, XA/370 or ESA/390. RISC-V has many optional parts, though I don't know that anyone describes them as subarchitectures. I don't know ARM well at all, but it might be that it has some. Gah4 (talk) 03:48, 12 June 2019 (UTC)
- Yes, I already mentioned the I/O buses; as I indicated in my initial reply, one of the differences between Sun-4 and Sun-4c was the I/O bus (VME for Sun-4, SBus for Sun-4c) and the one difference between Sun-4c and Sun-4e was the I/O bus (VME for Sun-4e - we used it at Auspex as a host processor, so it could work with our other VME boards). So I'm absolutely certain that the I/O bus is one of the components that distinguishes SPARC-based subarchitectures. Others include, as I noted, the MMU (which was not specified as part of SPARC v7 or SPARC v8, and only semi-specified in SPARC v9, although Oracle SPARC Architecture 2015 does specify it), the firmware (Sun-4c introduced OpenBOOT/Open Firmware), and bit-width (Sun-4u introduced 64-bit processors).
- So that particular notion of "subarchitectures" might be Sun-specific, meaning it can be handled on the Sun-3 and Sun-4 pages. Guy Harris (talk) 04:28, 12 June 2019 (UTC)
- The reason for the question was that I wanted to link to a description of subarchitecture from the Sun pages, assuming that it wasn't just Sun. For Sun, you needed appropriate install tapes, though the differences were small in some cases. Knowing about subarchitecture saved disk space on servers for diskless hosts, as you didn't have to duplicate the parts that were not different. For OS X, each version will say which machines it works with, and which it doesn't. Those differences are most likely subarchitecture, but not specifically mentioned. Disks are big enough now that we don't notice the wasted space of having to support more than one. I suspect that the differences show up when you try to boot off a disk that was meant for a different system. But maybe also the IA32 MMU hasn't changed over the years. Install systems figure out if they are installing 32-bit or 64-bit, which probably qualifies as a whole architecture, and don't need to know about subarchitecture. Gah4 (talk) 18:54, 12 June 2019 (UTC)
For Sun-3 and Sun-4, the subarchitectures required different kernels and may have required different versions of some platform-dependent system commands and libraries. The bulk of userland code didn't care.
For Macs, the only things that might qualify as "subarchitectures" would be based on the CPU type, e.g. 32-bit PowerPC, 64-bit PowerPC, 32-bit x86, 64-bit x86, and maybe the rumored 64-bit ARM in the future, but those have different instruction set architectures, so I don't see them as "subarchitectures". A given OS release includes all the code necessary to support the Macs it supports, with instruction-set differences handled by fat binaries with executable code for multiple ISAs. Apple eventually drops support for older machines, so they don't bother to include in a release drivers for peripherals that only appear in no-longer-supported machines, and they may compile with the compiler set to generate code using instruction set features that are in currently-supported machines but not in no-longer-supported machines, but it's not as if the dropped machines have a different subarchitecture from the supported machines; I didn't work in the low-level platform group, but I don't think there was any notion of "subarchitectures" at all similar to Sun's, just a notion of particular platform families and platforms within them, e.g. the machine on which I'm typing this is a MacBookPro11,5, with the family being "MacBookPro".
The only major changes to the IA-32 MMU were the addition of the Physical Address Extension (PAE) feature and of the NX bit. Apple never supported 32-bit machines that lacked PAE; if they ever supported machines without the NX bit, that would have been handled at runtime (in the pmap layer), so there weren't any separate kernels for no-NX and NX machines. Guy Harris (talk) 19:56, 12 June 2019 (UTC)
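As an aside on the fat binaries mentioned above: a macOS universal binary starts with a big-endian header (magic 0xCAFEBABE, then an architecture count) followed by one record per ISA, per Apple's <mach-o/fat.h> layout. A minimal sketch in Python; the cputype table below covers only a few common values, and the 64-bit fat format (magic 0xCAFEBABF) is not handled.

```python
# Sketch: list the ISAs inside a macOS "fat" (universal) binary by parsing
# its fat_header and fat_arch records (layout per <mach-o/fat.h>).
import struct

CPU_TYPES = {7: "i386", 0x01000007: "x86_64", 12: "arm", 0x0100000C: "arm64"}
FAT_MAGIC = 0xCAFEBABE

def list_architectures(path):
    with open(path, "rb") as f:
        magic, nfat_arch = struct.unpack(">II", f.read(8))
        if magic != FAT_MAGIC:
            return None  # thin (single-ISA) binary, or 64-bit fat format
        archs = []
        for _ in range(nfat_arch):
            # cputype, cpusubtype are signed; offset, size, align unsigned
            cputype, cpusubtype, offset, size, align = struct.unpack(
                ">iiIII", f.read(20))
            archs.append(CPU_TYPES.get(cputype, hex(cputype)))
        return archs

print(list_architectures("/bin/ps"))  # e.g. ['x86_64', 'arm64'] on recent macOS
```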
- I don't have a running Sun, but I do have a running NFS server with export files for diskless Suns. Looking in /export/exec/sun3/kvm, the main general-user commands are ps, pstat, and w. These need to look into some kernel-specific data structures - enough, it seems, that they are different for different subarchitectures. Also config, eeprom, and format, but those are not normally for general use. Using subarchitecture allows only the parts that have that dependence to be different. /usr/bin/ps is then a symbolic link to /usr/kvm/ps. Looking at an OS X system, there is /bin/ps. On the other hand, looking in /usr/bin there are some files specifying x86-64, some i386, and some dual-architecture. It seems that they don't install completely different file sets for 32-bit and 64-bit installs. Gah4 (talk) 21:23, 12 June 2019 (UTC)
- ps, pstat, and w might look at HAT layer ("MMU driver", similar to the Mach pmap layer I mentioned) data structures, which would differ between Sun-3 (Sun MMU) and Sun-3x (on-chip PMMU), and between Sun-4 (as I remember, 8KB-page Sun MMU), Sun-4c/Sun-4e (as I remember, 4KB-page Sun MMU), and Sun-4m/Sun-4d (SPARC Reference MMU); it's been a while.
- On macOS, however, 1) ps uses sysctls that should hide whatever per-platform dependencies exist, and 2) there aren't, as far as I know, any such dependencies in any case. Whether a binary is fat or not, and how fat it is, might depend on the build process for that particular program; there's no reason to ship fat binaries in recent versions of macOS, as they only run on 64-bit x86 Macs, but maybe nobody got around to changing the build rules for those particular programs. That will probably change in Catalina, as fat libraries aren't going to be shipped, because support for 32-bit applications is being dropped. Guy Harris (talk) 21:55, 12 June 2019 (UTC)
- So, one of the reasons to use Sun-style subarchitecture is to save disk space, and also to keep the kernel small - both more important in the Sun days than now. Otherwise, I believe that changes to the user-mode instruction set that aren't a whole new architecture would qualify as subarchitectures. Back in S/360 days, there were the commercial instruction set (decimal) and the scientific instruction set (floating point), which were each optional on some models. IBM ESA/390 and System/z have, over the years, added instructions. Users (and compiler writers) then have to decide when to support the new ones. The IBM term for this seems to be ARCHLVL, I suspect for "Architectural Level". These are changes to the user-mode instruction set. Gah4 (talk) 21:50, 12 June 2019 (UTC)
- So there's "subarchitectures" in the sense of system architecture differences that don't affect normal user-mode code (Sun-3 vs. Sun-3x, Sun-4 vs. Sun-4c vs. Sun-4e vs. Sun-4m vs. Sun-4d), and there's "subarchitecture" in the sense of instruction set architecture differences in the form of backwards-compatible additions (SPARC v7 -> SPARC v8, IA-32 from the 80386 to the 32-bit Pentium 4's, x86-64 from the first Opterons/Pentium 4s to the current AMD and Intel processors, S/370 picking up various instructions, z/Architecture ARCHLVL updates, ARM vX.Y, various MIPS/PA-RISC/Alpha extensions, etc.).
- S/360 was an example of a third case, where some instructions are add-on options; the PDP-11 had that as well. That continued into the microprocessor era until floating-point units got incorporated into the CPU chip.
- I'd consider 32-bit -> 64-bit as a fourth case; it's an "addition" but it's a lot bigger than, say, various streaming SIMD extensions.
- So I don't see any straightforward single unifying concept of "subarchitectures" here. Guy Harris (talk) 22:07, 12 June 2019 (UTC)