
Talk:Machine code


Untitled


In a computer, how do you convert a high-level language to machine language?

With a compiler. --Mike Van Emmerik 21:20, 26 October 2005 (UTC)[reply]
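For a concrete illustration of that answer: below, the C function is the high-level source, and the hex bytes in the trailing comment are the machine code a compiler emits for it. The exact bytes depend on the compiler, target, and options; the ones shown are what gcc -O2 typically produces for x86-64 (System V ABI), so treat them as an example rather than the only possible translation.

    /* High-level source: */
    int add(int a, int b) {
        return a + b;
    }

    /* Compiling with `cc -O2 -c add.c` and disassembling typically yields:
     *
     *   8d 04 37    lea eax, [rdi + rsi]   ; eax = a + b
     *   c3          ret                    ; return result in eax
     *
     * The four bytes 8d 04 37 c3 are the machine code; the mnemonics to
     * their right are the corresponding assembly language. */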

Untitled 2


It's not clear to me: is it the consensus that "machine language" is the same thing as "machine code"? Or is "machine language" a bit more like a grammar, and machine code only like "sentences" (programs or modules) expressed in that language? Or perhaps the language is a bit like an enum: you could talk about the Z80 language or the MIPS language, so while there is one Z80 language, there are many Z80 machine codes (compiled or assembled Z80 programs)? I think it would be good to spell this out in the article, which seems to use the two terms more or less interchangeably. --Mike Van Emmerik 21:20, 26 October 2005 (UTC)[reply]

Also, is there a consensus that instruction set is the same thing as "machine language"? The terminology makes it sound analogous to several natural languages being written out in some character set. But when someone talks about two different "machine languages", that always means he's talking about two different "instruction sets", in my experience. I often hear the phrase "written in machine language", usually meaning that some person typed in assembly language. When I hear "machine code", the speaker is usually pointing out a block of hexadecimal numbers generated by a compiler or an assembler. Sometimes I hear "some machine code" or "the machine code for this program", so I think you are right. It's analogous to "some English text" or "the English text for this document". But "there are many Z80 machine codes" doesn't sound quite right to my ears. "There is a lot of Z80 machine code" sounds better -- I wish I could put a finger on exactly why. -- User:DavidCary --70.189.75.148 06:13, 5 February 2006 (UTC)[reply]

Assembly language vs. symbolic machine language


I am in doubt whether these are the same. I believe that assembly language is the language actually used for coding to the assembler. On the other hand, you do not code in symbolic machine language but use it for examining code; i.e., instead of reviewing the assembler output as pure hexadecimal, you can (for learning purposes) write it in symbolic machine code, where at least all the opcodes are replaced by mnemonics. Symbolic machine code is not mentioned in either Machine code or Assembly language. Velle 17:46, 23 March 2006 (UTC)[reply]

Revision of 8th November


This revert by Karol appears to have been to a much earlier version, and happened to overlap with Tobias' reversion. --Mike Van Emmerik 22:27, 7 November 2005 (UTC)[reply]

Amended


I added some text at the beginning. --VKokielov 04:06, 31 May 2007 (UTC)[reply]

...and then elsewhere, and then moved it around and trimmed it and... eventually ended up with the "magazine clipping" paragraph back in basically the same place, in exactly the same words, but without the second half. (In fairness, the second half was phrased in more subjective value-laden terms than the first half, but even the first half feels like opinion or a pet idea.)
It is sometimes perceived that machine languages are more fundamental than other computer programming languages.
Citation needed.
They are not; the power of a programming language has been shown to depend on the power of the underlying machine; see Turing machine.
Okay, but you're using the word "fundamental" in two different senses, then. Machine languages may not be more Turing-complete (any more than someone can be "more pregnant"), but they are more fundamental, more basic, precisely because they form the basis and fundament of modern computer programming. These days, you practically can't have a computer without that computer using some kind of machine language. (Obviously any machine has a "machine language" in the most general sense, but I mean "machine language" as described in this article: opcodes and registers and memory addressing modes, things that look familiar to anyone who knows any machine language.)
The structure of machine languages is a consequence of the necessity of simple elements in the electronic design of computers.
Now this is an interesting thesis. Can you support it? In particular, what is the structure of machine languages? I've given some common themes above (registers, opcodes, addressing modes), and I might add prefix-free coding; but none of those seem to be theoretically essential to computer architecture. And then, some symbolic machine languages (TI's C6000 DSP comes to mind) end up looking much more daunting and quirky than their actual architectures; so are you talking about languages or architectures, or can we consider the two concepts synonymous?
And once the structure of machine languages has been elucidated, it would be nice to list exactly what "simple elements" of architecture design you're talking about; and then how those elements necessitate that particular structure.
I generally agree with your theses (although I feel you're being tricky with sentence 2), but I don't think you can support them yourself, and I don't know of any definitive works on the subject, so I wouldn't even be able to say, "Seymour Cray says that the structure of machine languages is..." "The Structure of Machine Languages" sounds like a really interesting historical survey project, though. Any PhD candidates looking for a thesis topic? ;) --Quuxplusone 05:19, 31 May 2007 (UTC)[reply]
You win.  ;) --VKokielov 10:45, 31 May 2007 (UTC)[reply]

Differences


Are programs that need a kernel to run written in machine code, or is it an OS-specific format? --Doomguy0505 10:29, 10 November 2007 (UTC)[reply]

What it is used for


The article talks about almost everything except the use of machine language or machine code, and who programs in it. Is the compiler programmed in machine code, or part of it? Or are some parts of the OS programmed in machine code? etc. Salem F (talk) 23:29, 12 October 2009 (UTC)[reply]

Sorry Salem F I do not quite understand what you are saying here. Were you asking a question? Can you please try saying it with different words? --220.101.28.25 (talk) 18:45, 26 October 2009 (UTC)[reply]
Yes, I do not understand the question, either. Not many people program with machine code unless they have masochistic tendencies. So, I think Salem must have meant to ask something else. stmrlbs|talk 00:59, 27 October 2009 (UTC)[reply]
  • I think my question was clear. I'll say it another way:

Can we have more examples of programs that were written in machine code? I know some programs were written in machine code in the old days, but now assembly language has taken its place forever (see the uses of assembly language). --Salem F (talk) 20:44, 29 October 2009 (UTC)[reply]

If the question is "who writes directly in machine code, rather than in assembly language or a higher-level language?", the answer is "almost nobody" - even for low-level machine-dependent parts of OS code, that's almost always written in assembly language, as most programmers don't have, as per the earlier comment, masochistic tendencies so strong that they aren't even willing to use an assembler.
Some software writes machine code, such as assemblers, compilers that directly generate machine code rather than generating assembly code and handing it to an assembler, just-in-time compilers, and so on. Guy Harris (talk) 21:42, 10 July 2024 (UTC)[reply]
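As a sketch of that last point, here is a minimal program that writes machine code at run time and then executes it, in the spirit of a just-in-time compiler. It assumes a POSIX system on x86-64 (System V calling convention) and, for brevity, ignores the W^X page-protection hardening that real JITs must deal with.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* Machine code for: int add(int a, int b) { return a + b; }
         *   8d 04 37   lea eax, [rdi + rsi]
         *   c3         ret                                          */
        unsigned char code[] = { 0x8d, 0x04, 0x37, 0xc3 };

        /* Get a page we are allowed to both write and execute. */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return 1;
        memcpy(page, code, sizeof code);

        /* Call the freshly generated machine code as a function. */
        int (*add)(int, int) = (int (*)(int, int))page;
        printf("%d\n", add(2, 3));   /* prints 5 */

        munmap(page, 4096);
        return 0;
    }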

Differences between "byte" code and "machine" code


There isn't any. The only difference is that machines that understand "byte" code are often implemented in software, while machines that understand "machine" code are often implemented in hardware; but there is no reason why a "byte" code machine cannot be implemented in hardware or a "machine" code machine in software. In fact, there are plenty of examples of both.

I suggest the two articles (this one and bytecode) be merged and this point be clarified. 13:23, 17 June 2010 (UTC) —Preceding unsigned comment added by FrederikHertzum (talk · contribs)

Strongly oppose merging in the proposed way, at least because bytecode has 18 interlanguage links to articles (meaning that these topics are separate in 18 other languages). Whether bytecode should be considered a special case of machine code is, of course, disputable. And there is confusion between bytecode as a concept and Java bytecode, though. Incnis Mrsi (talk) 14:36, 20 June 2010 (UTC)[reply]
I don't believe I have made any suggestions as to how this should be merged (although I do see your point). Java bytecode is simply one machine language, which is used in the Java machine and as such is a type of "bytecode" or machine code. That there is no technical difference between the terms should at least be clarified in both articles, if they are not merged. 80.167.145.223 (talk) 03:24, 21 June 2010 (UTC)[reply]
I don't see how Java bytecode, which is machine independent, can be mistaken for machine code - which is obviously machine (i.e., processor) dependent. Java bytecode is interpreted by a Java Virtual Machine. Machine code is interpreted by a processor. —Preceding unsigned comment added by 121.214.29.70 (talk) 15:45, 3 July 2010 (UTC)[reply]
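To make the "machine implemented in software" point above concrete, here is a minimal sketch of a hypothetical three-opcode stack bytecode and the loop that interprets it. The dispatch loop plays exactly the role that hardware plays for native machine code; nothing prevents the same opcode set from being implemented in hardware instead.

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_HALT };   /* hypothetical opcode set */

    int run(const unsigned char *bc) {
        int stack[16], sp = 0;
        for (;;) {
            switch (*bc++) {
            case OP_PUSH: stack[sp++] = *bc++;            break; /* push literal    */
            case OP_ADD:  sp--; stack[sp-1] += stack[sp]; break; /* pop 2, push sum */
            case OP_HALT: return stack[sp-1];                    /* result on top   */
            }
        }
    }

    int main(void) {
        /* A "program" in the bytecode: push 2, push 3, add, halt. */
        const unsigned char prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
        printf("%d\n", run(prog));   /* prints 5 */
        return 0;
    }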

The edit by user:Beland


[1] what was wrong with two distinct sections about two opposite data transformations? Incnis Mrsi (talk) 17:27, 24 January 2013 (UTC)[reply]

The edit of 174.94.3.167


Although definitely a good-faith edit, I opted to remove it because of

which is, at best, ambiguous. I would say that it is rubbish, because any executable program represents a “use of machine code”. Incnis Mrsi (talk) 07:20, 25 May 2013 (UTC)[reply]

Yes, there is much confusion here. The last sentence of the intro, by using the word "typically", leaves the impression that a hardware processor may not need machine language to operate. And the first paragraph after that should say "electronic" rather than "physical" design. 74.76.137.34 (talk) 16:34, 13 September 2015 (UTC)[reply]

I'm not sure I see the confusion here - the last sentence says that the interpreter is typically machine code, which is certainly true. But there is nothing at all preventing someone from writing an interpreter in an interpreted language (in fact it's happened often), but performance will usually be poor. Rwessel (talk) 04:11, 14 September 2015 (UTC)[reply]
And as regards physical vs. electronic, electronics are the most common way to implement a CPU, but are hardly a requirement. Pneumatic and hydraulic "logic circuits" certainly exist (and are used in mechanical systems to control the operation of devices), and one could, in principle, build a computer out of such things. Babbage's Analytical Engine, for example, had it been built, would have been entirely mechanical. Rwessel (talk) 04:17, 14 September 2015 (UTC)[reply]

Relevance of Berkeley Law professor's opinion on human readability


The question is in the title.
Do you think that computing people must agree with the law professor's opinion? FelixHx (talk) 19:03, 17 May 2024 (UTC)[reply]

Changed link from Computer code article to computer program article


Regarding this edit, I misspoke in my comment. My comment should say, "Computer code article is now redirected to Source code article, which doesn't apply here." Timhowardriley (talk) 20:53, 30 June 2024 (UTC)[reply]

The role of auxiliary files in Machine code#Readability by humans


The IBM High Level Assembler (HLASM) has an ADATA option directing it to produce an Associated data file output, containing data describing the contents of both the source and object files. The available debuggers for, e.g., z/OS, have the ability to display the source line corresponding to an instruction of interest. However, the ADATA file itself is not human friendly. Should Machine code § Readability by humans mention it? -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 12:46, 8 July 2024 (UTC)[reply]

That sounds like a separate "debug symbols" section. Windows also uses them[1] in separate .pdb files.[2] Most UN*Xes put debug symbols into the symbol table of an object or executable file, in a format such as stabs or DWARF, but Apple's OSes also have the notion of putting debugging symbols (in DWARF format) into a separate .dSYM file.
I don't think any UN*X assemblers can generate debug symbols, except with explicit pseudo-ops used by compilers that generate assembler code and rely on the assembler to produce object files, as assembly-language programming is rare on UN*Xes. I don't know whether Windows assemblers do; in 16-bit Windows, (x86) assembly-language programming may have been common, but I suspect assembly-language programming on Windows became less common over time. A lot more is probably done on OS/360's successors, even now, so having the assembler generate debug symbols may be more useful.
It looks as if ADATA files (which can be generated by language processors other than HLASM) can also contain a copy of the source code, at least for assembly language.[3] UN*X and Windows debuggers assume you have the source code handy, and just have debug symbols to associate instructions, or ranges of same, to a particular line in a particular source file.
So mentioning debug symbols in this context might be informative, but it shouldn't assume that they're in a separate file, and the details can be left to the debug symbols page. Guy Harris (talk) 19:39, 8 July 2024 (UTC)[reply]
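As a small illustration of the UN*X model just described (the file name, code address, and line number below are hypothetical; real values depend on the toolchain):

    /* hello.c */
    #include <stdio.h>

    int main(void) {
        printf("hello\n");   /* suppose this is line 5 of hello.c */
        return 0;
    }

    /* Build with debug info, then ask the line table about an address:
     *
     *   $ cc -g -o hello hello.c
     *   $ addr2line -e hello 0x1139     # 0x1139: a hypothetical code address
     *   hello.c:5
     *
     * The DWARF (or stabs) data stores only the address-to-file:line
     * mapping; the debugger still needs the source file itself, unlike an
     * ADATA file, which can embed a copy of the source. */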
@Guy Harris: I've updated Machine code and Debug symbol to include information on ADATA. How much additional detail should I include, e.g., link to format? Is anyone willing to add information on other formats? -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 12:55, 10 July 2024 (UTC)[reply]
@Chatul: For machine code, explicitly mentioning that ADATA files can include source code, with a link to the format given as a reference, would probably suffice, as the topic of that article is machine code, not debug symbols.
For debug symbol, links to the ADATA format (and whatever format the TEST option used) would be useful. For other formats, I'm a bit amazed that the word "dwarf", regardless of capitalization, appears nowhere in the article; that needs to be fixed. (Same with stabs.) Information on whatever format Microsoft uses, or a link to a page about that, should probably also be added. Guy Harris (talk) 20:54, 10 July 2024 (UTC)[reply]
Tools and methods to make machine code readable would be interesting. However, the current first sentence seems like a debate about whether or not to patent machine code. It isn't interesting. It is also a run-on sentence. Timhowardriley (talk) 22:12, 8 July 2024 (UTC)[reply]
See also § Relevance of Berkeley Law professor's opinion on human readability. Guy Harris (talk) 21:37, 10 July 2024 (UTC)[reply]

References

  1. ^ "Symbols for Windows debugging". Microsoft Learn.
  2. ^ "Querying the .Pdb File". Microsoft Learn.
  3. ^ "Associated Data Architecture". High Level Assembler and Toolkit Feature.

Unexplained removal of text


@Timhowardriley: Edit special:permalink/1245774947 removed the paragraph Early CPUs had specific machine code that might break backward compatibility with each new CPU released. The notion of an instruction set architecture (ISA) defines and specifies the behavior and encoding in memory of the instruction set of the system, without specifying its exact implementation. This acts as an abstraction layer, enabling compatibility within the same family of CPUs, so that machine code written or generated according to the ISA for the family will run on all CPUs in the family, including future CPUs. I believe that the first sentence is relevant and should be restored, possibly with different wording. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:29, 15 September 2024 (UTC)[reply]

This edit is the one you're thinking of. I think that information should be mentioned, but I'm not sure it needs to be mentioned in the lead. Guy Harris (talk) 18:54, 15 September 2024 (UTC)[reply]
The topic sentence in the paragraph talks about non-backward compatibility. However, X86#History says, "Many additions and extensions have been added to the original x86 instruction set over the years, almost consistently with full backward compatibility." Since the x86 series started in 1978, the "early" backward compatibility problem ended 46 years ago. Also, the backward compatibility problem doesn't help describe what machine code is. The second sentence in the paragraph changes the topic to ISA. Continuity is missing. The first time I read the article, I stopped reading here. Timhowardriley (talk) 19:38, 15 September 2024 (UTC)[reply]

Is bytecode interpreted to machine code?


The last sentence in the lead says that bytecode is interpreted ... to the host computer's machine code. I think the verb phrase "interpreted to" doesn't apply here. My understanding is the application has a bytecode interpreter that executes the instructions. Other articles call the bytecode interpreter a "virtual machine". I think the article should spell this out. However, this is a lot of detail for the reader to digest. If a change is made, then the entire paragraph should be moved from the lead to a new section. Timhowardriley (talk) 20:12, 15 September 2024 (UTC)[reply]

The last sentence in the lead says "Bytecode is then either interpreted or compiled to the host computer's machine code." This can either be interpreted as "Bytecode is then either {interpreted} or {compiled to the host computer's machine code}." or as "Bytecode is then either {interpreted or compiled} to the host computer's machine code." The former is what is intended.
"Bytecode either is interpreted or is compiled to the host computer's machine code." might be less ambiguous. Guy Harris (talk) 20:22, 15 September 2024 (UTC)[reply]
Okay. It seems like you agree with my understanding that interpreting bytecode requires another program that may or may not produce machine code. For example, the Java virtual machine is a Java bytecode interpreter and has implementations that don't produce machine code. To produce machine code, the virtual machine must execute a just-in-time compiler. But what I just said applies to the Java language only. I'm sure other languages have different methods to compile or not compile their bytecodes to machine code. The nuanced fact that interpreting bytecode may not produce machine code makes me think that bytecode shouldn't be mentioned in the lead. Timhowardriley (talk) 00:22, 16 September 2024 (UTC)[reply]
It seems like you agree with my understanding that interpreting bytecode requires another program that may or may not produce machine code. I agree with your understanding because that was my intent when I wrote that, even before you expressed that understanding. :-)
(Although machines can be built whose machine code is some language's bytecode.)
Some languages that are translated to bytecode may have implementations that never translate that bytecode to machine code, and some may even have no implementations that do translate bytecode to machine code. Java's not at all special in that regard.
And, yes, perhaps the lead - and perhaps the article as a whole - should leave discussion of interpreted languages to interpreter (computing) and discussions of bytecode to bytecode. JIT compilation is just another form of compilation, so it is arguably covered by "A high-level program may be translated into machine code by a compiler." Guy Harris (talk) 08:59, 16 September 2024 (UTC)[reply]
Just insert a comma (Bytecode is then either interpreted, or compiled to the host computer's machine code.) and there will be no ambiguity. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 07:41, 16 September 2024 (UTC)[reply]
However, there are two issues. 1) Remove the ambiguity. 2) Introduce the reader to machine code without confusion. When introducing, extraneous information is confusing. The fact that bytecode can be interpreted is extraneous to machine code. Timhowardriley (talk) 08:54, 16 September 2024 (UTC)[reply]

machine code, machine language and instruction


I feel that the article still lacks a clear definition of machine language, machine code, and instruction, and I hope someone can improve it. In order to explain, I have added an image.

This image was inspired by Digital Design and Computer Architecture

Machine code#/media/File:Machine language and assembly language.jpg ShiinaKaze (talk) 15:02, 24 September 2024 (UTC)[reply]

Examples


Should there be more than two instruction sets in § Examples and, if so, what criteria are appropriate? New versus old? Binary versus decimal? CISC versus RISC?

I was considering adding some or all of

but am not sure how many are needed, if any. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:47, 21 February 2025 (UTC)[reply]

New features


§ Instruction set incorrectly states A processor's instruction set is limited by a computer's architectural features. If a feature exists, then instructions are necessary to exploit it. Examples of architecture features that require instructions: I qualified it and added a footnote, In some cases a new feature may only require new register bits or fields, and Timhowardriley reverted the change with the comment Fastidious note is out of scope. The text should either be corrected or removed. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:32, 21 March 2025 (UTC)[reply]

@Chatul and Timhowardriley: Perhaps the issue here is what "require instructions" means. It may be that User:Timhowardriley means that, to use an architectural feature in a program, the machine code must contain one or more instructions that use that feature. He didn't appear to say that new features necessarily require new instructions in the sense of "an unused opcode value must be assigned to a new instruction". Guy Harris (talk) 17:43, 21 March 2025 (UTC)[reply]
I interpreted the text as meaning that to add a feature you must add an instruction, which is incorrect. Sometimes a new feature only adds, e.g., a new field in a control register or a new bit in an operand. Perhaps the wording could be made less ambiguous. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 19:43, 21 March 2025 (UTC)[reply]

Instruction set - address field


@Timhowardriley: The text uses the term address field ambiguously. On some machines there is a field that can contain a complete address, but other machines use truncated addressing, and the fields in the instructions are substantially shorter than an address; e.g., on z/Architecture, an instruction with a 12-bit displacement running in AMODE 64 uses 64-bit virtual addresses, constructed from 64-bit general registers and 32-bit access registers. The text should clearly distinguish the size of instruction fields and the sizes of computed addresses.

This may be TMI, but originally on the IBM System/370, virtual and physical addresses were both 24 bits. Shortly prior to S/370-XA, IBM offered an extended addressing option that allowed page table entries to reference up to 64 MiB of storage, although address spaces were still limited to 16 MiB of virtual storage. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:33, 21 March 2025 (UTC)[reply]

I have never programmed at this level, so I'm not qualified to nuance the jargon. I just go by what's in my college textbook on the subject. Tanenbaum clearly means to call the bits to the right of the opcode that contain an address the "address field". He writes, "A fourth criterion [of an instruction format] concerns the number of bits in an address field." Tanenbaum is clear he's not describing the address space here. In the section on Paging he writes, "The number of addressable words depends only on the number of bits in an address and is in no way related to the number of memory words actually available. The *address space* ... is the set of possible addresses." In this Wikipedia article, the Instruction set section is describing characteristics of an instruction format. A major characteristic of an instruction format is the address field. The address field is explained in a paragraph and is meant to refer to the bits to the right of the opcode. Timhowardriley (talk) 01:21, 22 March 2025 (UTC)[reply]
On some architectures there are other fields between the opcode and the address field, and on some architectures there is no address field. Most commonly there are fields too short to contain an address, from which addresses are computed. Addressing modes goes into the alternatives, but some simplified examples are
Burroughs B5000
Operand Call and Descriptor Call syllables contain small offsets into the Program Reference Table or a stack frame.
B6500
Value Call and Name Call syllables contain a 14-bit field split between a lexical level and an offset into a stack frame.
IBM System/360
Depending on the opcode, the halfword containing the opcode may be followed by one or two halfwords, each containing a 4-bit general register number (base register) and a 12-bit displacement. The actual address is bits 8-31 of the sum of the general register[a][b] and the displacement (for the 360/67 in 32-bit mode, the entire 32-bit sum).
Other common architectures use truncated addressing with similar nomenclature to that of S/360. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:32, 23 March 2025 (UTC)[reply]
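A minimal sketch, in C, of the System/360 base-plus-displacement computation described above: it decodes the fields of an RX-format instruction (bits 0-7 opcode, 8-11 R1, 12-15 X2, 16-19 B2, 20-31 D2) and forms the 24-bit effective address. The instruction and register values in main are hypothetical; note that the 12-bit displacement field is far too small to hold an address by itself.

    #include <stdint.h>
    #include <stdio.h>

    /* Effective address of an RX-format instruction on a 24-bit S/360. */
    uint32_t rx_effective_address(uint32_t insn, const uint32_t gpr[16]) {
        uint32_t x2 = (insn >> 16) & 0xF;   /* index register number         */
        uint32_t b2 = (insn >> 12) & 0xF;   /* base register number          */
        uint32_t d2 = insn & 0xFFF;         /* 12-bit displacement           */
        uint32_t xv = x2 ? gpr[x2] : 0;     /* register 0 as index reads 0   */
        uint32_t bv = b2 ? gpr[b2] : 0;     /* register 0 as base reads 0    */
        return (xv + bv + d2) & 0xFFFFFF;   /* bits 8-31 of the sum (24-bit) */
    }

    int main(void) {
        uint32_t gpr[16] = {0};
        gpr[12] = 0x012000;                  /* hypothetical base address    */
        uint32_t insn = 0x5830C004;          /* L 3,4(0,12): load GPR 3      */
        printf("%06X\n", rx_effective_address(insn, gpr));  /* prints 012004 */
        return 0;
    }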
@Chatul: No, not TMI - that's an example of a system where the virtual address size is smaller than the physical address size; it looks as if some System/390 models may have had that as well (31-bit virtual address size, but more than 2GB of physical memory).
The PDP-11/45 and PDP-11/70 are other examples (16-bit virtual addresses, 18-bit or 22-bit physical addresses). The VAX-11/780 had 32-bit virtual addresses and 29-bit physical addresses, for an example of a machine where the virtual address size is larger than the physical address size. Guy Harris (talk) 01:46, 22 March 2025 (UTC)[reply]
@Timhowardriley: The size of the address field may or may not be related to the size of physical or virtual memory; IBM System/360 memory-reference instructions have a displacement field that's only 12 bits long, with the operand address being the sum of the displacement field, the contents of a base register specified by a 4-bit register field and, for RX instructions, the contents of an index register specified by another 4-bit register field. Physical addresses were 24 bits; there is no field in an instruction capable of storing a full physical address. The System/360 Model 67 also supported virtual addressing; it had 32-bit virtual addresses, but the instruction set was, other than a few instructions to handle virtual memory, the same as on other models, so there was no field capable of storing a full virtual address, either. This continued until z/Architecture, which I think included some instructions with larger displacement sizes, but I'm not sure any supported a full 32-bit virtual address, much less a full 64-bit virtual address. Guy Harris (talk) 01:46, 22 March 2025 (UTC)[reply]
Most if not all RISC architectures also don't have an address field large enough to hold a 32-bit virtual address, as they tend to have 32-bit instructions. For branch instructions they may either have relative displacements for the target or have an absolute word address (as instructions are 32-bit aligned) with the low-order bits used to store an opcode (I think SPARC has a procedure call instruction that works that way). For load and store instructions, the address is constructed from a register and a displacement, similar to System/3x0.
So if a textbook speaks of instructions having an address field whose size is determined by the maximum physical or logical/virtual address, rather than a field that's used to construct an operand address, that's an oversimplification presumably done for pedagogical reasons. Guy Harris (talk) 01:46, 22 March 2025 (UTC)[reply]
The Sixth Edition of Structured Computer Organization speaks of an "address field" in chapter 5 (The Instruction Set Architecture Level), but does not say that field necessarily contains a physical or virtual address; depending on the addressing mode, it may contain a register number, a register number and an offset, two registers and an offset, etc. The part of that chapter that begins "A third criterion concerns the number of bits in an address field." doesn't say the number of bits in an address field must allow that field to contain a memory address; it's discussing the minimum size of an addressed location, e.g. an 8-bit byte, a 16-bit item, a 32-bit item, etc., and notes that the smaller that minimum size, the more bits needed in order to address a memory of a given size in bits. This means, for example, that an address field with an offset would have to be 16 bits long in order to support an offset of 16K 32-bit words on a byte-addressed machine but would only need to be 14 bits long in order to support an offset of 16K 32-bit words on a 32-bit-word-addressed machine.
I.e., that passage isn't saying the address field size depends on the size of memory, it's saying it depends on the granularity of addresses. Guy Harris (talk) 09:29, 22 March 2025 (UTC)[reply]
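The arithmetic behind that passage, as a toy sketch: compute how many address-field bits are needed to span 16K 32-bit words, depending on the addressable unit.

    #include <stdio.h>

    /* Smallest b such that 2^b >= units, i.e. ceil(log2(units)). */
    int bits_needed(unsigned long units) {
        int b = 0;
        while ((1UL << b) < units)
            b++;
        return b;
    }

    int main(void) {
        printf("byte-addressed: %d bits\n", bits_needed(16384UL * 4)); /* 16 */
        printf("word-addressed: %d bits\n", bits_needed(16384UL));     /* 14 */
        return 0;
    }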
Wikiediting is an eye-opening experience. Timhowardriley (talk) 15:10, 22 March 2025 (UTC)[reply]
For the S/360 through z
S/360 other than 360/67
12-bit displacement, 24-bit real
360/67 in 24-bit mode
12-bit displacement, 24-bit virtual, 24-bit real
360/67 in 32-bit mode
12-bit displacement, 32-bit virtual, 24-bit real
S/370 without extended addressing
12-bit displacement, 24-bit virtual, 24-bit real
S/370 with extended addressing
12-bit displacement, 24-bit virtual, 26-bit real
S/370-XA
12-bit displacement, 31-bit virtual, 31-bit real
S/370-ESA not in AR mode
12-bit displacement, 31-bit virtual, 31-bit real
S/370-ESA in AR mode
12-bit displacement, 31-bit[c] virtual, 31-bit real
ESA/390 not in AR mode
12-bit displacement, 31-bit virtual, 31-bit real
ESA/390 in AR mode
12-bit displacement, 31-bit[c] virtual, 31-bit real
ESA/390 extension after Z
Relative instructions have 16-bit signed halfword offsets
Relative long (RL) instructions have 32-bit signed halfword offsets
z/Architecture
12-bit displacement, 24-, 31- or 64-bit virtual, 64-bit real
Relative instructions have 16-bit signed halfword offsets
"Yonder" (Y) instructions have 20-bit offsets
Relative long (RL) instructions have 32-bit signed halfword offsets
The manual may have gotten a little bigger. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:32, 23 March 2025 (UTC)[reply]
Regarding The text should clearly distinguish the size of instruction fields and the sizes of computed addresses.: I understand now and agree. What you called "computed addresses", the book calls "memory space". The size of the instruction field is independent of the computed address. Timhowardriley (talk) 04:51, 23 March 2025 (UTC)[reply]
"Independent of the computed address" presumably meaning "independent of the size of computed addresses", e.g. 16 bits on machines with 16-bit operand addresses (whether physical addresses, if no address mapping is done, or logical/virtual addresses, if address mapping is done), 12 bits on machines with 12-bit operand addresses, 18 bits on machines with 18-bit operand addresses, 32 bits on machines with 32-bit operand addresses, 64 bits on machines with 64-bit operand addresses, etc.. Guy Harris (talk) 07:44, 23 March 2025 (UTC)[reply]
Meaning something like "The size of a physical or virtual address is independent of the sizes of the instruction fields used to compute (form?) that address". -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 21:32, 23 March 2025 (UTC)[reply]

