Talk:Instruction-level parallelism

Hi:

The first sentence of the last paragraph of instruction level parallelism says "As of 2004, the computer industry has hit a roadblock in getting further performance gains from ILP". I am wondering what "roadblock" refers to. Does it refer to software techniques or hardware techniques? From which papers/reports/experiences/perspectives did the author draw this conclusion? I am a student and curious about it.

Thanks very much!

John

The roadblock that I was referring to was the difference in operating frequencies between the CPU and main memory. CPUs now run at multiple gigahertz (1 cycle << 1 nanosecond), while the access times of DRAMs are still in the range of ~50 nanoseconds. The result is that any memory reference that misses in all of the on-chip caches will force the CPU to incur a penalty of hundreds of cycles. None of the techniques that exploit ILP can overcome this very large discrepancy. That's why a PC with a 4 GHz CPU is only marginally faster than one with a 3 GHz CPU. Dyl 23:45, 28 November 2005 (UTC)
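To make the arithmetic above concrete, here is a back-of-the-envelope model in C. The ~50 ns DRAM latency comes from the comment above; the instruction count, miss rate, and the 1-instruction-per-cycle-on-hit assumption are illustrative guesses, not measured figures:

    #include <stdio.h>

    int main(void) {
        const double dram_ns      = 50.0;        /* DRAM access time cited above   */
        const double insns        = 1e6;         /* assumed workload size          */
        const double miss_rate    = 0.01;        /* assumed all-cache miss rate    */
        const double clocks_ghz[] = {3.0, 4.0};

        for (int i = 0; i < 2; i++) {
            double cycle_ns = 1.0 / clocks_ghz[i];       /* 0.33 ns at 3 GHz, 0.25 ns at 4 GHz */
            double penalty  = dram_ns / cycle_ns;        /* cost of one miss, in cycles        */
            /* Assume 1 instruction per cycle on cache hits; the stall time for
               misses is fixed in nanoseconds, so it does not shrink as the
               clock gets faster. */
            double runtime_ns = insns * cycle_ns + insns * miss_rate * dram_ns;
            printf("%.0f GHz: miss penalty = %.0f cycles, runtime = %.0f us\n",
                   clocks_ghz[i], penalty, runtime_ns / 1000.0);
        }
        return 0;
    }

With these (assumed) numbers, a miss costs 150 cycles at 3 GHz and 200 cycles at 4 GHz, and the 33% clock increase buys only about an 11% reduction in runtime (833 us vs. 750 us), because the stall time never shrinks.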

Dyl, please explain why you reverted my edit. The memory wall can limit performance, but it does not limit ILP. Software is ultimately what determines ILP. The industry isn't shifting to TLP because of the memory wall; it's because ordinary code doesn't parallelize well. the1physicist 23:56, 18 May 2006 (UTC)
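To illustrate the claim that ILP is determined by the software: the two functions below (a hypothetical sketch, not from the article) execute the same number of additions, but only the second exposes parallelism that a wide machine could exploit:

    /* Serial dependency chain: iteration i needs the result of iteration
       i-1, so the additions cannot overlap no matter how wide the CPU is. */
    double reduce_sum(const double *x, int n) {
        double acc = 0.0;
        for (int i = 0; i < n; i++)
            acc += x[i];          /* each add waits on the previous one */
        return acc;
    }

    /* Independent operations: every element-wise add can issue in parallel,
       so a superscalar or vector machine can retire several per cycle. */
    void vec_add(double *out, const double *a, const double *b, int n) {
        for (int i = 0; i < n; i++)
            out[i] = a[i] + b[i]; /* no dependence between iterations */
    }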

Your diagnosis of limited ILP in "normal" software as the reason TLP/MP is being used instead of more ILP-heavy techniques is not correct. The IPC (instructions per cycle) is first limited by the memory wall. That is, the main reason wider/faster machines are not being built is the memory wall. If memory latency dropped dramatically, the industry would start building wider and faster machines again. I'm not saying that ILP in "normal" software is infinitely high; of course it is not. I am saying that memory latency is currently the main culprit in limiting performance, not the ILP in the code. Until the memory issue is solved, there is no reason to try more esoteric ILP-enhancing techniques, as performance is already throttled by another seemingly unsolvable issue. Also, the renewed popularity of TLP is due to its latency-tolerant properties more than anything else. Dyl 06:00, 19 May 2006 (UTC)
Some comments: "I'm not saying that ILP in "normal" software is infinitely high, of course it is not." Well then the article needs to say something about this. Next time, instead of wholesale reverting my edit (as is done with vandalism), change what you think is wrong and keep the improvements. Reverting good faith edits over a minor error tends to piss people off. "the memory latency is currently the main culprit in limiting performance, not the ILP" Nope, the 'effective' ILP can be limited by the memory wall, but ILP is inherently a software concept. Either way, you need to cite a source defending your position.