Talk:IDL (programming language)
(Full disclosure -- I (User:Mpiper) am an RSI employee.) The phrase "IDL's market share has been recently decreasing considerably" in the second sentence isn't true. I don't want to change it, though, because it could be seen as a conflict of interest. I think a less strongly worded statement, or a deletion, would be more appropriate.
- (William M. Connolley 21:52, 10 Mar 2005 (UTC)) Fair enough... my own disclosure: I use and like IDL. I have no idea what its market share is. But perhaps some figures could be supplied if anyone wants to reinsert the text.
I modified the statement about loop speed: if I understand correctly, IDL loops are implemented in C and therefore can't be faster than a well-designed loop in C -- they can at most run as fast as such a thing. That is true of modern vectorized languages in general, including IDL, GDL, Matlab, Octave, Python, and Perl. zowie 19:10, 3 May 2005 (UTC)
- (William M. Connolley 19:44, 3 May 2005 (UTC)) Fair enough. IDL is probably faster than a non-well-crafted-by-the-user loop, but as fast as a well crafted one...
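The loop-versus-vectorization point above can be sketched in IDL itself. This is an illustrative timing comparison only (exact numbers depend on machine and IDL version): the interpreted FOR loop dispatches each iteration through the interpreter, while the equivalent call to the built-in TOTAL runs the whole sum in compiled code.

```idl
; Compare an interpreted loop against a vectorized built-in.
n = 1000000L
x = dindgen(n)                       ; doubles 0, 1, ..., n-1

t0 = systime(/seconds)
s = 0d
for i = 0L, n - 1 do s = s + x[i]    ; each iteration is interpreted
print, 'loop:  ', systime(/seconds) - t0

t0 = systime(/seconds)
s = total(x)                         ; entire sum runs in compiled code
print, 'total: ', systime(/seconds) - t0
```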
I won't mess with it, but I'm not entirely sure what the appropriate way to write the "Problems" section is, and I'm not especially happy with the way it is now -- several "example problems" have been named, and solutions to them have been posted, but (in my experience anyway) the main problem is one that infects a great deal of late-model software: backward compatibility enshrines earlier mistakes. In the case of IDL, this adds up to a lot of "solutions" that the programmer must remember, and that tend to crowd out real design. I certainly don't want this to become a battleground page -- languages in particular are fodder for unproductive flame wars -- but I don't think that the Problems section does anything useful the way it stands now. It should either be augmented (by someone other than me -- I'm probably too partisan, as a quick google for "zowie IDL" will reveal) or deleted.
- Hmmm... Came back to this some weeks later and noticed it was still the same. I added "...requiring individual work-arounds by the programmer", which I hope does not add too much POV but captures the flavor of what I was trying to get across... zowie 4 July 2005 17:43 (UTC)
(Niel Malan) I have a problem with the "Feature" "has all function arguments passed by reference ("IN-OUT")". Certainly IDL passes variables by reference, but indexed variables are passed by value. And what is "IN-OUT" supposed to mean?
- I believe that this is a reference to the fact that you can pass values into and out of subroutines via the pass-by-reference mechanism, though as you point out it is broken (or at least violates the principle of least surprise) in IDL, because subregions of arrays act differently than complete arrays. But, hey, this is Wikipedia -- if you've got a problem with a page, fix it! :-) zowie 20:18, 23 August 2005 (UTC)
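The asymmetry being discussed can be shown in a few lines. This is a sketch, assuming a trivial hypothetical procedure INCREMENT: a named variable is passed by reference and is modified in the caller, but a subscripted expression is passed by value, so the same call silently does nothing to the array element.

```idl
PRO increment, x
  x = x + 1        ; changes the caller's copy only if x was passed by reference
END

a = [1, 2, 3]
increment, a       ; named variable: passed by reference, a becomes [2, 3, 4]
increment, a[0]    ; subscripted expression: passed by value, a is unchanged
print, a           ; still [2, 3, 4]
```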
Problems
My limited experience with IDL at the Summer Science Program, programming an orbit determination project, was not pleasant. I tested out various for and while loops, and the performance was terrible. Java, which runs on a virtual machine, was a lot faster (roughly ten times). Anyway, our version of IDL had a bunch of problems; a really dumb one was that the constant pi, !dpi, was wrong in the seventh decimal place. I don't want to be flaming, but what are your thoughts on all of this? --BorisFromStockdale 07:15, 10 March 2006 (UTC)
- The value for double precision pi is not incorrect. IDL by default only prints 8 significant figures and rounds the last digit. To see the full value, you need to use a FORMAT keyword in PRINT, e.g. print,!DPI,format='(F24.22)' . Leuliett 13:28, 10 March 2006 (UTC)
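To illustrate the point: !DPI itself holds the full double-precision value, and only the default print format rounds it. A minimal check (output shown in comments describes expected behavior, not captured output):

```idl
print, !DPI                         ; default format: ~8 significant figures
print, !DPI, format='(F24.22)'      ; full double-precision value
print, !DPI - 3.141592653589793d0   ; difference should be zero at double precision
```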
- I haven't put it in the Problems section but I consider IDL's formatting of double precision variables to be pretty bug-ridden -- at the very least, it frequently violates the principle of least surprise. For example, PRINTing a bunch of doubles to a file and then READing them back in (without specifying a FORMAT) gives incorrect results, because the default FORMAT doesn't use the double-precision context: the first variable read gets the first part of the mantissa of the first variable written, and the second variable read gets the rest of the mantissa and the exponent of the first variable. So (for example) writing 1.234567890000e13 and 3.141592653589793238e0, then attempting to read them back in, gives you 1.234567 and 890000e13. (I may have the digit counts wrong.) That problem goes back at least 15 years, so it is clear that RSI has no intention of fixing it. zowie 15:34, 10 March 2006 (UTC)
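For what it's worth, the round-trip failure described above can be sidestepped by giving PRINTF and READF a matching explicit FORMAT, so every value occupies a fixed-width field with full precision. A sketch (the filename and field width are arbitrary choices for illustration):

```idl
vals = [1.23456789d13, !DPI]

openw, lun, 'vals.txt', /get_lun
printf, lun, vals, format='(2E26.18)'   ; fixed-width fields, full double precision
free_lun, lun

back = dblarr(2)
openr, lun, 'vals.txt', /get_lun
readf, lun, back, format='(2E26.18)'    ; read with the same explicit format
free_lun, lun

print, back - vals                      ; should be zero, or rounding in the last digit
```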