
Talk:Half-precision floating-point format

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Snogglethorpe (talk | contribs) at 10:20, 20 June 2012 (add note about confusing history). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

This page confuses increased and decreased precision. Surely increased dynamic range is double precision: what does 'precise' mean? Half precision reduces storage requirements instead!

i.e. half of what? Hmmm... Maybe they should be one page?

=> To me it was perfectly clear: the precision is the number of bits/digits after the radix point. An integer is very imprecise, because between every two integers there is a big range of other numbers; e.g. what if you want to drink 3.5 glasses of milk instead of 3 or 4? An integer is not precise enough to represent this. A single-precision floating-point number has no problem with 3.5, or even 3.000 005 (I don't know the exact number of zeros allowed, but my point is clear). Single precision takes up 4 bytes/32 bits, though, so it's quite memory hungry. To combine the best of two worlds (the small memory footprint of ints and the good precision of singles/floats), a half-precision float, i.e. a float with a 16-bit/2-byte memory representation, is ideal. You trade a loss of precision for a smaller memory footprint. Indeed, with those 2 bytes, 3.000 005 is not representable; 3.05 is representable, though.
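The examples above can be checked directly. A minimal sketch, assuming Python 3.6 or later (whose `struct` module supports the IEEE 754 half-precision `'e'` format code), round-trips a few values through half precision:

```python
import struct

def to_half(x: float) -> float:
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_half(3.5))        # 3.5 -- exactly representable in half precision
print(to_half(3.000005))   # 3.0 -- too fine a step for 10 mantissa bits
print(to_half(3.05))       # 3.05078125 -- nearest representable neighbour
```

As the comment says: 3.5 survives exactly, 3.000 005 collapses to 3.0, and 3.05 lands on the nearest representable value, 3.05078125.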

Hardware Support for HP?

Is anyone aware of actual hardware providing native support for half-precision floating-point datatypes? This would be helpful information to add.

129.27.140.172 (talk) 15:35, 5 November 2009 (UTC)[reply]

precision with large numbers

Quite clever, pulling the equivalent of a 40-bit integer out of a 16-bit space, but accuracy must surely suffer at the high end? If the largest figure is ~65500 with 10 (11?) bits of mantissa, are we moving in steps of 64 (32?) up at that end? Are we not then sacrificing accuracy of representation for wide range? 193.63.174.10 (talk) 09:46, 18 October 2010 (UTC)[reply]

(I mean: a 16-bit unsigned integer at that point would move in single steps, being 32-64x more precise; every number beyond ~1024/2048 is in fact less precise than the integer form, and we're not able to represent "halves" until 512/1024 or less, and full single-place decimals until about 100/200.) 193.63.174.10 (talk) 09:52, 18 October 2010 (UTC)[reply]
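The step sizes asked about here can be verified by decoding adjacent 16-bit patterns. A sketch, again assuming Python 3.6+ for the `struct` module's half-precision `'e'` format code:

```python
import struct

def half_from_bits(bits: int) -> float:
    """Interpret a 16-bit pattern as an IEEE 754 half-precision value."""
    return struct.unpack('<e', struct.pack('<H', bits))[0]

top = half_from_bits(0x7BFF)     # largest finite half: 65504.0
below = half_from_bits(0x7BFE)   # next value down: 65472.0
print(top - below)               # 32.0 -- step size at the top of the range

one_up = half_from_bits(0x3C01)  # smallest half above 1.0
print(one_up - 1.0)              # 0.0009765625, i.e. 2**-10
```

So the answer to "steps of 64 (32?)" is 32: with a 10-bit mantissa and a maximum exponent of 15, the spacing near 65504 is 2^15 x 2^-10 = 32, while near 1.0 it is 2^-10.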
Also, what of 20- or 24-bit precision? Are there no standards for using those in either integer or float form? The number packs into 2.5 or 3 bytes; we get somewhat better precision and the same or a wider range, without using quite so much space. Or even 21-bit (RGB + very simple transparency in 64 bits; 18/19/20-bit to give RGB + 4/7/10-bit transparency with HDR... or even 17/13 (4x half would be 16/16, of course)... I'd probably err towards 19/10, or even 19/20/18/10 RGBA). 193.63.174.10 (talk) 09:57, 18 October 2010 (UTC)[reply]

Confusingly written

The range runs from the minimum non-zero value to the maximum value. The precision is the number of significant figures available in the format. "Single" format (from memory) has a range of roughly 10^-38 to 10^38 and about 7 significant figures. Double format has a range of roughly 10^-308 to 10^308 and about 15 significant figures.

From the entry, it looks like 16-bit floating point has a range from 5*10^-8 to 65504 (roughly 10^-8 to 10^4), so about 12 orders of magnitude, and a precision of about 4 significant figures.

As for the section "Precision limitations on integer values": it would probably be more useful to have a chart of epsilon versus number magnitude. 195.59.43.240 (talk) 13:25, 25 April 2012 (UTC)[reply]
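A chart of that kind is straightforward to generate. A sketch, assuming Python 3.6+ (the `struct` module's `'e'` format code gives IEEE 754 half precision), prints the spacing to the next representable half-precision value at several magnitudes:

```python
import struct

def to_half(x: float) -> float:
    """Round to the nearest IEEE 754 half-precision value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def half_ulp(x: float) -> float:
    """Spacing from x up to the next representable half-precision value."""
    bits = struct.unpack('<H', struct.pack('<e', x))[0]
    return struct.unpack('<e', struct.pack('<H', bits + 1))[0] - to_half(x)

# Epsilon versus magnitude: the step grows with the exponent.
for e in range(-2, 16, 3):
    m = float(2 ** e)
    print(f"magnitude {m:>10}: step {half_ulp(m)}")
```

For normal half-precision values near 2^e the step is 2^(e-10), so it reaches 32 at the top of the range, matching the discussion above about integer-valued precision limits.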

History is confusing

What's up with the history section? The page originally said that ILM created it for EXR (circa 1999), and that matches what I've seen elsewhere (e.g. the OpenEXR history page just says "ILM created the "half" format"). Later someone edited the Wikipedia page to say that "NVidia and ILM created it concurrently" -- and otherwise seems to imply (but never says so explicitly) that NVidia invented it. But all the links seem to be very vague about exactly what happened...

So what's the actual sequence of events? It's unlikely they (ILM and NVidia) both developed the same exact standard independently, so presumably either they cooperated on its development, or one of them did it first, and the other based their work on that.

--Snogglethorpe (talk) 10:20, 20 June 2012 (UTC)[reply]