Favorite quotes:
> members of the committee, for the most part, were about equally altruistic. IBM's Dr. Fred Ris was extremely supportive from the outset even though he knew that no IBM equipment in existence at the time had the slightest hope of conforming to the standard we were advocating. It was remarkable that so many hardware people there, knowing how difficult p754 would be, agreed that it should benefit the community at large. If it encouraged the production of floating-point software and eased the development of reliable software, it would help create a larger market for everyone's hardware. This degree of altruism was so astonishing that MATLAB's creator Dr. Cleve Moler used to advise foreign visitors not to miss the country's two most awesome spectacles: the Grand Canyon, and meetings of IEEE p754.
> In the usual standards meetings everybody wants to grandfather in his own product. I think it is nice to have at least one example -- IEEE 754 is one -- where sleaze did not triumph. CDC, Cray and IBM could have weighed in and destroyed the whole thing had they so wished. Perhaps CDC and Cray thought `Microprocessors? Why should we worry?' In the end, all computer systems designers must try to make our things work well for the innumerable ( innumerate ?) programmers upon whom we all depend for the existence of a burgeoning market.
> Epilog: The ACM's Turing award went to Kahan in 1989.
A couple threads from way back:
An Interview with the Old Man of Floating-Point (1998) - https://news.ycombinator.com/item?id=7769303 - May 2014 (17 comments)
An Interview with the Old Man of Floating-Point (1998) - https://news.ycombinator.com/item?id=6656197 - Nov 2013 (21 comments)
One time a friend and I were having an animated conversation on the 7th floor of Soda Hall at Berkeley when William Kahan came out and gave us a coupon for Sizzlers. I think that was his way of telling us to get the fuck out.
Random anecdote: When I built my first 386 box that had a socket for a 387, I was super eager to fill that socket because even then PC builders were the same as today... but realized there wasn't any software that I used which would utilize it (my QuickC C-compiler didn't even support it!) The first app I remember that used it was Excel. It wasn't till the 486 that commodity games started using it.
He taught numerical analysis at Berkeley, and though he was a great guy, I think he was waay too smart to be teaching undergrads... he'd go off on examples about literally every way that things like SVD could go wrong b/c of FP quirks, or how Matlab implements things incorrectly, etc.
As someone who has implemented a lot of low-level functions using all the tricks of Floating Point math, I have very mixed thoughts on Floating Point. NaN and -0.0 both seem like aggressively bad ideas to me. I can totally see why it was believed at the time that they would be good, but they just add a ton of special cases that slow everything down if you want to do things right. IMO, it would have been much better if we got errors instead of NaN (like we do for integer division by zero).
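A minimal C sketch of what those special cases look like in practice (the values here are purely illustrative):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double qnan = 0.0 / 0.0;  /* quiet NaN */
    double nz   = -0.0;

    /* NaN compares unequal to everything, including itself, */
    printf("qnan == qnan : %d\n", qnan == qnan);       /* 0 */
    /* so self-comparison is the classic isnan() idiom. */
    printf("isnan(qnan)  : %d\n", isnan(qnan) != 0);   /* 1 */

    /* -0.0 compares equal to +0.0, so == cannot tell them apart, */
    printf("-0.0 == 0.0  : %d\n", nz == 0.0);          /* 1 */
    /* yet they behave differently as divisors, */
    printf("1.0 / -0.0   : %f\n", 1.0 / nz);           /* -inf */
    /* so code that must distinguish them has to check the sign bit. */
    printf("signbit(-0.0): %d\n", signbit(nz) != 0);   /* 1 */
    return 0;
}
```

Both quirks force extra branches in generic comparison and sorting code that integer types never need.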
That said, the ability to use double-double schemes to extend precision is wonderful and makes things much easier than they are in most of the Floating Point alternatives that have been proposed (eg Posits).
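For anyone who hasn't seen the double-double trick, here is a sketch of its core building block, the classic TwoSum step (conventionally attributed to Knuth; this is not from any particular library, and it must be compiled without -ffast-math since reassociation breaks it):

```c
#include <stdio.h>

/* Knuth's TwoSum: s + err == a + b exactly, where s = fl(a + b). */
static void two_sum(double a, double b, double *s, double *err) {
    *s = a + b;
    double bb = *s - a;                 /* the part of b that landed in s */
    *err = (a - (*s - bb)) + (b - bb);  /* what rounding threw away */
}

int main(void) {
    double s, err;
    /* In a plain double, adding 1e-30 to 1.0 silently loses the 1e-30... */
    two_sum(1.0, 1e-30, &s, &err);
    /* ...but the pair (s, err) still represents the sum exactly. */
    printf("s   = %.17g\n", s);    /* 1 */
    printf("err = %.17g\n", err);  /* 1e-30 */
    return 0;
}
```

Carrying a value as the unevaluated pair (s, err) is the basis of double-double arithmetic, giving roughly 106 bits of significand out of ordinary doubles.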
And before our master Kahan, there was Pat H. Sterbenz. I still have my "Floating Point Computation" in the photocopied bound sheaf that was handed out to numerical analysis grad students at ASU in the late 80s. I learned an enormous amount about what digital computation means in the presence of algorithms in that class.
EDIT: I have an 8087 chip permanently sitting on my monitor base. Because Kahan.
Note that I have worked with chips designed in this century that did not implement denormals in hardware.
> I think it is nice to have at least one example -- IEEE 754 is one -- where sleaze did not triumph.
A != A if A is a NaN: that's pretty sleazy.
I hope I'm not too late to the party to correct some things I see here. The big accomplishment of Kahan and IEEE 754 was to get companies to agree on where the sign, exponent, and fraction should go, so that data interchange finally became possible between different computer brands.
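For concreteness, a small sketch of the agreed binary32 layout (1 sign bit, 8 exponent bits, 23 fraction bits, top to bottom); the example value is arbitrary:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float f = -6.25f;               /* arbitrary example value */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits); /* view the bits without aliasing UB */

    uint32_t sign = bits >> 31;            /* 1 bit */
    uint32_t exp  = (bits >> 23) & 0xFFu;  /* 8 bits, biased by 127 */
    uint32_t frac = bits & 0x7FFFFFu;      /* 23 bits */

    /* -6.25 = -1.5625 * 2^2: expect sign=1, exponent=129, frac=0x480000 */
    printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
           (unsigned)sign, (unsigned)exp, (int)exp - 127, (unsigned)frac);
    return 0;
}
```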
Kahan wanted decimal floats, not binary, and he wanted Extended Precision to be 128, not 80. I've had many hours of conversation with the man about how Intel railroaded that Standard to express the design decisions that had already been made for the i8087 coprocessor. John Palmer, who I also worked with for years, was proud of this, and told me "Whatever the i8087 is, THAT is the IEEE Standard."
Posits have a single exception value, Not a Real (NaR), for everything that falls through the protections of C and Java and all the other modern languages: division by zero, the square root of a negative value, and so on. Kahan wanted the quadrillions of Not a Number (NaN) bit patterns to encode the address of the instruction that produced the exception, to pinpoint where it happened, but support for this in programming languages never materialized. By around 2005, vendors noticed they could trap the exceptions and spend hundreds of clock cycles handling them with microcode or software, so the FLOPS claims only applied to normal floats, not subnormals or NaNs or infinities, etc. This is true today for all x86 and ARM processors, and SPARC for that matter. Only the POWER series from IBM can still claim to support IEEE 754 in hardware; hardware support for IEEE 754 is all but extinct.
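If you want to see that trap-and-emulate cost for yourself, here is a rough, machine-dependent sketch (the loop and constants are mine, not from the interview; compile without -ffast-math, which often enables flush-to-zero and hides the effect):

```c
#include <stdio.h>
#include <time.h>

/* Time the same multiply-add loop; v settles near x, so a subnormal x
   keeps every operation in subnormal range. */
static double time_loop(double x) {
    volatile double v = x;              /* volatile keeps the loop alive */
    clock_t t0 = clock();
    for (long i = 0; i < 50000000L; i++)
        v = v * 0.999999 + x * 1e-6;
    clock_t t1 = clock();
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    printf("normal    : %.3f s\n", time_loop(1.0));
    printf("subnormal : %.3f s\n", time_loop(1e-310)); /* below DBL_MIN */
    return 0;
}
```

On cores that handle subnormals in microcode or software traps, the second loop can run an order of magnitude slower; on cores with full hardware support the two times are close.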
There are over a hundred published papers comparing posits and floats, both for accuracy on applications and for difficulty of implementation. LLNL and Oxford have definitively shown that posits are much more accurate than floats on a range of applications, so much so that a lower (power-of-two) precision can be used: 32-bit posits instead of 64-bit floats for shock hydrodynamics, and 16-bit posits instead of 32-bit floats for climate and weather prediction. For signal processing, 16-bit posits are about 10 dB more accurate (less noise) than 16-bit floats, which means they can perform lossless Fast Fourier Transforms (FFTs) on data from 12-bit A-to-D converters.
For the same precision, posit add/subtract units appear slightly more expensive in hardware than float add/subtract, and multiplier units slightly cheaper for posits than for floats. This echoes what was found comparing the speed of the Berkeley SoftFloat emulator with that of Cerlane Leong's SoftPosit emulator. Naive studies conclude that posits are more expensive, but their implementations first decode the posit into float-style subfields, apply time-honored float algorithms, then re-encode the subfields into posit format; that approach does not exploit the perfect mapping of posits onto 2's complement integers.
Float comparison hardware is quite complicated and expensive because there are redundant representations like –0 and +0 that have to test as equal, and redundant NaN patterns that have to test as not equal even when their bit patterns are identical. Posit comparison hardware is unnecessary because posits compare exactly the same way as 2's complement integers. NaR is the 2's complement integer that has no absolute value and cannot be negated, 1000...000 in binary. It is equal to itself and less than any real-valued posit.
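A tiny sketch of that comparison property; the 8-bit posit (eS = 2) encodings below are hand-worked and illustrative only:

```c
#include <stdio.h>
#include <stdint.h>

/* "Less than" on posits is one signed-integer compare on the raw bits. */
static int posit8_less(uint8_t a, uint8_t b) {
    return (int8_t)a < (int8_t)b;
}

int main(void) {
    uint8_t one     = 0x40; /* posit8 encoding of  1.0 */
    uint8_t two     = 0x48; /* posit8 encoding of  2.0 */
    uint8_t neg_one = 0xC0; /* 2's complement of 0x40: -1.0 */
    uint8_t nar     = 0x80; /* NaR: 1000...0, below every real posit */

    printf("1 < 2    : %d\n", posit8_less(one, two));      /* 1 */
    printf("-1 < 1   : %d\n", posit8_less(neg_one, one));  /* 1 */
    printf("NaR < -1 : %d\n", posit8_less(nar, neg_one));  /* 1 */
    return 0;
}
```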
The name is NaR because IEEE 754 incorrectly states that imaginary numbers are not numbers, and sqrt(–1) returns NaN. The Posit Standard is more careful to say that it is not a _real_.
The Posit Standard is up to Version 4.13 and close to full approval by its Working Group. Don't use any Version 3 or earlier. The one on posithub.org may be out of date. In Version 4, the number of eS bits was fixed at 2, greatly simplifying conversions between different precisions. Unlike floats, posit precision can be changed simply by appending bits or rounding them off, without any need to decode the fraction and the scaling. It's like changing a 16-bit integer to a 32-bit integer; it costs next to nothing, which really helps people right-size the precision they're using.
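A hedged sketch of that precision change, assuming the fixed eS = 2 of Version 4; the narrowing step shows only plain round-to-nearest-even and omits the Standard's NaR and saturation edge cases:

```c
#include <stdio.h>
#include <stdint.h>

/* Widening is exact: just append zero bits. */
static uint32_t posit16_to_32(uint16_t p) {
    return (uint32_t)p << 16;
}

/* Narrowing rounds the trailing bits off: round to nearest, ties to even,
   done directly on the 2's complement pattern. NaR and saturation edge
   cases are deliberately omitted from this sketch. */
static uint16_t posit32_to_16(uint32_t p) {
    uint16_t hi = (uint16_t)(p >> 16);
    uint32_t lo = p & 0xFFFFu;
    if (lo > 0x8000u || (lo == 0x8000u && (hi & 1u)))
        hi++;
    return hi;
}

int main(void) {
    uint16_t two16 = 0x4800;               /* posit16 encoding of 2.0 */
    uint32_t two32 = posit16_to_32(two16); /* 0x48000000: still 2.0 */
    printf("widened : 0x%08X\n", (unsigned)two32);
    printf("narrowed: 0x%04X\n", (unsigned)posit32_to_16(two32)); /* 0x4800 */
    return 0;
}
```

No decode of regime, exponent, or fraction is needed in either direction, which is the point: the whole conversion is integer shifts and one conditional increment.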
When I teach about floating point, the two things I try to impress on students are that it remains a truly incredible engineering feat to believably fit the entire real number line (plus infinities) into 32 or 64 bits, and that it was an incredible political feat to get so many competing companies to agree on one particular way of doing it; both are thanks to Kahan's leadership. Complaints about the quirks of using floating point could be tempered with some appreciation of the hard design decisions that were made, and with gratitude for the people who pulled it off.