I was reading your article "The pitfalls of verifying floating-point computations". On page 18, you discuss computing sin(14885392687) and say:
Both the Pentium 4 x87 and Mathematica yield a result ≈1.671e−10. However, GNU libc on x86_64 yields ≈1.4798e−10, about 11.5% off!
Although your article is meant to show that exact analysis of floating-point computations is feasible (and within human reach for simple examples), the quoted sentence is quite a good counterargument :-). Namely, despite appearances, the correct result (formally, the more correct one) is the latter, 1.48e-10. This can be proved using Taylor-series estimates (exact, and quite simple for such a small argument modulo 2π) and integer arithmetic (also exact; you just have to trust the first 20 or so digits of π). I can send you the computations if needed; a minimal sketch of the idea is below.
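In outline, and only as a sketch of the idea (I reduce modulo π rather than 2π, so the residue δ is tiny regardless of parity; the sole Taylor estimate needed is |sin δ − δ| ≤ |δ|³/6, far below the digits in dispute):

    x = 14885392687;            (* exact integer argument *)
    n = Round[x/Pi];            (* index of the nearest multiple of Pi *)
    delta = N[x - n Pi, 30];    (* residue; N raises its working precision of Pi as needed *)
    (-1)^n delta                (* Sin[x] to leading order: about 1.48*10^-10, matching GNU libc *)

Since Sin[x] = (-1)^n Sin[delta] and |delta| is on the order of 1e-10, the cubic Taylor term is around 1e-30, so the leading digits of Sin[x] are exactly those of (-1)^n delta.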
By the way, I think I know what went wrong in your use of Mathematica. You probably used N@Sin@14885392687 (or Sin@14885392687.), which works with machine precision and doesn't guarantee any digits; in fact, it just calls the C function. If you instead call Sin with the exact argument 14885392687 and use the second argument of N to request a specific number of digits (for example, N[Sin@14885392687, 20]), thus employing Mathematica's precision-tracking engine, you will see that the second significant digit is 4, not 6. (For more details see http://reference.wolfram.com/mathematica/tutorial/NumericalPrecision.html#10697.)
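Side by side, the two calls look like this (nothing beyond the expressions above; the second form keeps the argument exact and asks N for 20 digits):

    N[Sin[14885392687]]        (* machine precision: no guaranteed digits, just the C library's result *)
    N[Sin[14885392687], 20]    (* exact argument, 20 requested digits: 1.4...*10^-10 *)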