2016-03-26

Dynamic vs static typing

(This is a response to a post "Dynamic Languages are Static Languages". I tried to post a comment, but Wordpress told me it couldn't send it, probably because the post in question is too old. So I'm putting it here.)

Your thesis sounds wrong to me. My experience is that strong typing simply disallows writing some perfectly normal, easy programs. And that shouldn't be possible if dynamically typed languages are just a special case of statically typed ones.

A few months ago, in order to learn Haskell (in an academic setting), I tried to write a Haskell function to test whether a Boolean function was valid (always true). The literature I was using (The Haskell Road to Logic, Maths and Programming, by Doets and van Eijck) did it in stages: valid1 for unary functions, valid2 for binary ones, and so on. I, as a diligent professional programmer, saw a common bit of code in those, and naively thought I could use recursion (and currying) to write a single function valid, so that valid 1 was valid1, valid 2 was valid2, and so on. That way, my code would be much more general, _and_ easier to write.

Imagine my shock when I got an occurs check error. Of course, the problem is that valid1 had type (Bool->Bool)->Bool, while valid2 had type (Bool->Bool->Bool)->Bool, so there was simply no way for valid to have a type, and GHC told me so. In fact, to be completely honest, valid _does_ have a type: a dependent product type, n:Integer->(Bool->...->Bool)->Bool, where there are n arrows inside the parentheses. But Haskell can't represent such types, at least as far as I learned (please correct me if I'm wrong).

So, you'll probably tell me I was simply using too weak a type system. But Haskell is the poster child for functional programming. If I can't use that, then what? Miranda is even weaker. Standard ML is not pure, it seems. Coq is not even Turing complete. _Where_ do you find that ideal typing engine you're constantly talking about? Or is it just a Platonic ideal, with no implementation in the real world?

Or you're going to tell me that the valid function is simply not a function: that it's a nonsensical notion, since in a well-typed language you could only have valid1, valid2 and so on. To that I'll tell you it's nonsense. I have represented valid perfectly well in Coq, and Coq surely knows its type theory (though, as I said, it has other problems).

Or you're going to tell me that it's no better in the world of dynamically typed languages. That's simply not true. I can write valid in Python in a few minutes. (Thanks to Python's introspection capabilities, I can even drop the first parameter and ask my argument function how many Bools it would like to receive. But I know static typing fans would probably scream at such an idea. :)
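For what it's worth, here is a minimal sketch of the kind of Python I mean (the function name and the use of inspect.signature are my illustration, not the code I actually wrote back then):

```python
import inspect
from itertools import product

def valid(f):
    """Check whether the Boolean function f is valid (true for all inputs).

    The arity is read off f itself via introspection, so there is no
    separate n parameter -- the part that would horrify static typers.
    """
    n = len(inspect.signature(f).parameters)
    return all(f(*args) for args in product([False, True], repeat=n))

# Usage: the law of excluded middle is valid, a bare conjunction is not.
print(valid(lambda p: p or not p))        # True
print(valid(lambda p, q: p and q))        # False
print(valid(lambda p, q: not (p and q) or p))  # True (if p and q, then p)
```

The same function handles unary, binary and ternary arguments uniformly, which is exactly the generality the Haskell valid1/valid2/... staircase could not express.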

Or you're going to tell me to shut up and simply use lists. Integer->([Bool]->Bool)->Bool is a good approximation, and I did use that variation in the end. But it feels like a horrible kludge. I don't want lists; at least I surely don't want Haskell's interpretation of them as codata. It makes no sense to ask whether a Boolean function receiving an infinite list of arguments is valid. And even if it did, I have no way to put anything meaningful in the "Integer" place. Yes, I can get rid of it and recover it from the length of the list. But the situation is even worse than that: I seem to have no way of restricting the argument to receive only lists of the _same_ length. A Boolean function that knows how to handle both 2-element and 3-element lists is simply not a valid argument for the function valid I intended to write.

I don't think I can change your opinion, you obviously know a lot about those things. But maybe you can change mine, by communicating your idea better. What am I missing?

2013-03-21

Tidbits for younger mathematicians

Here are a couple of things I wrote recently (for Školska knjiga and a science portal). In the end nothing came of it, so I'm putting them online. Intended mainly for elementary-school students.
Why are we not allowed to divide by zero?
Generalized Fibonacci
An introduction to multiplying integers (programs 1, 2, 3 - run them in IDLE, Python 3.3)
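The articles themselves aren't reproduced here, but to give a flavor of the second one: a generalized Fibonacci sequence keeps the familiar recurrence (each term is the sum of the previous two) while allowing an arbitrary starting pair. A possible Python sketch (the function name and interface are my own illustration, not necessarily what the linked programs use):

```python
def gen_fib(a, b, n):
    """Return the first n terms of the Fibonacci-like sequence
    starting from a, b, where each term is the sum of the previous two.

    The classic Fibonacci sequence is gen_fib(1, 1, n).
    """
    terms = []
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b
    return terms

print(gen_fib(1, 1, 8))  # [1, 1, 2, 3, 5, 8, 13, 21]
print(gen_fib(2, 1, 8))  # Lucas numbers: [2, 1, 3, 4, 7, 11, 18, 29]
```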

2013-02-01

Pitfalls of using Mathematica blindly

(I sent this to David Monniaux. Unfortunately, I never got a reply.)

I was reading your article "The pitfalls of verifying floating-point computations". On page 18, you discuss the computing of sin(14885392687), and say:
Both the Pentium 4 x87 and Mathematica yield a result ≈1.671e-10. However, GNU libc on x86_64 yields ≈1.4798e-10, about 11.5% off!

Although your article is meant to show that exact analysis of floating-point computations is feasible (and within human reach for simple examples), the quoted sentence is quite a good counterargument :-). Namely, despite appearances, the correct result (formally, the more correct result) is the latter one, 1.4798e-10. That can be proved using Taylor series estimates (exact, and pretty simple for such a small argument modulo 2pi) and integer arithmetic (exact; you just have to believe the first 20 or so digits of pi). I can send you the computations if needed.
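In outline, the computation goes like this (a Python sketch using the standard decimal module; the hard-coded digits of pi and the 60-digit working precision are my choices for this illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # plenty of digits to survive the cancellation below

# pi to 60 significant digits -- these are the digits "you have to believe"
PI = Decimal("3.14159265358979323846264338327950288419716939937510582097494")

x = Decimal(14885392687)

# Reduce x modulo pi: write x = n*pi + r with r in [0, pi)
n = int(x / PI)
r = x - n * PI
# Fold r into (-pi/2, pi/2], so |r| is the distance to the nearest multiple of pi
if r > PI / 2:
    r -= PI
    n += 1

# sin(x) = (-1)^n * sin(r), and for |r| this small sin(r) ≈ r:
# the next Taylor term is r^3/6 ≈ 1e-31, utterly negligible at this scale.
sin_x = (-1) ** n * r
print(sin_x)  # magnitude ≈ 1.4798e-10, matching the libc result
```

The ten leading digits cancel in x - n*PI, but with 60 digits of working precision the remainder is still accurate to dozens of digits, which is what makes the argument "exact" in the relevant sense.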

By the way, I think I know what went wrong in your use of Mathematica. You probably used N@Sin@14885392687 (or Sin@14885392687.), which works with machine precision and doesn't guarantee any digits; in fact, it just calls the C library function. If you call Sin with the exact argument 14885392687 and use the second argument of N to request a specific number of digits (for example, N[Sin@14885392687,20]), thus employing Mathematica's precision-tracking engine, you will see that the second significant digit is 4, not 6. (For more details see http://reference.wolfram.com/mathematica/tutorial/NumericalPrecision.html#10697.)