So, I once again find myself puzzled by people who cannot accept that .9... = 1. I'm not sure exactly what it is that draws mathematicians to this puzzle; it very well could be the average person's confusion-- similar to how many theologians felt the need to waste their time refuting The da Vinci Code.
Anyway, I stumbled across this, a mostly well-written essay discussing the topic, with the author ultimately concluding that he isn't entirely convinced of their equality.
Setting aside the painful first paragraph (which had me expecting an entertaining piece of nonsense), his argument comes down to a rejection of attainable infinities-- viz., the philosophical position that infinity is outside the realm of mathematics.
This is, I guess, a valid axiom, assuming of course that you're a Platonist (or, as it turns out in this case, an intuitionist). A formalist can choose to reject the axiom of infinity (or any axiom dealing with infinity-- and thus the idea of infinity itself), but he cannot claim it is wrong. Other philosophies of math run into similar ideological problems. Platonists can claim that infinity "doesn't exist", and, I guess, the average complaint about the .9... = 1 proofs comes down to "infinity doesn't exist!"-- an argument which only makes sense in a mathematically Platonic context. Of course, Delaney's rejection is largely from the intuitionist perspective... I'll deal with that specifically in a bit.
Overall, I find it an annoyingly awkward position to take, and cannot believe that someone with the math training that Delaney has would take it. Allow me to explain:
First and foremost, we lose the concept of the decimal expansion (or maybe not...). This is a small loss for Delaney, as he rejects them anyway: "With great temerity, I still hold that any decimal expansion is never exactly equal to pi. The decimal expansion is simply an approximation of pi."
Besides the implicit rejection of any non-finite decimal expansion, this seems to be a confusion between number and numeral. A numeral is a symbolic encoding of a number-- that is, a symbol (or group of symbols) which is interpreted to mean a number. It is, for lack of a better description, the "name" of a number. Just as the word "one" is symbolic of the mathematical object 1, any written decimal expansion is a symbolic description of the number in question. So, yes, any written numeral describing pi (whether "pi" or "3.14159..." or anything else) will not actually equal the number itself. However, pi is a number with a value on the (well-defined) real number line. And a decimal expansion (as a mathematical object, rather than a symbolic representation) will have the exact value of pi. Whether or not a human can "read" this value on a page is immaterial-- the mathematical object which is a decimal expansion is identical to the mathematical object which is left as a Greek letter. Again, they represent the exact same mathematical object. When doing algebra (or calculus, or anything else), you are working with mathematical objects, not any representation of them. It is only when the representation is ambiguous (e.g. when a number is rounded) that problems arise, and that is not because of the objects, but because the representation is imprecise. [note: In rare instances, you do work with the representation, but in such pursuits (aptly named "metamathematics"), the representation is treated as a mathematical object, and is subject to its own rules (like how working with the reals is different from working with vectors).]
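To make the number/numeral distinction concrete, here's a minimal Python sketch (my own illustration, not Delaney's): three different numerals naming the same number compare equal as objects, even though the strings naming them do not.

```python
from fractions import Fraction

# Three different numerals -- "1/2", "2/4", "0.5" -- all denote the
# same mathematical object, so the objects compare equal even though
# the symbols that name them differ.
a = Fraction(1, 2)
b = Fraction(2, 4)
c = Fraction("0.5")

assert a == b == c       # same number
assert "1/2" != "2/4"    # different numerals
```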
Unless I'm mistaken, we are then forced to allow "infinite decimal expansions" as long as we allow the mathematical objects they are identical to. If, on the other hand, we reject these objects, we are rejecting the irrationals (a hefty price to pay), as well as most of the rationals: any number p/q in lowest terms where q has a prime factor other than 2 or 5 will be disallowed, since its decimal expansion never terminates (which is to say, nearly all non-integer rationals).
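The terminating/non-terminating test is easy to make mechanical (a sketch of my own, not from the essay): strip all factors of 2 and 5 from the reduced denominator, and whatever is left decides the question.

```python
from fractions import Fraction

def terminates_in_base_10(frac: Fraction) -> bool:
    """A reduced p/q has a terminating decimal expansion iff the only
    prime factors of q are 2 and 5 (Fraction reduces automatically)."""
    q = frac.denominator
    for p in (2, 5):
        while q % p == 0:
            q //= p
    return q == 1

# 1/8 = 0.125 terminates; 1/3 = 0.333... and 1/6 = 0.1666... do not.
```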
We are left with a set of numbers which is not even closed under division-- that is, we no longer have a field, unless we retreat all the way to trivial structures... a boring mathematical universe indeed. Unless, of course, we "diagonalize", which Delaney also rejects.
The point of that whole paragraph (which may have been lost somewhere) is that we either accept decimal expansions as valid mathematical objects which are *equal* to their fractional counterpart, or we reject almost the whole number line.
In the same way as he questions the nature of decimal representations, he questions "epsilon", the elusive little infinitesimal that he (correctly) calls a "logical entity". Now, he suggests that the separation of epsilon between two numbers (e.g. .99... and 1) should be taken into account. Contrary to his likely expectation, I'm not inclined to disagree. The problem, of course, is that saying two numbers are unequal in a continuum (e.g. the real number line) means there is a number between them. There is no number between them. The "difference"-- epsilon-- is a representation of exactly that idea: they are at the same spot on the continuum, thus there is no mathematical difference between them.
If epsilon had any actual properties in the real number system, we as the mathematical community would be glad to take these into consideration, but adding it has the same effect as 0-- that is to say, no effect at all.
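The point can be made exact with rational arithmetic (a sketch of my own in Python): the gap between 1 and the n-nines partial sum is exactly 1/10^n, which drops below any fixed positive epsilon once n is large enough-- so in the limit the gap is 0, and the two numbers occupy the same spot on the line.

```python
from fractions import Fraction

def gap(n: int) -> Fraction:
    """Exact difference 1 - 0.99...9 (n nines), computed with no rounding."""
    partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    return 1 - partial

# gap(n) is exactly 1/10^n: positive for every finite n, but smaller
# than any fixed epsilon > 0 for large enough n -- the only number
# that is below every positive epsilon is 0.
```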
His argument (in the Repeating Nines essay) really hinges on these two concepts: Epsilon, and unattainable infinities. Beyond that, he seems to be bitter because he's on "the losing side" of the debate between "logicists and intuitionists". Of course there is no "losing side" in the first place, as there are plenty of intuitionists doing math right now. Delaney apparently just has trouble working in systems with rules different from those he thinks are "right"-- even most Platonists I know can work in a system they think is untrue. Why was Delaney unable to finish?
Beyond that, he tries to claim that "logicists" (I assume this is a blanket term for formalists and Platonists) focus on paradoxes as a way to construct a system... How is this possible? As far as I know, set theory is built to avoid paradoxes. Russell's paradox is a bit of a blow to Fregean set theory, but Quine's set theory deals with it quite well; it reduces the statement to nothing. Not a null statement-- but something entirely outside the universe of discourse. Logicians don't focus on paradoxes; however, they are, as you are certainly aware, something that must be dealt with when they arise.
I offer, as "evidence for infinity", the fixed-point combinator (because a set-theoretical notation will click with him about as well as the rest of the set-theoretical notions have). Y is defined (in the untyped lambda calculus) as Y = λf.(λx.f(xx))(λx.f(xx)). I won't go into the details of the operation, but if we are given Yf, we get f(Yf), which becomes f(f(Yf)) -> f(f(...(f(Yf))...)) [I know, my parentheses are backwards from Church's.] So Yf = f(Yf) = f(...f(Yf)...). No matter how many f's we start with, an infinite number are attained; call that tower of f's infinity: infinity + 1 = infinity + 1000 = infinity. So, would a functional representation, rather than a set-theoretic notation, soothe his aching soul and allow him to accept transfinite theory? Probably not; I'm sure he'll describe some other absurdity which "proves" infinity is unattainable.
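For what it's worth, the same trick runs in any language with first-class functions. In a strict language like Python the plain Y diverges at definition time, so one uses the eta-expanded variant (the Z combinator)-- a sketch of my own:

```python
# Z combinator: Y eta-expanded so the self-application is deferred
# until the recursive call actually happens (plain Y would loop forever
# under Python's eager evaluation).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A recursive function written without ever naming itself: Zf = f(Zf).
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

Each call unwinds one more layer of f(f(...)), exactly the unbounded tower described above-- the fixed point is where "one more application" changes nothing.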
He can work in a system whose maximal cardinality is Aleph-0 all he wants, and I won't restrict that, but it doesn't mean infinity is any less real than one-- less applicable, yes, but just as real (which is to say, entirely a mental construct, with certain agreed-upon properties.)
I know, it's rambly...