So we're going to take a little detour and forget about functions for a moment. Instead, we're going to talk about ordinals. First, we need to talk about ordered sets really quick.
An ordered set is a set with order. I know...
So, a set is any collection of objects within the universe of discourse (i.e. if we're talking about numbers, we can't have a set with plates; but if we're talking about dishes we can't have a set with numbers). Typically, order doesn't matter, so the set {1,2,3} is the same as the set {2,3,1}. Also, the number of times an element shows up doesn't matter, so {1,1,1,2,3} is the same as {1,2,3} (is the same as {2,3,1}).
In an ordered set, the order does matter. Some relation applies to each set of two different elements. I don't use the word pair, because in a mathematical pair, (a,b) is not the same as (b,a). Either a < b or b < a.
To be more formal: define a relation < on a set S. For any distinct members a,b of S, either (a,b) is in <, or (b,a) is in <.
An ordered set is well-ordered if the set, and every nonempty subset of it (ordered by the same relation), has a first element. For example, the integers (including negative integers) ordered according to the normal definition of "<" are not well-ordered, because they have no first element. Neither are the natural numbers ordered backwards (i.e. {...,3,2,1,0}).
But the natural numbers ordered normally are, since any nonempty subset will have a first element.
An ordinal number is a "number" which designates a well-ordered set. (Well, it designates any well-ordered set with the same order type-- any two well-ordered sets that can be matched up order-preservingly get the same ordinal.)
Anyway, avoiding all the definitions and proofs: when you add two ordinals, attach the second set to the end of the first set, and "rename" the elements to avoid repetition. Notice that this is not commutative-- a+b is not necessarily the same as b+a.
Also, before you ask: each number is defined as the set of all numbers that come before it in the usual ordering of the naturals. 0={} (the empty set), 1={0} = {{}}, 2={0,1}, etc.
For example, 1+ω (ω meaning the first infinite ordinal) is {0,0,1,2,...}. Renaming all of the elements after the first one gives us {0,1,2,...} = ω.
However, ω+1 gives us {0,1,2,3,...,1}. The last element can't be relabeled to the "next element of ω" because ω has no last element. So we make 1 into 1' (or something), so we have {0,1,...,1'}, which is still infinite but has both a first and a last element, so it is not ω.
[note: from now on the relabeling of identical elements will be assumed, so ω+1={0,1,2,...,1} will be acceptable]
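(For the curious: the standard formal version of this "attach and relabel" construction-- a sketch in tagged-pair notation, which isn't used elsewhere in this post-- is that a+b is the order type of ({0}×a) ∪ ({1}×b), ordered by (i,x) < (j,y) iff i < j, or i = j and x < y. The tags 0 and 1 do the "renaming" for us.)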
Now, multiplication works similarly. In a*b, a is appended to itself b times. E.g.
2*4 = {0,1,0,1,0,1,0,1} (with proper relabeling).
Similarly, 2*ω = {0,1,0,1,.....}, and after relabeling we get 2*ω = ω.
But, ω*2 = {0,1,2,...,0,1,2,...}. This is ω+ω, which is not the same as ω: we have two disjoint copies of ω laid end to end, and the first element of the second copy has infinitely many elements before it, which no element of ω has.
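(Formally-- again just a sketch-- a*b is the order type of the product a×b under the last-coordinate-first order: (x,y) < (x',y') iff y < y', or y = y' and x < x'. That is literally b copies of a laid end to end, matching the appending picture above.)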
That's really all we need to know about ordinals: addition and multiplication are non-commutative, and for any finite a > 0, a*ω = a+ω = ω, while ω*a = ω+ω+...+ω (a times).
Now that that is out of the way, we can look back at lambda calculus.
The first thing to notice is that everything has a successor and a predecessor (there's a function for the predecessor, but it's messy and we don't need to see it). And if a comes before b, and b comes before c, then a comes before c. Also, the predecessor of 0 is 0-- which is to say, 0 is the first element.
What I'm trying to say is that each of the natural numbers in LC represents an ordinal number.
I said earlier that Y might act like an ordinal number. Namely, ω:
ω = Y = λf.(λx.f(xx))(λx.f(xx))
Since I haven't actually described how this works: Yf takes a function f and returns f(Yf). That is, it returns itself with an extra f out front, where f is the function it was given. This also means that f(Yf) -> ff(Yf) -> ... -> f...f(Yf). Hurray! An infinite number of f's.
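(An aside for the skeptical: here is a runnable sketch of Y in Python. The names, and the extra lambda v: ... wrapper, are my additions-- Python evaluates arguments eagerly, so the textbook Y above would recurse forever; this eta-expanded variant is usually called Z.)

# Y, eta-expanded so eager evaluation doesn't diverge.
Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Yf = f(Yf): factorial defined as a fixed point.
fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120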
We need to see if this is actually a valid encoding of ω. So, we need to make sure all of the following hold:
1+ω=ω, 2*ω=ω, ω+1 is distinct from ω, and ω*2=ω+ω.
Let's look back at our definition.
Addition:
plus m n := λm.λn.λf.λa. m f (n f a)
Multiplication:
times m n := λm.λn.λf. n (m f) {matching how it's used below: n copies of "m f's"}
1+Y:
-> λf.λa. 1 f (Y f a) -> λf.λa. f (Y f a). There are a few rules of LC that allow us to throw out the a. I'm not sure how to describe them easily. It has to do with free and bound variables... I'll let you figure that out.
Anyway,
-> λf.f(Yf); the body f(Yf) is exactly what Yf "reduces" to. So 1+Y = Y, as expected.
Y+1:
λf.λa. Y f (1 f a) -> λf.λa. Y f (fa) -> λf.λa. f (Yf) (fa)
The (fa) at the end doesn't go inside the (Yf) parentheses, because Y doesn't use it.
So what we have is equivalent to
λf. (Yf) f.
The a can be thrown out, but not the f, since it's the same f as the argument to Y. This means we have an extra f at the end. I.e. Y is not Y+1.
2*Y:
λf. Y (2 f) -> λf. (Y (ff))
This is our first major problem. I'm not entirely sure what to do with this, and it isn't going to reduce to (Yf) unless I'm very much mistaken.
Y*2:
λf. 2 (Yf) -> λf. (Yf)(Yf)
Y+Y = λf.λa. Y f (Y f a). Again, this becomes:
λf. Yf (Yf), rearranging our notation:
λf. (Yf)(Yf)
So we have a tricky 2*Y. Does that mean Y isn't ω? Probably. Does that mean Y isn't infinite? No. Especially since 2*Y is still going to end up infinite.
It doesn't matter anyway, since Delaney was talking about addition, and addition clearly holds for Y=ω.
Future things:
Figure out how to do 2*Y. Figure out how to encode sets and cardinals in LC.
Sunday, July 27, 2008
On Lambda Calculus and Ordinal Numbers (part 1)
I'm still annoyed at the guy I posted about on Friday. Mostly, I'm annoyed about how he accuses mathematicians of being contrary to critical thought, and of being authoritarian. Those two things don't work in math-- you can preach something as dogma, but if you don't have proof (based on certain clearly stated axioms, and valid lines of reasoning), your words ring hollow.
The other thing I'm annoyed about is his callous rejection of transfinite theory. He seems to think Hilbert's Hotel was supposed to be an intuitive "proof" that infinities work the way they do, rather than a clarification of how they work. Also, he rejects infinities... absolutely and completely. As a formalist, I disagree with the rejection of anything interesting, but something else is crying out in pain at that idea-- perhaps I'm more of a Platonist than I think.
Anyway, I'd like to clarify my comment on the fixed point combinator (Also known as the Y-combinator), because... mostly because I'm a nerd. But also because I think it acts like an infinite ordinal.
I'm going to write this in two parts. The first will be on Lambda Calculus, the second on Ordinals and how to apply LC to ordinals. Hopefully, I can also discuss cardinals, but that may be a bit of a stretch.
First, a bit on (untyped) Lambda Calculus and Church Encoding: Lambda Calculus is a formalism for dealing with functions. For those who have heard of set theory, think of it as an analog, only everything is a function (or an argument of a function), rather than a set (or element of a set). The basics are as follows: You have some function f and some argument a; you apply the function to the argument and get a result: ((f)a) -> b. So b is the result of evaluating f at a. It is defined based on how f is defined, so if we define ((f)a) -> a, then for any a we use we get the same value back (this function is called the identity, often just I). When it isn't ambiguous, we can drop the parentheses:
Ia is the same as ((I)a), which will evaluate to a.
You can have any number of arguments:
((f)ab) -> c, or (..((f)a)...z) -> A.
But, you can treat f as a function which returns a new function. Looking at the first example above [(f)ab -> c], we can say (f)a -> fa, where fa is a new function, which will take some new argument. In this way, a function taking any number of arguments can be turned into a series of "chained" functions, all returning new functions.
What's useful about this, is we can forget that non-functions exist (until it's convenient to do otherwise), and just work with functions that return other functions. Also, we can abuse our notation and use ((f)abcd...) to be ((((((f)a)b)c)d)...); the first is obviously cleaner.
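(A quick sketch of this "chaining" in Python, whose lambdas mirror it directly; the names here are mine:)

add = lambda a: lambda b: a + b  # a "two-argument" function as a chain of one-argument functions
add_two = add(2)                 # applying one argument returns a new function
print(add_two(3))                # 5 -- the same as add(2)(3)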
Now we can also define what Church calls "abstractions". Abstraction is just a way of defining a function. It looks like:
f := λa.b
This is the same as what I showed earlier with ((f)a) -> b, but it's more precise notation (technically, my earlier notation means something different). What it says is f is a function which takes 1 argument and returns b. So the function I (the identity) is I := λa.a
We can do the same thing with multiple arguments:
f := λabc.d OR f := λa.λb.λc.d.
Notice how the second version can be interpreted to mean f takes one argument a, and returns a function which takes another argument, which returns a function....
I won't get into the gritty details, but there are only a few things we're allowed to do in lambda calculus:
*Define a function, using abstraction (and maybe a special notation... we'll cross that bridge when we come to it.),
*Rename variables (this is called alpha-conversion). There are certain rules for this (which we don't need to pay attention to), but the only real purpose is to avoid confusion, and to make proofs easier to follow.
*Apply abstract functions. (Called beta-reduction) This only comes into play with abstractions, but since we can only use functions which have been defined via abstraction (or in terms of already defined functions), this can be applied to any function. If we have some abstraction λx.fx (where f is some function we already know), applying the abstraction looks like this:
((λx.fx) a) -> fa.
All that really happens is we return what's after the "." with every occurrence of the bound variable (the thing between the λ and the dot) replaced by the argument outside of the parentheses.
*Replace equivalent expressions with each other (called eta-conversion; formally, λx.fx and f are interchangeable when x is not free in f). So, for example, I and λx.x can replace each other in any expression.
This comes into play when we have, for example ((I)a). To evaluate this:
((I)a) -> ((λx.x)a), which is an application that beta-reduces to a. This is, of course, mind-numbingly tedious, but in actual calculation, most of these steps are done implicitly.
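(The same dance in Python-- a loose analogy of mine: an abstraction is a lambda, and beta-reduction is just calling it:)

I = lambda x: x   # the identity abstraction λx.x
print(I("a"))     # ((λx.x) a) -> a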
***
Now, it's a little more complicated than that, but not much. However, we can construct all of our numbers with it. I'll only focus on the natural numbers (0,1,2, etc), but it can be extended to integers, rationals and the reals using messy definitions in much the same way as in set theory.
Now, a "Church Numeral" is an encoding of a number using lambda calculus. Basically, the goal is to use LC to make definitions for all of our numbers (well, counting numbers) as well as multiplications, addition and exponentiation (exponentiation, just because it's easy). We'll ignore subtraction and division because they are actually rather messy. What we need are functions that interact the same way our numbers and operations interact.
Let's start with numbers. An easy way to define a number is to say that a number n takes a function, and applies it n times. This makes sense, but it isn't quite lambda calculus. So, we start with 0:
0 := λf.λa.a
The astute reader will notice that what it returns is the identity:
0 := λf.I
1 is defined in the same way:
1 := λf.λa.fa
Ignoring the a, we notice that this is the identity. The one difference is that we restrict the valid arguments to function values. (In general, f will always be a function. a can be, but whether it is necessarily a function is dependent on context, and will be clarified.)
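(A hedged Python rendering of these two numerals; the decoding helper to_int is my own convenience, not part of the encoding:)

ZERO = lambda f: lambda a: a      # apply f zero times
ONE  = lambda f: lambda a: f(a)   # apply f once

def to_int(n):
    # Decode a Church numeral by counting the f's: apply it to (+1) and 0.
    return n(lambda x: x + 1)(0)

print(to_int(ZERO), to_int(ONE))  # 0 1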
Now, we can define a successor. That is, the number after the number we're given. It takes (awkwardly) 3 arguments (see above about arguments, and see below about bracket notation):
SUCC := λn.λf.λa.f (nfa) (i.e. ((f) (((n)f)a)))
What it says is given any number n, the successor is defined by applying the function argument of n (that is, f) to the other argument of n (that is, a) one extra time.
This is how we get to each number from the previous one. As an example:
[Don't worry if you can't follow my examples, they shouldn't actually be necessary.]
SUCC 2 -> (λn.λf.λa.f (nfa)) (λf.λa.ffa) -> λf.λa. f ((λf.λa.ffa) f a)
applying the "f a" at the end we get:
λf.λa. fffa; when we count the f's this means 3.
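(The same computation sketched in Python; names mine, to_int as above:)

SUCC = lambda n: lambda f: lambda a: f(n(f)(a))  # one extra f out front

TWO = lambda f: lambda a: f(f(a))
to_int = lambda n: n(lambda x: x + 1)(0)
print(to_int(SUCC(TWO)))  # 3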
Now, using the same logic as the successor function, we can define addition:
[M+N] := plus m n := λm.λn.λf.λa. m f (n f a) {remember that m and n are numbers}
The thing to notice about this is that (n f a) is the "a" in m. I.e. ((λf.λa.f...fa) f) (n f a) -> (λa.f...fa) (n f a) -> f...f f...f a, where the first "f...f" is m f's, and the second "f...f" is n f's. So we have m+n f's, as we want.
Another example:
[2+3] -> (λm.λn.λf.λa. m f (n f a)) 2 3 -> λf.λa. 2 f (3 f a) -> {dropping the leading λf.λa for cleanliness}
2 f ((λf.λa.fffa) f a) -> 2 f (fffa) -> ((λf.λa.ffa) f) fffa -> (λa.ffa) fffa -> {adding the λs back in} λf.λa.fffffa -> 5
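(And in Python, with the same hypothetical names as before:)

PLUS = lambda m: lambda n: lambda f: lambda a: m(f)(n(f)(a))  # m f's on top of n f's

TWO   = lambda f: lambda a: f(f(a))
THREE = lambda f: lambda a: f(f(f(a)))
to_int = lambda n: n(lambda x: x + 1)(0)
print(to_int(PLUS(TWO)(THREE)))  # 5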
Next,
[M*N] := times m n := λm.λn.λf. n (m f) {i.e. ((n) (m f))}
Notice that the function being applied n times is (m f). So, we have (remembering back to 2nd grade) n sets of m f's, or n*m f's. Again, just as we want.
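(The Python sketch:)

TIMES = lambda m: lambda n: lambda f: n(m(f))  # apply the block "m f's" n times

TWO   = lambda f: lambda a: f(f(a))
THREE = lambda f: lambda a: f(f(f(a)))
to_int = lambda n: n(lambda x: x + 1)(0)
print(to_int(TIMES(TWO)(THREE)))  # 6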
Finally, exponentiation. I don't know if I'll work this into the second part... but we'll see:
[M^N] := exp m n := λm.λn. n m {or, un-η-reduced, λm.λn.λf. n m f}
This is very much like multiplication, only n is applied straight to m (without the little "m f" detour), so what ends up happening is we get n copies of m all multiplied together.
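(And the sketch, using the η-reduced form n m:)

EXP = lambda m: lambda n: n(m)  # n copies of m "multiplied together": m^n

TWO   = lambda f: lambda a: f(f(a))
THREE = lambda f: lambda a: f(f(f(a)))
to_int = lambda n: n(lambda x: x + 1)(0)
print(to_int(EXP(TWO)(THREE)))  # 8, i.e. 2^3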
Anyway, tomorrow I'll have a post about ordinals, and how to apply ordinal arithmetic to LC.
Saturday, July 26, 2008
On One (and Infinity)
So, I once again find myself puzzled by people who cannot accept that .9... = 1. I'm not sure exactly what it is that draws mathematicians to this puzzle; it very well could be the average person's confusion-- similar to how many theologians felt the need to waste their time refuting The da Vinci Code.
Anyway, I stumbled across this, a mostly well written essay discussing the topic, with the author ultimately coming to the conclusion that he isn't entirely convinced of their equality.
Avoiding the painful first paragraph (which had me expecting an entertaining piece of nonsense), his argument comes down to a rejection of attainable infinities-- viz., the philosophical position that infinity is outside of the realm of mathematics.
This is, I guess, a valid axiom, assuming of course that you're a platonist (or, as it turns out in this case, an intuitionist). A formalist can choose to reject the axiom of infinity (or any axiom dealing with infinity-- and thus, the idea of infinity), but he cannot claim it is wrong. Other philosophies of math run into similar ideological problems. Platonists can claim that infinity "doesn't exist", and I guess the average complaint with the .9... = 1 proofs comes down to "infinity doesn't exist!"-- an argument which only makes sense in a mathematical platonic context. Of course, Delaney's rejection is largely from the intuitionist perspective... I'll deal with that specifically in a bit.
Overall, I find it an annoyingly awkward position to take, and cannot believe that someone with the math training that Delaney has would take it. Allow me to explain:
First and foremost, we lose the concept of the decimal expansion (or maybe not...). This is a small loss for Delaney, as he rejects them anyway: "With great temerity, I still hold that any decimal expansion is never exactly equal to pi. The decimal expansion is simply an approximation of pi."
Besides the implicit rejection of any non-finite decimal expansion, this seems to be a confusion between number and numeral. A numeral is a symbolic encoding of a number-- that is, a symbol (or group of symbols) which is interpreted to mean a number. It is, for lack of a better description, the "name" of a number. Just as the word "one" is symbolic of the mathematical object 1, any written decimal expansion is a symbolic description of the number in question. So, yes, any written numeral describing pi (either "pi" or "3.14159..." or anything else) will not actually equal the number itself. However, pi is a number with a value on the real (well-defined) number line. And a decimal expansion (as a mathematical object, rather than a symbolic representation) will have the exact value of pi. Whether or not a human can "read" this value on a page is immaterial-- the mathematical object which is a decimal expansion is identical to the mathematical object which is left as a Greek letter. Again, they represent the exact same mathematical object. When doing algebra (or calculus, or anything else), you are working with mathematical objects, not any representation of the mathematical objects. It is only when the representation is ambiguous (e.g. when a number is rounded) that problems arise, and that is not because of the objects, but because the representation is imprecise. [note: In rare instances, you do work with the representation, but in such pursuits (aptly named "metamathematics"), the representation is treated as a mathematical object, and is subject to its own rules (like how working with the reals is different than working with vectors).]
Unless I'm mistaken, we are then forced to allow "infinite decimal expansions" as long as we allow the mathematical objects they are identical to. If, on the other hand, we reject these objects, we are rejecting the irrationals (a hefty price to pay), as well as most of the rationals: any number p/q, in lowest terms, where q has a prime factor other than 2 or 5 will be disallowed-- which is to say, the great majority of non-integer rationals.
We are left with a set of numbers which is not closed under any operation-- that is, we do not even have a group, unless we restrict ourselves to the integers... a boring mathematical universe indeed. Unless of course, we "diagonalize", which Delaney also rejects.
The point of that whole paragraph (which may have been lost somewhere) is that we either accept decimal expansions as valid mathematical objects which are equal to their fractional counterpart, or we reject almost the whole number line.
In the same way as he questions the nature of decimal representations, he questions "epsilon" the elusive little infinitesimal that he (correctly) calls a "logical entity". Now, he suggests that the separation of epsilon between two numbers (e.g. .99... and 1) should be taken into account. Contrary to his likely expectation, I'm not inclined to disagree. The problem, of course, is that saying two numbers are unequal in a continuum (e.g. the real number line) means there is a number between them. There is no number between them. The "difference"-- epsilon-- is a representation of exactly that idea: they are at the same spot on the continuum, thus there is no mathematical difference between them.
If epsilon had any actual properties in the real number system, we as the mathematical community would be glad to take these into consideration, but adding it has the same effect as 0-- that is to say, no effect at all.
His argument (in the Repeating Nines essay) really hinges on these two concepts: Epsilon, and unattainable infinities. Beyond that, he seems to be bitter because he's on "the losing side" of the debate between "logicists and intuitionists". Of course there is no "losing side" in the first place, as there are plenty of intuitionists doing math right now. Delaney apparently just has trouble working in systems with rules different from those he thinks are "right"-- even most Platonists I know can work in a system they think is untrue. Why was Delaney unable to finish?
Beyond that, he tries to claim that "logicists" (I assume this is a blanket term for formalists and platonists) focus on paradoxes as a way to construct a system... How is this possible? As far as I know, set theory is built to avoid paradoxes. Russell's paradox is a bit of a blow to Fregean set theory, but Quine's set theory deals with it quite well; it reduces the statement to nothing. Not a null statement-- but something entirely outside of the universe of discourse. Logicians don't focus on paradoxes; however, they are, as you are certainly aware, something that must be dealt with when they arise.
I offer, as "evidence for infinity" the fixed point combinator (because a set-theoretical notation will click with him about the same as the rest of the set theoretical notions have). Y Defined (in an untyped lambda calculus) as Y->λf.(λx.f(xx))(λx.f(xx)). I won't go into the details of the operation, but if we are given Yf, we get f(Yf) which becomes f(f(Yf) -> f(f(...(f(Yf))..)) [I know, my parenthesis are backwards from Church's.] So, Yf= f(Yf) = f....(Yf)... No matter how many f's are started with, an infinite number are attained; that is calling y infinity: infinity +1 = infinity + 1000 = infinity. So, would a functional representation, rather than a set theoretic notation soothe his aching soul, and allow him to accept transfinite theory? Probably not; I'm sure he'll describe some other absurdity which "proves" infinite is unattainable.
He can work in a system whose maximal cardinality is Aleph-0 all he wants, and I won't restrict that, but it doesn't mean infinity is any less real than one-- less applicable, yes, but just as real (which is to say, entirely a mental construct, with certain agreed-upon properties.)
I know, it's rambly...
Tuesday, July 15, 2008
Faithfulness...
God just pointed out to me a (sort of) new interpretation of Romans 8:35-39:
Who shall separate us from the love of Christ? Shall trouble or hardship or persecution or famine or nakedness or the sword? As it is written:
"for your sake we face death all day long;
we are considered as sheep to be slaughtered."
No, in all these things we are more than conquerors through him who loved us. For I am convinced that neither death nor life, neither angels nor demons, neither the present nor the future, nor any powers, neither height nor depth, nor anything else in all creation, will be able to separate us from the love of God that is in Christ Jesus our Lord.
**
I guess it's not so much a new interpretation as it is a new revelation. Once we have tasted of that love. Once we feel that deep bittersweet longing so acutely-- so much more acutely than the bitter sehnsucht we as humans are cursed with-- it is impossible to forget it. Try as our heart might, and despite any laziness, or any apathy, or hardship or distractions, something will always bring that love back into our heart, and like the eager bride before her wedding, we shake with excitement.
God's faithfulness is amazing. It's so incredible how he seeks us despite everything. If only I could consistently nurture that relationship, instead of getting apathetic, I would see more of the amazing things I've seen...