Arbitrary thoughts on nearly everything from a modernist poet, structural mathematician, and functional programmer.

Thursday, December 2, 2010

You...

I wrote this a while ago, and nearly forgot about it. Found it in an old forum I no longer post in:

Like every day, I've spent most of today doing math. Sheet upon sheet black with ink, dusty green chalkboards covered in arcane symbols; definitions and theorems that are far too elaborate to explain. For the first time in a while I remembered: I cannot convince myself that any of this is real. There are no infinite collections or compact spaces in the real world. There is no finite axiomatization for life. This is all meaningless symbols on paper.

You, however, are real. Your breath, your eyes, the taste of your lips, are ever so beautifully, painfully real. You have meaning-- you mean everything. But reality and meaning...

They scare me. Far, far too much. There is no reason, no logic behind it, but I am afraid. So I will stay in my fantasy world. I will miss you terribly, but infinity is easier to deal with than love.

Tuesday, October 5, 2010

10 things your barista should want you to know.

This post has been removed.


It was really bad... It was mostly a place for me to complain about something that really grates on me (the "ghetto latte"), and to talk about something that people don't seem to think about: unseen costs. The biggest cost for a business is operating costs, not the cost of the product itself. "Doing it yourself" costs a lot more than it may seem at first. Sometime soon(tm), I'll write something about the idea of unseen costs directly. In a less obnoxious and boring way.

Cheers.

Thursday, August 19, 2010

Hungarian math education

Since going to Hungary, I've been wondering why exactly math education in Hungary is so great; there has been no concerted effort to "improve curriculum" or any formal attempt to engineer the system, but Hungarian math education is fantastic, at least from high school on. The Hungarian math circle got started as something of a spontaneous cultural phenomenon, but I think there are some deeper cultural reasons that it sprouted.

Today I was thinking about Hungary. The things I miss, as well as the things I found annoying. One of the annoying things is the Hungarian mentality. In part because of 800 years of sidelining and oppression from almost all of their neighbors, and in part because of the depression which came from Soviet influence, Hungarians are very reserved, and wear a facade of depression. Along with this, the Hungarians picked up a German practicality from the Habsburgs. As a result of culturally enforced depression and culturally enforced practicality, open displays of excitement and passion are frowned upon. If you don't believe me, spend a week or two in Budapest, notice how easy foreigners are to spot (hint: they're the loud people who laugh in public), and watch how silent and serious the children are.

Hungarian mathematicians, as opposed to most other Hungarians I met, are very excited, passionate people. I think a small group of young students who were interested in math, and who didn't give a damn what other people thought of them, were very public about their passion for math, and this became a sort of counter-culture movement in post-war Hungary. Youth who wanted to open up found this community a natural place to revolt against the sullen Hungarian attitude. As with most "revolutionary" cultural movements, this group pushed the boundaries. A lot of modern methods and ideas in combinatorics and set theory came out of this group when its members were still pretty young.

The math culture in Hungary has perpetuated itself quite well. Partly, this is the natural result of passion being imparted to the students by the instructors, but I think it's largely a continuation of the revolt against the Hungarian mentality: math remains culturally acceptable, but at the same time disillusioned youth can express themselves freely in a culture which continues to uphold an image of stoic depression.

(Or maybe I'm being too hard on Hungarians... Bocsanat, Magyar!)

Tuesday, July 6, 2010

LaTeX

It seems the LaTeX engine I use on this blog is broken... I'll be replacing it soon, assuming it doesn't fix itself first.

Structuralist philosophy and methodology in mathematics

There is a philosophy of mathematics (or rather a collection of related philosophies) called structuralism. In brief, a structuralist believes that mathematical "objects" are positions in a structure, rather than existent objects. This is a rather incohesive and shallow ramble about structuralist methodology and philosophy in mathematics.

Since I hold a rather formalist, and somewhat classical platonic view of mathematics-- which is to say, I do not believe mathematical notions exist in the real world in any real capacity, but rather in some external abstract universe-- I intend to talk about brands of structuralism which do not in any way invoke "reality". Perhaps mostly because of my biases against "the real world", I cannot rightly fathom philosophies of mathematics which invoke the real world in any way.

Anyway, it seems that structuralism as a methodology pre-dates structuralism as a philosophy. What is a "structuralist methodology"? It is the approach which emphasises structures over systems. To use some language from logic, a structural methodology approaches theories, instead of models. A simple example is the tacit tendency to forget the difference between isomorphic groups: $\mathbb{Z}/3\mathbb{Z}$ is "the same group" as $\mathbb{Z}_3$. From a purely set-theoretic or material point of view, this is not correct: the first group has cosets of $3\mathbb{Z}$ in $\mathbb{Z}$ as group elements, and the second has $\{1,2,3\}$ as its underlying set. But the two groups are isomorphic, which means that they act the same as groups. The tendency to forget the (quite irrelevant) difference between the two groups is the heart of structuralist methodology.
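A quick Haskell illustration (my own sketch, not part of the original post; all names are invented): two different carriers for the 3-element group, and a map between them which a brute-force check confirms is a homomorphism. Since it's visibly a bijection, it's an isomorphism, and the structuralist shrug is to treat the two types as the same group.

```haskell
-- One carrier: an enumerated type with addition mod 3.
data Z3 = Zero | One | Two deriving (Eq, Show, Enum, Bounded)

addZ3 :: Z3 -> Z3 -> Z3
addZ3 x y = toEnum ((fromEnum x + fromEnum y) `mod` 3)

-- A second, superficially different carrier: integers mod 3.
newtype ZMod3 = ZMod3 Int deriving (Eq, Show)

addMod3 :: ZMod3 -> ZMod3 -> ZMod3
addMod3 (ZMod3 x) (ZMod3 y) = ZMod3 ((x + y) `mod` 3)

-- The candidate isomorphism.
phi :: Z3 -> ZMod3
phi = ZMod3 . fromEnum

-- Exhaustive homomorphism check over all nine pairs.
isIso :: Bool
isIso = and [ phi (addZ3 x y) == addMod3 (phi x) (phi y)
            | x <- [minBound ..], y <- [minBound ..] ]
```

Everything that matters about either group is preserved by phi; only the irrelevant "material" of the carrier differs.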

The Bourbaki group was one of the first to emphasize "high abstraction". Their methods are similar in spirit to the modern category-centric structural approach. While I have never read Bourbaki, all the information I can find leads me to believe that the set-theoretic foundation is a result of two things: when Bourbaki started, set theory was the only thing to work with (category theory had not yet been invented), and "Bourbaki is relentlessly linear in its exposition". With this linearity in mind, changing to a categorical perspective late in the game was out of the question.

The structural approach permeates mathematics, particularly in algebraic areas, and in almost all contemporary approaches to foundations. Isomorphic structures are taken as identical; in category theory (and especially in higher category theory), there is a real push to eliminate notions which do not interact sensibly with equivalence-- equivalence is a weaker notion than isomorphism, but it is still considered a "good enough" notion of equality.

With the ubiquity of structural methodologies in mind, it should be no surprise that a closely related philosophy should spring up. I'm only surprised that it took so long (at least 40 years from the start of Bourbaki) to really pin down "structuralism". A structuralist philosophy takes this methodology as a philosophical starting point: it is not simply productive to study mathematical ideas from a structural viewpoint; mathematical objects are structures. 3, for example, is not a specific set (e.g. {{},{{}},{{},{{}}}} or {{{{}}}}), but rather a convenient short-hand for any object occupying a "3-like" position in a structure. This seems "obvious" to me, since any structure which satisfies the Peano axioms will have natural number arithmetic. My formalist tendencies are at work here; the notion of an intended model seems somewhat foreign to me. There are many, many ways to construct the reals within ZFC; if they all act like reals, then what is the "correct" model? All statements true in a specific model, but not in others, are not part of real analysis; the "correct interpretation" is one where real numbers are taken as sui generis objects.
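To make the "position in a structure" idea concrete, here is a hedged Haskell sketch (my own; the class and all names are invented for illustration): any type with a Peano-style zero and successor has its own 3, and nothing about three depends on which representation you pick.

```haskell
-- A Peano-style interface: a zero and a successor operation.
-- (No laws are enforced here; this is only an illustration.)
class Nat n where
  zero :: n
  suc  :: n -> n

-- One model: the ordinary integers.
instance Nat Integer where
  zero = 0
  suc  = (+ 1)

-- Another model: a free, purely symbolic construction.
data Peano = Z | S Peano deriving (Eq, Show)

instance Nat Peano where
  zero = Z
  suc  = S

-- "3" is defined once, as a position reachable from zero,
-- and means the right thing in every model.
three :: Nat n => n
three = suc (suc (suc zero))
```

Asking which instance is "really" 3 is, on the structuralist view, a category mistake; three is the same position in both.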

Finally, a change of topic. There seems to be a deep relationship between structuralism and phenomenology, which seems under-explored. Levinas, for one, makes a big deal of "existence without existents"; that is, being without thing-ness. This is exactly the idea of structuralism: we are studying mathematical notions without reference to a specific object to which the notion applies.

Monday, June 21, 2010

With You

I originally wrote this in Ghent (or Brussels?) in early January, but I've had trouble tying it all together. I'm still not completely satisfied with it, but I am content, so here goes.

Edit: Ugh. There are a few lines which just won't work... the first stanza is so separate from the rest of the poem; I am sorely tempted to cut it out entirely, but I have very difficult-to-explain reasons for wanting to keep it.
Edit again: Yeah. I think the first stanza is the biggest problem. Until/unless I find a way to express what the first stanza is supposed to express, consider the poem to start with "Through days and day-dreams".
Edit again (again): I've removed the first stanza from this post.
****

Through days and day-dreams, I walk alone,
along ancient ways on cobblestones.

Yet in all my wandering, there is one thought
which fills my heart:
I would like, at last,
to be warm at Home.
In Bed.

Friday, June 18, 2010

A few thoughts on the officiating in the 2010 World Cup...

Edit: Sorry about the lack of linkage... I'm too lazy to dig back up all my sources

I'll try not to bore you too much with sports, but I'm super-stoked for the World Cup, and as always, I have some skepticism about some of the officiating.

I was excited last week when the officials were getting everything right. There were a number of times (in each game) when I said "are you kidding?" and then, watching the replay, realized that the referee had made the correct call on a hard-to-see play. My hat's off to them.

Unfortunately, this hasn't kept up. A few days ago (I've forgotten which game), a goal was scored by a clearly offside striker. Why wasn't it called? The assistant referee was 3 or 4 yards up-field of the last defender. This is possibly pardonable in a club match, but not at the international level, and especially not in the World Cup finals.

Then today, we see an inconsistent, booking-happy referee in the Serbia vs Germany game. I'm really skeptical about most of the cautions he made. Regardless, he was inconsistent the whole match, and clearly was approaching his role with a very different mentality in the second half.

And of course, it'll be a while until people shut up about the USA-Slovenia draw... From the first moment I was questioning the referee-- he was inconsistent, missed some clear infringements, and called some spurious "fouls" where there was little contact and a clean challenge. There were a number of nearly identical little pushes from behind throughout the match. One earned a caution, 2 (I think) earned the free kick they deserved, and at least 2 earned an absent-minded turn of the head.

The first booking had me laughing. Findley's yellow for a "handball" (clearly unintentional and off of a hand in a natural position-- viz, clearly not an infringement according to the Laws of the Game) had me groaning. The third booking was fair. The late-game caution near midfield was ever-so-slightly iffy. Had the referee not lost the benefit of the doubt with sub-par decisions, I would hardly even bring it up, but his performance all game was questionable enough that I don't feel bad questioning it.

The booking on Josy's break-away was hardly cautionable-- the only reason I see for a booking was stopping a "clear goal-scoring opportunity", so if there's a booking, the guilty player should be sent off.

Finally, we get to the decision that Americans will be moaning about for months to come-- the spurious infringement during Donovan's (would-be goal-scoring) free-kick. Firstly: the announcers were wrong; it was not offside, the referee explicitly said it was for a foul. He did, however, refuse to say what sort of foul, and who committed it.* Pictures of the play show what appears to be no less than 4 Slovenian defenders fouling American players, and I've seen only one picture which shows what may be an American foul-- Bocanegra appears to have his arms around a defender (Pecnik). In that picture, it looks to me like Bocanegra is falling, and (instinctively) lifting his arms to grab something. Watching the replay only confirms that pictures can never capture what's going on before set-pieces-- a pair of the apparent Slovenian infringements clearly were not, and I can only find 2 certain infringements: Radosavljevic bear-hugging Bradley and Cesar holding back DeMerit. Also, Bocanegra (as I suspected) is laid out at the beginning of the play (he's the guy rolling around when Edu shoots). I can't tell how legitimate his fall is, but considering Pecnik doesn't come down with him, he is clearly not holding Pecnik.

In any case, no less than 2 fouls in the action of the play are missed by the referee, and we have no idea what it was he did see.

*Two notes: first, it is officially (according to something I read on the internet...) a foul against Edu. Considering he hardly touches the only defender who's near him, the only possible call is offside. Which the referee denied. And which it clearly wasn't.

Second, apparently this coyness is within the referee's rights. Even in the post-game report, referees are not required to say (i) who committed an infringement, or (ii) what the infringement was, unless the player is booked.
During the game, fine, but in the post-game report? That is unacceptable, precisely for situations just like this. No one knows what the call was, and apparently, this includes the esteemed Mister Coulibaly. Fans need to know what it was that he saw.

Unlike many (American) fans, I won't say that Coulibaly should be investigated for any sort of gross misconduct-- he was hardly fair to the Slovenians, and this is not the first match where his decisions have been loudly questioned-- and I won't say that FIFA should rectify anything. I also (unlike many fans the world over) won't say that new technology should be used in-game to rectify wrong decisions-- part of the beauty of soccer is the pace, and I'd prefer FIFA avoid bad decisions by being more vigilant about using referees with a history of very controversial decisions (3 of the 5 Africa Cups he's refereed in!), than by slowing the game down. But I will say this:

  • FIFA needs to force referees to say who committed an infringement, and what it was, at least in a post-game report-- allowing the referee to archive these after re-watching the game, so as not to slow down the game.
  • FIFA needs to make public statements whenever a referee consistently makes controversial calls, and whenever a referee makes a highly controversial call-- either defending the referee by saying something along the lines of "You may not agree with his interpretation of the Laws, but we consider it acceptable", or admitting that the referee was wrong. There should be a clear procedure for such situations (for petitioning for a statement, and for the subsequent review).
  • In the case that the referee is wrong, FIFA needs to take punitive action. If a player acts in an unacceptable manner, they are typically fined, and often suspended for longer than the one-game suspension that comes with a send-off. Similar measures should be instituted for referees whose decisions are not within acceptable interpretation of the Laws.
Edit (About 20 minutes after posting): It seems FIFA is in fact reviewing Coulibaly's performance, and he will likely not be assigned to further matches in the tournament. This is a good first step, but I'll be waiting for a statement from the tournament organizers... I'm also interested in hearing from FIFA regarding the censorship claims.

Monday, June 7, 2010

Art and Meaning

A recent Abstruse Goose reminded me of the preface to The Picture of Dorian Gray. Of course, people deride critics all the time. Especially modern art critics. I tend to defend art critics even though I don't understand them, because I know they are working from a different framework, a different background and dialogue than most people (including myself), and that this framework is not necessarily wrong.

But this Abstruse Goose reminded me of something that I don't like about... well, most people who discuss art, and this thing is what reminded me of Dorian Gray: Trying to find meaning in art is to murder it. The criticism and analysis of art should concern itself only with the aesthetics of the art-- the emotional complex that the art inspires. Yes, a piece of art can have a moral, or a meaning, but if the piece does not stand without that moral, then it is a failure as a work of art. The meaning should be extracted and the work discarded.

If something can be said, it can be said directly. "But," you say, "what about metaphors and parables?" Yes, they have their place, but they are not art, and should not be treated as such. Pilgrim's Regress is not a good piece because it tells the tale of someone coming to God; it is a good piece because it captures the beauty, the sehnsucht that helped to define Lewis's faith. Without capturing this beauty, it would be nothing. Speaker for the Dead is not a good piece because it questions the neo-colonialism inherent in anthropological methodology; it is a good piece because it captures so many emotional struggles.

I mentioned two novels, because novels most often do have a meaning; but the meaning is not what makes them good art. Aesthetic considerations are what make or break a novel. On the other hand, most visual art, and most good poetry, do not have a meaning, and trying to suck meaning out of a poem or picture is to fail to see the piece as art. This is unforgivable.

You may say that understanding the meaning helps to appreciate the aesthetics. Take my example of Pilgrim's Regress: is the aesthetic appeal not better appreciated by understanding God and the path to salvation? Yes, it surely is, but this understanding is assumed by the work, not expressed by the work. Someone approaching the work from a framework similar to Lewis's own will be better able to appreciate the work. But this says nothing about meaning. Understanding the background of a work, understanding the dialogue that the work is a part of and the "etymology" (if you will forgive the abuse of terminology) of the work will give someone a better appreciation for the aesthetics of the work. But to say that this is achieved by studying the meaning of the work is to commit cum hoc ergo propter hoc. Just because understanding the background helps both to understand the meaning and the aesthetics does not mean that understanding the meaning helps you to understand the aesthetics.

By the way, this is why you will rarely (never?) see me engage in a semantic analysis of a poem; I will study it syntactically, and I will study the way images are invoked, but I will not attempt to understand the author's intent, or understand the meaning of a poem. If the intent is anything but to create something of beauty, then I don't care.

Saturday, May 22, 2010

Another poem.

Can't think of a title:

A man. A woman.
Sitting on the steps
of a Church in London.

A stifling silence.
Seconds fade. The man sighs,
"So what are we to do?
Will you be my tragic muse?"

A mirthless smile.
A soft sad kiss on the shoulder.
And a heavy, hesitant stance.

A man. Sitting
silently on the steps
of a churchyard, thinking.

Categories of paths; functors and natural transformations...

Somehow blogger screwed up, and a post I started (but never finished) a long time ago is "older" than another post of mine. It's about path categories, and higher categories... Hopefully worth a read?

Thursday, May 6, 2010

Mu/Wu...

Mu is a Japanese word. I'll let you look it up; that's easier than my explaining it.

Programmers tend to like to re-imagine koans and Buddhist stories; so here's a classic question re-envisioned.

Prelude> Does the dog have the Haskell nature?

Couldn't match expected type `Human' against inferred type `Dog'.
In the first argument of `Haskell nature?', namely `dog'
In the expression "Does the dog have the Haskell nature?"


Cheers.

Monday, April 19, 2010

Categories of paths; functors and natural transformations thereon

Remember when I said I'd post something about natural transformations and path categories? Well, here it is.

I talked briefly about paths in that post. Let's talk about them a little more. We want a way to talk about a path from a to b in an arbitrary topological space. In $\mathbb{R}$, this is easy enough: take any portion of a curve which starts at a and ends at b. While that's easy to understand, it's a bit unwieldy to work with directly. But we do know that a curve is some sort of a function (think back to 8th grade algebra). So, let's decide this curve is a function. And let's say $f(0)=a$, and $f(1)=b$, and for every $c\in [0,1]$, $f(c)$ lies on the curve we just talked about.

There are two important things to notice here:
  • We now know the domain of our function: the unit interval (let's call it $I$).
  • This curve should be continuous, or else we have to jump... and what kind of a path is that?


There's something great about these two facts: Where did I mention that our path is a path on $\mathbb{R}$? Nowhere. This means that we can replace $\mathbb{R}$ with any topological space.
So: A path on a topological space $X$ is a continuous function $p:I\rightarrow X$, and we call $p(0)$ the starting point, and $p(1)$ the ending point.

I mentioned categories in the title, so you may be wondering by now where the category comes from. Let our objects be the points of $X$, and let a morphism from $a$ to $b$ be a path with starting point $a$ and ending point $b$. Is this actually a category? Let $id_a$ be the constant function $p(c)=a$ for all $c\in I$, and let composition be concatenation: $q\circ p = p*q$, where $(p*q)(c) = p(2c)$ if $c<1/2$ and $q(2c-1)$ otherwise.

We run into one big problem here: composition isn't associative, and composition with the identity isn't quite right... but they're both close. You hit all the same points in the right order, but the "speeds" aren't quite right.
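For the functionally inclined, here's a hedged sketch in Haskell (my own, not from any standard library; Double stands in for the unit interval, and nothing enforces continuity or the endpoint-matching condition) of paths and the double-speed concatenation, which makes the associativity failure easy to see pointwise:

```haskell
-- A "path" in x: a function from the unit interval (approximated by Double).
-- This only illustrates the reparametrization issue; it is not a real
-- topological construction.
type Path x = Double -> x

-- The would-be identity path at a point: constant at a.
idPath :: x -> Path x
idPath = const

-- Concatenation: run p at double speed, then q at double speed.
-- Only a genuine composite when p 1 == q 0.
conc :: Path x -> Path x -> Path x
conc p q c
  | c < 0.5   = p (2 * c)
  | otherwise = q (2 * c - 1)
```

Note that conc (conc p q) r spends a quarter of its time on p, while conc p (conc q r) spends half: the same points, in the same order, at different speeds. That is exactly the failure of strict associativity which passing to homotopy classes repairs.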

We can rectify this with homotopies, but if we use just any homotopy, we'll get pretty boring spaces... so we do what topologists always do to their homotopies when they need to restrict them: fix the end points. So, two paths equivalent up to homotopy with fixed endpoints are now the same.

This gives us a topological space as a category. If this is a category, we should be able to get continuous functions as functors. Yep!
If $F:X\rightarrow Y$, a path $p$ will be sent to the path $F\circ p$. (Exercise: Check that this is indeed a functor, with our wishy washy paths.)

I'm not sure if I've talked about natural transformations. A natural transformation intuitively is a way to transform one functor into another, at the objects.
For two functors $F,G:C\rightarrow D$, a natural transformation $\eta:F\rightarrow G$ is a collection of morphisms $\eta_x : Fx \rightarrow Gx$, one for each object of $C$. These morphisms need to interact properly with $F$ and $G$. Namely, if $f:x\rightarrow y$ in $C$, then
$\eta_y\circ Ff = Gf\circ\eta_x$. In other words: starting from $Fx$, following $Ff$ and then the component $\eta_y$ takes you to the same place as following the component $\eta_x$ and then $Gf$. This must happen everywhere.

One way to understand what's going on is this: There are 3 worlds, $C$, and two other worlds living inside of $D$: $F$-world, and $G$-world. The components of a natural transformation allow us to travel from $F$-world to $G$-world, and if $\eta$ is truly a natural transformation, then we can travel from $F$-world to $G$-world and then around $G$-world, or make the same trip in $F$-world and then cross over to $G$-world, and either way, we end up in the same place.
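Since I'm a functional programmer, here's a Haskell analogy (mine, not from any textbook treatment of this post's material; safeHead is a common example name, not a Prelude function): a polymorphic function between two functors is a natural transformation, and the naturality condition says that mapping then transforming equals transforming then mapping.

```haskell
-- A natural transformation from the list functor to Maybe:
-- one component safeHead :: [a] -> Maybe a for every type a.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Naturality at f = (* 2): fmap f . safeHead == safeHead . map f.
-- Checked here only at one input, as an illustration.
naturalityHolds :: Bool
naturalityHolds =
  fmap (* 2) (safeHead [1, 2, 3 :: Int]) == safeHead (map (* 2) [1, 2, 3])
```

Here the lists are "$F$-world", Maybe is "$G$-world", and safeHead is the crossing between them; either route around the square lands in the same place.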

What do natural transformations look like when our categories are these path-spaces? Let's see if we can't figure out what is going on. First, let $F,G:X\rightarrow Y$, where $X$ and $Y$ are topological spaces. Now let's look at a path $p$ with endpoints $a$ and $b$. This will give us two paths in $Y$, $Fp:Fa\rightarrow Fb$, and $Gp:Ga\rightarrow Gb$. We need a way to turn the first path into the second... How do we do this? Take a homotopy $H:X\times I\rightarrow Y$, where $H_0=F,\; H_1=G$. If we take $H\circ (p\times id)$ (i.e., we take our path, and then apply the homotopy), we get a homotopy from $Fp$ to $Gp$. If we restrict this to the starting point $a$, we get the path $t\mapsto H(a,t)$ from $Fa$ to $Ga$, and likewise, restricting to $b$ gives a path $Fb\rightarrow Gb$. So if $H$ is truly a homotopy, it defines a natural transformation from $F$ to $G$. Likewise, if we have such a natural transformation, we can define a homotopy.

So natural transformations are homotopies. I'm going to stop here, but a fun remark: natural transformations make Cat into what's called a 2-category. So our "2-dimensional" homotopy structure (i.e., paths as morphisms and homotopies as natural transformations) turns Top into a 2-category. We can keep going: homotopies between homotopies make 3-morphisms, homotopies between homotopies between... form n-morphisms, and suddenly we have some notion of $\infty$-category. Moreover, homotopies (and paths) are invertible, which means we actually have an $\infty$-groupoid.

And hopefully that helps motivate some of (higher) category theory. Cheers.

(Disclaimer: There may be gross inaccuracies in this post... please let me know if you find any)

Saturday, April 10, 2010

Interesting notes on Dedekind

I'm reading through Dedekind's The Nature and Meaning of Numbers (as translated by W. Beman), an early treatment of set theory. I find the following convention interesting:

A system [set] $A$ is said to be part of a system $S$ when every element of $A$ is also element of $S$. Since this relation between a system $A$ and a system $S$ will occur continually in what follows, we shall express it briefly by the symbol $A\subset S$. The inverse symbol $S\supset A$, by which the same fact might be expressed, for simplicity and clearness I shall wholly avoid, but for lack of a better word, I shall sometimes say $S$ is whole of $A$ [$S$ contains $A$], by which I mean to express that among elements of $S$ are found all the elements of $A$. Since further every element $s$ of a system $S$ by (2) can be regarded as a system, we can hereafter employ the notation $s\subset S$.

(The bold is mine, and the symbol used by Dedekind is not, in fact, $\subset$, but the same symbol is used throughout. The [...] is also my own clarification.)

The question is, of course: is he confusing the two notions $A\subset S$ and $s\in S$, or is he just abusing notation? Considering the context, I doubt the latter, so it would appear he is confusing the two notions. On the other hand, his reasoning seems to be clear throughout, and points (1) and (2) (the text of which I will not force upon you) seem to suggest that he understands the difference between the idea of "system" and "thing" (as he puts it) well enough that I find the first alternative likewise hard to accept. Although perhaps not, as (2) may provide the source of his confusion. He says "For uniformity of expression it is also advantageous to include the special case where a system $S$ consists of a single (one and only one) element $a$, i.e., the thing $a$ is element of $S$, but every thing different from $a$ is not an element of $S$." This seems to suggest that the confusion is not elementhood versus subsethood (forgive the abuse of the English language...), but rather $a$ versus $\{a\}$. Either way, it's fascinating, and it seems that this confusion (if that's what it is) does not pop up in the rest of the text.

Another interesting point is that I see the first (that I know of) use of a few common words and notations. A few that come to mind:


  • The word identity to mean what is commonly meant in the mathematical community. That is:
    The simplest transformation of [function from] a system is that by which each of its elements is transformed into itself; it will be called the identical transformation of the system.

  • . for composition: "This transformation [the composition] can be denoted briefly by the symbol $\psi .\phi$ or $\psi\phi$." This same paragraph has the first proof I've seen that sets with functions form a category... albeit not in those words, and since his set theory is naive, it is not technically correct (as it is not even a consistent system!)


There also appear to be (at least) two flawed proofs.
He shows first that $f(A)\subset f(B)\Rightarrow A\subset B$, and from this concludes that $f(\cap_{i\in I} A_i) = \cap_{i\in I} f(A_i)$. (You can show the first is false by taking some $s\in S\setminus B$ which maps into $f(B)$; then $A=\{s\}$ is a counterexample. Actually, the second is quite correct, assuming the first statement...) Although again, I'm not quite certain: this theorem appears where he is discussing bijections (which he calls "similar transformations"), which might lead one to believe he means only the case of bijective functions, but in every other theorem in the section he is careful to point out when the function is supposed to be bijective. Further, it appears every function is a priori surjective up until the next section.
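To make the counterexample concrete, here's a small Haskell check (my own construction, using Data.Set from the containers library; the sets and function are invented for illustration): a constant function with $A=\{1\}$ and $B=\{2\}$ has $f(A)\subset f(B)$ while $A\not\subset B$.

```haskell
import qualified Data.Set as S

-- A deliberately non-injective map: everything goes to 0.
f :: Int -> Int
f _ = 0

a, b :: S.Set Int
a = S.fromList [1]
b = S.fromList [2]

-- f(A) is a subset of f(B) (both are {0})...
imageSubset :: Bool
imageSubset = S.map f a `S.isSubsetOf` S.map f b

-- ...but A is not a subset of B.
setSubset :: Bool
setSubset = a `S.isSubsetOf` b
```

The implication does hold when f is injective, which is why the surrounding discussion of bijections matters.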

While I'm picking apart such a crucial text, I might as well continue complaining: the translation is also infuriating at times as it translates phrases such as dann ist $A$ and dann gibt es as "then is $A$" and "then is there", rather than the more natural "then $A$ is" and "then there is". I do like Miltonic inversion, but this is hardly poetic writing...

Sunday, April 4, 2010

On algebra

Here, someone posted a question about "what algebraists do."

I think the question is interesting, and I like my response (man, does my voice sound good... or something), so I'm posting it here. As the discussion progresses, I'll continue to update this post.

*****
(forcesofodin)
Seems like most of the math majors at my school call themselves algebraist. I really am unsure what an algebraist does. It seems like they're the mathematical equivalent of biologists, observing, categorizing, all the while linking categorizations and members thereof together in new (sometimes surprising ways). But having a name and label for everything (it's been done with finite groups I believe) seems to uninteresting a goal for so many people to be algebraist. Indeed, over categorization and labeling breeds repugnant amounts of technical terms. I know some people like to name-drop with technical terms, but to me it seems more beneficial working to not use the technical terms, to be able to explain to those without the background. Even our major tools, the morphisms, are just ways of categorizing new groups/rings into a variety of already encountered sub types of rings/groups. Such a goal would be wholly useless for someone doing analysis on differential equations.

*****

(me)
In the grand scheme of things, what mathematicians do is categorize and describe increasingly abstract structures. This isn't a pursuit unique to algebraists. The Poincare conjecture was part of a classification movement which is similar in spirit to the classification of finite groups: What manifolds are diffeomorphic to R^n? To S^n? Through the 20th century, you saw the same push for this classification as you did for FSGs. You also see similar attempts to classify things in graph theory-- there are two "forbidden" minors for planarity, but there are 33 (I think?) for the projective plane, and hundreds for other spaces; graph theorists are spending a good deal of time categorizing embeddability.
When you look at category theorists (who I consider to be algebraists...) you see that they aren't categorizing (uh... sorry) anything any more than anyone else-- in fact higher dimensional category theorists are just starting to really figure out what exactly it is they're trying to talk about; they don't have a whole lot of time to worry about how to taxonomize these things.

Regarding term-dropping: Think of terms like Hausdorff, regular, normal, and compact in topology (and continuous, uniformly continuous in analysis); these are all convenient shorthands that say "the object we are looking at satisfies some extra properties." These extra properties give us information about the structure we are looking at. Would you really rather I say "Let G be a topological space for which every open cover has a finite subcover" every time I talk about compact spaces, or would you rather I say "Let G be compact" and move on to what I'm trying to say?
Yes, there are people who like to drop big words to feel good about themselves, but the point of these abstract, esoteric definitions is not to be esoteric or pretentious-- the point is to get past the things we see over and over again, and move on to what we're trying to talk about. The words, like any words, are a way for us to communicate information efficiently. Because mathematicians work with new structures all the time, we have to also be in the business of creating language. Since we are trying to describe structures for which there has never been a need for words, by using other structures for which there has never been a need for words, anything we tried to say would very quickly become unruly if we didn't have a quick way of saying it.

My favorite example recently is from ETCS (a structuralist set theory): the axioms for it can be very conveniently stated "The category of sets is a well-pointed topos with a natural number object satisfying the axiom of choice." If you know what a well-pointed topos is, what a natural number object is, and what the axiom of choice is, then you understand the axiom system. Compare that to ZFC-- while the ZFC axioms might be easier to pick apart (explaining the whole axiom system for ETCS in words that "any" mathematician could understand immediately would take... a while), a number of mathematicians are familiar with all 3 of the things needed to understand that axiom (at least, as familiar as they are with the formalism of ZFC) from other areas, so this sentence conveys a good deal of information-- so long as you have the language. It allows someone talking about ETCS to move past the definition, and get to "real" mathematics quicker.

Anyway, onto your question "what does an algebraist do?"
That's a difficult question, in large part because "algebraist" is a much vaguer term than "analyst" or "topologist". A category theorist could be called an algebraist, someone doing finite group theory will be using very different methods than someone doing infinite group theory, and both work with completely different structures than someone doing ring theory or Galois theory.
So the question becomes: what about a pursuit makes it "algebraic"? I would say the focus is on some notion of transformation. An action is "algebraic" if it involves pushing some object through a transformation to see what happens. An algebraist studies the way these transformations interact with each other. Turning it around "algebraic [insert mathematical field here]" is the study of a given class of objects (those of the mathematical field we are "algebra-izing") by studying how the objects move under these transformations.

So, I would say an algebraist studies transformations. This is my principal reason for calling category theorist algebraists: when it comes down to it, they aren't studying categories, they are really studying functors and natural transformations-- ways that categories can be transformed.

Also, you say

But having a name and label for everything (it's been done with finite groups I believe) seems too uninteresting a goal for so many people to be algebraists.


Interestingly, there was a discussion about the classification of finite simple groups on the FOM mailing list, in which someone said John Conway was "pessimistic" about the classification: by which he meant that Conway was pretty sure the classification was complete. So mathematicians spend a good deal of time classifying things, but really, the goal isn't to classify things, it is to understand the structures that we see. The classification is a (possibly unfortunate, possibly fortunate) side effect.

Cheers,
Cory

*****

I failed to respond to the following statement in the above, and I'm feeling rather philosophical (and not particularly sleepy... and also, apparently, verbose) today, so I'll say something about this.

Even our major tools, the morphisms, are just ways of categorizing new groups/rings into a variety of already encountered sub types of rings/groups. Such a goal would be wholly useless for someone doing analysis on differential equations.


That's not at all what morphisms are. A morphism from an object A to an object B is a way of saying that you have a relation between two objects-- it means you can say something about B by looking at A (or, just as often, something about A by looking at B). The beautiful thing about morphisms is that they show up everywhere: functions are morphisms of sets, homomorphisms are morphisms of (algebraic) structures, continuous functions are morphisms of (topological) spaces, paths are morphisms of points (in a topological space), homotopies are morphisms of continuous functions, proofs are morphisms of propositions, functors are morphisms of categories, natural transformations are morphisms of functors (in more than one way)... the list goes on; an example which is close to home at the moment is morphisms of graphs: a k-coloring of G is a morphism from G to the k-clique*.

In fact, whenever you have a transitive, reflexive relation, you have morphisms, and vice versa. The idea of morphism has very much permeated all of math. Even if it's not (explicitly) being used in an algebraic sense, this categorical language is becoming more and more common, because it very nicely captures something all mathematicians do: apply a certain type of function to our objects. What type of function? One that preserves the "interesting" structure of our object. I find it hard to believe that such a general and pliable notion is useless for any mathematician.


*There are some really great results that prove the colorability of whole classes of graphs, simply by making use of composition of morphisms, and apparently graph homomorphisms are being used to precisely and neatly say things that could only be said using rather messy and approximate arguments before.
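The coloring-as-morphism claim is easy enough to check directly. Here's a quick sketch in Python (the names `is_coloring`, `cycle5`, etc. are mine, purely for illustration): a k-coloring of G is exactly a vertex map into the k-clique that sends edges to edges, which for a complete graph just means adjacent vertices get distinct colors.

```python
# A k-coloring of G is a graph homomorphism G -> K_k: it must send
# edges to edges, and in the complete graph K_k an "edge" just means
# "two distinct colors". So checking a coloring is checking the
# homomorphism property. (Names here are mine, not standard.)

def is_coloring(edges, color):
    """True if `color` is a homomorphism into a complete graph."""
    return all(color[a] != color[b] for a, b in edges)

# The 5-cycle: 3-colorable, but not 2-colorable (it's an odd cycle).
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

good = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}   # a valid 3-coloring
bad = {v: v % 2 for v in range(5)}       # fails on the edge (4, 0)
```

Here `is_coloring(cycle5, good)` holds, while the attempted 2-coloring fails precisely because there is no morphism from an odd cycle to $K_2$.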

*****
(forcesofodin)
fair enough, I wish you had taught me algebra.

*****
(pseudonym)
There's so much more to algebra than groups, rings and fields! In broad terms an algebra is just a pair $(X,\Omega)$, where $X$ is a set and $\Omega$ is a set of operations of finite arity on $X$, in which a number of additional rules may hold governing the actions of the operations. The additional structure that can be placed on a general algebra, such as demanding that certain identities hold in the application of sequences of operators (e.g. associativity etc.), makes the concept of an algebra very flexible in what it can be used to model. Along with the familiar objects mentioned above, algebras have applications in order theory (lattices) and logic (boolean algebras with operators, cylindric algebras etc.); theoretical computer scientists can even use algebra to describe the way computer programs work (Kleene algebras); and there is plenty more besides these examples.

With regard to the terminology: on an undergrad course it can seem like it's just there for its own sake. You prove a lot of stuff that seems like busywork. But this is just because even relatively advanced undergrad/beginning grad courses are really only introductions. They're trying to give you an overview of the tools that are available, but they rarely have time to motivate them by going into the problems from which the definitions emerged.

*****
(me)
fair enough, I wish you had taught me algebra.


No you don't; I really don't have the background in algebra I should, considering the amount of time I spend raving about it...
(If only I spent that time doing it...)

Thanks for your post, pseudonym, that's a really important point; it also explains why "algebraic combinatorics" focuses so much on lattice theory. (At least, if my description of "algebraic ___" is correct in general.)

Also,

But this is just because even relatively advanced undergrad/beginning grad courses are really only introductions. They're trying to give you an overview of the tools that are available but they rarely have time to motivate them by going into the problems from which the definitions emerged.


This is definitely the hardest part of math education, and is also one of its biggest problems (although there may not be a good solution). The step from solving exercises to original math is really more of a leap, and one with which I am currently floundering. Were it somehow possible to introduce these motivating situations sooner, I think this leap would be easier to make, as students would get to see why we do it that way instead of some other way.

I think this shows up in topology a lot; the definition is significantly more abstract than anything most students have seen in analysis at that point, and some understanding seems to get lost along the way. The number of questions on Math Overflow revolving around "Why is topology defined this way?" is some interesting evidence for this.

*****

(pseudonym)
When I look back at my undergrad days I can see how several of the tutors tried to work motivation and exposition into their problem sets, but at the time a lot of it went over my head. I was fairly good at solving problems but I was a long way from seeing them in a wider context. I think the problem is that often the motivating issues are too complex to get across to people who haven't acquired the mathematical maturity gained from a few years of grappling with terminology and educational 'toy' problems.

*****

(jason.chase)
You all sound very intimidating. I am just about to leave my world of problem sets for this wider, terrifying world. I don't know whether reading this is inspirational or scary.

*****

(me)
Hmm... that very well could be the problem... And I do know that my instructors seem to have gotten better at communicating motivation over the past couple of years... perhaps I've just gotten better at understanding it.

You all sound very intimidating. I am just about to leave my world of problem sets for this wider, terrifying world. I don't know whether reading this is inspirational or scary.


Heh. :)
I can promise it is much more rewarding and enjoyable once you start trying to break out of problem sets-- pursuing an idea (even a fruitless one!) is very exciting, and gives you a much deeper understanding of the thing you're working with than any problem set can. Suddenly seeing a connection (such as noticing a surprising structure show up "in the wild") is a wonderful feeling that is very difficult to get across with problem sets. (Although I certainly have had this happen while working on a problem set.)

Of course, problem sets will always be important-- I never expect to understand a book until I work the problems, and never expect to understand a lecture or paper without working out the proofs on my own, even when they are "trivial"-- so you'll be able to comfortably hide inside a cozy problem set for a bit whenever you get too afraid of the wilderness.

*****

(forcesofodin)
What I meant by overuse of terminology is when a fellow mathematics student throws in technical terms specific to their expertise that they know I don't know, instead of trying to offer possibly longer explanations in terms appropriate to my background. In my experience it is the algebra whiz kids who are the most likely to do this, but perhaps it's only a mistake of not realizing that they at one point didn't know these words. I wish I could remove this statement altogether, though, as it's a gross generalization fueled by finitely many cases of personal frustration.

With regard to the terminology: on an undergrad course it can seem like it's just there for its own sake. You prove a lot of stuff that seems like busywork. But this is just because even relatively advanced undergrad/beginning grad courses are really only introductions. They're trying to give you an overview of the tools that are available, but they rarely have time to motivate them by going into the problems from which the definitions emerged.


Yes, this is an excellent point, and a topic that should be explored in its own thread (but not on the algebra forum, of course). It's interesting to look back at high school books and early undergrad books to see how their problems were really setting you up for later material. For example, the integral convergence questions in my calc book used the exponent $p$, as a primer for showing the difference between convergence in the different $L^p$ spaces. That's a bad example perhaps, but you know what I mean.

I think a good professor will tell the students why something will be important later. The downfall to this is that it can lead to students ignoring other "less relevant" parts of the course material. But if only I had known how important Taylor's theorem was when I was learning integral calculus as a freshman. Something I now consider to be the most important tool in applied mathematics is something I used to think was busywork to fill the end of the semester.

Of course, problem sets will always be important-- I never expect to understand a book until I work the problems, and never expect to understand a lecture or paper without working out the proofs on my own


This is an excellent point as well. In the transition to theoretical mathematics I foolishly began overlooking the importance of "drill work". However, in studying for the GRE math subject test I've seen an amazing improvement in my problem solving skills as a whole, which is no doubt a result of repeated drill work. Tools I knew about but in practice never thought to use are now actively surfacing in my consciousness, and I feel so much more empowered.

Above all, the foundations of your knowledge base need to be practiced over and over again as you progress (i.e. algebra, geometry, trig, calculus, calculus, calculus). A building is only ever as strong as its foundation, and an A grade almost never implies true mastery. I can't tell you how many kids who get A's in algebra can't apply the same tricks in the calculus setting or beyond.

even when they are "trivial"-- so you'll be able to comfortably hide inside a cozy problem set for a bit whenever you get too afraid of the wilderness. :D


This can help build confidence and help alleviate some of the fear of mathematics; it's important for the student to realize 'hey, I CAN do this stuff'. Fear of mathematics is such a powerfully negative force for some people. In tutoring calculus I have seen near-brilliant people fail to answer the simplest of questions, only because of the fear and preconceptions of calculus. If I had asked the same questions without calculus floating in the air, they would have thought I was belittling them. So in learning mathematics, an air of confidence (but not overconfidence or self-importance) is powerful and necessary. Maybe I should really say an understanding of one's own potential. I have a saying I made up about this:

Knowledge is only useful if you know you have it
But only a fool thinks himself otherwise
So praise not what you think you know
And embrace only the potential to grow

Monday, March 1, 2010

The fundamental group functor part 2

So... in the previous post I promised to finish what I was saying about the fundamental group functor... So far I've sketched the proof that this is, indeed, a functor. I would show the proof in detail, but it's long, tedious and not very informative-- the point is, it's a map from Top$_*$ to Grp which preserves identities and composition of morphisms. There are much more interesting things a functor can preserve. Namely, it can preserve products and coproducts.

So what's a product? As a motivating example, look at Set. When we talk about the product of two sets, we clearly mean the cartesian product. Since we're interested in category theory at the moment, we don't really want to talk about members of the product, we want to talk about maps to and from the product.

There are two really nice maps $pr_1:A\times B\rightarrow A$ and $pr_2: A\times B\rightarrow B$, the projections onto $A$ and $B$ respectively, and they have a really nice universal property: given an object $V$, and two maps $f:V\rightarrow A$, $g:V\rightarrow B$, we can "factor" $f$ and $g$ through $A\times B$ in a unique way. This means that we have a unique map $h:V\rightarrow A\times B$ such that $f=pr_1\circ h$ and $g=pr_2\circ h$. At first it may be a bit surprising that this map (sometimes called $f\times g$) is unique. But really, each of our two projections forgets everything about one side of the product, so the function needs to act "independently" on $A$ and $B$, and there's really only one way to get this to interact properly with the projections.
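In Set this is concrete enough to just write down. Here's a small Python sketch (`pairing` is my own name for the mediating map $f\times g$; nothing here is standard library vocabulary):

```python
# The product of sets A and B, with its two projections and the
# unique mediating map: given f: V -> A and g: V -> B, the only h
# with pr1 . h = f and pr2 . h = g is h(v) = (f(v), g(v)).

def pr1(p):
    return p[0]

def pr2(p):
    return p[1]

def pairing(f, g):
    """The mediating map V -> A x B, often written f x g."""
    return lambda v: (f(v), g(v))
```

So `pairing(str, abs)(-4)` gives `('-4', 4)`, and composing with `pr1` or `pr2` recovers `str` and `abs` respectively: exactly the commuting-triangle condition.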

Something that's more surprising is this: the product is unique up to unique isomorphism. This means that if there is a "different" product (Why not try $B\times A$?), there is a single, canonical isomorphism between the two objects-- just factor the projections from one product through the other. This map is unique, and it damned-well better be an isomorphism. (To see that it is, factor the projections back the other way, wave your hands about and say something about "the identity morphism".)

Ok. So, by analogy with Set, we (sort of) get what a product is. What about coproducts? A nice thing about category theory is that whenever you see a word that starts with "co", you can figure out what it means in 3 easy steps:
  1. Remove the "co" from the word.
  2. Draw the diagram that represents the word you just found.
  3. Turn around all the arrows.
So, this means the coproduct, $A\coprod B$, should have two maps $i_1:A\rightarrow A\coprod B$ and $i_2:B\rightarrow A\coprod B$ (called the imbeddings) such that for any pair of maps $f:A\rightarrow V$ and $g:B\rightarrow V$, we have a unique morphism $h:A\coprod B\rightarrow V$ such that $h\circ i_1 = f$ and $h\circ i_2 = g$. For some reason this always seems a little harder to follow, so let's work it out in Set. Look at $f$ and $g$ as in the definition. We want some map (call it $f*g$ because I can't think of what the actual notation is) that goes from somewhere to $V$ such that $(f*g) \circ i_1 = f$, and the same with $g$. We want $i_1$ and $i_2$ to do almost nothing... what happens if we take $A\coprod B$ to be the disjoint union? (Hence the notation...) What is the imbedding? It's the "move me from $A$ to $A\coprod B$" function. And what could $f*g$ possibly be? Well obviously, it's the function which sends $a\in A\mapsto f(a)$ and $b\in B\mapsto g(b)$.
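The Set story translates directly to code. In this Python sketch (names mine), the disjoint union is modeled by tagging each element with where it came from, the imbeddings are the tagging functions, and $f*g$ is case analysis on the tag:

```python
# The coproduct of sets A and B is the disjoint union: tag each
# element with which set it came from. The imbeddings do "almost
# nothing" -- they just move elements into the union -- and the
# mediating map f*g is case analysis on the tag.

def i1(a):
    return ("left", a)

def i2(b):
    return ("right", b)

def copairing(f, g):
    """The unique h with h . i1 = f and h . i2 = g."""
    def h(x):
        tag, value = x
        return f(value) if tag == "left" else g(value)
    return h
```

For instance `copairing(len, abs)` sends `i1("abc")` to `3` and `i2(-5)` to `5`: it restricts to $f$ on the copy of $A$ and to $g$ on the copy of $B$, and no other map composes correctly with both imbeddings.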

Ok. Great. We know what the product and coproduct need to look like (at least when we only care about the product of two objects). What exactly are they in Top$_*$? The product is just the product space, based at the pair of basepoints. The coproduct turns out to be the wedge sum-- take the disjoint union (familiar?) and glue the two spaces together at their base-points. This means that we have two completely unrelated spaces (modulo open sets containing the basepoint.)

This idea of a coproduct being the result of "smashing together" two objects without making them at all related is consistent throughout basically every category. In fact, in the category of groups, it's the free product, which is the "freely generated" product of the two groups-- for groups $G$ and $H$, this means the set of all words on $G\cup H$, where things reduce in the "obvious" way and no other way... I'm going to pretend like this makes sense to you, since (as you've surely learned by now) I have yet to decide what level of audience I'm writing for.
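To make "reduce in the obvious way and no other way" concrete, here's a toy Python model of $\mathbb{Z} * \mathbb{Z}$ (all names mine, and a real implementation would want more care): an element is an alternating word of letters, each letter recording which copy of $\mathbb{Z}$ it came from and a nonzero exponent.

```python
# A toy model of the free product Z * Z: an element is an alternating
# list of letters (side, exponent), with side in {0, 1} naming the
# copy of Z and exponent nonzero. Multiplication concatenates and
# reduces only at the seam: letters from the same copy merge (and
# vanish if they cancel to 0); letters from different copies never
# interact -- the "obvious" way and no other way.

def mul(xs, ys):
    xs, ys = list(xs), list(ys)
    while xs and ys and xs[-1][0] == ys[0][0]:
        side, m = xs.pop()       # last letter of xs
        _, n = ys.pop(0)         # first letter of ys, same side
        k = m + n
        if k != 0:
            ys.insert(0, (side, k))
            break                # no further reduction possible
    return xs + ys
```

So `mul([(0,1),(1,2)], [(1,-2),(0,-1)])` cancels all the way down to `[]`, the identity, while letters from different copies simply sit next to each other, untouched.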

So, taking this back to the fundamental group functor: for two (pointed) spaces $(S,s)$ and $(T,t)$, we would like $\pi_1(S\times T, (s,t)) = \pi_1(S,s)\times \pi_1(T,t)$ and $\pi_1(S\coprod T, *) = \pi_1(S,s)\coprod \pi_1(T,t)$, where $*$ is the point at which the two basepoints are glued together.

Guess what? I'm going to cop out of actually proving this! (Are you surprised? You should be used to this by now...)
However, I will at least wave my hands around a bit and give you a feel for why it's true. First let's look at products. As an example, look at the torus-- $S^1\times S^1$. Draw a path on this. We want to be able to push this path down to a path which only lives in one copy of $S^1$ in each component. (I.e., a path which stays on $(S^1\times\{0\})\cup(\{0\}\times S^1)$. ) We can do this by pushing (in a continuous fashion-- i.e., homotopically) all points of our path onto one of our two reference circles.

For coproducts: it's a little more obvious in some sense-- any path is going to stay in one of our two spaces for a while, and then cross over to the other. The fundamental group we get here "reduces" in the obvious way, and no way else-- i.e. it's the free product.

Ok, there. I've fulfilled my promise. Expect a more detailed and less obnoxiously hand-wavy post about natural transformations and path categories soon(TM).

Monday, February 22, 2010

The Zahir and Asterion

Sorry... I know I promised to finish that last post about 2 weeks ago... I'll get around to that soon. In the mean time, I've just started reading The Zahir by Borges, which I had somehow never read. I could have sworn I had read the whole of The Aleph, but I digress. I stumbled upon the following passage (I don't know who the translator is):

Until the end of June I distracted myself by composing a tale of fantasy. The tale contains two or three enigmatic circumlocutions: “water of the sword”, it says, instead of blood, and “bed of the serpent”, for gold, and is written in the first person. The narrator is an ascetic who has renounced all commerce with mankind and lives on a moor. (The name of the place is Gnitaheidr.) Because of the simplicity and innocence of his life, he is judged by some to be an angel; that is a charitable sort of exaggeration, because no one is free of sin. He himself (to take the example nearest at hand) has cut his father’s throat, though it is true that his father was a famous wizard who had used his magic to usurp an infinite treasure for himself.

Protecting this treasure from mad human greed is the mission to which he has devoted his life; day and night he stands guard over it. Soon, perhaps too soon, that watchfulness will come to an end: the stars have told him that the sword that will cut him off forever has already been forged. (Gram is the name of the sword.) In an increasingly tortured style, the narrator praises the luster and flexibility of his body; one paragraph offhandedly mentions “scales”; another says that the treasure he watches over is of red rings and gleaming gold. At the end, we realize that the ascetic is the serpent Fafnir and the treasure on which the creature lies coiled is the gold of the Nibelungen. The appearance of Sigurd abruptly ends the story.

This sounds rather amusingly like... The House of Asterion, which was published in the same collection. This is one of the things I really like about Borges: he makes very subtle references to other works of his. Can anyone think of any other specific examples of this?

Saturday, January 30, 2010

The fundamental group functor

This is something I've always (read: since I learned about it less than 6 months ago) found pretty neat. There's nothing terribly original here-- everything can be found in any algebraic topology book, and in most general topology books, but I don't think categorical language makes its way in there all the time...

The point of this "little" post is to point out that the operation taking a (pointed) topological space $(X,x_0)$ to its fundamental group, $\pi_1(X,x_0)$, is a functor which preserves products and coproducts... (Did that sentence have a point? Sorry... I'm done.)

First, as a technical point: we need to work in the category of pointed spaces: Top$_*$. (A pointed topological space is just a pair $(X,x_0)$ where $x_0\in X$. The morphisms are continuous functions $f:(X,x_0)\rightarrow (Y,y_0)$ such that $f(x_0)=y_0$. The idea is we are distinguishing a point, just as we do to get the fundamental group.) The reason for this is that it gives a nice way of distinguishing between base points (for our fundamental group) in different path-components-- every selection of base point gives us a new space, and some are isomorphic. This allows us to avoid the technical nightmare of what to do with non-path-connected spaces. (I.e., we don't get a functor if we're only working in Top.) There's another reason for this: wedge sums give Top$_*$ a sensible notion of coproduct-- or at least, one which is actually preserved by the functor.

So, first of all, what does it mean for us to have a functor? A functor is a map between categories which preserves identities and composition of morphisms. In other words, for categories $C$ and $D$, $F:C\rightarrow D$ is a functor if $F(id_c)=id_{F(c)}$ for every object $c\in C$, and for every pair of morphisms
\[c_0\stackrel{f}{\rightarrow}c_1\stackrel{g}{\rightarrow}c_2\]
in $C$, we have that $F(g)\circ F(f) = F(g\circ f)$.
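These two functor laws can be spot-checked on examples. Here's a Python sketch (names mine) using the "list functor" on Set, which sends a set to the lists over it and a function to its elementwise application; this is a check on sample inputs, not a proof:

```python
# The two functor laws: F(id) = id, and F(g . f) = F(g) . F(f).
# fmap plays the role of F's action on morphisms, for the functor
# sending a set X to lists over X.

def fmap(f):
    return lambda xs: [f(x) for x in xs]

def compose(g, f):
    return lambda x: g(f(x))

def identity(x):
    return x

def check_identity(xs):
    # F(id) acts as the identity on F(X)
    return fmap(identity)(xs) == identity(xs)

def check_composition(xs, f, g):
    # Applying g . f elementwise = applying f, then g, elementwise
    return fmap(compose(g, f))(xs) == fmap(g)(fmap(f)(xs))
```

Both checks hold on any list, e.g. `check_composition([1, 2, 3], lambda x: x * 2, lambda x: x + 1)`; the fundamental group construction satisfies the same two equations with $\pi_1$ in place of `fmap`.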

Given a function $f:(X,x_0)\rightarrow(Y,y_0)$, $f$ induces a homomorphism $f_* : \pi_1(X,x_0)\rightarrow \pi_1(Y,y_0)$-- Any path in $X$, when fed through $f$ becomes a path in $Y$. Since the map preserves basepoints, a loop at $x_0$ becomes a loop at $y_0$-- seeing that this is compatible with homotopy isn't too difficult.

To say that $\pi_1(-)$ is a functor means that $(g\circ f)_* = g_*\circ f_*$ and that $(id_X)_* = id_{\pi_1(X,x_0)}$. (Sorry, commutative diagrams are not working so hot in this $\LaTeX$ package... I'll need to do something about that.) A quick diagram chase shows that this is the case.

Now is where things finally get interesting... and... I'm tired, and will finish this later today.

Tuesday, January 19, 2010

A few words on Balaam's Error

I'm not sure why I'm writing this down now, but: I don't agree that "Balaam's error" has anything to do with money. Based on his actions, monetary reward seems to be a small concern for him. His error comes from this: He is afraid to contradict the Moabites. He is too polite, too unwilling to offend.
Sometimes things need to be said which are offensive-- causing offense is rarely good in its own right, but some important things are offensive. Balaam was too afraid (either socially, or for his life) to tell the Moabites something offensive: "God says 'no!'"

We should learn from this.
Creative Commons License Cory Knapp.