Sunday, May 22, 2016

Spring cleaning, old notebooks, and a little linear algebra problem

Update 5/25/2016: Solution at the bottom

These days, I spend most of my time thinking about microscopes and gene regulation and so forth, which makes it all the more of a surprising coincidence that on the eve of what looks to be a great math-bio symposium here at Penn tomorrow, I was doing some spring cleaning in the attic and happened across a bunch of old notebooks from my undergraduate and graduate school days in math and physics (and a bunch of random personal stuff that I'll save for another day—which is to say, never). I was fully planning to throw all those notebooks away, since the last time I really looked at them was probably well over 10 years ago, and I did indeed throw away a couple from some of my less memorable classes. But I was surprised to find that I actually wanted to hold on to most of them.

Why? I think it's partly that they serve as an (admittedly faint) reminder that I used to actually know how to do some math. It's pretty remarkable to me how much we all learn during our formal classroom training, and it's sort of sad how much we forget. I wonder to what degree it's all still in there somewhere, and how long it would take me to get back up to speed if necessary. I may never know, but I can say that all that background has definitely shaped me and the way I approach problems, and I think that's largely been for the best. I often joke in lab about how classes are a waste of time, but it's clear from looking these over that that's definitely not the case.

I also happened across a couple of notebooks that brought back some fond memories. One was Math 250(A?) at Berkeley, then taught by Robin Hartshorne. Now, Hartshorne was a genius. That much was clear on day one, when he looked around the room and precisely counted the number of students (around 40 or so) in approximately 0.58 seconds. All the students looked at each other, wondering whether this was such a good idea after all. Those who stuck with it got exceptionally clear lectures on group theory, along with by far the hardest problem sets of any class I've taken (except for a differential geometry class I dropped, but that's another story). Of the ten problems assigned every week, I could do maybe one or two, after which I puzzled away, mostly in complete futility, until I went to his very well attended office hours, where he would give hints to help solve the problems. I can't remember most of the details, but I do remember that one of the hints was so incredibly arcane that I couldn't imagine how anyone, ever, could have come up with the answer. I think Hartshorne knew just how hard all this was, because one time I came to his office hours after a midterm while a bunch of people were going over a particular problem, and I said "Oh yeah, I think I got that one!", and he looked at me with genuine incredulity, at which point I explained my solution. Hartshorne looked relieved, pointed out the flaw, and all went back to normal in the universe. :) Of course, there were a couple of kids in that class from whom Hartshorne wouldn't have been surprised to see a solution, but that wasn't me, for sure.

While rummaging around in that box of old notebooks, I also found some old lecture notes that I really wanted to keep. Many of these are from one of my PhD advisors, Charlie Peskin, who had some wonderful notes on mathematical physiology, neuroscience, and probability. His ability to explain ideas to students with widely varying backgrounds was truly incredible, and his notes are so clear and fresh. I also kept notes from a couple of my other undergrad classes that I really loved, notably Dan Roksar's quantum mechanics series, Hirosi Ooguri's statistical mechanics and thermodynamics, and Leo Harrington's set theory class (which was truly mind-bending).

It was also fun to look through a few of the problem sets and midterms that I had taken—particularly odd now to look at some old dusty blue books and imagine how much stress they had caused at the time. I don't remember many of the details, but I somehow still vaguely remembered two problems, one from undergrad and one from grad school, as being particularly interesting. The undergrad one was some sort of superconducting sphere problem from my electricity and magnetism course that I can't fully recall, but it had something to do with spherical harmonics. It was a fun problem.

The other was from a homework in a linear algebra class I took in grad school from Sylvia Serfaty, and I did manage to find it hiding in the back of one of my notebooks. A simple-seeming problem: given an n×n matrix A, formulate necessary and sufficient conditions for the 2n×2n matrix B defined as

B = |A A|
    |0 A|

to be diagonalizable. I'll give you a hint that is perhaps what one might guess from the n=1 case: the condition is that A = 0. In that case, sufficiency is trivial (B = 0 is definitely diagonalizable), but showing necessity—i.e., showing that if B is diagonalizable, then A = 0—is not quite so straightforward. Or, well, there's a tricky way to get it, at least. Free beer to whoever figures it out first with a solution as tricky as (or trickier than) the one I'm thinking of! Will post an answer in a couple of days.
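
If you want to check the claim concretely before trying to prove it, here's a quick sanity check. Everything in it is my own illustrative scaffolding: the use of sympy, the helper name make_B, the choice n = 2, and the particular nonzero A are all arbitrary.

    # Sanity check: B = [A A; 0 A] should be diagonalizable only when A = 0.
    # Uses sympy for exact symbolic arithmetic, so no numerical tolerance issues.
    import sympy as sp

    def make_B(A):
        # Assemble the 2n x 2n block matrix B = [A A; 0 A] from an n x n matrix A.
        n = A.rows
        return sp.BlockMatrix([[A, A], [sp.zeros(n, n), A]]).as_explicit()

    A = sp.Matrix([[1, 2], [3, 4]])                      # arbitrary nonzero choice
    print(make_B(A).is_diagonalizable())                 # False
    print(make_B(sp.zeros(2, 2)).is_diagonalizable())    # True (B = 0)

Note that this particular A is itself diagonalizable (it has distinct eigenvalues), so the example also shows that A merely being diagonalizable is not enough for B to be.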

Update, 5/25/2016: Here's the solution!
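
In case that link is ever lost to time, here is a sketch of one argument; I won't swear it's the same trick as in the linked solution, but it has the same flavor. The one fact it uses is that a matrix is diagonalizable if and only if its minimal polynomial has distinct roots.

    % Sketch (LaTeX), assuming the minimal-polynomial characterization above.
    % First, powers of B keep the block upper-triangular shape:
    \[
      B^k = \begin{pmatrix} A^k & k A^k \\ 0 & A^k \end{pmatrix},
      \qquad \text{so for any polynomial } p, \qquad
      p(B) = \begin{pmatrix} p(A) & A\,p'(A) \\ 0 & p(A) \end{pmatrix}.
    \]
    % Suppose B is diagonalizable and let p be its minimal polynomial, so p has
    % distinct roots and p(B) = 0. The block formula gives p(A) = 0 and
    % A p'(A) = 0. Distinct roots mean gcd(p, p') = 1, so a p + b p' = 1 for
    % some polynomials a and b; evaluating at A and using p(A) = 0 gives
    % b(A) p'(A) = I. Hence p'(A) is invertible, and A p'(A) = 0 forces A = 0.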


5 comments:

  1. For the necessity part:
    if B is diagonalizable => det(B - kI) = 0 has distinct roots => det(A - kI)^2 = 0 has distinct roots (https://en.wikipedia.org/wiki/Determinant#Block_matrices) => det(A - kI) = 0 has distinct roots.

    ?

    Replies
    1. Oh, jeez, I'm now realizing that I definitely do not remember all the things we tried that didn't end up working and why (we = my fellow grad students and I), and I barely remember my linear algebra in general. Here are a couple of thoughts that may or may not be right. First, B diagonalizable !=> det(B-kI) = 0 has distinct roots (I assume this means 2n distinct roots); e.g., the identity matrix is diagonalizable, but its characteristic polynomial has the single root k = 1 with high multiplicity (see the quick check just below this reply). I think one can say B is diagonalizable if and only if its *minimal* polynomial has distinct roots. Even if that part is amended, det(A-kI) = 0 having distinct roots implies only that A is diagonalizable; showing that A = 0 is harder because it's a stronger condition. Another way to look at it: this whole line of reasoning would look exactly the same for B = [A 0; 0 A], in which case A of course need not be identically 0 for B to be diagonalizable. So this argument alone cannot prove that A = 0.

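    To make the characteristic-vs-minimal polynomial point above concrete, here is a quick check (the 2x2 identity and the use of sympy are just illustrative choices):

        # The characteristic polynomial of the identity matrix has a repeated
        # root, yet the matrix is diagonalizable (it is already diagonal). It's
        # the minimal polynomial, here just (lambda - 1), that must have
        # distinct roots.
        import sympy as sp

        I2 = sp.eye(2)
        print(I2.charpoly().as_expr())   # lambda**2 - 2*lambda + 1, i.e., (lambda - 1)**2
        print(I2.is_diagonalizable())    # True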
  2. Since B is diagonalizable, we can write C^{-1}BC = Λ, i.e., BC = CΛ, for some invertible matrix C. Partition C (size 2n×2n) row-wise as [X; Y], where X and Y are each n×2n. Then Ax_i + Ay_i = t_i x_i and 0x_i + Ay_i = t_i y_i, where x_i, y_i are the i-th columns of X, Y respectively and Λ = diag(t_1, t_2, ..., t_{2n}).

    Since C is invertible, X and Y each have full row rank (n), and thus Ay_i = t_i y_i can be written as AY' = Y'D, where Y' consists of a subset of n column vectors y_{k_1}, y_{k_2}, ..., y_{k_n} chosen so that the corresponding n×n submatrix Y' is full rank (col rank of Y = row rank = n), and D = diag(t_{k_1}, ..., t_{k_n}).

    Thus implying B is invertible.

  3. Typo in the last line:

    'A' is invertible.

    Replies
    1. Hmm, not sure I fully follow this, but I don't think this quite gets it. The goal is to show that B diagonalizable => A is identically 0, in which case A is most certainly not invertible. But maybe I'm missing something?
