Musings on Science and Life
Back when I was a tutor at the Bay Area Tutoring Center in San Ramon, I was often assigned to students who came in with calculus or pre-calculus concerns, leaving the middle-school geometry and algebra to the regular turnover of younger tutors who started with the easier courses. I had several students at this level, and the standard curriculum was just reaching the wonderful world of series, with students learning how to sum finite series and using the various rules and tests to determine whether a series converges (conditionally or not) or diverges. Most students found this chapter not too difficult, and were doing well with the assignments. One student was having difficulty with the material, so I asked him what type of series Grandi's Series was: \[\sum_{k=0}^{\infty}(-1)^k=1-1+1-1+\cdots\] Does it converge or diverge, how would one prove it, and the like. After thinking about it for a minute, he asked "Well, doesn't it depend on whether infinity is odd or even?" A cute question; I gave him a wry smile and reminded him that infinity is not a real number, and that oddness or evenness can only apply to integers ("Is 1/7 odd or even?"). He nodded, then had a look of realization. "Oh, it's a geometric series, so we can use the formula." \[\sum_{k=0}^{\infty}r^k=\frac{1}{1-r}.\] "It must be convergent, since the answer is \(\tfrac{1}{2}\)." A gentle reminder about the convergence requirement \(|r| \lt 1\) and an appeal to the \(n^{\text{th}}\)-term test prompts another response: "OK, so it diverges." Indeed it does; I mean, just look at the partial sums. They alternate between \(0\) and \(1\) forever, so how can they settle to a limit value? I demonstrate that such series are pathological by "proving" that \(0=1\): \[\begin{align*} 0&=0+0+0+0+\cdots\\ &=(1-1)+(1-1)+\cdots\\ &=1+(-1+1)+(-1+1)+\cdots\\ &=1 \end{align*}\] "The associative rule doesn't work for divergent infinite sums; regrouping is only guaranteed to preserve the value of convergent ones."
That smacks of the Riemann Rearrangement Theorem, but since that technically only applies to conditionally convergent series, I bite my tongue. "But doesn't it still average to 1/2?" I frown. The student has come to realize that, like so many other divergent series that occasionally show up in math and physics, this one may diverge but still WANTS to be a particular value. That is, there are ways to unambiguously assign unique finite values to otherwise divergent sums. "For now, we only need to distinguish between divergent and convergent." He understands that it is divergent, and we move on. That night, I think more about his question, and remember the ridiculous and yet useful fact that we may assign the sum of all natural numbers the value -1/12, using the peculiar properties of the analytically continued Riemann zeta function. "What other such series might be useful? Should I enumerate them?" I decided to. Here is a list I've compiled: \[\begin{equation*} \begin{array}{>{\displaystyle}l@{}>{\displaystyle}l>{\displaystyle}l} 1-1+1-1+\cdots&{}=\sum_{k=0}^{\infty}(-1)^k&{}=\frac{1}{2}\\ 1+1+1+1+\cdots&{}=\sum_{k=0}^{\infty}1^k&{}=-\frac{1}{2}\\ 1+2+4+8+\cdots&{}=\sum_{k=0}^{\infty}2^k&{}=-1\\ 1-2+4-8+\cdots&{}=\sum_{k=0}^{\infty}(-1)^k2^k&{}=\frac{1}{3}\\ 1+z+z^2+z^3+\cdots&{}=\sum_{k=0}^{\infty}z^k&{}=\frac{1}{1-z}\\ 1+2+3+4+\cdots&{}=\sum_{k=0}^{\infty}k&{}=-\frac{1}{12}\\ 1+4+9+16+\cdots&{}=\sum_{k=0}^{\infty}k^2&{}=0\\ 1+2^n+3^n+4^n+\cdots&{}=\sum_{k=0}^{\infty}k^n&{}=-\frac{B_{n+1}}{n+1}\\ 1-2+3-4+\cdots&{}=\sum_{k=0}^{\infty}(-1)^{k+1}k&{}=\frac{1}{4}\\ 1+1+2+6+24+\cdots&{}=\sum_{k=0}^{\infty}k!&{}=?\\ 1-1+2-6+24-\cdots&{}=\sum_{k=0}^{\infty}(-1)^kk!&{}=eE_1(1)\approx0.596\\ 1-z+2z^2-6z^3+\cdots&{}=\sum_{k=0}^{\infty}(-1)^kk!z^k&{}=\frac{e^{1/z}}{z}\Gamma(0,1/z)\\ 1+e^{i\theta}+e^{2i\theta}+\cdots&{}=\sum_{k=0}^{\infty}e^{ik\theta}&{}=\frac{1}{2}\left(1+i\cot\frac{\theta}{2}\right)\\ 1+\cos\theta+\cos2\theta+\cdots&{}=\sum_{k=0}^{\infty}\cos
k\theta&{}=\frac{1}{2}\\ \sin\theta+\sin2\theta+\cdots&{}=\sum_{k=0}^{\infty}\sin k\theta&{}=\frac{1}{2}\cot\frac{\theta}{2}\\ 1-\cos\theta+\cos2\theta-\cdots&{}=\sum_{k=0}^{\infty}(-1)^k\cos k\theta&{}=\frac{1}{2}\\ \sin\theta-\sin2\theta+\cdots&{}=\sum_{k=0}^{\infty}(-1)^{k+1}\sin k\theta&{}=\frac{1}{2}\tan\frac{\theta}{2}\\ 1^{2n}-2^{2n}+3^{2n}-\cdots&{}=\sum_{k=0}^{\infty}(-1)^{k+1}k^{2n}&{}=0\\ 1^{2n+1}-2^{2n+1}+3^{2n+1}-\cdots&{}=\sum_{k=0}^{\infty}(-1)^{k+1}k^{2n+1}&{}=(-1)^n\frac{2^{2n+2}-1}{2n+2}B_{n+1}\\ 1^{2n+1}-3^{2n+1}+5^{2n+1}-\cdots&{}=\sum_{k=0}^{\infty}(-1)^k(2k+1)^{2n+1}&{}=0\\ e^{i\theta}-e^{3i\theta}+e^{5i\theta}-\cdots&{}=\sum_{k=0}^{\infty}(-1)^ke^{(2k+1)i\theta}&{}=\frac{1}{2}\sec\theta\\ 1^{2n}-3^{2n}+5^{2n}-\cdots&{}=\sum_{k=0}^{\infty}(-1)^k(2k+1)^{2n}&{}=\frac{(-1)^n}{2}E_n\\ \ln 2+\ln 3+\ln 4+\cdots&{}=\sum_{k=1}^{\infty}\ln k&{}=\ln\sqrt{2\pi}\\ 2\ln 2+3\ln 3+4\ln 4+\cdots&{}=\sum_{k=1}^{\infty}k\ln k&{}=\ln A-\frac{1}{12}\approx 0.1654 \end{array} \end{equation*}\] Some of these come from extending the geometric series to all \(r\neq 1\), some from zeta function continuation, some from integral transforms. Some involve Bernoulli numbers and Euler numbers, others exponential integrals or the incomplete gamma function. This craziness can even be used to show that \[1\times2\times3\times4\times\cdots=\prod_{k=1}^{\infty}k=\sqrt{2\pi}.\] Perhaps asking about the parity of infinity is not so crazy after all.
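Some of these regularized values can be approached numerically via Abel summation: evaluate \(\sum_k a_k r^k\) for \(r\) just below \(1\) and watch it approach the tabulated value. A minimal Python sketch (my own toy check, not part of the table above):

```python
# Abel summation: evaluate sum a_k * r^k for r just below 1. For series
# like Grandi's, the limit r -> 1- reproduces the regularized value.
def abel_sum(a, r, terms=50000):
    """Approximate the Abel sum of the series with coefficients a(k)."""
    return sum(a(k) * r**k for k in range(terms))

# Grandi's series 1 - 1 + 1 - 1 + ... : Abel sum tends to 1/2
grandi = abel_sum(lambda k: (-1)**k, 0.999)

# 1 - 2 + 3 - 4 + ... : Abel sum tends to 1/4
alternating = abel_sum(lambda k: (-1)**(k + 1) * k, 0.999)

print(grandi, alternating)
```

Note that this only recovers the Abel-summable rows; entries like \(1+2+3+\cdots=-\tfrac{1}{12}\) need the stronger zeta-regularization machinery.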
While Dr. Jones's survival of a nuclear detonation in a lead-lined refrigerator has brought many scientists' hands to their eyes with head-shaking disappointment, Indiana has more than radiation poisoning (or total vaporization) to worry about with the 1950's proliferation of fission tests around the globe. As many of you might know, one of the key means of dating archaeological and geological artifacts and specimens is radiometric dating. Various isotopes of the chemical elements are unstable, and will decay to (eventually) stable or long-lived products over the course of time. Since the temperature of even the deep interior of the Earth is far below what is needed to drive nuclear processes, the underground decay of unstable uranium, thorium, protactinium, argon, potassium, carbon, and other unstable isotopes has proceeded unchanged since the formation of the Earth. In the atmosphere, spallation of nitrogen, oxygen, argon, and other gases produces a series of radioactive nuclei, such as \({}^{14}C\), with a half-life of 5,715 years. Once \({}^{14}C\) is formed from atmospheric nitrogen, it oxidizes to \({}^{14}CO_2\), which is incorporated into living organisms through carbon fixation by plants and ingestion of those plants by animals. When an organism dies, it stops ingesting \({}^{14}C\), and the measured amount remaining can be used to calculate the time at which the organism died, providing a precise way to date many archaeological finds. However, surface testing of nuclear weapons in the 1950's created an atmospheric surplus of \({}^{14}C\) which, although it cannot alter the half-life, does change the baseline for count rates. Wood from trees grown just before 1950 produces about 16 counts per minute (cpm) per gram of carbon (using a Geiger counter, say), while wood 5,715 years old produces 8 cpm, as expected.
Modern wood produces substantially more decays per minute, and hence would give a much younger age than actuality. While this doesn't pose much worry for current paleontologists and anthropologists (50-year-old trees can be dated by many other methods, and are probably not of interest to paleoscientists anyway), it will create an enormous amount of confusion for specialists hundreds or thousands of years from now: there will be an "atomic window" of time when nuclear testing gave artificially large doses of naturally occurring radioisotopes to living organisms, which may cause future archaeologists to date today's specimens as far younger than they actually are.
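The age arithmetic above is a one-liner: with half-life \(t_{1/2}\), baseline rate \(R_0\), and measured rate \(R\), the age is \(t = t_{1/2}\log_2(R_0/R)\). A quick sketch using the numbers quoted above (the function name is my own):

```python
import math

HALF_LIFE_C14 = 5715.0   # years, as quoted above
BASELINE_CPM = 16.0      # counts per minute per gram, pre-1950 wood

def radiocarbon_age(measured_cpm):
    """Age in years from a measured count rate, assuming the pre-1950 baseline."""
    return HALF_LIFE_C14 * math.log(BASELINE_CPM / measured_cpm, 2)

print(radiocarbon_age(8.0))   # one half-life: 5715 years
print(radiocarbon_age(4.0))   # two half-lives: 11430 years
```

Plugging in an artificially inflated modern count rate makes the bias obvious: a rate above 16 cpm yields a negative "age."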

I realize that I don't update this blog very often, but so what? I'm the one paying for the domain, so I can post whenever I damn well please. And tonight, I feel like posting!

I've been working with supersymmetry (SUSY) a lot lately, and several times people have asked me about what I mean when I refer to the Lorentz or Poincaré or supersymmetry algebra: what is an algebra? What do you mean it closes, or is isomorphic to a product algebra, or generates a Lie group? Well, I had a hard time describing it, so I'm going to try again tonight:

An algebra is a module over a commutative ring equipped with a bilinear product \([\cdot,\cdot]\colon A\times A \rightarrow A\). Okay...what does that mean? Essentially, an algebra extends the idea of a vector space by including some form of "vector product" that returns a vector of the original space. Indeed, the vector space \(\mathbb{R}^3\) equipped with the cross product forms just such an algebra over the field of real numbers. In physics, we often use a certain type of algebra, a Lie algebra \(\mathfrak{g}\), which has an additional product called the Lie bracket: it must be alternating, meaning \([x,x]=0\) for all \(x\in\mathfrak{g}\), and must satisfy the Jacobi identity
\[\left[x,[y,z]\right]+\left[y,[z,x]\right]+\left[z,[x,y]\right]=0\]
for all \(x,y,z\in\mathfrak{g}\). If the original algebra was equipped with a product, then the Lie bracket is identified with the commutator \([A,B]=A\cdot B-B\cdot A\). What does this have to do with physics? Well, it turns out that elements of a Lie algebra generate Lie groups, groups that can be infinitesimally varied, and hence are also manifolds. As such, for some Lie algebra element (called a generator of the Lie group) \(X\), we define \(e^{tX}\) to be an element of the Lie group \(G\) associated with the Lie algebra \(\mathfrak{g}\).

An instructive example is the Lie algebra \(\mathfrak{su}(2)\), which is isomorphic to \(\mathfrak{so}(3)\), the angular momentum algebra (at the group level, \(SU(2)\) is the universal double cover of \(SO(3)\)). Its generators \(J_x,J_y,J_z\) satisfy the algebra

\[\begin{array}{l} \left[J_x,J_y\right] = i \hbar J_z\\ \left[J_y,J_z\right] = i \hbar J_x\\ \left[J_z,J_x\right] = i \hbar J_y \end{array}\]
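These brackets can be spot-checked numerically in the spin-\(\tfrac{1}{2}\) representation, where \(J_i=\tfrac{\hbar}{2}\sigma_i\); here is a quick sketch of my own with \(\hbar=1\):

```python
import numpy as np

# Pauli matrices; J_i = sigma_i / 2 in the spin-1/2 representation (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Jx, Jy, Jz = sx / 2, sy / 2, sz / 2

def comm(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# [Jx, Jy] = i Jz, and cyclic permutations
assert np.allclose(comm(Jx, Jy), 1j * Jz)
assert np.allclose(comm(Jy, Jz), 1j * Jx)
assert np.allclose(comm(Jz, Jx), 1j * Jy)
```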

The corresponding Lie group is \(SU(2)\) or, if you prefer, \(SO(3)\), the group of orthogonal matrices of unit determinant. The elements of this algebra generate rotations in three-dimensional space. It is a subalgebra of the Lorentz algebra \(\mathfrak{so}(3,1)\), whose generators obey the Lie bracket
\[\left[M^{\mu\nu},M^{\rho\sigma}\right]=i\left(\eta^{\nu\rho}M^{\mu\sigma}-\eta^{\mu\rho}M^{\nu\sigma}-\eta^{\nu\sigma}M^{\mu\rho}+\eta^{\mu\sigma}M^{\nu\rho}\right)\]
Here, \(\eta^{\mu\nu}\) is the Minkowski metric of special relativity, which is the 4 by 4 identity matrix except that the first entry (the time component) is \(-1\). The generators are related to the \(J_i\) by \(J_i=\tfrac{1}{2}\epsilon_{ijk}M_{jk}\), while the \(K_i=M_{0i}\) generate the boosts in each direction. If we group them as \(A_i=\tfrac{1}{2}(J_i+iK_i)\) and \(B_i=\tfrac{1}{2}(J_i-iK_i)\), then we find that since

\[\begin{array}{l} \left[J_i,J_j\right]=i\epsilon_{ijk}J_k\\ \left[J_i,K_j\right]=i\epsilon_{ijk}K_k\\ \left[K_i,K_j\right]=-i\epsilon_{ijk}J_k \end{array}\]

we have

\[\begin{array}{l} \left[A_i,A_j\right]=i\epsilon_{ijk}A_k\\ \left[B_i,B_j\right]=i\epsilon_{ijk}B_k\\ \left[A_i,B_j\right]=0 \end{array}\]

So, we get two copies of an \(\mathfrak{su}(2)\) algebra, meaning that, after complexification, we have the isomorphism \(\mathfrak{so}(3,1)\cong\mathfrak{su}(2)\oplus\mathfrak{su}(2)\)! Furthermore, there is an isomorphism of real Lie algebras \(\mathfrak{so}(3,1)\cong\mathfrak{sl}(2,\mathbb{C})\). To see this, take a 4-vector and a corresponding 2 by 2 matrix:

\[\begin{array}{l} X=x_{\mu}e^{\mu}=(x_0,x_1,x_2,x_3)\\ \tilde{X}=x_{\mu}\sigma^{\mu}=\left(\begin{array}{cc} x_0+x_3 & x_1-ix_2\\ x_1+ix_2 & x_0-x_3\end{array}\right) \end{array}\]

where \(\sigma^{\mu}\) is the 4-vector of Pauli matrices (the identity together with the generating elements of the 2-dimensional spinor representation of \(\mathfrak{su}(2)\)):

\[\sigma^{\mu}=\left\{\left(\begin{array}{cc} 1 & 0\\ 0 & 1\end{array}\right),\left(\begin{array}{cc} 0 & 1\\ 1 & 0\end{array}\right),\left(\begin{array}{cc} 0 & -i\\ i & 0\end{array}\right),\left(\begin{array}{cc} 1 & 0\\ 0 & -1\end{array}\right)\right\}\]

Transformations \(X\mapsto\Lambda X\) under \(SO(3,1)\) leave the square \(|X|^2=x_0^2-x_1^2-x_2^2-x_3^2\) invariant, while the mapping \(\tilde{X}\mapsto N\tilde{X}N^{\dagger}\) with \(N\in SL(2,\mathbb{C})\) preserves the determinant \(\det\tilde{X}=x_0^2-x_1^2-x_2^2-x_3^2\). We see the homomorphism explicitly, as well as the fact that since \(N=\pm 1\) both correspond to \(\Lambda=1\), the map is two-to-one; and since \(SL(2,\mathbb{C})\) is simply connected, it is the universal covering group.
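The determinant-preserving action is easy to verify numerically. The sketch below (my own check) draws a random \(N\in SL(2,\mathbb{C})\) by rescaling a random complex matrix, builds \(\tilde{X}=x_\mu\sigma^\mu\), and confirms that \(\det(N\tilde{X}N^{\dagger})=\det\tilde{X}\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli 4-vector sigma^mu = (identity, sigma_x, sigma_y, sigma_z)
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

x = rng.normal(size=4)                       # a random real 4-vector
X = sum(x[m] * sigma[m] for m in range(4))   # X-tilde = x_mu sigma^mu

# Random N in SL(2, C): rescale a random complex matrix so det N = 1
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
N = M / np.sqrt(np.linalg.det(M))

X2 = N @ X @ N.conj().T                      # the action X -> N X N^dagger
assert np.isclose(np.linalg.det(N), 1.0)
assert np.isclose(np.linalg.det(X2), np.linalg.det(X))
```

Since \(\det\tilde{X}=x_0^2-x_1^2-x_2^2-x_3^2\), this is exactly the statement that the action preserves the Minkowski norm.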

Well, that's it for now...hopefully this was interesting to at least one person. If so, I'll continue with the Poincaré group and its representations. Goodnight all!

Continuing with root vegetables, the rutabaga (or swede, as it's known in other English-speaking countries) is a close relative of the turnip, actually originating as a cross between the cabbage and the turnip. It was first noticed in the 1620's growing wild in Sweden, but likely originated much earlier in the Russian/Scandinavian region.

Eudicots; Brassicales; Brassicaceae; Brassica napobrassica

Perhaps most interesting (to me at least) is the vegetable's taxonomic history. Three ancestral species of Brassica (rapa, the turnip; nigra, black mustard; and oleracea, cabbage/broccoli/kale/Brussels sprouts/cauliflower) were able to interbreed, creating 3 allotetraploid species (having four genomic copies, from two different ancestral species): juncea (Indian mustard), napus (rapeseed, which includes the rutabaga), and carinata (Ethiopian mustard). This hybridization (shown below) is called the Triangle of U after Korean botanist Woo Jang-choon, whose name was romanized as "Nagaharu U".

Triangle of U, genetic relationship between the six Brassica species

While most organisms are diploid (containing two homologous (paired) sets of chromosomes, one from each parent), polyploidy (having more than two sets) is sometimes observed, especially in plants. For instance, seedless watermelons are triploid, kiwifruit and sequoias are hexaploid, and certain strawberries are decaploid. It is estimated that as many as 80% of plant species are polyploid, likely due to the ability of plants to survive the various mechanisms by which polyploidy occurs, such as meiotic failures or fusion of unreduced gametes. In animals, polyploidy is more common among fishes and amphibians, with some frogs having as many as twelve sets of chromosomes.

At any rate, rutabagas are cooked in many ways, from being roasted with meats to baked in casseroles or boiled in soups. They are also often mashed or puréed, and like turnips have been used since inaugural Halloween celebrations to make jack-o'-lanterns. The bitterness sometimes found in rutabagas is due to the presence of cyanoglucosides, which inhibit thyroid iodine transport and may at high doses lead to hypothyroidism. The perceived bitterness is governed by a gene affecting one of our tastebud receptors, which also leads to perceived bitterness in turnip, broccoli, horseradish, and watercress. Luckily, I haven't yet found any of these vegetables bitter, so I believe I won't have to cross rutabagas off of my "edibles" list when I get a chance to eat them!

Turnips are root vegetables grown in temperate climates for their white, fleshy taproot. The bulb develops just at the level of the soil, with the above-ground part often being purple or reddish in color. Turnip leaves emerge directly from the top of the root, and are often eaten themselves (sometimes called "Chinese cabbage"). Smaller turnips are employed in salads or as garnish (like radishes), while the larger ones are often used as livestock feed. While the root is high in vitamin C, the leaves contain high amounts of vitamins C, A, and K, as well as calcium and folate.

Eudicots; Brassicales; Brassicaceae; Brassica rapa

The vegetable traces its origin to early Hellenistic times, and it is likely that the hot wild forms were domesticated in Asia Minor. Now it is grown in temperate climates around the world, and has even become incorporated into human culture: Halloween festivals in Ireland and Scotland feature turnip lanterns (Samhnag) with carved faces placed in windows to ward off harmful spirits. Food-wise, they can be baked or mashed or put into soups and stews, though I've yet to try many recipes out.

Chervil is an annual herb related to carrots and parsley. Primarily grown for seasoning mild dishes, like poultry, seafood, eggs, and soups, it is sometimes known as "gourmet's parsley". Native to the Caucasus, it was spread throughout the Roman Empire. Some types of chervil are grown as root vegetables (easy to understand given their relationship with carrots); though popular in the 1800's, it is now virtually forgotten in all but French cuisine.

Eudicots; Apiales; Apiaceae; Anthriscus cerefolium

Like many herbs, it is thought that chervil possesses many medicinal benefits, such as aiding digestion or lowering blood pressure. Supposedly, infusing it with vinegar cures hiccups, and it can be used as a slug repellent. For myself, I've always enjoyed chervil with eggs, and I find that to be its natural flavor setting. I'm growing my first batch now, and am eager to try it fresh from the plant.

First Monday Magnoliophyte in a long while, I know, but I'm going to try again to keep it up (perhaps by writing them ahead of time). This week I'd like to talk about sorrel, a perennial herb native to Europe, sometimes called spinach dock due to its similarity in appearance to the more common leafy vegetable. It belongs to the same family as rhubarb, and is used in many salads, soups, and stews, much like chard or spinach. A distantly related red Caribbean variety exists that is used in flavoring jellies, tarts, and certain drinks.

Eudicots; Caryophyllales; Polygonaceae; Rumex acetosa

All sorrels contain poisonous oxalic acid, and as such the flavor can range from what is described as "kiwifruit" in younger shoots, to more acidic older leaves. In large quantities, the leaves can prove fatal. The plant has been cultivated for centuries, from Eastern Europe through Africa. I've just started growing some of my own hydroponically, so we'll see how delicious (or awful) it really is!

Okay, I realize that it has been a long time since last posting (over a year), but life has been busy for me and it's hard to find the time sometimes to keep up on these things. Recently, however, I was asked a question about fermions vs. bosons by a mathematician. I relished the opportunity to explain an interesting physical topic without having to go easy on the math, but found myself doing a poor job of rigorously explaining it besides the usual spin/occupation number/symmetrization explanation. I knew what the two particle types were, but completely forgot how we came to derive their statistical properties (my friend is a statistician, so he was particularly interested in this topic). So I refreshed my memory, and would now like to explain it thoroughly.

It would first be great if we understood why bosons and fermions behave differently from their classical counterparts. What is the underlying difference? The source of the issue is indistinguishability. Whereas in classical mechanics we can talk about THIS mass and THAT mass separately, in the quantum picture we simply cannot label one electron THIS one and another THAT one. All quantum particles are exactly the same; I can't paint one red and one blue to tell them apart. As such, I cannot write the composite wavefunction as (ignoring spin) \(\psi(r_1,r_2)=\psi_a(r_1)\psi_b(r_2)\), separating out each particle distinctly. However, we can create a wavefunction that doesn't choose which particle is in which state:

\[\psi_{\pm}(r_1,r_2)=A\left[\psi_a(r_1)\psi_b(r_2)\pm\psi_b(r_1)\psi_a(r_2)\right]\]
So, two identical particles must be written as a linear combination of the ways they could be distinguished. Those with the plus sign are called bosons after the Indian physicist Satyendra Nath Bose, while those with the minus sign are called fermions after the Italian physicist Enrico Fermi. Using relativity, one can prove the spin-statistics theorem, which adds the additional information that bosons have integer spin, while fermions have half-integer spin. Also notice that the above implies the Pauli exclusion principle, because if two fermions occupied the same state, we would have

\[\psi_-(r_1,r_2)=A\left[\psi_a(r_1)\psi_a(r_2)-\psi_a(r_1)\psi_a(r_2)\right]=0\]
This restriction on fermions accounts for much of what we observe in nature, from the electronic structure of atoms to neutron stars. The statistics of both types of indistinguishable particles can be summed up by stating the (anti)-symmetrization requirement: \(\psi(r_1,r_2)=\pm\psi(r_2,r_1)\).

Now we can examine what happens when we have lots of these particles in a potential, say with energies \(E_1, E_2, E_3, \ldots\) with degeneracies \(d_1, d_2, d_3, \ldots\). Suppose there are \(N\) in all (all the same mass), and we distribute them so that there are \(N_1\) with energy \(E_1\), \(N_2\) with energy \(E_2\), and so on. The number of different ways this can be achieved, \(\Omega(N_1,N_2,N_3,\ldots)\), depends on whether the particles are distinguishable, identical fermions, or identical bosons, labeled \(\Omega_D\), \(\Omega_F\), \(\Omega_B\). For distinguishable particles, we first ask how many ways we can select \(N_1\) particles from the \(N\) available to place in the first "bin" \(E_1\). The answer is the combination

\[\binom{N}{N_1}=\frac{N!}{N_1!\,(N-N_1)!}\]
Inside of this bin, each of the \(N_1\) particles can occupy any of the \(d_1\) states, so there is a total of

\[\binom{N}{N_1}d_1^{N_1}\]
options. The same goes for bin two, except now there are only \(N-N_1\) particles remaining. Hence, repeating the procedure, we obtain

\[\Omega_D=\binom{N}{N_1}d_1^{N_1}\binom{N-N_1}{N_2}d_2^{N_2}\cdots=N!\prod_{i=1}^{\infty}\frac{d_i^{N_i}}{N_i!}\]
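The distinguishable count can be sanity-checked by brute force on a toy system (numbers of my own choosing: \(N=4\) particles, two bins with \(d_1=2\), \(d_2=3\), occupations \(N_1=1\), \(N_2=3\)):

```python
from itertools import product
from math import factorial

# Toy system: states 0,1 belong to bin 1 (d1 = 2), states 2,3,4 to bin 2 (d2 = 3)
d1, d2, N = 2, 3, 4
N1, N2 = 1, 3

# Enumerate every assignment of 4 labeled particles to the 5 states,
# counting those with the required bin occupations
count = 0
for assignment in product(range(d1 + d2), repeat=N):
    in_bin1 = sum(1 for s in assignment if s < d1)
    if in_bin1 == N1:
        count += 1

# Compare with Omega_D = N! * (d1^N1 / N1!) * (d2^N2 / N2!)
omega = factorial(N) * d1**N1 * d2**N2 // (factorial(N1) * factorial(N2))
assert count == omega == 216
```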
Next we consider the case of fermions. This one is easy: because they can't be distinguished, it doesn't matter which particle is in which state. There is just one N-particle state, and only one particle can occupy each state (Pauli exclusion principle). Since there are

\[\binom{d_n}{N_n}=\frac{d_n!}{N_n!\,(d_n-N_n)!}\]
ways to choose which states are occupied in the \(n\)th bin, we have

\[\Omega_F=\prod_{n=1}^{\infty}\frac{d_n!}{N_n!\,(d_n-N_n)!}\]
The hardest case is for bosons, where there is no restriction on the number of particles that can occupy each state. It really comes down to distributing \(N_i\) identical particles among \(d_i\) states, a "stars and bars" count, implying we should use
\[\binom{N_i+d_i-1}{N_i}=\frac{(N_i+d_i-1)!}{N_i!\,(d_i-1)!}\]

for each bin, giving

\[\Omega_B=\prod_{i=1}^{\infty}\frac{(N_i+d_i-1)!}{N_i!\,(d_i-1)!}\]
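The boson count for a single bin is just the number of multisets of size \(N_i\) drawn from \(d_i\) states, which can be checked directly (toy numbers of my own choosing):

```python
from itertools import combinations_with_replacement
from math import comb

# Number of ways to put Ni identical bosons into di states
Ni, di = 4, 3
multisets = list(combinations_with_replacement(range(di), Ni))

# Stars and bars: (Ni + di - 1)! / (Ni! (di - 1)!)
assert len(multisets) == comb(Ni + di - 1, Ni) == 15
```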
To determine the most probable configuration, we want to maximize the function \(\Omega\) subject to the two constraints

\[\sum_{i=1}^{\infty}N_i=N \qquad\text{and}\qquad \sum_{i=1}^{\infty}N_iE_i=E\]

This is best done using Lagrange multipliers, where we construct a new function

\[G=\ln\Omega+\alpha\left(N-\sum_{i=1}^{\infty}N_i\right)+\beta\left(E-\sum_{i=1}^{\infty}N_iE_i\right)\]
and set the N_i derivative to zero. To do this, we assume Stirling's approximation,

\[\ln(z!)\approx z\ln z-z\]

for \(z\gg 1\). This will be essentially true as long as the occupation numbers \(N_i\) are very large (large enough to assume that statistics will work at all). In the identical fermion case, we must also assume that the degeneracies \(d_i\) are very large as well (not true in one dimension, but the degeneracies in three dimensions usually increase rapidly with the energy level; in the hydrogen atom, for example, \(d_n=n^2\)). Using this approximation, we get for the distinguishable case

\[G\approx\sum_{i=1}^{\infty}\left[N_i\ln d_i-N_i\ln N_i+N_i-\alpha N_i-\beta E_iN_i\right]+\ln N!+\alpha N+\beta E\]

so that

\[\frac{\partial G}{\partial N_i}=\ln d_i-\ln N_i-\alpha-\beta E_i\]

which, when equal to zero, gives us the most probable occupation numbers of each energy level to be

\[N_i=d_i e^{-(\alpha+\beta E_i)}\]

This is known as the Maxwell-Boltzmann distribution, applicable only to distinguishable particles. Doing the same procedure for fermions, we obtain

\[N_i=\frac{d_i}{e^{(\alpha+\beta E_i)}+1}\]

while for bosons we get

\[N_i=\frac{d_i}{e^{(\alpha+\beta E_i)}-1}\]

Comparing these results with the equipartition theorem, we conclude that our Lagrange multipliers must be defined as \(\beta=\frac{1}{k_BT}\) and \(\alpha=-\frac{\mu(T)}{k_BT}\), with \(\mu\) the chemical potential of the system (essentially measuring how much the particles want to move down the concentration gradient). Hence we have the distributions for the three classes of particles

\[n(\epsilon)=\left\{\begin{array}{ll} e^{-(\epsilon-\mu)/k_BT} & \text{Maxwell-Boltzmann}\\ \frac{1}{e^{(\epsilon-\mu)/k_BT}+1} & \text{Fermi-Dirac}\\ \frac{1}{e^{(\epsilon-\mu)/k_BT}-1} & \text{Bose-Einstein} \end{array}\right.\]

From these, lots of physics can be discussed, such as Fermi energies, thermodynamics, Bose-Einstein condensation, the blackbody spectrum, neutron stars, white dwarfs, harmonic oscillators...indeed the great majority of statistical mechanics.
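As a quick numerical illustration (units and parameter values are my own choices, with \(k_BT=1\) and \(\mu=0\)), the three distributions differ near \(\epsilon=\mu\) but agree in the classical tail \(\epsilon-\mu\gg k_BT\):

```python
import math

def n_mb(e, mu, kT):
    return math.exp(-(e - mu) / kT)           # Maxwell-Boltzmann

def n_fd(e, mu, kT):
    return 1 / (math.exp((e - mu) / kT) + 1)  # Fermi-Dirac

def n_be(e, mu, kT):
    return 1 / (math.exp((e - mu) / kT) - 1)  # Bose-Einstein (valid for e > mu)

kT, mu = 1.0, 0.0

# At e = mu the Fermi-Dirac occupation is exactly 1/2
print(n_fd(0.0, mu, kT))

# Far above mu, all three converge to the classical Boltzmann tail
for e in (5.0, 10.0):
    print(n_mb(e, mu, kT), n_fd(e, mu, kT), n_be(e, mu, kT))
```

Note the ordering: the Fermi-Dirac occupation always sits below the classical value (exclusion suppresses occupation), while Bose-Einstein sits above it (bosons bunch), which is exactly the behavior that leads to Bose-Einstein condensation.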

Pineapples are herbaceous perennial bromeliads that produce sweet "multiple fruits", in that what we perceive as a single fleshy fruit body is actually a compression of adjacent, helically arranged fruits. The plant can grow to 5 feet tall, and is primarily pollinated by hummingbirds. In Hawaii, due to the extent of the agricultural cultivation of the plant, hummingbird importation is forbidden. The name "pineapple" was actually first recorded in 1398 to describe what we now call "pine cones". The term was redefined in 1664 after European explorers had discovered the tropical fruit, native to southern Brazil and Paraguay. We largely owe the modern distribution of pineapple to the Spanish, who spread it outside of Latin America (Columbus found it in the Caribbean) to Hawaii, the Philippines, Guam, and Zimbabwe. Pineapples are one of the relatively few plants that carry out crassulacean acid metabolism (CAM photosynthesis), whereby carbon dioxide is stored at night as the four-carbon acid malate and then released near the enzyme RuBisCO during the day, improving the efficiency of photosynthesis.

Monocots; Commelinids; Poales; Bromeliaceae; Ananas comosus

Pineapples contain large amounts of the protease bromelain, and as such raw pineapple should not be consumed by those suffering from various protein deficiencies, nor those with hemophilia, kidney, or liver diseases (or the occasional canker sore, as I'm well aware!). Other than that, the fruit is high in vitamin C and manganese, and helps with some intestinal disorders and may induce labor when a child is overdue. Due to the presence of the protease and the natural acidity of the fruit, pineapple makes a wonderful marinade for meats as a tenderizer, and is commonly used in pork dishes. Its sweetness, though, helps it get into many desserts, juices, smoothies, salads, etc. as well.

Basil is a low-growing annual herb of the mint (Lamiaceae) family, used frequently in Italian and Southeast Asian cuisine. The plant is native to South and Southwest Asia, having been cultivated there for over 5,000 years. Due to the abundance of phytochemicals in varying quantities, many cultivars exist with intrinsically different scents and flavors, depending on the chemical concentrations. The standard "sweet" or "Genovese" basil scent comes from eugenol, a phenylpropanoid compound also present in cloves, while lemon basil contains high amounts of citral, and licorice basil contains anethole. There are over 100 types of basil, each with its own unique fragrance.

Eudicots; Asterids; Lamiales; Lamiaceae; Ocimum basilicum

The word basil comes from the Greek word for "king", and was believed to have grown where St. Constantine and Helen discovered the biblical cross. Other religions around the world place importance on the plant, involving it with ritual and belief. The name is fitting, considering its designation as "the king of herbs" by many culinary experts. While many dishes use the herb as a flavoring additive, Italian pesto requires large amounts of the leaves blended with oil and pine nuts to create a thick sauce, and sometimes basil is used in fruit jams and jellies. And of course, as with many aromatic herbs, the plant has several health benefits, such as having antioxidant and anti-microbial properties.

On a cultivation side note, I grow a great deal of basil, and not always by choice; the plant, when exposed to the right amount of light, watering, and pruning, can grow VERY rapidly, often to the point of over-shading nearby herbs. I suppose this is not actually a problem, as it just means I need to make more pesto! Below is a recipe:

Italian Pesto
  • 2 cups fresh Genovese (sweet) basil leaves
  • 1/3 cup pine nuts
  • 3 cloves garlic (I like garlic)
  • 1/2 cup olive oil
  • 1/2 cup Parmesan cheese
Pulse garlic with pine nuts in food processor. Add basil, and pulse again. While on, pour oil in steady stream into mixture, scraping down sides with spatula (don't let it hit the blades!). Add cheese and pulse until blended, with dash of salt and pepper.

After watching the recent South Park episode on the revised use of the word "faggot", I was reminded of how alive language is, and how neologisms and alteration of meaning can take place in a relatively short time frame. This got me thinking about the phrase "all right", which my uncle is always keen on reminding me is the accepted grammatically correct spelling. I admit that in formal writing, one should use the two word phrase. However, I do see a reason why the less common (but becoming more so) word "alright" should be adopted as a "real" word in its own right. And keep in mind that unlike most other languages of the world, English has no national or international academy or regulatory body that determines worthiness or correctness of words; there is only a loose confederation of dictionary publishers and a collection of traditionalists. Not one legal policy or official statute exists concerning purity of English (a far cry from French and Mandarin, whose language elite are, to put it bluntly, xenophobic and parochial).

The need for "alright" arises in the following context:

"Your answers to the problems were all right."

Here, there is some ambiguity over whether the speaker means that the answers were collectively correct (i.e., none of them were wrong), or that the solutions were satisfactory or mediocre. The difference can be ascribed to the fact that, in the former case, the two word phrase may be broken up: "All of your answers to the problems were right." In the latter case, "alright" should be used to imply that the answers were acceptable, though perhaps did not meet higher expectations, and cannot be recast as a split phrase. I feel that this difference, as well as the growing usage of the word in informal writing and dialogue, necessitates the acceptance of "alright" as a legitimate English word in the near future, if not already.

The plant grows as a small tree or woody vine and produces a fleshy berry with small edible seeds. The seeds (and indeed much of the fruit itself) can be bitter due to the presence of nicotinoid alkaloids, which are often found in Solanaceae (Nightshade family) plants like tomatoes, potatoes, peppers, and tobacco. The plant is native to India, and has appeared in the written record since 544 CE. Surprisingly, it was unknown to the Western world until about 1500 CE, and even then was not used widely due to concerns of the plant's toxicity, being a relative of nightshade. In British English the fruit is referred to as an "aubergine", whose origin can be traced through French, Catalan, Arabic, Persian, and Sanskrit. In the US, Canada, Australia, and New Zealand, the fruit is called an "eggplant" due to the appearance of some cultivars in 18th century England, which were yellow or white and resembled goose eggs. Today there are many varieties of different sizes, shapes, and colors, though the most common in the US are elongate ovoids with dark purple skin.

Eudicots; Asterids; Solanales; Solanaceae; Solanum melongena

The plant is used in cuisine around the globe, from Japan and India to Spain and Turkey. The fleshy part of the berry is able to absorb large amounts of fats and liquids, making it an ideal addition to sauces. Though the plant can be bitter, this can be alleviated by salting and rinsing the sliced fruit, and some cultivars are not bitter at all. The thin leathery skin is also edible, so peeling is not required. The plant may help protect against high blood pressure and free-radical damage, and is a good source of folic acid and potassium. It has more nicotine than any other edible plant, though 20 pounds of eggplant would have to be eaten to match that in one cigarette.

Pomegranates are deciduous shrubs or small trees of the crape myrtle family, bearing grapefruit-sized berries with edible seeds and surrounding flesh (arils). A single fruit may contain up to 600 arils. The plant is native to Southwest Asia, and has been cultivated since ancient times, referenced often in Judaic and Greek mythos. In fact, dried pomegranates have been found in Egyptian tombs from the third millennium BCE, and the biblical city of Jericho. Today they are grown around the world wherever the climate is not too wet, due to their propensity for root fungus.

Eudicots; Rosids II; Myrtales; Lythraceae; Punica granatum

The word pomegranate comes from Latin, meaning "seeded apple", which has influenced the name in other languages as well. Our words "grenade" and "grenadine", as well as the Spanish city of Granada, owe their origin to the pomegranate, which was once used to make grenadine, and gave the city its name during the Moorish period. I assume the term grenade comes from the fruit's ability to scatter its arils in every direction when thrown against a solid object. Judaism has had a long history of admiring the fruit, and associates it with righteousness as one of the seven foods special to the land of the Hebrews. In Greek mythology, Persephone was forced to stay in Hades four months out of the year because she had been tricked into eating four pomegranate seeds. This explained the onset of winter, when her mother Demeter (goddess of the harvest) would mourn Persephone's return to the underworld. Pomegranates are also a common motif in Christian religious decoration as a symbol of Jesus' suffering and resurrection.

The fruit has many culinary uses, though the arils are often eaten raw. Pomegranate juice is a common ingredient in several Persian and Caucasian dishes and soups, or may be drunk on its own. When thickened and sweetened, it forms grenadine, included in many cocktails. The acidic tannins in the juice, along with the natural sugars, make it an excellent basis for glazes and sauces, applied to duck or other poultry. The fruit contains many beneficial phytochemicals, such as vitamin C, pantothenic acid, potassium, and antioxidant polyphenols. As such, it seems to inhibit many health problems, such as heart disease, high blood pressure, dental plaque, breast cancer, prostate cancer, diabetes, lymphoma and rhinovirus infection! For me, they are fun to eat, and the juice's citrus/berry flavor can't be beat.

Broccoli is a cultivar group of the cabbage family Brassicaceae, but amazingly is the same species as several other seemingly unrelated plants. Brassica oleracea also includes kale, collard greens, Brussels sprouts, cabbage, kohlrabi and others, indicating that the species has undergone extensive cultivation from its wild form. The native variant is indigenous to limestone sea cliffs of Southern Europe, and is known to have first been cultivated about 2,000 years ago by the Roman Empire.

Eudicots; Rosids; Brassicales; Brassicaceae; Brassica oleracea

The part of the plant that we eat is the mass of flower buds that forms on the "head" (think cabbage). It may be boiled, steamed, or eaten raw, though boiling is discouraged to prevent loss of nutrients. It's high in vitamins A, C, and K, as well as dietary fiber. The vegetable also has several anti-viral, anti-bacterial, and anti-cancer properties. Being a cruciferous vegetable, the plant is potentially goitrogenic (can induce goiter formation), and should be thoroughly cooked before being eaten by people with thyroid problems or iodine deficiencies. Individuals sensitive to PTC (see cilantro entry) may find the flavor of Brassica members distasteful, but broccoli is one of my favorite vegetables. An excellent snack:

  • 1 cup mayonnaise
  • 1-2 tablespoons lemon juice
  • 1 tablespoon curry powder
  • raw broccoli
Mix the first three and dip broccoli. Quite tasty.

Now I'd like to examine physical theories that are more statistical in nature: statistical mechanics and quantum mechanics. Instead of assuming our system progresses exactly along the path of minimum action, we assign a probability distribution to weight the various paths it can take through phase space, with the most probable path being the one of stationary action. This formulation of statistical mechanics is much akin to Feynman's path integral approach to quantum mechanics, which I'll cover next time.

Let's start with a definition of the average action, \(\langle S\rangle=\int\mathcal{D}\varphi\,P[\varphi]\,S[\varphi]\), where \(P[\varphi]\) is a probability distribution over all possible field configurations on the manifold of interest (spacetime), \(S[\varphi]\) is our usual action functional along the path, and \(\int\mathcal{D}\varphi\) indicates that the integration is to be performed over all possible field configurations over all of the manifold. By definition, we must have \(\int\mathcal{D}\varphi\,P[\varphi]=1\). We may also define the entropy from Shannon's theory as \(H=-\int\mathcal{D}\varphi\,P[\varphi]\ln P[\varphi]\). Employing the method of Lagrange multipliers to impose the constraints, we find that \(P[\varphi]=\frac{1}{Z}e^{-\beta S[\varphi]}\), where \(Z=\int\mathcal{D}\varphi\,e^{-\beta S[\varphi]}\) is the partition function for normalizing the probability, and \(\beta\) is the Lagrange multiplier, with units of inverse action (in quantum mechanics, \(\beta\) plays the role of \(1/\hbar\), up to a factor of \(i\)). We also obtain \(\langle S\rangle=-\frac{\partial\ln Z}{\partial\beta}\) and \(H=\ln Z+\beta\langle S\rangle\). The most important result, though, comes from using Hamilton's Principle, \(\delta S=0\). This means \[\delta\langle S\rangle=\int\mathcal{D}\varphi\,P[\varphi]\,\delta S[\varphi].\]

But \(\delta S=0\), so \(\delta\langle S\rangle=0\). Hence Hamilton's Principle applies to the mean action as well (which we would expect). This immediately yields \(\delta H=\beta\,\delta\langle S\rangle=0\) for the expected path: the entropy change is maximized for the field configuration path we expect. This is essentially the second law of thermodynamics. In this sense, the second law and Hamilton's Principle are equivalent: a statistical process that extremizes action extremizes entropy change. Though the action is in terms of a Lagrangian, we can see that under certain circumstances, we may make a Legendre transform to put it in the form of a Hamiltonian. Additionally (the derivation is a bit too long), we can find that in that case the Lagrange multiplier is related to the temperature by defining \(\beta=\frac{1}{k_BT}\), and then from the partition function we may derive the many laws and formulae of thermodynamics.
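To make the Lagrange-multiplier result concrete, here is a minimal numerical sketch (in Python, with a made-up finite set of "paths" and arbitrary action values standing in for the functional integral) verifying that \(P=e^{-\beta S}/Z\) is normalized and satisfies \(\langle S\rangle=-\partial\ln Z/\partial\beta\) and \(H=\ln Z+\beta\langle S\rangle\):

```python
import numpy as np

# Toy model: a finite set of "paths", each with an action value S_i.
# The maximum-entropy distribution subject to fixed <S> is the
# Gibbs-like form P_i = exp(-beta*S_i)/Z derived above.
S = np.array([0.5, 1.0, 1.5, 3.0])   # hypothetical action values
beta = 2.0                            # Lagrange multiplier (inverse action)

Z = np.sum(np.exp(-beta * S))         # partition function
P = np.exp(-beta * S) / Z             # normalized probabilities
assert abs(P.sum() - 1.0) < 1e-12     # constraint: total probability is 1

def lnZ(b):
    return np.log(np.sum(np.exp(-b * S)))

# Check <S> = -d(ln Z)/d(beta) via a central finite difference:
h = 1e-6
mean_S_from_Z = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)
mean_S_direct = np.sum(P * S)
assert abs(mean_S_from_Z - mean_S_direct) < 1e-6

# Check the entropy identity H = ln Z + beta*<S>:
H = -np.sum(P * np.log(P))
assert abs(H - (lnZ(beta) + beta * mean_S_direct)) < 1e-9
```

The same identities hold for the continuum functional integral; the finite sum just makes them easy to check.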

Perhaps the most beautiful and easy-to-understand field theory is that of electrodynamics. In the late 1800s, James Clerk Maxwell unified the rather disparate and seemingly unrelated topics of magnetism and electricity into a single theory, codified in the four equations below, Maxwell's equations: \[\begin{align*}\nabla\cdot\mathbf{E}&=\frac{\rho}{\epsilon_0}\\ \nabla\cdot\mathbf{B}&=0\\ \nabla\times\mathbf{E}&=-\frac{\partial\mathbf{B}}{\partial t}\\ \nabla\times\mathbf{B}&=\mu_0\mathbf{J}+\mu_0\epsilon_0\frac{\partial\mathbf{E}}{\partial t}\end{align*}\]

The first equation is Gauss's Law, the second precludes the existence of magnetic monopoles, the third is Faraday's Law of induction, and the fourth is the Maxwell-Ampère law with a displacement current. These four partial differential equations can be modified slightly to be used in polarized, dielectric or magnetic materials, but this is essentially the entire theory of electromagnetism. To put these in a covariant framework (for relativistically invariant physics), we can introduce the rank 2 antisymmetric field tensor: \[F^{\mu\nu}=\begin{pmatrix}0&-E_x/c&-E_y/c&-E_z/c\\E_x/c&0&-B_z&B_y\\E_y/c&B_z&0&-B_x\\E_z/c&-B_y&B_x&0\end{pmatrix}\]

Using this and the definition of the 4-current \(J^\mu=(c\rho,\mathbf{J})\), Maxwell's equations are \(\partial_\mu F^{\mu\nu}=\mu_0 J^\nu\) and \(\partial_{[\alpha}F_{\beta\gamma]}=0\). In terms of the 4-potential \(A^\mu=(\phi/c,\mathbf{A})\), where \(F^{\mu\nu}=\partial^\mu A^\nu-\partial^\nu A^\mu\), it simplifies even more (in the Lorenz gauge \(\partial_\mu A^\mu=0\)): \(\partial_\mu\partial^\mu A^\nu=\mu_0 J^\nu\).
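As a quick sanity check on the field tensor (a sketch in Python with arbitrary sample field values of my choosing, in the \((+,-,-,-)\) metric convention used above), we can build \(F^{\mu\nu}\) from sample \(\mathbf{E}\) and \(\mathbf{B}\) fields and verify both its antisymmetry and the standard Lorentz invariant \(F_{\mu\nu}F^{\mu\nu}=2\left(B^2-E^2/c^2\right)\):

```python
import numpy as np

c = 1.0                            # natural units
E = np.array([0.3, -0.1, 0.2])     # sample electric field components
B = np.array([0.0, 0.5, -0.4])     # sample magnetic field components

# Contravariant field tensor F^{mu nu}, matching the matrix above:
F = np.array([
    [0.0,     -E[0]/c, -E[1]/c, -E[2]/c],
    [E[0]/c,   0.0,    -B[2],    B[1]  ],
    [E[1]/c,   B[2],    0.0,    -B[0]  ],
    [E[2]/c,  -B[1],    B[0],    0.0   ],
])

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric (+,-,-,-)
F_lower = eta @ F @ eta                   # lower both indices: F_{mu nu}

invariant = np.einsum('ij,ij->', F_lower, F)   # F_{mu nu} F^{mu nu}
expected = 2.0 * (B @ B - (E @ E) / c**2)
assert np.allclose(F, -F.T)                # antisymmetry of the tensor
assert abs(invariant - expected) < 1e-12   # the Lorentz invariant
```

Any choice of \(\mathbf{E}\) and \(\mathbf{B}\) gives the same identity, which is what makes \(F_{\mu\nu}F^{\mu\nu}\) a useful scalar for building the Lagrangian below.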

But how can we get this from a Lagrangian? Once again, we must construct some kind of scalar from the fields. From the field tensor, there is only one Lorentz invariant we may make: \(F_{\mu\nu}F^{\mu\nu}\). (There is also a pseudo-scalar invariant, \(F_{\mu\nu}\tilde{F}^{\mu\nu}\propto\mathbf{E}\cdot\mathbf{B}\), related to the angle between the electric and magnetic fields, but the Lagrangian must be a true scalar.) The only scalar we can form from the current is \(J^\mu A_\mu\), so we choose as our Lagrangian \[\mathcal{L}=-\frac{1}{4\mu_0}F_{\mu\nu}F^{\mu\nu}-J^\mu A_\mu,\] where the constants are chosen to conform to experiment. Invoking Hamilton's Principle by finding a stationary action, \[\delta S=\delta\int\mathcal{L}\,d^4x=0.\]

Applying the Euler-Lagrange equations exactly returns the results above, so this Lagrangian completely encapsulates electromagnetism. It is straightforward (if cumbersome) to generalize E&M to curved spacetime in the theory of general relativity.

First I'd like to apply Hamilton's Principle to Einstein's theory of General Relativity. We need to find a Lagrangian that incorporates the effect of mass on the manifold's metric. It must, as usual, be a scalar, and we suppose it is a functional of the metric and the matter fields. Hence, we suppose for free space \[S=\frac{1}{2\kappa}\int R\sqrt{-g}\,d^4x,\]

where \(\sqrt{-g}\,d^4x\) is the invariant 4-volume element of our Riemannian manifold (and \(g\) is the determinant of the metric), \(\kappa\) is a universal constant, and \(R\) is the Ricci scalar (the simplest curvature invariant of a Riemannian manifold). If we were deriving the Einstein field equations for the first time, we would not know what these constants and scalars were, but we would still be able to write it in this form based on our assumptions of locality, isotropy of free space, etc.

Let us now suppose that the full action is this free-space action (the Einstein-Hilbert action) plus a term that describes matter fields: \[S=\int\left[\frac{1}{2\kappa}R+\mathcal{L}_M\right]\sqrt{-g}\,d^4x.\]

When we apply the Euler-Lagrange equations from last time, we find that \[\frac{1}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\,R\right)}{\delta g^{\mu\nu}}=\kappa\left(-\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\,\mathcal{L}_M\right)}{\delta g^{\mu\nu}}\right).\]

We define the right hand side as (\(\kappa\) times) the stress-energy tensor responsible for the curvature of spacetime due to the presence of matter: \[T_{\mu\nu}\equiv-\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\,\mathcal{L}_M\right)}{\delta g^{\mu\nu}}.\] The left hand side is completely geometric (no physics is required), and from the theory of differential geometry can be shown to equal \(G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\), the Einstein tensor, in terms of the Ricci tensor, the Ricci scalar, and the metric. The Einstein field equations, then, are \[G_{\mu\nu}=\kappa T_{\mu\nu}.\]

This rather beautiful relation between geometry (left hand side) and physics (right hand side) can be viewed in either direction: a curving geometry tells matter and energy how to move; or equivalently, the presence of matter and energy curves spacetime. The theory does not, however, give us either one a priori, and the constant \(\kappa\) must be determined by experimental results. We can place restrictions on the stress-energy tensor, however: as the source of the gravitational field, we expect a particular simplicity in the non-relativistic limit (Newton's law of gravitation). Because of the symmetries of spacetime, we can apply Noether's theorem to obtain the conservation law \(\nabla_\mu T^{\mu\nu}=0\). In many cases (where the spin tensor is zero), the tensor is symmetric and angular momentum is conserved as well. The radiation portions of the tensor can be obtained from Maxwell's theory, which I'll discuss next time. From the field equations, one can extract information about the nature of black holes, galactic rotation, etc., and as such these equations encapsulate most, if not all, of mechanics in the non-quantum regime.
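For reference, though the matching is not carried out here, comparing the non-relativistic limit of these equations with Newton's law of gravitation fixes the universal constant as

\[\kappa=\frac{8\pi G}{c^4},\qquad\text{so that}\qquad G_{\mu\nu}=\frac{8\pi G}{c^4}T_{\mu\nu},\]

where \(G\) is Newton's gravitational constant.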

Of late I have been intrigued by the idea that all differential equations that make up the bulk of our understanding of physics can be recast as integral equations, subject to some stationary condition. In fact, these all essentially reduce to a condition called Hamilton's Principle, stating that as a system evolves, the "action" is stationary. I will define these ideas in a bit. As I researched the subject, I found that not only classical mechanics could be formulated this way, but also optics, special relativity, general relativity, electromagnetism, quantum mechanics, and even statistical mechanics, essentially all branches of physics. Even string theory can be started from an action principle. The implications are amazing: essentially all of physics can be derived from one principle.

I suppose that now I should formalize: suppose we have an \(n\)-dimensional manifold \(M\) and a target manifold \(T\). Let \(\mathcal{C}\) be the set of smooth functions \(\varphi:M\to T\). Now consider a functional \(S:\mathcal{C}\to\mathbb{R}\), where \(\varphi\in\mathcal{C}\) is a field. Now we must make some assumptions. First, by one postulate of quantum mechanics, the action must map to the field of real numbers \(\mathbb{R}\), since observables (such as action) have real eigenvalues. Also, from relativity, we assume locality, as required for causality. Quantum entanglement does not pose a problem because information cannot be transmitted nonlocally instantaneously. Locality implies that if \(\varphi\in\mathcal{C}\), we can assume \(S[\varphi]\) depends only on a function of \(\varphi\) and its derivatives, integrated over the manifold \(M\). Hence: \[S[\varphi]=\int_M\mathcal{L}\left(\varphi,\partial_\mu\varphi,x\right)d^nx.\]

The function \(\mathcal{L}\) is called the "Lagrangian" function. The Euler-Lagrange equations discussed below can be modified to include higher order derivatives of \(\varphi\) in the Lagrangian*, but through a substitution, the Euler-Lagrange equations can always be reduced to derivatives with respect to the function and its first derivative, nothing higher (at the expense of additional equations to solve). The Euler-Lagrange equations are derived using the calculus of variations, based on the functional derivative. We can imagine varying the function \(\varphi\) until the action integral is maximal or minimal or inflected (i.e., "stationary"). Finding this stationary action is the essence of the functional derivative: normally we find a stationary point by setting the derivative of a function with respect to a variable to zero. Here we set the derivative of a functional with respect to a function to zero. There is an issue, in the derivation, of boundary conditions, so that we must specify \(\varphi\) on \(\partial M\) if \(M\) is compact, or place some limit on it as \(x\to\infty\). This gives us the Euler-Lagrange equations: \[\frac{\partial\mathcal{L}}{\partial\varphi}-\partial_\mu\left(\frac{\partial\mathcal{L}}{\partial\left(\partial_\mu\varphi\right)}\right)=0,\]

where \(\mu\) runs over each of the \(n\) dimensions of the manifold. So what is \(\varphi\)? These are the physical fields of interest. In classical Lagrangian mechanics, they are the coordinates themselves, expressed as functions of time. In field theory, they are physical fields as functions of spacetime. Hence the target manifold \(T\) is the set of field values at a given point. For example, in classical mechanics, we might have \(\mathcal{L}=\frac{1}{2}m\dot{q}^2-V(q)\), so that \[\frac{\partial\mathcal{L}}{\partial q}-\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{q}}=-\frac{\partial V}{\partial q}-m\ddot{q}=0,\] which is just Newton's second law.
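The classical-mechanics example can be checked numerically. The following sketch (plain Python/NumPy; the grid size, trial path, and step sizes are arbitrary choices of mine) relaxes a wiggly trial path for a free particle (\(V=0\)) by gradient descent on the discretized action, and confirms that the stationary path between fixed endpoints is the straight line Newton's laws predict:

```python
import numpy as np

# Hamilton's Principle, numerically: for a free particle (V = 0) the
# stationary-action path between fixed endpoints is a straight line.
# Discretize q(t), start from a wiggly trial path with the correct
# endpoints, and relax it by gradient descent on the discrete action.
m, T, N = 1.0, 1.0, 51
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]
q = 2.0 * t + np.sin(3 * np.pi * t / T)   # trial path: q(0)=0, q(T)=2

def action(q):
    v = np.diff(q) / dt                    # discrete velocities
    return np.sum(0.5 * m * v**2) * dt     # S = integral of (1/2) m v^2

S_initial = action(q)
for _ in range(20000):
    # dS/dq_i = -m (q_{i+1} - 2 q_i + q_{i-1}) / dt at interior points;
    # the endpoints stay fixed (the boundary conditions).
    grad = -m * (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dt
    q[1:-1] -= 0.005 * grad

straight_line = 2.0 * t / T                # the analytic stationary path
assert action(q) < S_initial               # relaxation reduced the action
assert np.allclose(q, straight_line, atol=1e-3)
```

For short enough time intervals the physical path is a true minimum of the action, so the same relaxation works with a potential term added to `action` and its gradient.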

Hence, with a suitable choice of Lagrangian, all the laws of physics may be derived (though of course in practice this is definitely not always the way to proceed). The question that naturally arises is, "What is the total Lagrangian?" Well, it must be a scalar to preserve isotropy of space and homogeneity of spacetime (that is, empty space looks the same in all directions, and there is no difference between one point and the next). Aside from this, it may be chosen to describe the physics involved. Over a few more posts, I'll derive some important equations in physics from Hamilton's Principle.

*The Euler-Lagrange equations for a Lagrangian dependent on the first \(k\) derivatives of \(\varphi\) are: \[\sum_{j=0}^{k}(-1)^j\,\partial_{\mu_1}\cdots\partial_{\mu_j}\left(\frac{\partial\mathcal{L}}{\partial\left(\partial_{\mu_1}\cdots\partial_{\mu_j}\varphi\right)}\right)=0.\]

Well, I've completed the qualifying exam for physics (a two-day, 4-hour-each nightmare), and if I passed, I'll get my Master's degree. Besides the test, this weekend has been rather packed: Peter and Rika came down and spent a few days with us, Rachael came down, and Phil's parents visited. We played many games, celebrated Philip's 23rd birthday, saw "9", went to the Wild Animal Park, cooked a lot of food, watched Batman Begins and The Dark Knight...quite exhausting but enjoyable. Now it's time to continue with my research, and start classes on Thursday: non-equilibrium statistical physics, field theory, and solid state physics. It should be a good quarter. This weekend I'll be visiting the family and going to Catalina, and the next weekend Rachael will come down again for a whole week! And I might even have a car by then...

Apples are an example of what botanists call a pome: an accessory fruit produced by members of the subfamily Maloideae of the rose family. A pome's exocarp and mesocarp are formed from the carpels and make up the fleshy part of the fruit, while the endocarp forms a leathery or stony case around the seeds, commonly called the core. The end of the fruit opposite the stem is the calyx, where one can often see the remains of the flower's sepals, style, and stamens. The trees themselves are deciduous, between 10 and 40 feet tall, and produce five-petaled white and pink flowers in the spring. First noticed in the wild at least 2,500 years ago in Central Asia, there are now over 7,500 different cultivars, created over centuries of selection. Indeed, the apple tree may be the earliest domesticated tree. Part of this diversity may be due to the extreme heterozygous nature of the plants: apples grown from seed may be radically different from their parents, so most varieties today are grown from grafting.

Eudicots; Rosales; Rosaceae; Maloideae; Malus domestica

Perhaps due to its long history, the apple has had an interesting impact on human culture. Many pagan religions make reference to apples as mystical or forbidden fruit, though the term "apple" was used to describe all fruits as late as the 17th century. The Latin word for apple, malus, is similar to that for evil: malum. This similarity may underlie the Christian myth of the "forbidden, evil apple" in the Garden of Eden, as well as the notion that a man's "Adam's apple" comes from the fruit sticking in Adam's throat.

Short of choking to death, apples actually provide numerous health benefits, to the point that "An apple a day keeps the doctor away." They have been shown to decrease risks of lung, colon, and prostate cancer, as well as decline in mental faculties. They can also help with weight loss and cholesterol levels. The seeds, however, are mildly poisonous, and should not be consumed in bulk. Their bitter flavor comes from amygdalin, the same cyanide compound found in bitter nuts. The fruits can be used in a wide number of culinary applications, from juice (and ciders, spirits, and vinegars) to apple cakes, pies, crumbles, crisps, butters, jellies and sauces. They can be served spiced, caramelized, in salads, or just eaten raw (my favorite). So indulge in this "sinful(?)" fruit and enjoy it.
