San José State University
applet-magic.com
Thayer Watkins
Silicon Valley & Tornado Alley, USA

On an Analytical Solution to the Generalized Helmholtz Equation of One Dimension

The Helmholtz equation arises in many contexts in the attempt to give a mathematical explanation of the physical world. These range from Alan Turing's explanation of animal coat patterns to Schrödinger's time-independent equation in quantum theory. The quantum mechanical probability density function for a harmonic oscillator with a principal quantum number of 60 is shown below.

The heavy line is the time-spent probability density function for a classical harmonic oscillator of the same energy. As can be seen, the spatial averages of the quantum mechanical quantities are at least approximately equal to the classical values.

The Helmholtz equation *per se* is

∇²φ + k²φ = 0

where k is a constant. The Generalized Helmholtz equation is that equation with k being a function of the independent variable(s).

In one dimension the Helmholtz equation is

(d²φ/dx²) + k²φ = 0

It has the sinusoidal solution φ(x) = A·sin(kx)+B·cos(kx). In one dimension the Generalized Helmholtz equation has a sinusoidal-like solution of varying amplitude and wavelength.
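The sinusoidal solution can be verified symbolically; the following sketch uses SymPy (the library choice is mine, not the article's):

```python
import sympy as sp

# Verify that φ(x) = A·sin(kx) + B·cos(kx) satisfies φ'' + k²φ = 0
x, k, A, B = sp.symbols('x k A B', real=True)
phi = A*sp.sin(k*x) + B*sp.cos(k*x)
assert sp.simplify(sp.diff(phi, x, 2) + k**2*phi) == 0
```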

A sinusoidal solution is an exponential function of ikx, where i is the imaginary unit. This suggests that the solution of the generalized equation may be a function of

X(x) = i∫_{0}^{x}k(z)dz

hence

dX = ik(x)dx

and

(dX/dx) = ik(x)

Then

(dφ/dx) = (dφ/dX)(dX/dx) = ik(x)(dφ/dX)

and

(d²φ/dx²) = (d²φ/dX²)(−k²(x)) + i(dφ/dX)(dk/dx)

or, equivalently

(d²φ/dx²) = −(d(dφ/dX)/dX)k²(x) + i(dφ/dX)(dk/dx)
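The second-derivative formula can be spot-checked for a concrete k(x); here k(x) = 1 + x² and the trial function φ = exp(X) are arbitrary illustrative choices, not from the article:

```python
import sympy as sp

# For φ = exp(X) we have dφ/dX = d²φ/dX² = φ, so the formula
# (d²φ/dx²) = −k²·(d²φ/dX²) + i(dk/dx)·(dφ/dX) is easy to test.
x = sp.symbols('x', real=True)
k = 1 + x**2                             # arbitrary sample k(x)
X = sp.I*sp.integrate(k, (x, 0, x))      # X(x) = i∫₀ˣ k(z)dz
phi = sp.exp(X)
lhs = sp.diff(phi, x, 2)
rhs = -k**2*phi + sp.I*sp.diff(k, x)*phi
assert sp.simplify(lhs - rhs) == 0
```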

Since (d²φ/dx²) is equal to −k²φ, the above equation upon division by −k² reduces to

(d²φ/dX²) − i(dφ/dX)(dk/dx)/k² = φ

or, equivalently, since d(1/k)/dx = −(dk/dx)/k²,

(d(dφ/dX)/dX) + i(dφ/dX)(d(1/k)/dx) = φ

Let (dφ/dX) be denoted as ψ and (d(1/k)/dx) as γ. Then

(dψ/dX) = φ − iγψ

In matric form

(dΦ/dX) = MΦ

where

Φ = | φ |
    | ψ |

and

M = | 0    1  |
    | 1   −iγ |

Note that γ is a function of x and hence also of X and so is the matrix M.

The matrix M can be decomposed into (J−iγK) where J is the 2×2 matrix with zeroes on the principal diagonal and 1's on the other places and K is the 2×2 matrix of all zeroes except for 1 in the (2,2) position.
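The decomposition can be checked numerically; the sketch below uses NumPy with an arbitrary sample value of γ (neither the value nor the library is from the article):

```python
import numpy as np

# Decomposition M = J − iγK at a sample value of γ
gamma = 0.7
J = np.array([[0., 1.], [1., 0.]])   # 1's off the principal diagonal
K = np.array([[0., 0.], [0., 1.]])   # 1 in the (2,2) position
M = np.array([[0, 1], [1, -1j*gamma]])
assert np.allclose(M, J - 1j*gamma*K)
```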

For the analogous scalar differential equation the solution would go as follows. Given

(1/y)(dy/dx) = μ(x)

integrating from 0 to x gives

ln(y(x)) − ln(y(0)) = ∫_{0}^{x}μ(z)dz

hence

y(x) = y(0)·exp(∫_{0}^{x}μ(z)dz)
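The scalar solution can be verified with SymPy; μ(x) = cos(x) and y(0) = 2 are arbitrary sample choices:

```python
import sympy as sp

# With (1/y)(dy/dx) = μ(x), the solution is y(x) = y(0)·exp(∫₀ˣ μ(z)dz)
x = sp.symbols('x', real=True)
mu = sp.cos(x)                                # arbitrary sample μ(x)
y0 = sp.Integer(2)                            # arbitrary sample y(0)
y = y0*sp.exp(sp.integrate(mu, (x, 0, x)))
assert sp.simplify(sp.diff(y, x)/y - mu) == 0  # satisfies the equation
assert y.subs(x, 0) == y0                      # satisfies the initial condition
```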

This suggests that the solution to the matrix equation might be

Φ(X) = Exp(∫_{0}^{X}M(Z)dZ)Φ(0)

The RHS is in fact the first term of a Magnus series solution for the equation. Let us now consider the function

Λ(X) = Exp(∫_{0}^{X}M(Z)dZ)Λ(0)

where

Λ = | λ |
    | μ |

and μ(x)=(dλ/dx).

The integral of the matrix M is the following matrix

∫_{0}^{X}M(Z)dZ = | 0    X                  |
                  | X   −i∫_{0}^{x}γ(z)dz   |

which is the same as (XJ − i∫_{0}^{x}γ(z)dz·K).

The integral of γ expressed as a function of X is the same as its integral expressed as a function of x over corresponding ranges. But the integral of γ over the range 0 to x is [1/k(x)−1/k(0)].
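Since γ is the derivative of 1/k, its integral telescopes; a SymPy check with an arbitrary sample k (so that k(0) = 1) illustrates this:

```python
import sympy as sp

# With γ = d(1/k)/dz, the fundamental theorem gives ∫₀ˣ γ(z)dz = 1/k(x) − 1/k(0)
x, z = sp.symbols('x z', real=True)
k = 1 + z**2                               # arbitrary sample k(z); k(0) = 1
gamma = sp.diff(1/k, z)
integral = sp.integrate(gamma, (z, 0, x))
assert sp.simplify(integral - (1/(1 + x**2) - 1)) == 0
```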

The solution is therefore

| λ(X) | | | 0 | X | | | λ(0) | | |||||

| | | = Exp | { | } | |||||

| μ(X) | | | X | −i∫_{0}^{x}γ(z)dz
| | |μ(0) | |

Let Z=∫_{0}^{x}k(z)dz so X=iZ. Then the solution can be represented as

| λ(Z) |         | 0     iZ                 |   | λ(0) |
| μ(Z) | = Exp(  | iZ   −i∫_{0}^{x}γ(z)dz   | ) | μ(0) |

or, equivalently

Λ(Z) = Exp(iZJ−iLK)Λ(0)

where

L = ∫_{0}^{x}γ(z)dz

For the matric exponential function, Exp(A+B)=Exp(A)Exp(B)=Exp(B)Exp(A) if AB=BA; i.e., if A and B commute. The matrices J and K do not commute; i.e., JK≠KJ.

JK = | 0  1 |
     | 0  0 |

KJ = | 0  0 |
     | 1  0 |

Obviously iZJ and −iLK do not commute because J and K do not commute. Therefore the above solution

Λ(Z) = Exp(iZJ−iLK)Λ(0)

does not reduce to

Λ(Z) = Exp(iZJ)Exp(−iLK)Λ(0)

except as a first approximation.
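The failure of the factorization can be seen numerically; this sketch uses SciPy's matrix exponential with arbitrary sample values of Z and L:

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[0., 1.], [1., 0.]])
K = np.array([[0., 0.], [0., 1.]])
assert not np.allclose(J @ K, K @ J)          # J and K do not commute

# Hence Exp(iZJ − iLK) ≠ Exp(iZJ)·Exp(−iLK); Z, L are arbitrary samples
Z, L = 1.3, 0.4
A, B = 1j*Z*J, -1j*L*K
assert not np.allclose(expm(A + B), expm(A) @ expm(B))
```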

However the function

Ω(Z) = Exp(iZJ)Exp(−iLK)Ω(0)

is of interest and ultimately can be related to Λ(Z).

Ω is defined as

Ω = | ω |
    | ζ |

and ζ(x)=(dω/dx).

Again note that J is the matrix with 1's on the off diagonal, and

−iLK = | 0    0                  |
       | 0   −i∫_{0}^{x}γ(z)dz   |

Note that for n≥1

(−iLK)^{n} = | 0    0                        |
             | 0   [−i∫_{0}^{x}γ(z)dz]^{n}   |

Therefore, including the identity matrix that begins the exponential series, Exp(−iLK) is given by

Exp(−iLK) = | 1    0                        |
            | 0   exp(−i∫_{0}^{x}γ(z)dz)    |
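Because the exponential series starts with the identity matrix, the (1,1) entry of Exp(−iLK) is 1, not 0; a SciPy check with an arbitrary sample value standing in for L = ∫_{0}^{x}γ(z)dz:

```python
import numpy as np
from scipy.linalg import expm

L = 0.4                                # arbitrary sample value of ∫₀ˣ γ(z)dz
K = np.array([[0., 0.], [0., 1.]])
E = expm(-1j*L*K)
# Exp(−iLK) = diag(1, exp(−iL)); the identity term supplies the leading 1
assert np.allclose(E, np.array([[1., 0.], [0., np.exp(-1j*L)]]))
```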

The oscillatory aspect of the solution for Ω(x) is given by Exp(iZJ) and the moving average part by Exp(−iLK). Since γ is equal to (d(1/k)/dx), the integration of γ from 0 to x gives [1/k(x)−1/k(0)], and hence the moving average part amounts to

exp(−i[1/k(x)−1/k(0)]) = exp(i/k(0))·exp(−i/k(x))

Constant factors such as exp(i/k(0)) are irrelevant in determining probability density distributions because they cancel out in normalization.

For matrices A and B which do not commute, the Baker-Campbell-Hausdorff formula gives a product representation of Exp(A+B). The first factor of the product is Exp(A)Exp(B). The next factor is Exp(−½[A,B]), where [A,B] is the commutator of A and B; i.e., AB−BA. Thus the second approximation of Exp(A+B) is

Exp(A)Exp(B)Exp(−½[A,B])

For the preceding,

[J, K] = |  0   1 |
         | −1   0 |

and [iZJ, −iLK] is equal to ZL[J, K].
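Both the commutator identity and the improvement from the extra factor can be checked numerically; Z and L below are small arbitrary sample values, and SciPy supplies the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[0., 1.], [1., 0.]])
K = np.array([[0., 0.], [0., 1.]])
Z, L = 0.1, 0.05                             # small arbitrary sample values
A, B = 1j*Z*J, -1j*L*K
C = A @ B - B @ A                            # commutator [iZJ, −iLK]
assert np.allclose(C, Z*L*(J @ K - K @ J))   # equals ZL[J, K]

first = expm(A) @ expm(B)                    # first approximation
second = first @ expm(-0.5*C)                # second approximation
err1 = np.abs(expm(A + B) - first).max()
err2 = np.abs(expm(A + B) - second).max()
assert err2 < err1                           # the extra factor improves the fit
```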

(To be continued.)
