On an Analytical Solution to the Generalized Helmholtz Equation of One Dimension
San José State University

applet-magic.com
Thayer Watkins
Silicon Valley
USA


The Helmholtz equation arises in many contexts in the attempt to give a mathematical explanation of the physical world. These range from Alan Turing's explanation of animal coat patterns to Schrödinger's time-independent equation in quantum theory. The quantum mechanical probability density function for a harmonic oscillator with a principal quantum number of 60 is shown below. The heavy line is the time-spent probability density function for a classical harmonic oscillator of the same energy. As can be seen, the spatial average of the quantum mechanical probability density is at least approximately equal to its classical counterpart.

The Helmholtz equation per se is

#### ∇²φ = −k²φ

where k is a constant. The Generalized Helmholtz equation is that equation with k being a function of the independent variable(s).

## The One Dimensional Case

In one dimension the Helmholtz equation is

#### (d²φ/dx²) = −k²φ(x)

Its general solution is the sinusoid φ(x) = A·sin(kx) + B·cos(kx). In one dimension the Generalized Helmholtz equation has a sinusoidal-like solution of varying amplitude and wavelength.
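As a quick numerical sanity check, a central finite difference confirms that the sinusoid satisfies φ″ = −k²φ. This is only a sketch; the constants k, A, and B below are arbitrary illustrative choices, not values from the text.

```python
import math

# Illustrative constants (assumptions for the demonstration).
k, A, B = 2.0, 1.5, -0.7

def phi(x):
    # Candidate solution of the constant-k Helmholtz equation.
    return A * math.sin(k * x) + B * math.cos(k * x)

def second_derivative(f, x, h=1e-4):
    # Central finite-difference approximation of f''(x).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# phi'' + k^2 * phi should vanish up to discretization error.
max_error = max(abs(second_derivative(phi, x) + k * k * phi(x))
                for x in (0.0, 0.3, 1.1, 2.7))
```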

### Change of Variable

A sinusoidal solution is an exponential function of ikx, where i is the imaginary unit. This suggests that the solution of the generalized equation may be a function of

#### X = i∫₀ˣk(z)dz and hence (dX/dx) = ik(x)

Then

#### (dφ/dx) = (dφ/dX)(dX/dx) = (dφ/dX)ik(x)

and

#### (d²φ/dx²) = (d²φ/dX²)(−k²(x)) + i(dφ/dX)(dk/dx)

Since (d²φ/dx²) is equal to −k²φ, division of the above equation by −k² reduces it to

#### (d²φ/dX²) − i(dφ/dX)(dk/dx)/k² = φ

and since (dk/dx)/k² = −(d(1/k)/dx), this is

#### (d²φ/dX²) + i(d(1/k)/dx)(dφ/dX) = φ

## A Matric Equation

Let (dφ/dX) be denoted as ψ and (d(1/k)/dx) as γ. Then

#### (dφ/dX) = ψ
#### (dψ/dX) = φ − iγψ

In matric form

#### (dΦ/dX) = MΦ

where

#### Φ = [φ, ψ]ᵀ and M = [[0, 1], [1, −iγ]]

Note that γ is a function of x and hence also of X and so is the matrix M.

The matrix M can be decomposed as (J − iγK), where J = [[0, 1], [1, 0]] is the 2×2 matrix with zeroes on the principal diagonal and 1's in the other places, and K = [[0, 0], [0, 1]] is the 2×2 matrix of all zeroes except for a 1 in the (2,2) position.

For the analogous scalar differential equation the solution would go as follows:

#### (dy/dx) = μ(x)y
#### (1/y)(dy/dx) = μ(x)

Integrating from 0 to x gives

#### ln(y(x)) − ln(y(0)) = ∫₀ˣμ(z)dz

and hence

#### y(x) = exp(∫₀ˣμ(z)dz)·y(0)
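The scalar solution can be checked numerically. In the sketch below, μ(x) = x and y(0) = 2 are illustrative assumptions, so the closed form is y(x) = 2·exp(x²/2); a forward-Euler integration of the differential equation should land close to that value.

```python
import math

mu = lambda z: z                # illustrative choice of mu(x)
y0, x_end, n = 2.0, 1.0, 200_000
h = x_end / n

# Forward-Euler integration of dy/dx = mu(x) * y.
y = y0
for i in range(n):
    y += h * mu(i * h) * y

# Closed form y(x) = exp(integral of mu from 0 to x) * y(0),
# with the integral of z over [0, x] equal to x^2/2.
closed_form = math.exp(x_end ** 2 / 2.0) * y0
```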

This suggests that the solution to the matrix equation might be

#### Φ(X) = Exp(∫₀^X M(Z)dZ)Φ(0)

The RHS is in fact the first term of a Magnus series solution for the equation. Let us now consider the function

#### Λ(X) = Exp(∫₀^X M(Z)dZ)Λ(0)

where

#### Λ = [λ, μ]ᵀ

and μ(x) = (dλ/dx).

The integral of the matrix M is the following matrix

#### ∫₀^X M(Z)dZ = [[0, X], [X, −i∫₀ˣγ(z)dz]]

which is the same as XJ − i(∫₀ˣγ(z)dz)K.

The integral of γ expressed as a function of X is the same as its integral expressed as a function of x over the corresponding range. The integral of γ from 0 to x is [1/k(x) − 1/k(0)].
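This identity is just the fundamental theorem of calculus applied to γ = d(1/k)/dx. A midpoint-rule quadrature confirms it for the illustrative choice k(x) = 1 + x² (an assumption for the demonstration, not a function from the text):

```python
def k(x):
    # Illustrative wavenumber function.
    return 1.0 + x * x

def gamma(x):
    # gamma = d(1/k)/dx; for k = 1 + x^2 this is -2x / (1 + x^2)^2.
    return -2.0 * x / (1.0 + x * x) ** 2

x_end, n = 1.5, 100_000
h = x_end / n

# Midpoint-rule quadrature of gamma over [0, x_end].
integral = sum(gamma((i + 0.5) * h) for i in range(n)) * h

# The claimed value: 1/k(x) - 1/k(0).
expected = 1.0 / k(x_end) - 1.0 / k(0.0)
```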

The solution is therefore

#### [λ(X), μ(X)]ᵀ = Exp{[[0, X], [X, −i∫₀ˣγ(z)dz]]}·[λ(0), μ(0)]ᵀ

Let Z = ∫₀ˣk(z)dz, so X = iZ. Then the solution can be represented as

#### [λ(Z), μ(Z)]ᵀ = Exp{[[0, iZ], [iZ, −i∫₀ˣγ(z)dz]]}·[λ(0), μ(0)]ᵀ

or, equivalently,

#### Λ(Z) = Exp(iZJ − iLK)Λ(0) where L = ∫₀ˣγ(z)dz

For the matric exponential function, Exp(A+B) = Exp(A)Exp(B) = Exp(B)Exp(A) whenever AB = BA; i.e., whenever A and B commute. The matrices J and K do not commute; i.e., JK ≠ KJ.

#### JK = [[0, 1], [0, 0]] and KJ = [[0, 0], [1, 0]]
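The two products are easy to verify with a small pure-Python multiplication of 2×2 matrices represented as nested lists:

```python
def mat_mul(a, b):
    # Product of 2x2 matrices given as nested lists.
    return [[sum(a[i][t] * b[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

J = [[0.0, 1.0], [1.0, 0.0]]   # 1's off the principal diagonal
K = [[0.0, 0.0], [0.0, 1.0]]   # single 1 in the (2,2) position

JK = mat_mul(J, K)
KJ = mat_mul(K, J)
```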

Obviously iZJ and iLK do not commute because J and K do not commute. Therefore the above solution

#### Λ(Z) = Exp(iZJ − iLK)Λ(0)

does not reduce to Exp(iZJ)Exp(−iLK)Λ(0) except as a first approximation.

However the function

#### Ω(Z) = [Exp(iZJ)Exp(−iLK)]Ω(0)

is of interest and ultimately can be related to Λ(Z).

Ω is defined as

#### Ω = [ω, ζ]ᵀ

and ζ(x)=(dω/dx).

Again note that J is the matrix with 1's on the off-diagonal and K is the matrix of all zeroes except for a 1 in the (2,2) position. Since K² = K, the powers of (−iLK) are easy to compute.

Note that

#### (−iLK)ⁿ = [[0, 0], [0, (−i∫₀ˣγ(z)dz)ⁿ]] for n ≥ 1

therefore Exp(−iLK) is given by

#### Exp(−iLK) = [[1, 0], [0, exp(−i∫₀ˣγ(z)dz)]]
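Because K² = K, the exponential series collapses to Exp(−iLK) = I + (e^(−iL) − 1)K, a diagonal matrix with entries 1 and e^(−iL). A direct summation of the power series confirms this; L = 0.8 below is an illustrative value for the integral, not one from the text.

```python
import cmath

L = 0.8                                  # illustrative value of the integral L
A = [[0.0, 0.0], [0.0, -1j * L]]         # the matrix -iLK

def mat_mul(a, b):
    # Product of 2x2 matrices given as nested lists.
    return [[sum(a[i][t] * b[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

# Sum the power series Exp(A) = I + A + A^2/2! + ...
result = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
for n in range(1, 40):
    term = mat_mul([[t / n for t in row] for row in term], A)
    result = [[result[i][j] + term[i][j] for j in range(2)]
              for i in range(2)]

# Compare against the claimed closed form [[1, 0], [0, exp(-iL)]].
expected = [[1.0, 0.0], [0.0, cmath.exp(-1j * L)]]
err = max(abs(result[i][j] - expected[i][j])
          for i in range(2) for j in range(2))
```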

The oscillatory aspect of the solution for Ω(x) is given by Exp(iZJ) and the moving-average part by Exp(−iLK), which for the ζ component amounts to

#### exp(−i∫₀ˣγ(z)dz)·ζ(0)

Since γ is equal to (d(1/k)/dx), the integration of γ from 0 to x gives [1/k(x) − 1/k(0)], and hence the moving-average part is

#### exp(−i/k(x))·exp(i/k(0))·ζ(0)

Constant factors are irrelevant in determining probability density distributions because they cancel out in normalization.

For matrices A and B which do not commute, the Baker-Campbell-Hausdorff formula gives a product representation of Exp(A+B). The first factor is Exp(A)Exp(B). The next factor is Exp(−½[A,B]), where [A,B] is the commutator of A with B; i.e., AB − BA. Thus the second approximation of Exp(A+B) is

#### Exp(A)Exp(B)Exp(−½[A,B])

For the preceding

#### [J, K] = [[0, 1], [−1, 0]]

and [iZJ, −iLK] is equal to ZL[J, K].
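The two levels of approximation can be compared numerically. The sketch below (pure Python, with Z and L as small illustrative sample values) computes Exp(A+B) by its power series and checks that appending the commutator factor Exp(−½[A,B]) to Exp(A)Exp(B) shrinks the error, and that [iZJ, −iLK] equals ZL[J, K].

```python
def mat_mul(a, b):
    # Product of 2x2 matrices given as nested lists.
    return [[sum(a[i][t] * b[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, a):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

def mat_exp(a, terms=40):
    # Truncated power series I + A + A^2/2! + ...
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul(mat_scale(1.0 / n, term), a)
        result = mat_add(result, term)
    return result

def dist(a, b):
    # Largest entrywise difference between two 2x2 matrices.
    return max(abs(a[i][j] - b[i][j]) for i in range(2) for j in range(2))

J = [[0.0, 1.0], [1.0, 0.0]]
K = [[0.0, 0.0], [0.0, 1.0]]
Z, L = 0.2, 0.1                       # illustrative sample values

A = mat_scale(1j * Z, J)              # iZJ
B = mat_scale(-1j * L, K)             # -iLK
comm = mat_add(mat_mul(A, B), mat_scale(-1.0, mat_mul(B, A)))  # [A, B]

exact = mat_exp(mat_add(A, B))                   # Exp(A + B)
first = mat_mul(mat_exp(A), mat_exp(B))          # Exp(A)Exp(B)
second = mat_mul(first, mat_exp(mat_scale(-0.5, comm)))

err_first = dist(exact, first)
err_second = dist(exact, second)
```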

(To be continued.)