Dirac's Bra and Ket Notation in Quantum Mechanics
San José State University

applet-magic.com
Thayer Watkins
Silicon Valley
U.S.A.


Paul Dirac developed an esoteric but brilliant notation for vectors and expected values that is convenient for quantum physics. In quantum physics, systems have discrete states. The values of some quantity, say energy, can be expressed as a column vector V over the possible states of a system. Likewise, the probabilities of being in the various states can be expressed as a column vector P. In matrix notation the expected value of the energy is then

#### <V> = P^T V

where <V> denotes the expected value of the quantities expressed in V and P^T denotes the transpose of P, the probabilities expressed as a row vector.
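As a quick numerical check of this formula, here is a minimal NumPy sketch; the energies and probabilities are made-up illustrative values:

```python
import numpy as np

# Hypothetical three-state system: values V and state probabilities P.
V = np.array([1.0, 2.0, 3.0])   # value of the quantity in each state
P = np.array([0.5, 0.3, 0.2])   # probability of each state (sums to 1)

expected_V = P.T @ V            # <V> = P^T V
```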

## Probabilities

In physics the probabilities are often expressed in terms of a complex-valued wave function ψ, where each probability is the product of a component of the wave function with its complex conjugate. Thus the probability of the system being in state i is

#### p_i = ψ*_i ψ_i

The expected value of the energy V is then

#### <V> = Σ ψ*_i V_i ψ_i

This is not conveniently expressible in standard matrix notation. Paul Dirac took the notation <V> for the expected value, which could be called bracket V, and expanded it. He used |ψ> for the wave function vector and called it the ket, and he denoted the transpose of the complex conjugate of the wave function as <ψ| and called it the bra. The expected value of V is then <ψ|V|ψ>, where V here stands for the diagonal matrix of the values V_i. For now the matter of probabilities and expected values will be left aside and the algebraic properties of kets and bras will be considered.
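The bra-ket expectation <ψ|V|ψ> can be sketched numerically; the wave function here is an illustrative two-state example, and V is taken as the diagonal matrix of the state values:

```python
import numpy as np

# Hypothetical two-state wave function, chosen so that Σ ψ*_i ψ_i = 1.
psi = np.array([1 + 1j, 1 - 1j]) / 2
V = np.diag([1.0, 3.0])            # state values V_i as a diagonal matrix

bra_psi = psi.conj().T             # <ψ| is the conjugate transpose of |ψ>
expectation = bra_psi @ V @ psi    # <ψ|V|ψ> = Σ ψ*_i V_i ψ_i
```

Note that the expectation comes out real even though ψ is complex, as it must for a Hermitian V.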

## Linear Spaces

The number of components of a ket constitutes its dimensionality. As with other representations of vectors, the sum of two kets, say |α> and |β>, is another ket, call it |γ>; i.e.,

#### |α> + |β> = |γ>

The set of kets is thus closed under the operation of addition.

Likewise

#### |β> + |α> = |γ> = |α> + |β>

The addition of kets is said to be commutative. The addition of kets is also associative, i.e.,

#### (|α> + |β>) + |γ> = |α> + (|β> + |γ>)

Furthermore there is an additive identity |0>, the ket all of whose components are zero, called the null ket, such that for any ket |β>

#### |β> + |0> = |β>

Any ket can be multiplied by a scalar to get another ket; i.e.,

#### c(|α>) = |β>

Each component of the ket is multiplied by the scalar.

The field of scalars for kets is usually the set of complex numbers.

Thus the set of kets of any dimensionality constitutes a linear vector space.
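The vector-space properties above can be illustrated with kets represented as complex NumPy arrays; the particular components are arbitrary:

```python
import numpy as np

# Illustrative kets as complex vectors.
alpha = np.array([1 + 2j, 0, 3j])
beta = np.array([2, 1 - 1j, 1])
c = 2 - 1j                          # scalars drawn from the complex numbers

gamma = alpha + beta                # closure under addition
assert np.allclose(alpha + beta, beta + alpha)   # commutativity

null_ket = np.zeros(3, dtype=complex)
assert np.allclose(beta + null_ket, beta)        # additive identity |0>

scaled = c * alpha                  # scalar multiplication, componentwise
```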

## The Inner Product of Two Vectors

The inner product of two complex-valued vectors (a1, a2, …, an) and (b1, b2, …, bn) is defined as

#### Σ a*_i b_i

where a*_i denotes the complex conjugate of a_i and the sum runs from i=1 to i=n. This definition of course also applies to kets. The vector of the complex conjugates of the components of a ket |α> is called its bra and is written as <α|. Note that the bra of c|α> is equal to c*<α|. The inner product of two kets, |α> and |β>, is written as <α|β>.
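A quick numerical illustration of this inner product, using arbitrary complex vectors; NumPy's `vdot` follows the same convention of conjugating the first argument:

```python
import numpy as np

# Arbitrary complex vectors.
a = np.array([1 + 1j, 2])
b = np.array([3, 1 - 1j])

inner = np.sum(a.conj() * b)        # Σ a*_i b_i = <α|β>

# NumPy's vdot conjugates its first argument, matching the definition.
assert np.isclose(inner, np.vdot(a, b))
```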

## Operators

An operator is simply a function from a vector space to the same vector space; e.g.,

#### K(|α>) = |β>

As a function, K can be thought of as a list in which the arguments of K are in one column and the results are in another column.

Usually such a function is expressed as

#### K|α> = |β>

and K is said to operate on |α> to produce |β>.

The operators being considered are those that are linear; i.e., for any scalars a and b and any kets |α> and |β>,

#### K(a|α> + b|β>) = aK|α> + bK|β>
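Linearity can be checked numerically for an illustrative matrix operator and arbitrary scalars and kets:

```python
import numpy as np

# An illustrative linear operator K represented as a complex matrix.
K = np.array([[1, 1j],
              [0, 2]])
alpha = np.array([1.0, 2j])
beta = np.array([3j, 1.0])
a, b = 2 + 1j, -1.0

# K(a|α> + b|β>) = a K|α> + b K|β>
lhs = K @ (a * alpha + b * beta)
rhs = a * (K @ alpha) + b * (K @ beta)
```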

## Bases of Linear Vector Spaces

The linear vector space of kets of a particular dimensionality has a basis; i.e., a set of kets which are mutually orthogonal and each of unit magnitude. A set of kets {|α_i>; i=1, 2, …, n} is orthonormal if

#### <α_i|α_j> = δ_ij

where δ_ij = 0 if i≠j and 1 otherwise.

Any ket |β> can be expressed as a linear combination of the elements of the basis. This means that for any |β> there exists a set of coefficients {β_i; i=1, 2, …, n} such that

#### |β> = Σ β_i |α_i>

Consider the results of operating with K on the basis vectors.

#### K|α_i> = |γ_i>

Then K|β> = Σ β_i |γ_i>.

Let Γ be the matrix created by adjoining the kets |γ_i> for i=1 to i=n as column vectors.

The transformation of |β> by the operator K can be represented as

#### K|β> = ΓB

where B is the column vector of the coefficients β_i for i=1 to i=n.

Thus linear operators are simply equivalent to matrices, but note that the particular matrix representing K depends upon the particular basis for the vector space of the kets.
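The construction of Γ from the images of the basis kets can be sketched as follows; the operator and coefficients are illustrative, and since the standard basis is used, Γ coincides with the matrix of K itself:

```python
import numpy as np

# An illustrative operator on 3-dimensional kets.
K = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Standard orthonormal basis kets |α_i>.
basis = np.eye(3)

# |γ_i> = K|α_i>; adjoin them as column vectors to form Γ.
Gamma = np.column_stack([K @ basis[:, i] for i in range(3)])

# Any |β> = Σ β_i |α_i> with coefficient vector B; then K|β> = ΓB.
B = np.array([1.0, -2.0, 0.5])
beta = basis @ B
```

With a different (non-standard) basis the columns of Γ would change, illustrating that the matrix representing K depends on the chosen basis.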

## Hermitian Matrices

A Hermitian matrix is one that is equal to the transpose of its complex-conjugate matrix. Let M be a square matrix with complex elements and M* the matrix of complex conjugates of the elements of M. Then M is Hermitian if

#### (M*)T = M

where N^T denotes the transpose of N. A matrix of strictly real elements is Hermitian if and only if it is symmetric. Thus Hermiticity is a generalization of the symmetry of a matrix.

An operator K is Hermitian if its matrix representation in any orthonormal basis is Hermitian.
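The Hermitian condition (M*)^T = M is easy to test numerically for an illustrative matrix:

```python
import numpy as np

# An illustrative Hermitian matrix: real diagonal, conjugate off-diagonal pair.
M = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

is_hermitian = np.allclose(M.conj().T, M)   # (M*)^T = M
```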

## The Eigenvalues and Eigenkets of an Operator

There are some kets which an operator K transforms into multiples of themselves; i.e.,

#### K|γ> = λ|γ>

Such a ket is known as an eigenket of the operator K and the scalar λ is known as an eigenvalue of K.

If Γ is the matrix representation of K then the condition for the existence of an eigenket of K is

#### Γ|γ> = λ|γ>, which can also be expressed as (Γ−λI)|γ> = |0>

where I is the n×n identity matrix and |0> is the null-ket for the kets; i.e., a vector all components of which are zero.

If (Γ−λI) had an inverse then |γ> would have to be the null-ket. For |γ> to be something other than the null-ket the matrix (Γ−λI) must be singular; i.e., it must not have an inverse. The condition for a matrix not to have an inverse is that its determinant equals zero:

#### det(Γ−λI) = 0

This condition reduces to a polynomial equation of degree n. That equation has, counting multiplicities, n solutions.

For the moment assume that the eigenvalues are all distinct. There is then one eigenket of unit magnitude associated with each eigenvalue. (The magnitude of a ket is the positive square root of the inner product of the ket with itself. Dividing a ket by its magnitude produces a ket of unit magnitude.) The practice in quantum physics is to label the eigenkets with their eigenvalues; i.e., if an operator has an eigenket with an eigenvalue of λ then that eigenket is denoted as |λ>.
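The determinant condition can be illustrated for a small example: the roots of the characteristic polynomial of an illustrative 2×2 matrix agree with the eigenvalues NumPy computes directly:

```python
import numpy as np

# Illustrative symmetric (hence Hermitian) matrix.
Gamma = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

# det(Γ − λI) = 0 reduces here to λ² − 4λ + 3 = 0.
char_coeffs = np.poly(Gamma)                 # characteristic polynomial coefficients
eigenvalues = np.sort(np.roots(char_coeffs)) # its roots, counting multiplicities

# NumPy's direct eigensolver gives the same values.
assert np.allclose(eigenvalues, np.sort(np.linalg.eigvals(Gamma)))
```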

## The Orthogonality of the Eigenkets of an Operator

Let μ and ν be two eigenvalues of an operator K which has a matrix representation of Γ. (These eigenvalues can be the same or different.) The corresponding eigenkets are |μ> and |ν>. The defining equations are

#### Γ|μ> = μ|μ> and Γ|ν> = ν|ν>

The bra forms of these equations, obtained by taking the conjugate transpose of each side (with Γ^H denoting the conjugate transpose of Γ), are

#### <μ|Γ^H = μ*<μ| and <ν|Γ^H = ν*<ν|

If Γ is Hermitian then Γ^H is equal to Γ and the two equations above reduce to

#### <μ|Γ = μ*<μ| and <ν|Γ = ν*<ν|

From the ket and bra forms select the following two equations

#### Γ|μ> = μ|μ> and <ν|Γ = ν*<ν|

Now the first equation is multiplied on the left by <ν| to obtain

#### <ν|Γ|μ> = μ<ν|μ>

When the second equation above is multiplied on the right by |μ> the result is

#### <ν|Γ|μ> = ν*<ν|μ>

Equating the two expressions which are both equal to <ν|Γ|μ> gives

#### μ<ν|μ> = ν*<ν|μ> which is equivalent to (μ−ν*)<ν|μ> = 0

If μ and ν are the same then

#### (μ−μ*)<μ|μ> = 0

Since <μ|μ> is not equal to zero (μ−μ*) must be equal to zero and hence

#### μ = μ*

In other words, the eigenvalues of a Hermitian matrix must be real numbers.

If μ and ν are different then, since the eigenvalues are real and hence ν* = ν, the factor (μ−ν*) is not equal to zero, so the inner product of |μ> and |ν> must be zero:

#### <ν|μ> = 0

and hence |μ> and |ν> are orthogonal; i.e., the eigenkets of a Hermitian operator corresponding to distinct eigenvalues are orthogonal. This raises the possibility that the eigenkets of a Hermitian operator on the linear vector space of kets can serve as a basis for the space. The only thing that needs to be dealt with is the possibility that two different eigenkets may have the same eigenvalue.

If multiple eigenkets have the same eigenvalue then they span a subspace of the kets and an orthonormal basis for this subspace can be chosen. This basis, adjoined to a basis for the rest of the vector space of kets, provides a complete basis for the vector space of kets.
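The two conclusions of this section, real eigenvalues and orthogonal eigenkets, can be verified numerically for an illustrative Hermitian matrix:

```python
import numpy as np

# An illustrative Hermitian matrix.
H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

# eigh is NumPy's eigensolver for Hermitian matrices;
# it returns real eigenvalues and orthonormal eigenkets as columns.
eigenvalues, eigenkets = np.linalg.eigh(H)

# Eigenvalues of a Hermitian matrix are real.
assert np.allclose(eigenvalues.imag, 0.0)

# Eigenkets with distinct eigenvalues are orthogonal (here orthonormal):
# the matrix of inner products <μ|ν> is the identity.
overlap = eigenkets.conj().T @ eigenkets
```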