Quantum mechanics

I'm not sure which textbook this is from, but the following excerpt has become a popular internet meme; it is both funny and a fair summary of modern physics:


1.1 History

Aristotle said a bunch of stuff that was wrong. Galileo and Newton fixed things up. Then Einstein broke everything again. Now, we've basically got it all worked out, except for small stuff, big stuff, hot stuff, cold stuff, fast stuff, heavy stuff, dark stuff, turbulence, and the concept of time.

All that “stuff” has more academic terminology, but we'll focus on the small stuff: Quantum physics, or in other words, the physics of individual particles like electrons and quarks.

This page should cover the equivalent of a one-semester junior-level course in introductory quantum mechanics. The mathematics may seem complicated at points, but its purpose is to equip us with methods that simplify problems which would otherwise be even more complicated. See units if you're confused about symbols.

1   The Wave Function

In classical mechanics, you might consider a particle of mass \(m\) and try to determine its position, \(x(t)\). To do this, you could use a number of characteristics, like velocity \(v = \dot x\), momentum \(p=mv\), or kinetic energy \(T=\frac 1 2 mv^2\).

In wholly conservative systems, we will then have Newton's second law equivalent to

\[ F = m\ddot x = -\frac{\partial V}{\partial x}. \]

In quantum mechanics, there isn't really a notion of \(x(t)\), but we can accomplish something similar by looking at the wave function \(\Psi(x,t)\) of the particle.

1.1   The Schrödinger Equation

The good news is we have a useful analogue to Newton's second law:

\[ \boxed{ i\hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi}{\partial x^2} + V\Psi } \]

Given an initial condition, say \(\Psi(x,0)\) at \(t_0=0\), we can solve for \(\Psi(x,t)\) at all later times. But we're getting ahead of ourselves:

What is a “wave function”?

1.2   The Statistical Interpretation

We think of particles as existing in a specific place at a specific time, so why is the wave function spread out over all space and time for a single particle? Because in reality particles do not behave as though they are in a specific place.

Particles behave like waves, and so the wave function provides a simple mathematical description of these wavelike properties.

If we follow Born's statistical interpretation of quantum mechanics, the wave function instead tells us the probability of finding the particle at any point in space, namely:

\[ \boxed{ \begin{matrix}\text{the probability of finding our particle} \\\text{between points }a\text{ and }b\text{ at time } t\end{matrix} \:\: = \int_a^b |\Psi(x,t)|^2\,dx } \]

So the integral over a region of \(|\Psi|^2\) gives the probability of finding a particle in that region: hence, we will likely find a particle near points where \(|\Psi|^2\) is bigger.

Note that this act of “finding” a particle at a position is an act of observing the particle: and though it may not feel like it when you are looking at things without touching them, all observations are physical interactions.

The interesting thing is that if you measure the position of the particle twice in immediate succession, it will be in the same place both times. This is the collapse of the wave function: upon measurement, the wave function immediately collapses into a sharply peaked distribution about the point where the particle was observed.

1.3   Probability

Before we continue, let's cover some basic probability terminology:

1.3.1   Discrete Variables

We can find the average value of a function \(f\) on discrete indices \(j\) with

\[ \lang f(j)\rang = \sum f(j)P(j) \]

where \(P(j)\) is the probability of \(j\): the number of times \(j\) occurs divided by the total number of samples. Furthermore, we have the expected value of \(j\):

\[ \lang j\rang = \sum j\,P(j), \]

and we can define the variance \(\sigma^2\) and standard deviation \(\sigma\) by the following equation, where \(\Delta j = j - \lang j\rang\):

\[ \sigma^2 = \lang (\Delta j)^2\rang = \lang j^2\rang - \lang j\rang^2 \]

1.3.2   Continuous Variables

For continuous distributions, we can define the probability density \(\rho(x)\), such that \(\rho(x)\,dx\) is the probability of finding the variable between \(x\) and \(x+dx\), for infinitesimal \(dx\).

Thereby, we can find the probability of finding \(x\) between two points \(a\) and \(b\) by integrating:

\[ P_{a,b} = \int_a^b \rho(x)dx. \]

We can now find some useful rules. Firstly, it should be obvious that finding \(x\) to have any value should be guaranteed by probability:

\[ \int_{-\infty}^\infty \rho(x)dx = 1 \]

Next, we can once again find the expected value of our variable \(x\) or a function \(f(x)\) by:

\[ \boxed{ \lang f(x)\rang = \int_{-\infty}^\infty f(x)\rho(x)dx } \]

and of course, the variance:

\[ \boxed{ \sigma^2 = \lang x^2\rang - \lang x\rang^2 . }\]

1.4   Normalization

So, our prior equation for the probability of finding a quantum particle shows that \(|\Psi|^2\) is the probability density for finding the particle at a certain place and time. Therefore, it must follow that all wave functions must satisfy the following requirement:

\[ \boxed{ \int_{-\infty}^\infty |\Psi(x,t)|^2\,dx = 1. } \]

This is called normalization. Hereby we can see that only solutions to the Schrödinger Equation which can be normalized by the above equation represent possible physical systems.

(If you're worried about solutions no longer being solutions when they are normalized, do not fret: constants carry through differentiation, and so if the Schrödinger Equation applies to \(\Psi\), it also applies to \(k\Psi\) for any complex constant \(k\in\Complex\).)
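Normalization is exactly this rescaling in practice; a numerical sketch, using an arbitrary square-integrable trial function (not a solution to any particular potential):

```python
import numpy as np

# psi(x) = e^{-x^2} is square-integrable but not yet normalized.
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
psi = np.exp(-x**2)

norm2 = (np.abs(psi)**2).sum() * dx   # ∫|psi|^2 dx (analytically sqrt(pi/2))
psi = psi / np.sqrt(norm2)            # k*psi still solves the Schrödinger eq.

assert np.isclose((np.abs(psi)**2).sum() * dx, 1.0)
```

Dividing by \(\sqrt{\int|\psi|^2\,dx}\) is all that normalization ever requires.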

1.5   Momentum

If a particle has wave function \(\Psi\), then its expected position is

\[ \lang x\rang = \int_{-\infty}^\infty x|\Psi(x,t)|^2\,dx \]

Now, we might want to know how fast this expected position is moving, but it is actually easier to work with momenta than velocities. (Of course, you can always divide \(p\) by \(m\) to recover \(v\).)

The Ehrenfest theorem tells us that the expectation values of position and momentum obey the classical laws of motion:

\[ \boxed{ m\frac{d\lang x\rang}{dt} = \lang p\rang,\quad \frac{d\lang p\rang}{dt} + \lang V'(x)\rang = 0 } \]

By combining Schrödinger's Equation and the expression for \(\lang x\rang\), we can find:

\[ \lang p\rang = m\frac{d\lang x\rang}{dt} = -i\hbar \int \Psi^*\frac{\partial \Psi}{\partial x} \,dx \]

An easier way to look at this is using the notion of quantum operators, so let us define our position operator \(\hat X = \left[ x \right]\):

\[ \boxed{ \lang \hat X\rang = \int \Psi^* \left[ x \right] \Psi\, dx = \int \Psi^* \hat X \Psi\, dx } \]

and our momentum operator \(\hat P = \left[ -i\hbar (\partial / \partial x) \right] \):

\[ \boxed{ \lang \hat P\rang = \int \Psi^* \left[ -i\hbar (\partial / \partial x) \right] \Psi\, dx = \int \Psi^* \hat P \Psi\, dx } \]

As you can see, a pattern is emerging.

By expressing all quantities in terms of position and momentum operators, we can use the following integral to find the expected value of any “quantum operator” \(\hat \mathcal O\):

\[ \boxed{ \lang \hat \mathcal O (x,p)\rang = \int \Psi^* \left[ \hat \mathcal O (x, -i\hbar (\partial/\partial x)) \right] \Psi\, dx } \]

For example, using our Newtonian background, we know \(T = \frac 1 2 mv^2 = p^2/2m \), so:

\[ \lang \hat T \rang = \frac{1}{2m} \lang \hat P^2 \rang = - \frac{\hbar^2}{2m} \int \Psi^* \frac{\partial^2 \Psi}{\partial x^2}\, dx \]
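The operator recipe above translates directly to numerics; a sketch in units where \(\hbar = m = 1\), using a normalized Gaussian as the wave function (an assumption for illustration, not a solution to any particular potential):

```python
import numpy as np

# <O> = ∫ psi* [O] psi dx, with d/dx approximated by finite differences.
x  = np.linspace(-10, 10, 100001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2)   # normalized Gaussian

dpsi  = np.gradient(psi, dx)             # d psi / dx
d2psi = np.gradient(dpsi, dx)            # d^2 psi / dx^2

exp_X = (psi * x * psi).sum() * dx       # <X>: zero by symmetry
exp_P = (psi * -1j * dpsi).sum() * dx    # <P> = ∫ psi (-i d/dx) psi dx: zero
exp_T = (psi * -0.5 * d2psi).sum() * dx  # <T> = -(1/2m) ∫ psi psi'' dx = 1/4 here
```

The same pattern (sandwich the operator between \(\Psi^*\) and \(\Psi\), integrate) works for any observable built from \(\hat X\) and \(\hat P\).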

1.6   Heisenberg's Uncertainty Principle

\[ \boxed{ \sigma_X\sigma_P \geq \frac \hbar 2 } \]
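A Gaussian wave packet saturates this bound, which we can check numerically; a sketch in units where \(\hbar = 1\), with an arbitrary width parameter:

```python
import numpy as np

# A real, even Gaussian with width parameter a (chosen arbitrarily).
x  = np.linspace(-15, 15, 200001)
dx = x[1] - x[0]
a  = 0.7
psi = (a / np.pi)**0.25 * np.exp(-a * x**2 / 2)

# Variance of position: <x^2> - <x>^2.
var_x = (x**2 * psi**2).sum() * dx - ((x * psi**2).sum() * dx)**2

# <P> = 0 for this real, even psi, and by integration by parts
# <P^2> = ∫ |psi'|^2 dx, so the momentum variance is:
dpsi  = np.gradient(psi, dx)
var_p = (dpsi**2).sum() * dx

sigma_product = np.sqrt(var_x * var_p)
assert np.isclose(sigma_product, 0.5, atol=1e-3)   # the Heisenberg minimum, hbar/2
```

Any other shape of \(\Psi\) gives a strictly larger product; only Gaussians achieve equality.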

2   Time-Independent Schrödinger Equation

Recall the Schrödinger Equation. We are now actually going to use it to find the wave function, and we will begin with the easy case where \(V\) is independent of \(t\), i.e. \(\partial V/\partial t = 0\).

2.1   Stationary States

Now, we will also look for solutions to Schrödinger that are specifically products,

\[ \boxed{ \Psi(x,t) = \psi(x)\varphi(t), } \]

such that we have separated the function of two variables into two single-variable functions. This makes evaluating the partial derivatives much easier:

\[ \frac{\partial \Psi(x,t)}{\partial t} = \psi(x)\dot\varphi(t), \qquad \frac{\partial^2 \Psi(x,t)}{\partial x^2} = \psi''\!(x)\varphi(t), \]

Dividing the Schrödinger Equation by \(\psi\varphi\) then yields a very useful result, one in which the left and right sides are functions solely of time and of position, respectively:

\[ i\hbar\frac{\dot\varphi(t)}{\varphi(t)} = \frac{-\hbar^2}{2m}\frac{\psi''\!(x)}{\psi(x)} + V(x)\]

This, of course, means both sides are equal to some constant, which we call \(E\):

\[ i\hbar\frac{\dot\varphi}{\varphi} = E \implies \dot \varphi = \frac{-iE}{\hbar} \varphi \]

If you don't already see it, this is one of the simplest differential equations: a derivative equal to a constant times the original function. The solution is \(\varphi = ke^{-iEt/\hbar}\), for any \(k\), but we can just let \(k=1\) since we only need to normalize the product \(\psi\varphi\). So, we have

\[ \boxed{ \varphi(t) = e^{-iEt/\hbar} \quad\text{ and }\quad \underbrace{\frac{-\hbar^2}{2m}\psi'' + V\psi = E\psi}_{\text{Time-independent Schr\"odinger Equation}} } \]

It should be noted that these separable solutions make up a very small portion of all possible solutions, but they yield a very useful result: the probability density of such a system does not depend on \(t\) at all, as we have:

\[ \Psi(x,t) = \psi(x)e^{-iEt/\hbar} \implies |\Psi(x,t)|^2 = \Psi^*\Psi = \psi^*\psi\cdot e^0 = |\psi(x)|^2 \]

Which allows us to simplify the general operator definition to:

\[ \lang\hat \mathcal O(x,p)\rang = \int \psi^*\left[ \hat \mathcal O (x,-i\hbar(d/dx)) \right]\psi\,dx. \]

2.1.1   The Hamiltonian operator

We can simplify even further with the Hamiltonian \(\hat H\) representing total energy:

\[ \hat H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V,\]

which gives us a very short writing of the time-independent Schrödinger Equation:

\[ \boxed{ \hat H\psi = E\psi, } \]

which yields \(\lang \hat H\rang = E\) and \(\lang \hat H^2\rang = E^2\), giving us \(\sigma^2_H = 0 \): every measurement of the total energy is certain to return the value \(E\).

The General Solution for the Schrödinger Equation happens to be a linear combination of solution wave-functions \(\Psi_n\) for each allowed energy \(E_n\):

\[ \boxed{ \Psi(x,t) = \sum_{n=1}^\infty c_n\psi_n(x)e^{-iE_nt/\hbar} = \sum_{n=1}^\infty c_n\Psi_n(x,t)} \]

Which is even simpler when \(t=0\), as \(\Psi(x,0) = \sum c_n\psi_n(x)\).

It is very important to note that each separable solution \(\Psi_n\) is stationary in the sense that all its probabilities are time-invariant, but this is not necessarily true of the general solution, which is a linear combination of stationary states.

This superposition property means that given \(\Psi(x,0)\) and time-independent \(V(x)\), we can find the time-dependent \(\Psi(x,t)\) as a linear combination of stationary states. Among the linear coefficients, \(|c_n|^2\) is the probability that measuring the total energy would yield \(E_n\), and since we will always find one of the allowed values, we have an expected value:

\[ \boxed{ \lang \hat H\rang = \sum_{n=1}^\infty |c_n|^2E_n. } \]

(Note that as \(c_n\) are time-invariant, so is the expected value of the Hamiltonian, this is the conservation of energy.)
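A quick numerical sketch of this expected value, with a hypothetical two-state mixture and made-up allowed energies (arbitrary units):

```python
import numpy as np

# Hypothetical coefficients: |c_1|^2 = 0.8, |c_2|^2 = 0.2,
# with illustrative allowed energies E_1 = 1, E_2 = 4.
c = np.array([np.sqrt(0.8), np.sqrt(0.2)])
E = np.array([1.0, 4.0])

probs = np.abs(c)**2
assert np.isclose(probs.sum(), 1.0)   # we always measure *some* E_n

exp_H = np.sum(probs * E)             # <H> = sum |c_n|^2 E_n = 1.6
```

Note that a single measurement returns \(E_1\) or \(E_2\), never the average 1.6; the average only emerges over many identically prepared systems.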

2.2   The Infinite Square Well

Imagine a particle that is entirely free to move within an interval, but is unable to escape due to an infinite force at the boundaries. We can define such a system by

\[ V(x) = \begin{cases} 0 & 0\leq x \leq a, \\ \infty & \text{otherwise.} \end{cases} \]

In the region outside the interval \(\psi(x)=0\), and inside the interval the Schrödinger simplifies to a differential equation you might recognize:

\[ \psi''(x) = -k^2\psi(x), \quad k:= \frac{\sqrt{2mE}}\hbar \]

This is a simple harmonic oscillator, with a general solution

\[ \psi(x) = A\sin kx + B\cos kx,\]

and since \(\psi(x)\) must be continuous, we need \(\psi(0)=\psi(a)=0\), so \(B=0\); and since \(A\) cannot then also be zero, \(\sin(ka)=0\) gives us distinct solutions for \(k\):

\[ k_n = \frac{n\pi}a ,\quad n\in\N, \]

so we end up with a set of allowed values for \(E\):

\[ \boxed{ E_n = \frac{\hbar^2k_n^2}{2m} = \frac{n^2\pi^2\hbar^2}{2ma^2} } \]

By normalizing \(\psi\), we can find \(A = \sqrt{2/a} \) and hence solutions

\[ \boxed{ \psi_n(x) = \sqrt\frac{2}{a}\sin\left(\frac{n\pi}a x\right) } \]

These are stationary states: \(\psi_1\) is called the ground state and the others are called excited states. These solutions have several interesting properties:

  1. States alternate between even and odd (with respect to the center point \( a/2 \))
  2. Each state \(\psi_n\) has \(n-1\) nodes (zeroes), not including the end points.
  3. States are mutually orthogonal, \[ \int \psi_m(x)^*\psi_n(x)\,dx=\delta_{mn} = \begin{cases} 0 & m\neq n, \\ 1 & m=n. \end{cases}\] (This is the Kronecker delta function \(\delta\) – not the Dirac delta.)
  4. Any function \(f(x)\) (yes, any function) can be written as a linear combination of these states (they form a Fourier series): \[ f(x) = \sum c_n\psi_n(x), \qquad c_n = \int \psi_n(x)^* f(x)\, dx \]

In closing this section, we have the full \(n\)-th stationary states for an infinite square well:

\[ \boxed{ \Psi_n(x,t) = \sqrt\frac{2}{a}\sin\left(\frac{n\pi}{a}x\right)e^{\displaystyle -i(n^2\pi^2\hbar/2ma^2)t} } \]
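The normalization and orthogonality properties claimed above are easy to verify numerically; a sketch with the well width set to \(a = 1\):

```python
import numpy as np

# Infinite-square-well states psi_n(x) = sqrt(2/a) sin(n pi x / a) on [0, a].
a = 1.0
x = np.linspace(0, a, 20001)
dx = x[1] - x[0]

def psi(n):
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

def overlap(m, n):
    return (psi(m) * psi(n)).sum() * dx   # ∫ psi_m* psi_n dx

assert np.isclose(overlap(1, 1), 1.0, atol=1e-4)   # normalized
assert np.isclose(overlap(3, 3), 1.0, atol=1e-4)
assert np.isclose(overlap(1, 2), 0.0, atol=1e-4)   # orthogonal: Kronecker delta
assert np.isclose(overlap(2, 5), 0.0, atol=1e-4)
```

The overlaps reproduce \(\delta_{mn}\), which is exactly what makes the Fourier-coefficient formula in property 4 work.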

2.3   The Harmonic Oscillator

Consider a continuous potential \(V(x)\). At any local minimum, the potential is approximately quadratic. Why? It would have the Taylor series \( V(x)-V(x_o) = V'(x_o)(x-x_o) + \frac 1 2 V''(x_o)(x-x_o)^2 + \dotsm \), and of course \(V'(x_o)=0\) at a local minimum and we can ignore \(V(x_o)\) as adding and subtracting constant potentials doesn't change the force. We are left with

\[ V(x) \approx \frac{V''(x_o)}{2}(x-x_o)^2 \]

This approximates almost any oscillating system near a stable equilibrium as a simple harmonic oscillator with an effective “spring constant” \(V''(x_o)\). So, we'd like to figure out how to solve the Schrödinger for quadratic potentials:

\[ \boxed{ V(x) = \frac 1 2 m\omega^2x^2 \implies \underbrace{ \frac{-\hbar^2}{2m}\psi''+\frac 1 2 m\omega^2x^2\psi = E\psi}_{\text{ Time-independent Schr\"odinger }} }\]

(Note: it's convention to use the angular frequency \(\omega = \sqrt{k/m}\) instead of the spring constant \(k\).)

2.3.1   Algebraic Method

Our approach will employ a clever trick and “ladder operators”. First, we can rewrite the Schrödinger with the Hamiltonian:

\[ \hat H\psi = E\psi, \quad \hat H = \frac{1}{2m} \left[ \hat P^2 + (m\omega \hat X)^2 \right], \qquad \hat P = -i\hbar (d/dx)\]

Now, we cannot simply factor this sum of two squares as if they were numbers, because they are operators and do not necessarily commute. However, let's act as though we might, and see what happens. Consider the Ladder Operators:

\[ \hat a_\pm = \frac 1 {\sqrt{2\hbar m\omega}} \left(m\omega \hat X \mp i\hat P\right) \]

If these were numbers, their product would give us our Hamiltonian, but we have

\[ \hat a_-\hat a_+ = \frac{1}{2\hbar m\omega}\left( \hat P^2 + (m\omega \hat X)^2 - im\omega(\hat X\hat P - \hat P\hat X) \right) \]

The annoying difference at the end preventing us from factoring is the commutator \([\hat X,\hat P]\) of \(\hat X\) and \(\hat P\), because it is a measure of their failure to commute. If we carefully expand the operators, we are left with a simple expression:

\[ \boxed{ \underbrace{ [\hat X,\hat P] = \hat X\hat P - \hat P \hat X = i\hbar }_{\text{Canonical commutation relation}} } \]

This allows us to simplify our previous equation:

\[ \hat a_-\hat a_+ = \frac{1}{\hbar \omega}\hat H + \frac 1 2 \implies \hat H = \hbar\omega\left(\hat a_-\hat a_+ - \frac 1 2\right)\]

Note that the ladder operators are not commutative either, so we have alternatively:

\[ \hat a_+\hat a_- = \frac{1}{\hbar \omega}\hat H - \frac 1 2 \implies \hat H = \hbar\omega\left(\hat a_+\hat a_- + \frac 1 2\right)\]

Anyways, we can now use this to simplify our Schrödinger:

\[ \hbar\omega\left( \hat a_\pm\hat a_\mp \pm \frac 1 2 \right)\psi = E\psi \]
The “Ladder”

The key concept here is raising and lowering energy states, from the property that if \(\psi\) is a solution to the Schrödinger with energy \(E\), then \(\hat a_\pm\psi\) is a solution with energy \(E\pm\hbar\omega\).

\[ \boxed{ \hat H\psi = E\psi \implies \hat H(\hat a_\pm \psi) = (E\pm \hbar\omega)(\hat a_\pm\psi) } \]

As we descend the ladder, we will eventually reach the ground state \(\psi_0(x)\), where the energy cannot go any lower, with a normalized expression

\[ \boxed{ \psi_0(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{\frac 1 4} e^{\textstyle\small -(m\omega/2\hbar)x^2 }, \qquad E_0 = \frac 1 2 \hbar\omega } \]

With some clever algebra, we can then find normalized expressions for the excited states:

\[ \boxed{ \underbrace{\hat a_+ \psi_n = \sqrt{n+1}\psi_{n+1}}_{\text{Raising operator}},\quad \underbrace{\hat a_- \psi_n = \sqrt{n}\psi_{n-1}}_{\text{Lowering operator}}} \implies\boxed{ \underbrace{\psi_n = \frac{1}{\sqrt{n!}}(\hat a_+)^n\psi_0}_{\text{Excited states}}} \]
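We can check the raising-operator relation numerically; a sketch in units where \(\hbar = m = \omega = 1\), so that \(\hat a_+ = (x - d/dx)/\sqrt 2\):

```python
import numpy as np

# Apply a_+ to the ground state; the result should be sqrt(0+1) * psi_1,
# i.e. the already-normalized first excited state.
x  = np.linspace(-10, 10, 100001)
dx = x[1] - x[0]

psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                    # ground state
raised = (x * psi0 - np.gradient(psi0, dx)) / np.sqrt(2)   # a_+ psi_0

psi1 = np.sqrt(2) * x * psi0   # known closed form of the first excited state

assert np.isclose((raised**2).sum() * dx, 1.0, atol=1e-4)  # comes out normalized
assert np.allclose(raised, psi1, atol=1e-3)                # and equals psi_1
```

Iterating \(\hat a_+\) (and dividing by \(\sqrt{n!}\)) builds the whole ladder from \(\psi_0\) alone.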

Much like the infinite square well, these stationary states are orthogonal, so if \(m\neq n\) then \(\int \psi_m^*\psi_n\,dx = 0\). Again, this means we can write \(\Psi(x,0)\) as a linear combination of such stationary states, where \(|c_n|^2\) is the probability of measuring total energy \(E_n\).

Finally, we will finish with a useful expression for \(\hat X,\hat P\):

\[ \hat X = \sqrt{\frac {\hbar}{2m\omega}}(\hat a_+ +\hat a_-) ,\quad \hat P = i\sqrt{\frac {\hbar m\omega}{2}}(\hat a_+ -\hat a_-) \]

2.3.2   Analytic Method

There is a second common approach to the harmonic oscillator, using a straightforward but tedious power series solution to the Schrödinger. We won't cover this right now, but it should be noted that this approach is more widely applicable to other potentials.

2.4   The Free Particle

What if we assume zero potential everywhere? This simplifies the Schrödinger quite a lot, as we have

\[ \psi''\!(x) = -k^2\psi(x), \quad k:= \frac{\sqrt{2mE}}\hbar \]

We could now write the general solution as we did in the infinite square well, but instead we can use exponential form:

\[ \psi(x) = Ae^{ikx} + Be^{-ikx} \]

Since we don't have any boundary conditions, all (positive) values of \(E\) are possible. Now, adding our time-dependence,

\[ \boxed{ \Psi(x,t) = \psi(x)\varphi(t) = Ae^{\textstyle\small ik\left[x-\frac{\hbar k}{2m}t\right]} + Be^{\textstyle\small -ik\left[x+\frac{\hbar k}{2m}t\right]} } \]

Note that the variables here are in the form \(x\pm vt\) where \(v = (\hbar k/2m) = \) constant, so we have a wave of unchanging shape traveling in the \(\mp x\)-direction with speed \(v\). Therefore the first sinusoidal wave above is traveling rightwards, and the second is traveling leftwards. Thereby we may as well simply write:

\[ \Psi_k(x,t) = Ae^{\textstyle\small i\left( kx - \frac{\hbar k^2}{2m}t \right)}, \] \[ k:=\pm\frac{\sqrt{2mE}}{\hbar}, \quad \begin{cases} k\gt 0 & \text{rightwards wave} \\ k\lt 0 & \text{leftwards wave}\end{cases} \]

We can find the momentum via the de Broglie formula, to be \( p = \hbar k \), and the speed to be \( v_{\text{Quantum}} = \sqrt{E/2m} = \frac 1 2 v_{\text{Classical}}\). We can now look at an interesting complication: these separable solutions are not normalizable, so a free particle cannot actually exist in a stationary state.

These separable solutions are still useful, however: we can find a general solution to the time-dependent Schrödinger here by integrating over a continuous linear combination with continuous variable \(k\), which can be normalized (depending on \(\phi(k)\)):

\[ \boxed{ \underbrace{ \Psi(x,t) = \int_{-\infty}^\infty \frac{\phi(k)}{\sqrt{2\pi}}e^{\textstyle\small i\left( kx - \frac{\hbar k^2}{2m}t \right)}dk }_\text{General solution for free particle} } \]

Note that this must now carry a range of \(k\)'s, and so a range of energy and speed. We call this a wave packet. The question becomes, given any \(\Psi(x,0)\), how can we determine \(\phi(k)\) to match the initial wave function?

This can be done with Fourier Analysis and Plancherel's theorem: \[ f(x)=\frac 1 {\sqrt{2\pi}} \int_{-\infty}^\infty F(k)e^{ikx}dk \iff F(k)=\frac 1 {\sqrt{2\pi}} \int_{-\infty}^\infty f(x)e^{-ikx}dx \] Which yields our result: \[ \boxed{ \phi(k) = \frac 1 {\sqrt{2\pi}} \int_{-\infty}^\infty \Psi(x,0) e^{-ikx}dx } \]
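A numerical sketch of this recipe: for a Gaussian \(\Psi(x,0)\), the integral for \(\phi(k)\) should return the same Gaussian in \(k\), since a Gaussian is its own Fourier transform in this symmetric convention:

```python
import numpy as np

# phi(k) = (1/sqrt(2 pi)) ∫ Psi(x,0) e^{-ikx} dx, done by direct summation.
x  = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]
Psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)   # a normalized Gaussian Psi(x,0)

def phi(k):
    return (Psi0 * np.exp(-1j * k * x)).sum() * dx / np.sqrt(2 * np.pi)

# Compare against the known transform, pi^{-1/4} e^{-k^2/2}:
for k in (0.0, 0.5, 1.3):
    assert np.isclose(phi(k), np.pi**-0.25 * np.exp(-k**2 / 2), atol=1e-6)
```

In practice one would use an FFT for speed, but the direct sum makes the correspondence with Plancherel's theorem transparent.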

Finally, why was the quantum speed twice the classical speed of the particle? This is because there are really two velocities of concern: the group velocity of the “envelope” within which the wave packet travels, and the phase velocity of the waves themselves.

Let's consider a general wave packet narrowly peaked around some value \(k_0\), with form

\[ \Psi(x,t) = \frac 1 {\sqrt{2\pi}} \int_{-\infty}^\infty \phi(k)e^{i(kx-\omega t)}dk \]

By expanding \(\omega(k) \approx \omega_0 + \omega_0'(k-k_0)\), where \(\omega_0' = d\omega/dk\) evaluated at \(k_0\), and changing variables from \(k \to s := k-k_0 \), we find

\[ \Psi(x,t) \approx \frac 1 {\sqrt{2\pi}} e^{i(k_0x-\omega_0t)} \int_{-\infty}^\infty \phi(k_0+s)e^{is(x-\omega'_0 t)}ds \] \[ v_{\text{Phase}} = \frac{\omega }{ k} , \qquad v_{\text{Group}} = \frac{d\omega }{ dk} \]

The sinusoidal prefactor travels at the phase velocity \(\omega_0/k_0\), while the envelope (the integral) travels at the group velocity \(\omega_0'\). With \(\omega = \hbar k^2/2m\), the group velocity is \(\hbar k/m\): exactly the classical particle speed, and twice the phase velocity.

2.5   Delta-function Potential

The Dirac delta function \(\delta(x)\) is zero everywhere except for an infinitely high spike at the origin, with the important property that the total area is 1.

\[ \int_{-\infty}^\infty \delta(x)\,dx = 1, \quad \delta(x) = \begin{cases}0&x\neq 0,\\\infty& x=0\end{cases} \]

One useful property here is that \(f(x)\delta(x-a) = f(a)\delta(x-a)\), so the integral of \(f(x)\delta(x-a)\) over all space is just \(f(a)\).

2.5.1   Bound States and Scattering States

When it comes to a potential well, there are effectively two possible scenarios for a particle:

  1. Bound state: the particle is trapped inside the well,
  2. Scattering state: the particle comes in from “infinity” and reflects back or transmits over the well

2.5.2   Delta-function Well

Now we may consider a potential well with \(V(x) = -\alpha \delta(x)\), for which we have a Schrödinger:

\[ \frac{-\hbar^2}{2m}\psi''\!(x)-\alpha\delta(x)\psi = E\psi \]

Now, there is exactly one bound state \(( E\lt 0 )\), which is

\[ \boxed{ \psi(x)=\frac{\sqrt{m\alpha}}{\hbar} e^{\textstyle\small -m\alpha|x|/\hbar^2}, \qquad E = \frac{-m\alpha^2}{2\hbar^2} } \]

For scattering states, we have a general solution

\[ k:=\frac{\sqrt{2mE}}\hbar,\quad \psi(x) = \begin{cases} Ae^{ikx}+Be^{-ikx} & x\lt 0 \\ Fe^{ikx}+Ge^{-ikx} & x\gt 0 \end{cases} \]

Where \(A\) is the amplitude of the wave coming in from the left, \(G\) of the wave coming in from the right, \(B\) of the wave returning to the left, and \(F\) of the wave continuing to the right. Therefore, if we assume this is an experiment where particles are only sent in from the left, we can let \(G=0\).

So now \(A\) is the amplitude of the incident wave, \(B\) the reflected wave, and \(F\) the transmitted wave. If we set our boundary conditions to require \(\psi(x)\) be continuous and \(\psi'\!(x)\) be continuous except where \(V(x)=\infty\), we have

\[ \beta:=\frac{m\alpha}{\hbar^2k} ,\quad B = \frac{i\beta}{1-i\beta}A ,\quad F = \frac{1}{1-i\beta}A \]

Since we know the probability of finding a particle at \(x\) is given by \(|\Psi|^2\), we can find the relative probabilities that the particle will either be reflected back or transmitted by:

\[ \boxed{ \begin{gathered} \text{Reflection coefficient } R=\frac{|B|^2}{|A|^2}=\frac{1}{1+(2\hbar^2E/m\alpha^2)} \\ \text{Transmission coefficient } T=\frac{|F|^2}{|A|^2}=\frac{1}{1+(m\alpha^2/2\hbar^2E)} \end{gathered} } \]
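The boxed coefficients can be sanity-checked numerically; a sketch in units where \(\hbar = m = \alpha = 1\):

```python
import numpy as np

# R + T = 1: the particle is certainly either reflected or transmitted.
def R(E):
    return 1.0 / (1.0 + 2.0 * E)          # 1 / (1 + 2 hbar^2 E / m alpha^2)

def T(E):
    return 1.0 / (1.0 + 1.0 / (2.0 * E))  # 1 / (1 + m alpha^2 / 2 hbar^2 E)

for E in (0.1, 1.0, 10.0):
    assert np.isclose(R(E) + T(E), 1.0)
```

Note that \(T\) grows with \(E\): high-energy particles mostly pass over the well, while low-energy ones are mostly reflected.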

The really interesting part is that if we instead assume a delta barrier, \(V(x)=\alpha\delta(x)\), as in an infinite potential spike, the bound state (obviously) no longer exists, but the reflection and transmission coefficients are exactly the same, implying that the particle has a nonzero probability of passing a barrier it could never classically surmount. This is quantum tunneling.

It should be noted that the wave functions here cannot be normalized, and of course delta potentials are not realizable physical states: however, these same effects can be closely modeled by a linear combination of stationary states which are physically possible.

2.6   The Finite Square Well

Now we will consider a finite square well scenario, where

\[ V(x) = \begin{cases} -V_0 & -a\leq x\leq a,\\ 0 & |x|\gt a, \end{cases} \]

for some positive constant \(V_0\). Much like the delta-function well, we can have both bound states \((E\lt 0)\) and scattering states \((E\gt 0)\).

In both cases, we will impose boundary conditions that \(\psi\) and \(d\psi/dx\) are continuous.

2.6.1   Bound States

Since the well is an even function, solutions can be taken to be either even or odd; let's consider the even solutions, so we have a waveform:

\[ \psi(x) = \begin{cases} Fe^{-\kappa x} & x\gt a \\ D\cos(\ell x) & 0 \lt x \lt a \\ \psi(-x) & x\lt 0 \end{cases} \] \[ \ell := \frac{\sqrt{2m(E+V_0)}}{\hbar}, \quad \kappa := \frac{\sqrt{-2mE}}{\hbar} \]

By establishing a nicer variable \(z\), we can now solve for the allowed energies \(E\), by using

\[ z := \ell a, \qquad z_0 := \frac{a}{\hbar}\sqrt{2mV_0}. \] \[ \kappa a = \sqrt{z_0^2-z^2}, \quad \tan z = \sqrt{(z_0/z)^2-1} \]

and looking for the points where the curves \(\tan z\) and \(\sqrt{(z_0/z)^2-1}\) intersect.

As you can imagine, as \(z_0\) becomes larger/smaller, there will be more/fewer acceptable bound states (although interestingly enough, there will always be one, no matter how weak the potential well), and if the well is quite wide and deep (large \(z_0\)), we have approximate intersections:

\[ E_n+V_0 \approx \frac{n^2\pi^2\hbar^2}{2m(2a)^2}, \quad n\text{ odd.} \]
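The intersections can be found numerically; a sketch for an illustrative well size \(z_0 = 8\) (a fairly deep well), exploiting the fact that on each branch \((n\pi,\, n\pi+\pi/2)\) the left side increases while the right side decreases:

```python
import numpy as np
from scipy.optimize import brentq

# Solve tan z = sqrt((z0/z)^2 - 1) on each branch where tan z is positive.
z0 = 8.0
f = lambda z: np.tan(z) - np.sqrt((z0 / z)**2 - 1)

roots = []
n = 0
while n * np.pi < z0:
    lo = n * np.pi + 1e-9                              # tan z starts at 0 here
    hi = min(n * np.pi + np.pi / 2 - 1e-9, z0 - 1e-9)  # tan z blows up (or z hits z0)
    if f(lo) * f(hi) < 0:                              # sign change brackets a root
        roots.append(brentq(f, lo, hi))
    n += 1

print(len(roots))   # 3 even bound states for z0 = 8
```

Each root \(z\) then gives an allowed energy through \(z = \ell a\), \(\ell = \sqrt{2m(E+V_0)}/\hbar\).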

2.6.2   Scattering States

In the region \(x\lt -a\) to the left of the well, where there is no potential, we have \[k:=\sqrt{2mE}/\hbar,\quad \psi(x)=Ae^{ikx}+Be^{-ikx}\]

Inside the well, we have \[ \ell := \frac{\sqrt{2m(E+V_0)}}{\hbar}, \quad \psi(x)=C\sin(\ell x)+D\cos(\ell x) \]

Assuming there is no inbound wave from the right, to the right of the well we have \(\psi(x)=Fe^{ikx}\) for \(x\gt a\). Applying our boundary conditions, we can obtain a transmission coefficient

\[ T^{-1} = 1 + \frac{V_0^2\sin^2\left(\frac{2a}{\hbar}\sqrt{2m(E+V_0)}\right)}{4E(E+V_0)},\]

which may seem messy but can yield useful information: \(T=1\), as in transmission is guaranteed, whenever the sine wave is zero, i.e. whenever \( 2a\sqrt{2m(E_n+V_0)} = n\pi\hbar \). This is called perfect transmission, which happens at none other than the allowed energies for the infinite square well:

\[ E_n + V_0 = \frac{n^2\pi^2\hbar^2}{2m(2a)^2} \]
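We can verify the resonance condition numerically; a sketch in units where \(\hbar = m = 1\), with half-width \(a = 1\) and an illustrative depth \(V_0 = 5\):

```python
import numpy as np

# Transmission coefficient for the finite square well.
V0 = 5.0

def T(E):
    arg = 2.0 * np.sqrt(2.0 * (E + V0))   # (2a/hbar) sqrt(2m(E+V0))
    return 1.0 / (1.0 + V0**2 * np.sin(arg)**2 / (4.0 * E * (E + V0)))

# Perfect transmission at E + V0 = n^2 pi^2 / 8; in these units the first
# resonance with E > 0 is n = 3.
E_res = 9 * np.pi**2 / 8 - V0

assert np.isclose(T(E_res), 1.0)   # perfect transmission at resonance
assert T(E_res + 1.0) < 1.0        # generic energies transmit less
```

Away from the resonances, the sine term is nonzero and some reflection always occurs.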

2.7   Arbitrary Localized Potentials

As with the examples we've discussed so far, we can also find scattering “ratios” (reflection and transmission coefficients) for any localized potential, that is, a potential distribution where \(V(x)=0\) everywhere but some local region \((a-\epsilon,\,a+\epsilon)\).


Symbol   Name   Equivalent to
\(h\)   Planck's constant   \( 2\pi\,E_P t_P \approx 6.62607\times 10^{-34} \text{Js} \)
\(\hbar\)   Reduced Planck constant   \( h/2\pi = E_P t_P \approx 1.05457\times 10^{-34} \text{Js} \)
\(i\)   Imaginary unit   \(\sqrt{-1}\)
Useful math:
\[ e^{i\theta} = \cos\theta + i\sin\theta \]