Making Spaces
In quantum mechanics a lot of emphasis is placed on the concept of vector spaces. One of the key tools is the ability to construct new vector spaces out of existing ones. However, very often authors construct new vector spaces without explicitly saying what they have done, and the result can be confusing. In this post I am going to attempt to summarize all the methods I have seen for constructing new vector spaces out of old, and point out where they are used in quantum mechanics.
The building blocks
What: Hilbert Spaces
Why: To represent superpositions of classical states
The building blocks are always Hilbert spaces. These are vector spaces over the complex numbers $\mathbb{C}$, equipped with inner products and limits. The pair $(V, \langle\cdot\lvert\cdot\rangle)$ is a Hilbert space if
- $V$ is a vector space over $\mathbb{C}$
- $v\mapsto\langle u\lvert v\rangle$ is a linear map $V\mapsto\mathbb{C}$ for any $u$ in $V$
- $\langle u\lvert v\rangle = \overline{\langle v\lvert u\rangle}$ for any $u,v$ in $V$
- $\langle v\lvert v\rangle \geq 0$ with equality iff $v=0$
- $\sum\lVert v_{i+1}-v_i \rVert$ converges $\implies$ $v_i$ converges, where $\lVert v\rVert = \sqrt{\langle v \lvert v \rangle }$
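As a quick sanity check (this toy code is mine, not from the post), the standard inner product on $\mathbb{C}^2$ satisfies the algebraic axioms above:

```python
# A minimal sketch: the standard inner product on C^2, checked against the
# axioms listed above on sample vectors. Physics convention: linear in the ket.

def inner(u, v):
    """<u|v> = sum_i conj(u_i) * v_i"""
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

u = [1 + 2j, 3j]
v = [2, 1 - 1j]
w = [0.5, -1j]

# Linearity in the second argument: <u|(a v + b w)> = a<u|v> + b<u|w>
a, b = 1j, 2
av_bw = [a * vi + b * wi for vi, wi in zip(v, w)]
assert abs(inner(u, av_bw) - (a * inner(u, v) + b * inner(u, w))) < 1e-12

# Conjugate symmetry: <u|v> = conj(<v|u>)
assert abs(inner(u, v) - inner(v, u).conjugate()) < 1e-12

# Positivity: <v|v> >= 0, with equality only for v = 0
assert inner(v, v).real > 0 and abs(inner(v, v).imag) < 1e-12
```

The completeness axiom (the last bullet) is automatic in finite dimensions, so it doesn't show up in a finite toy example like this.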
The constructions
- Making the bra space
- Making the ket space
- Making the continuous basis spaces
- Making tensor product spaces
- Making quotient spaces
- Making direct sum spaces
- Making Fock space
Making the bra space
What: The same as the original space
Why: Helps with notation
Given a Hilbert space $V$, the linear maps $V\mapsto\mathbb{C}$ are also a complex vector space, and of the same dimension. Furthermore, for any $u$ in $V$
$$
\langle u \lvert : v \mapsto \langle u \lvert v \rangle
$$
is a linear map. We can denote these by $\langle u \lvert$ and call them bras. Note that every linear map $V \mapsto \mathbb{C}$ is a bra, since if $F$ is a linear map, and $v_i$ is an orthonormal basis
$$
\begin{align}
F(v) &= F(\sum \langle v_i \lvert v \rangle v_i) \\
&= \sum \langle v_i \lvert v \rangle F(v_i) \\
&= \langle \sum \overline{F(v_i)} v_i \lvert v \rangle \\
\end{align}
$$
Therefore we can call the space of linear maps $V\mapsto\mathbb{C}$ the bra vector space, $V^*$. Finally, note that the mapping $u\mapsto\langle u \lvert$ is injective, since $\langle u \lvert = \langle v \lvert \implies \langle u - v \lvert = 0 \implies \lVert u-v \rVert = 0 \implies u = v$. So the space of bra vectors is interchangeable with the space $V$. The space of bra vectors is commonly referred to as the dual space. Note that if $A:V\mapsto V$ is a linear map and $v \in V$ then the composite $\langle v \lvert A$ is also a linear map, and therefore also a bra vector.
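The derivation above is constructive: given any linear functional $F$, the vector $u = \sum \overline{F(v_i)} v_i$ satisfies $F = \langle u \lvert$. A small sketch (the functional $F$ here is my own arbitrary choice):

```python
# Recovering the bra vector behind an arbitrary linear functional F on C^2,
# following the derivation above: u = sum_i conj(F(v_i)) v_i gives F = <u|.

def inner(u, v):
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

# An arbitrary linear functional (my own example): F(v) = 2*v[0] + i*v[1]
def F(v):
    return 2 * v[0] + 1j * v[1]

basis = [[1, 0], [0, 1]]  # orthonormal basis v_i

# u = sum_i conj(F(v_i)) v_i
u = [sum(F(vi).conjugate() * vi[k] for vi in basis) for k in range(2)]

# Check F(v) = <u|v> on a couple of test vectors
for test_v in ([1j, 2], [3, -1 + 1j]):
    assert abs(F(test_v) - inner(u, test_v)) < 1e-12
```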
Making the ket space
What: The same as the original space
Why: Helps with notation
Ket vectors are maps $V\mapsto\mathbb{C}$ of the following form
$$
\lvert v \rangle : u \mapsto \langle u \lvert v \rangle
$$
These are complex conjugates of linear maps $V\mapsto\mathbb{C}$. The space of such maps is itself a vector space, and it is isomorphic to $V$: in fact $v\mapsto\lvert v \rangle$ is an isomorphism. Authors often use the original vectors and their corresponding kets interchangeably.
A quick note about Hermitians before we continue. Given a linear map $F:U\mapsto V$, the Hermitian conjugate $F^\dagger$ is defined to be the unique linear map s.t. $\langle v\lvert F(u)\rangle = \langle F^\dagger(v)\lvert u\rangle$ for all $u\in U, v \in V$, and $F:V\mapsto V$ is Hermitian if it equals its own conjugate. If we define $F\lvert v \rangle = \lvert F(v) \rangle$ then $\langle F(v) \lvert = \langle v \lvert F^\dagger$. Hermitian operators are used to represent observables: when the observation is made, the state collapses to one of the eigenvectors and the corresponding eigenvalue (which is real because of the Hermitian property) is returned. The probability associated with the eigenvalue is the square of the modulus of the coefficient of the corresponding eigenvector. Of course the collapse is only from the point of view of the end state - a superposition of collapsed states is also still meaningful.
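In a finite orthonormal basis the Hermitian conjugate is just the conjugate transpose of the matrix, which makes the defining identity easy to check numerically (the matrices below are my own examples):

```python
# The Hermitian conjugate of a 2x2 matrix as its conjugate transpose,
# checking the defining property <v|F u> = <F^dagger v|u>.

def inner(u, v):
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dagger(M):
    # (F^dagger)_ij = conj(F_ji)
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

F = [[1, 2 + 1j],
     [0, 3j]]
u = [1j, 2]
v = [1, -1 + 1j]

assert abs(inner(v, apply(F, u)) - inner(apply(dagger(F), v), u)) < 1e-12

# A Hermitian matrix equals its own conjugate transpose.
H = [[2, 1 - 1j],
     [1 + 1j, 5]]
assert dagger(H) == H
```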
Making continuous basis spaces
What: The logical limit of making the basis vectors more dense
Why: Less arbitrary than choosing a dense basis for each given problem
All finite dimensional Hilbert spaces have orthonormal bases. For example, the spin state of a single electron is often referred to as a vector in a two dimensional space with orthonormal basis $\lvert \uparrow \rangle, \lvert \downarrow \rangle$. (There are infinitely many other orthonormal bases but this is the one usually chosen.) Given an orthonormal basis $v_i$ and given $u$ in $V$ we have
$$
u = \sum \langle v_i \lvert u \rangle v_i
$$
In addition to finite dimensional spaces there are spaces with infinite, but discretely parametrized, bases. For example, a particle can be in one of infinitely many locations $x_i$, giving the infinite orthonormal basis $\{\lvert x_i\rangle : i \in \mathbb{Z}\}$. Given such a discretely parametrized basis, the following recipe for the inner product follows from the definitions
$$
\begin{align}
\langle u \lvert v \rangle &= \sum \overline{\langle x_i \lvert u \rangle} \langle x_i \lvert v \rangle \\
&= \sum \overline{u_i} v_i
\end{align}
$$
(where we defined $w_i := \langle x_i \lvert w \rangle$)
A continuous basis space is one where the basis vectors are parametrized by a continuum such as $\mathbb{R}$. However, many authors just state without much justification that when you replace your discrete basis $x_i$ with a continuous one the recipe becomes
$$
\begin{align}
\langle \phi \lvert \psi \rangle &= \int \overline{\langle x \lvert \phi \rangle} \langle x \lvert \psi \rangle dx \\
&= \int \overline{\phi(x)}\psi(x) dx
\end{align}
$$
(where we defined $\chi(x) := \langle x \lvert \chi \rangle$)
Why should the recipe for the inner product change? Where's the trick? It should always be possible to approximate a space with a continuous basis by one with a discrete basis, but these two recipes do not look the same! The answer is that the $\lvert x \rangle$ chosen in the continuous basis space are merely orthogonal, not orthonormal. To obtain the continuous-basis space as a limit of discrete-basis spaces, define basis vectors $\lvert x \rangle$ such that
$$
\langle x \lvert x' \rangle = \frac{\delta_{xx'}}{dx}
$$
where $dx$ is the spacing between values in the discrete set. It then follows from the definitions that
$$
\begin{align}
\langle \phi \lvert \psi \rangle &=\langle \phi \lvert I \lvert \psi \rangle \\
&=\langle \phi \lvert \sum_x \frac{\lvert x \rangle \langle x \lvert }{\langle x \lvert x \rangle} \lvert \psi \rangle \\
&=\sum_x \overline{\langle x \lvert \phi \rangle} \langle x \lvert \psi \rangle dx \\
&\approx \int \overline{\phi(x)}\psi(x) dx
\end{align}
$$
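The limiting step is just a Riemann sum. A quick numerical sketch (the toy wavefunctions here are my own choice, not from the post): discretize $[0,1]$ with spacing $dx$ and watch the discrete recipe approach the integral.

```python
# The discrete recipe sum_x conj(phi(x)) psi(x) dx approaching the integral
# as the grid spacing shrinks. Toy choice: phi(x) = psi(x) = x on [0, 1],
# for which the exact answer is the integral of x^2, i.e. 1/3.

n = 2000
dx = 1 / n                           # grid spacing
grid = [i * dx for i in range(n)]    # discrete approximation of [0, 1)

phi = psi = lambda x: x
approx = sum(phi(x).conjugate() * psi(x) * dx for x in grid)

# Left-endpoint Riemann sum, so the error is of order dx
assert abs(approx - 1/3) < 1e-3
```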
(Image: my notes, taken in HandWrite Pro, showing how to convert between position and momentum continuous bases.)
Making tensor product spaces
What: An MxN dimensional space spanned by basis vector pairs
Why: Represent states of system built from two subsystems
A tensor product space is created from two vector spaces $U,V$ and has the symbol $U\otimes V$. This is simply the space of maps $F:U\mapsto V$ with the property $F(\alpha u +\alpha'u') = \overline{\alpha}F(u)+\overline{\alpha'}F(u')$. Maps like these are called conjugate-linear (or antilinear). It is easy to check that
- The space of conjugate-linear maps is a vector space spanned by the maps $u \otimes v:w \mapsto \langle w \lvert u \rangle v$.
- If $u_i$ is a basis of $U$ and $v_j$ is a basis of $V$ then $u_i \otimes v_j$ is a basis of $U\otimes V$
- $\cdot \otimes v$ and $u \otimes \cdot$ are linear maps
- Given an orthonormal basis $u_i$ of $U$ we can define an inner product on $U\otimes V$ by $\langle F \lvert G \rangle = \sum_{i} \langle F(u_i)\lvert G(u_i)\rangle$
- The choice of orthonormal basis $u_i$ doesn't make a difference to the inner product defined above.
- If $v_j$ is an orthonormal basis of $V$ then $u_i\otimes v_j$ is an orthonormal basis of $U\otimes V$
- For any $u,u' \in U, v,v' \in V$ we have $\langle u\otimes v \lvert u' \otimes v' \rangle = \langle u \lvert u' \rangle\langle v \lvert v' \rangle$
Product spaces are often used to describe multiple particles, e.g. where $U$ represents the state of one particle and $V$ another. Or alternatively multiple parameters relating to a single particle such as its x, y, and z coordinates and spin. (Although these must be both independent and commuting - i.e. you can't work out the value of one from the others, and you can measure each without changing the value of the others.)
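In coordinates, $u \otimes v$ is just the Kronecker product of the coordinate lists, which makes the last bullet above easy to verify (the vectors below are my own examples):

```python
# u (x) v as a Kronecker product of coordinate lists, checking the identity
# <u(x)v | u'(x)v'> = <u|u'> <v|v'> from the list above.

def inner(a, b):
    return sum(ai.conjugate() * bi for ai, bi in zip(a, b))

def tensor(u, v):
    """Coordinates of u (x) v in the product basis u_i (x) v_j."""
    return [ui * vj for ui in u for vj in v]

u, u2 = [1, 1j], [2, 0]
v, v2 = [1 - 1j, 3], [0, 1j]

lhs = inner(tensor(u, v), tensor(u2, v2))
rhs = inner(u, u2) * inner(v, v2)
assert abs(lhs - rhs) < 1e-12
```

Note that a 2-dimensional space tensored with a 2-dimensional space gives coordinates of length 4, matching the MxN dimension count in the heading.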
Making quotient spaces
What: The original space, but with some vectors made synonyms for zero
Why: To remove aliases for the same physical states
This is perhaps the most glossed-over way of making spaces from other spaces. At 20:40 into the following video, Professor Susskind says that for fermions
$$
\lvert x_1 x_2 \rangle = - \lvert x_2 x_1 \rangle
$$
Now, I think these videos are a fantastic gift to the world and I am incredibly grateful to Leonard Susskind for them, but this bit really foxed me. I had no idea how to interpret it. I understood that $\lvert x_1\rangle$ is the state where fermion 1 is in known position $x_1$ and likewise $\lvert x_2\rangle$ is the state where fermion 2 is in known position $x_2$. So $\lvert x_1 x_2 \rangle$ must be a vector in the product space. I.e. if $U$ is the space of location states for fermion 1 and $V$ the space for fermion 2 then $\lvert x_1 x_2 \rangle$ must mean $\lvert x_1 \rangle \otimes \lvert x_2 \rangle$, which is in $U \otimes V$. However, if this is the case then $\lvert x_1 x_2 \rangle$ and $\lvert x_2 x_1 \rangle$ are linearly independent, so neither can be a multiple of the other.

The answer, I think, is that $\lvert x_1 x_2 \rangle$ actually represents a vector in a quotient space of the original product space. So, what's a quotient space?
If $V$ is a vector space and $U$ is a subspace then the set of cosets $v + U = \{v+u: u \in U\}$ is a vector space, under the obvious rules for addition and multiplication by a scalar, and has the name $V/U$. It's a bit like defining every element of $U$ to be zero. Often the notation $v$ is used to represent the coset $v+U$, but you have to remember that $v$ is not unique: any $v'=v+u$ with $u \in U$ represents the same coset.
If $V$ is an inner product space then $V/U$ is naturally isomorphic to $U^{\perp}$, the subspace of vectors perpendicular to every element of $U$. This means that $V/U$ inherits an inner product from $V$.
Thus, when Susskind said $\lvert x_1 x_2 \rangle = - \lvert x_2 x_1 \rangle$ he was actually referring to a member of the quotient space of the product space where the zeroed out subspace $U$ is the span of $\lvert x_1 x_2 \rangle + \lvert x_2 x_1 \rangle$. (If they were bosons it would be the span of $\lvert x_1 x_2 \rangle - \lvert x_2 x_1 \rangle$).
Some (most?) other authors take a slightly different approach where instead of taking a quotient space of the product space they restrict the conversation to the subspace of the product space defined by $\lvert \psi\rangle$ s.t. $\langle x_1 x_2 \lvert \psi \rangle = - \langle x_2 x_1 \lvert \psi \rangle$ (again, this is for fermions, use a plus sign for bosons). This is called anti-symmetrization (or symmetrization for bosons).
I think Susskind actually uses both conventions in his lectures, which is confusing. Or then again, perhaps only the subspace approach since $\lvert x_1 x_2 \rangle = - \lvert x_2 x_1 \rangle$ is actually true when both sides are considered as maps from the subspace to $\mathbb{C}$.
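The anti-symmetrization convention is easy to demonstrate on a discrete toy model (the setup below, two fermions on three sites with made-up amplitudes, is entirely mine):

```python
# Anti-symmetrization on a toy model: two fermions on sites {0, 1, 2},
# amplitudes stored as a dict keyed by (x1, x2). Projecting onto the
# anti-symmetric subspace gives <x1 x2|psi> = -<x2 x1|psi>.

import itertools

sites = [0, 1, 2]
# An arbitrary (unsymmetrized) two-particle state
raw = {(x1, x2): complex(x1 + 1, x2) for x1 in sites for x2 in sites}

# Anti-symmetrize: psi(x1, x2) = (raw(x1, x2) - raw(x2, x1)) / 2
psi = {(x1, x2): (raw[(x1, x2)] - raw[(x2, x1)]) / 2
       for x1 in sites for x2 in sites}

for x1, x2 in itertools.product(sites, sites):
    assert psi[(x1, x2)] == -psi[(x2, x1)]   # fermionic sign flip
assert all(psi[(x, x)] == 0 for x in sites)  # Pauli exclusion for free
```

The last assertion is a nice bonus: anti-symmetry forces the amplitude for two fermions at the same site to vanish, which is the Pauli exclusion principle.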
Making direct sum spaces
What: An M+N dimensional space spanned by the union of the bases
Why: Represent superpositions of different types of classical state
So far, the spaces covered can only describe a fixed number of particles. What if you want to represent a superposition of a 2-particle state and a 3-particle state? For this we need direct sum spaces.
A direct sum $U\oplus V$ of vector spaces $U, V$ is the set of pairs $(u,v)$ - also denoted $u\oplus v$ - with the obvious addition and scalar multiplication rules. If $U$ and $V$ have inner products then $U\oplus V$ inherits a trivial one via $\langle u\oplus v \lvert u' \oplus v' \rangle = \langle u \lvert u' \rangle + \langle v \lvert v' \rangle$.
If $\{u_i\}$ and $\{v_i\}$ are orthonormal bases of $U$ and $V$ then $\{u_i\oplus 0\}\cup\{0\oplus v_i\}$ is an orthonormal basis of $U\oplus V$.
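A minimal sketch (the pair encoding and example vectors are my own) of the inherited inner product, including the fact that the two summands sit orthogonally inside $U\oplus V$:

```python
# A direct-sum vector as a pair (u, v), with the inherited inner product
# <u(+)v | u'(+)v'> = <u|u'> + <v|v'>.

def inner(a, b):
    return sum(ai.conjugate() * bi for ai, bi in zip(a, b))

def dsum_inner(p, q):
    (u, v), (u2, v2) = p, q
    return inner(u, u2) + inner(v, v2)

p = ([1, 1j], [2])        # u in C^2, v in C^1
q = ([0, 2], [1 - 1j])

expected = inner([1, 1j], [0, 2]) + inner([2], [1 - 1j])
assert abs(dsum_inner(p, q) - expected) < 1e-12

# Cross terms vanish: (u (+) 0) is orthogonal to (0 (+) v),
# which is why the union of the two bases is orthonormal.
assert dsum_inner(([1, 0], [0]), ([0, 0], [1])) == 0
```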
Making Fock space
What: Space with basis $\lvert n_1n_2n_3...\rangle$ (finitely many $n_i$ non-zero)
Why: Represent system with variable number of particles of the same type.
Suppose we want to represent a state which is a superposition of states each representing a different number of particles of the same type. The solution is an infinite direct sum of product spaces. So if $H$ is the Hilbert space representing a single particle and $H^0=\mathbb{C}, H^1=H, H^2=H\otimes H, ...$ the space we need is $V=H^0 \oplus H^1 \oplus H^2 \oplus ...$. However, we are still not quite done. If the particles are bosons we need $\langle x_1 x_2 ... x_n \lvert \psi \rangle = \langle x_{\pi(1)} x_{\pi(2)} ... x_{\pi(n)} \lvert \psi \rangle$ for every choice of $n, x_i$ and permutation $\pi$. So we symmetrize by restricting ourselves to the subspace of $V$ consisting of the $\lvert \psi \rangle$ for which this is true. This subspace $U$ is known as Fock space.
Now let $\lvert \psi_i\rangle$ be the $i$th energy eigenstate of a single particle, and consider the three states in $H^3$ (a subspace of $V=\oplus_nH^n$):
$$
\lvert \psi_0 \psi_2 \psi_2 \rangle \\
\lvert \psi_2 \psi_0 \psi_2 \rangle \\
\lvert \psi_2 \psi_2 \psi_0 \rangle \\
$$
None of these states is in $U$, but their sum is, and the normalized version of it can be represented by $\lvert 1,0,2,0,0... \rangle$. This is the occupation number representation, in which the sequence 1,0,2,0,0... indicates that there is one particle in the ground state, none in the first raised state, two in the second, and none in any other. The $\lvert n_1n_2n_3...\rangle$ form an orthonormal basis of the Fock space.
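The bookkeeping behind the occupation number representation is just counting labels (the helper below and its level cutoff are my own illustration, not standard notation):

```python
# Converting product-basis labels like |psi_0 psi_2 psi_2> into occupation
# numbers, and checking that all permutations of the same labels collapse
# to a single occupation vector.

from collections import Counter
from itertools import permutations

labels = (0, 2, 2)   # the three-particle state |psi_0 psi_2 psi_2>

def occupation(labels, n_levels=5):
    """Occupation vector (n_0, n_1, ...) for a tuple of level labels."""
    counts = Counter(labels)
    return tuple(counts.get(level, 0) for level in range(n_levels))

# Every reordering of the labels gives the same occupation vector...
occs = {occupation(p) for p in permutations(labels)}
assert occs == {(1, 0, 2, 0, 0)}   # i.e. |1,0,2,0,0,...>

# ...and the symmetrized sum has 3 distinct terms, matching the three
# states listed above.
assert len(set(permutations(labels))) == 3
```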
Okay, I now need a beer and a couple of paracetamol.