
---
$\textbf{Exercise 17.}$ Prove that $V$ is infinite-dimensional if and only if there is a sequence $v_1, v_2, \ldots$ of vectors in $V$ such that $v_1, \ldots, v_m$ is linearly independent for every positive integer $m$.
$\textbf{Solution 17.}$ First suppose $V$ is infinite-dimensional. Choose $v_1$ to be any nonzero vector in $V$. Choose $v_2, v_3, \ldots$ by the following inductive process: suppose $v_1, \ldots, v_{m-1}$ have been chosen; choose any vector $v_m \in V$ such that $v_m \notin \operatorname{span}(v_1, \ldots, v_{m-1})$—because $V$ is not finite-dimensional, $\operatorname{span}(v_1, \ldots, v_{m-1})$ cannot equal $V$, so choosing $v_m$ in this fashion is possible. Because no $v_j$ lies in the span of the vectors preceding it, the linear dependence lemma (2.19) implies that $v_1, \ldots, v_m$ is linearly independent for every positive integer $m$, as desired.
Conversely, suppose there is a sequence $v_1, v_2, \ldots$ of vectors in $V$ such that $v_1, \ldots, v_m$ is linearly independent for every positive integer $m$. If some list $w_1, \ldots, w_n$ spanned $V$, then the linearly independent list $v_1, \ldots, v_{n+1}$ would be longer than this spanning list, contradicting 2.22. Hence no list spans $V$, and $V$ is infinite-dimensional.
$\textit{Commentary:}$ This exercise characterizes infinite-dimensional vector spaces in terms of the existence of arbitrarily large linearly independent sets.
The proof of the forward direction constructs an infinite sequence of vectors inductively. It starts with any nonzero vector $v_1$. Then, at each stage, it chooses the next vector $v_m$ to be outside the span of all the previous vectors. This is always possible in an infinite-dimensional space, because the span of any finite set of vectors cannot equal the entire space. The linear dependence lemma then implies that each initial segment of this sequence is linearly independent.
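The inductive construction can be mimicked numerically. The sketch below (using NumPy; the helper `outside_span` and the truncation to $\mathbb{R}^N$ are illustrative assumptions, since a computer can only model a finite-dimensional stand-in) greedily picks each new vector outside the span of the previous ones and verifies that every initial segment is linearly independent:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # finite stand-in for the ambient space; in a truly
       # infinite-dimensional space the construction never runs out of room

def outside_span(chosen, dim):
    """Return a vector not in span(chosen); possible while len(chosen) < dim."""
    while True:
        v = rng.standard_normal(dim)
        trial = np.column_stack(chosen + [v])
        # appending v must raise the rank, i.e. v is outside the span
        if np.linalg.matrix_rank(trial) == len(chosen) + 1:
            return v

vectors = []
for m in range(1, N + 1):
    vectors.append(outside_span(vectors, N))
    # each initial segment v_1, ..., v_m is linearly independent
    assert np.linalg.matrix_rank(np.column_stack(vectors)) == m
```

In $\mathbb{R}^N$ the loop must stop after $N$ steps; the content of the exercise is that in an infinite-dimensional space it never has to.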
The proof of the reverse direction is by contradiction. If $V$ were finite-dimensional, then by definition it would have a finite spanning list. But 2.22 states that the length of every linearly independent list is at most the length of every spanning list, and the hypothesis supplies linearly independent lists of every finite length.
This result is fundamental in understanding the structure of infinite-dimensional spaces. It shows that in such spaces, we can always find linearly independent sets of any desired finite size. This is in stark contrast to finite-dimensional spaces, where there is an upper bound on the size of linearly independent sets (namely, the dimension of the space).
The exercise also provides practice in constructing inductive arguments and in reasoning about spanning sets and linear independence.
$\textit{Additional Examples:}$
The space of polynomials $\mathbb{R}[x]$ is infinite-dimensional. The sequence $1, x, x^2, \ldots$ has the property that any initial segment $1, x, \ldots, x^{m-1}$ is linearly independent.
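The independence of $1, x, \ldots, x^{m-1}$ can be checked numerically: sampling these monomials at $m$ distinct points gives a Vandermonde matrix, which is invertible when the points are distinct. A small sketch (the choice of $m$ and sample points is illustrative):

```python
import numpy as np

m = 6
points = np.linspace(0.0, 1.0, m)  # m distinct sample points
V = np.vander(points, m, increasing=True)  # column j holds points**j
# full rank means 1, x, ..., x^{m-1} are linearly independent
assert np.linalg.matrix_rank(V) == m
```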
The space of continuous functions $C([0,1], \mathbb{R})$ is infinite-dimensional. The sequence $1, x, x^2, \ldots$ has the property that any initial segment $1, x, \ldots, x^{m-1}$ is linearly independent.
The space $\ell^2(\mathbb{N})$ of square-summable sequences is infinite-dimensional. The sequence $e_1 = (1, 0, 0, \ldots)$, $e_2 = (0, 1, 0, 0, \ldots)$, $e_3 = (0, 0, 1, 0, 0, \ldots)$, ... has the property that any initial segment $e_1, \ldots, e_m$ is linearly independent.
The space of solutions to the differential equation $y'' + y = 0$ is two-dimensional (and hence not infinite-dimensional). There is no infinite sequence of solutions such that every initial segment is linearly independent.
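The two-dimensionality of the solution space can be seen numerically: $\cos$ and $\sin$ span it, so any third solution is a linear combination of them, and a matrix of sampled solutions has rank at most 2. A sketch (the particular third solution and sample grid are illustrative):

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 50)
# three solutions of y'' + y = 0; the third is cos(x) + 2*sin(x)
sols = np.column_stack([np.cos(x), np.sin(x), np.cos(x) + 2 * np.sin(x)])
# three solutions, but rank only 2: no independent list of length 3 exists
assert np.linalg.matrix_rank(sols) == 2
```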
These examples illustrate the concept of infinite-dimensionality in some common vector spaces. The polynomial space $\mathbb{R}[x]$, the function space $C([0,1], \mathbb{R})$, and the sequence space $\ell^2(\mathbb{N})$ are all infinite-dimensional, and in each case, we can exhibit an infinite sequence with the required property. On the other hand, the solution space to a second-order linear differential equation is finite-dimensional, and hence no such sequence exists.