1.1 Definition: complex numbers, $\mathbb{C}$ \begin{definition} A \textbf{complex number} is an ordered pair $(a,b)$, where $a,b \in \mathbb{R}$, but we will write this as $a+bi$. The set of all complex numbers is denoted by $\mathbb{C}$: \[ \mathbb{C} = \{a+bi : a,b \in \mathbb{R}\}. \] Addition and multiplication on $\mathbb{C}$ are defined by \begin{align*} (a+bi) + (c+di) &= (a+c) + (b+d)i, \\ (a+bi)(c+di) &= (ac-bd) + (ad+bc)i; \end{align*} here $a,b,c,d \in \mathbb{R}$. \end{definition} 1.5 Definition: $-\alpha$, subtraction, $1/\alpha$, division \begin{definition} Suppose $\alpha, \beta \in \mathbb{C}$. \begin{itemize} \item Let $-\alpha$ denote the \textbf{additive inverse} of $\alpha$. Thus $-\alpha$ is the unique complex number such that $\alpha + (-\alpha) = 0$. \item \textbf{Subtraction} on $\mathbb{C}$ is defined by $\beta - \alpha = \beta + (-\alpha)$. \item For $\alpha \neq 0$, let $1/\alpha$ and $\alpha^{-1}$ denote the \textbf{multiplicative inverse} of $\alpha$. Thus $1/\alpha$ is the unique complex number such that $\alpha(1/\alpha) = 1$. \item For $\alpha \neq 0$, \textbf{division} by $\alpha$ is defined by $\beta/\alpha = \beta(1/\alpha)$. \end{itemize} \end{definition} 1.6 Notation: $\mathbb{F}$ \begin{notation} Throughout this book, $\mathbb{F}$ stands for either $\mathbb{R}$ or $\mathbb{C}$. \end{notation} 1.8 Definition: list, length \begin{definition} \begin{itemize} \item Suppose $n$ is a nonnegative integer. A \textbf{list of length $n$} is an ordered collection of $n$ elements (which might be numbers, other lists, or more abstract objects). \item Two lists are equal if and only if they have the same length and the same elements in the same order. \end{itemize} \end{definition} 1.10 Notation: $n$ \begin{notation} Fix a positive integer $n$ for the rest of this chapter. 
\end{notation} 1.11 Definition: $\mathbb{F}^n$, coordinate \begin{definition} $\mathbb{F}^n$ is the set of all lists of length $n$ of elements of $\mathbb{F}$: \[ \mathbb{F}^n = \{(x_1, \ldots, x_n) : x_k \in \mathbb{F} \text{ for } k=1,\ldots,n\}. \] For $(x_1, \ldots, x_n) \in \mathbb{F}^n$ and $k \in \{1,\ldots,n\}$, we say that $x_k$ is the $k$th \textbf{coordinate} of $(x_1, \ldots, x_n)$. \end{definition} 1.13 Definition: addition in $\mathbb{F}^n$ \begin{definition} Addition in $\mathbb{F}^n$ is defined by adding corresponding coordinates: \[ (x_1, \ldots, x_n) + (y_1, \ldots, y_n) = (x_1 + y_1, \ldots, x_n + y_n). \] \end{definition} 1.15 Notation: $\mathbf{0}$ \begin{notation} Let $\mathbf{0}$ denote the list of length $n$ whose coordinates are all $0$: \[ \mathbf{0} = (0, \ldots, 0). \] \end{notation} 1.17 Definition: additive inverse in $\mathbb{F}^n$, $-\mathbf{x}$ \begin{definition} For $\mathbf{x} \in \mathbb{F}^n$, the \textbf{additive inverse} of $\mathbf{x}$, denoted by $-\mathbf{x}$, is the vector $-\mathbf{x} \in \mathbb{F}^n$ such that $\mathbf{x} + (-\mathbf{x}) = \mathbf{0}$. Thus if $\mathbf{x} = (x_1, \ldots, x_n)$, then $-\mathbf{x} = (-x_1, \ldots, -x_n)$. \end{definition} 1.18 Definition: scalar multiplication in $\mathbb{F}^n$ \begin{definition} The product of a number $\lambda$ and a vector in $\mathbb{F}^n$ is computed by multiplying each coordinate of the vector by $\lambda$: \[ \lambda(x_1, \ldots, x_n) = (\lambda x_1, \ldots, \lambda x_n); \] here $\lambda \in \mathbb{F}$ and $(x_1, \ldots, x_n) \in \mathbb{F}^n$. \end{definition} 1.19 Definition: addition, scalar multiplication (on an abstract set $V$) \begin{definition} \begin{itemize} \item An \textbf{addition} on a set $V$ is a function that assigns an element $u+v \in V$ to each pair of elements $u,v \in V$. \item A \textbf{scalar multiplication} on a set $V$ is a function that assigns an element $\lambda v \in V$ to each $\lambda \in \mathbb{F}$ and each $v \in V$. 
\end{itemize} \end{definition} 1.20 Definition: vector space \begin{definition} A \textbf{vector space} is a set $V$ along with an addition on $V$ and a scalar multiplication on $V$ such that the following properties hold. \begin{description} \item[commutativity] $u+v = v+u$ for all $u,v \in V$. \item[associativity] $(u+v)+w = u+(v+w)$ and $(ab)v = a(bv)$ for all $u,v,w \in V$ and for all $a,b \in \mathbb{F}$. \item[additive identity] There exists an element $\mathbf{0} \in V$ such that $v+\mathbf{0}=v$ for all $v \in V$. \item[additive inverse] For every $v \in V$, there exists $w \in V$ such that $v+w = \mathbf{0}$. \item[multiplicative identity] $1v = v$ for all $v \in V$. \item[distributive properties] $a(u+v) = au+av$ and $(a+b)v = av+bv$ for all $a,b \in \mathbb{F}$ and all $u,v \in V$. \end{description} \end{definition} 1.21 Definition: vector, point \begin{definition} Elements of a vector space are called \textbf{vectors} or \textbf{points}. \end{definition} 1.22 Definition: real vector space, complex vector space \begin{definition} \begin{itemize} \item A \textbf{real vector space} is a vector space over $\mathbb{R}$. \item A \textbf{complex vector space} is a vector space over $\mathbb{C}$. \end{itemize} \end{definition} 1.24 Notation: $\mathbb{F}^S$ \begin{notation} \begin{itemize} \item If $S$ is a set, then $\mathbb{F}^S$ denotes the set of functions from $S$ to $\mathbb{F}$. \item For $f,g \in \mathbb{F}^S$, the sum $f+g \in \mathbb{F}^S$ is the function defined by $(f+g)(x) = f(x) + g(x)$ for all $x \in S$. \item For $\lambda \in \mathbb{F}$ and $f \in \mathbb{F}^S$, the product $\lambda f \in \mathbb{F}^S$ is the function defined by $(\lambda f)(x) = \lambda f(x)$ for all $x \in S$. \end{itemize} \end{notation} 1.28 Notation: $-\mathbf{v}$, $\mathbf{w}-\mathbf{v}$ (for elements $\mathbf{v},\mathbf{w}$ of a vector space) \begin{notation} Let $\mathbf{v},\mathbf{w} \in V$. 
Then \begin{itemize} \item $-\mathbf{v}$ denotes the additive inverse of $\mathbf{v}$; \item $\mathbf{w}-\mathbf{v}$ is defined to be $\mathbf{w}+(-\mathbf{v})$. \end{itemize} \end{notation} 1.29 Notation: $V$ \begin{notation} For the rest of this book, $V$ denotes a vector space over $\mathbb{F}$. \end{notation} 1.33 Definition: subspace \begin{definition} A subset $U$ of $V$ is called a \textbf{subspace} of $V$ if $U$ is also a vector space with the same additive identity, addition, and scalar multiplication as on $V$. \end{definition} 1.36 Definition: sum of subspaces \begin{definition} Suppose $V_1,\ldots,V_m$ are subspaces of $V$. The \textbf{sum} of $V_1,\ldots,V_m$, denoted by $V_1 + \cdots + V_m$, is the set of all possible sums of elements of $V_1,\ldots,V_m$. More precisely, \[ V_1 + \cdots + V_m = \{v_1 + \cdots + v_m : v_1 \in V_1, \ldots, v_m \in V_m\}. \] \end{definition} 1.41 Definition: direct sum, $\oplus$ \begin{definition} Suppose $V_1,\ldots,V_m$ are subspaces of $V$. \begin{itemize} \item The sum $V_1 + \cdots + V_m$ is called a \textbf{direct sum} if each element of $V_1 + \cdots + V_m$ can be written in only one way as a sum $v_1 + \cdots + v_m$, where each $v_k \in V_k$. \item If $V_1 + \cdots + V_m$ is a direct sum, then $V_1 \oplus \cdots \oplus V_m$ denotes $V_1 + \cdots + V_m$, with the $\oplus$ notation serving as an indication that this is a direct sum. \end{itemize} \end{definition} 1.3 Theorem: properties of complex arithmetic \begin{theorem} For all $\alpha,\beta,\lambda \in \mathbb{C}$, the following properties hold: \begin{description} \item[commutativity] $\alpha+\beta = \beta+\alpha$ and $\alpha\beta = \beta\alpha$. \item[associativity] $(\alpha+\beta)+\lambda = \alpha+(\beta+\lambda)$ and $(\alpha\beta)\lambda = \alpha(\beta\lambda)$. \item[identities] $\lambda + 0 = \lambda$ and $\lambda 1 = \lambda$. 
\item[additive inverse] For every $\alpha \in \mathbb{C}$, there exists a unique $\beta \in \mathbb{C}$ such that $\alpha+\beta = 0$. \item[multiplicative inverse] For every $\alpha \in \mathbb{C}$ with $\alpha \neq 0$, there exists a unique $\beta \in \mathbb{C}$ such that $\alpha\beta = 1$. \item[distributive property] $\lambda(\alpha+\beta) = \lambda\alpha + \lambda\beta$. \end{description} \end{theorem} 1.14 Theorem: commutativity of addition in $\mathbb{F}^n$ \begin{theorem} If $\mathbf{x},\mathbf{y} \in \mathbb{F}^n$, then $\mathbf{x}+\mathbf{y} = \mathbf{y}+\mathbf{x}$. \end{theorem} \begin{proof} Suppose $\mathbf{x} = (x_1,\ldots,x_n) \in \mathbb{F}^n$ and $\mathbf{y} = (y_1,\ldots,y_n) \in \mathbb{F}^n$. Then \begin{align*} \mathbf{x}+\mathbf{y} &= (x_1,\ldots,x_n)+(y_1,\ldots,y_n) \\ &= (x_1+y_1,\ldots,x_n+y_n) \\ &= (y_1+x_1,\ldots,y_n+x_n) \\ &= (y_1,\ldots,y_n)+(x_1,\ldots,x_n) \\ &= \mathbf{y}+\mathbf{x}, \end{align*} where the second and fourth equalities above hold because of the definition of addition in $\mathbb{F}^n$ and the third equality holds because of the usual commutativity of addition in $\mathbb{F}$. \end{proof} 1.26 Theorem: unique additive identity \begin{theorem} A vector space has a unique additive identity. \end{theorem} \begin{proof} Suppose $\mathbf{0}$ and $\mathbf{0}'$ are both additive identities for some vector space $V$. Then \[ \mathbf{0}' = \mathbf{0}'+\mathbf{0} = \mathbf{0}+\mathbf{0}' = \mathbf{0}, \] where the first equality holds because $\mathbf{0}$ is an additive identity, the second equality comes from commutativity, and the third equality holds because $\mathbf{0}'$ is an additive identity. Thus $\mathbf{0}' = \mathbf{0}$, proving that $V$ has only one additive identity. \end{proof} 1.27 Theorem: unique additive inverse \begin{theorem} Every element in a vector space has a unique additive inverse. \end{theorem} \begin{proof} Suppose $V$ is a vector space. Let $\mathbf{v} \in V$. 
Suppose $\mathbf{w}$ and $\mathbf{w}'$ are additive inverses of $\mathbf{v}$. Then \begin{align*} \mathbf{w} &= \mathbf{w} + \mathbf{0} \\ &= \mathbf{w} + (\mathbf{v} + \mathbf{w}') \\ &= (\mathbf{w} + \mathbf{v}) + \mathbf{w}' \\ &= \mathbf{0} + \mathbf{w}' \\ &= \mathbf{w}', \end{align*} where the first and last equalities hold because $\mathbf{0}$ is the additive identity, the second equality holds because $\mathbf{w}'$ is an additive inverse of $\mathbf{v}$, the third equality follows from associativity, and the fourth holds because $\mathbf{w}$ is an additive inverse of $\mathbf{v}$. Thus $\mathbf{w} = \mathbf{w}'$, as desired. \end{proof} 1.30 Theorem: the number $0$ times a vector \begin{theorem} $0\mathbf{v} = \mathbf{0}$ for every $\mathbf{v} \in V$. \end{theorem} \begin{proof} For $\mathbf{v} \in V$, we have \[ 0\mathbf{v} = (0+0)\mathbf{v} = 0\mathbf{v} + 0\mathbf{v}. \] Adding the additive inverse of $0\mathbf{v}$ to both sides of the equation above gives $\mathbf{0} = 0\mathbf{v}$, as desired. \end{proof} 1.31 Theorem: a number times the vector $\mathbf{0}$ \begin{theorem} $a\mathbf{0} = \mathbf{0}$ for every $a \in \mathbb{F}$. \end{theorem} \begin{proof} For $a \in \mathbb{F}$, we have \[ a\mathbf{0} = a(\mathbf{0}+\mathbf{0}) = a\mathbf{0} + a\mathbf{0}. \] Adding the additive inverse of $a\mathbf{0}$ to both sides of the equation above gives $\mathbf{0} = a\mathbf{0}$, as desired. \end{proof} 1.32 Theorem: the number $-1$ times a vector \begin{theorem} $(-1)\mathbf{v} = -\mathbf{v}$ for every $\mathbf{v} \in V$. \end{theorem} \begin{proof} For $\mathbf{v} \in V$, we have \[ \mathbf{v} + (-1)\mathbf{v} = 1\mathbf{v} + (-1)\mathbf{v} = (1+(-1))\mathbf{v} = 0\mathbf{v} = \mathbf{0}. \] This equation says that $(-1)\mathbf{v}$, when added to $\mathbf{v}$, gives $\mathbf{0}$. Thus $(-1)\mathbf{v}$ is the additive inverse of $\mathbf{v}$, as desired. \end{proof} 1.34 Theorem: conditions for a subspace \begin{theorem} A subset $U$ of $V$ is a subspace of $V$ if and only if $U$ satisfies the following three conditions. \begin{description} \item[additive identity] $\mathbf{0} \in U$. \item[closed under addition] $\mathbf{u},\mathbf{w} \in U$ implies $\mathbf{u}+\mathbf{w} \in U$. 
\item[closed under scalar multiplication] $a \in \mathbb{F}$ and $\mathbf{u} \in U$ implies $a\mathbf{u} \in U$. \end{description} \end{theorem} \begin{proof} If $U$ is a subspace of $V$, then $U$ satisfies the three conditions above by the definition of vector space. Conversely, suppose $U$ satisfies the three conditions above. The first condition ensures that the additive identity of $V$ is in $U$. The second condition ensures that addition makes sense on $U$. The third condition ensures that scalar multiplication makes sense on $U$. If $\mathbf{u} \in U$, then $-\mathbf{u}$ (which equals $(-1)\mathbf{u}$ by 1.32) is also in $U$ by the third condition above. Hence every element of $U$ has an additive inverse in $U$. The other parts of the definition of a vector space, such as associativity and commutativity, are automatically satisfied for $U$ because they hold on the larger space $V$. Thus $U$ is a vector space and hence is a subspace of $V$. \end{proof} 1.40 Theorem: sum of subspaces is the smallest containing subspace \begin{theorem} Suppose $V_1,\ldots,V_m$ are subspaces of $V$. Then $V_1 + \cdots + V_m$ is the smallest subspace of $V$ containing $V_1,\ldots,V_m$. \end{theorem} \begin{proof} The reader can verify that $V_1 + \cdots + V_m$ contains the additive identity $\mathbf{0}$ and is closed under addition and scalar multiplication. Thus 1.34 implies that $V_1 + \cdots + V_m$ is a subspace of $V$. The subspaces $V_1,\ldots,V_m$ are all contained in $V_1 + \cdots + V_m$ (to see this, consider sums $v_1 + \cdots + v_m$ where all except one of the $v_k$'s are $\mathbf{0}$). Conversely, every subspace of $V$ containing $V_1,\ldots,V_m$ contains $V_1 + \cdots + V_m$ (because subspaces must contain all finite sums of their elements). Thus $V_1 + \cdots + V_m$ is the smallest subspace of $V$ containing $V_1,\ldots,V_m$. \end{proof} 1.45 Theorem: condition for a direct sum \begin{theorem} Suppose $V_1,\ldots,V_m$ are subspaces of $V$. 
Then $V_1 + \cdots + V_m$ is a direct sum if and only if the only way to write $\mathbf{0}$ as a sum $v_1 + \cdots + v_m$, where each $v_k \in V_k$, is by taking each $v_k$ equal to $\mathbf{0}$. \end{theorem} \begin{proof} First suppose $V_1 + \cdots + V_m$ is a direct sum. Then the definition of direct sum implies that the only way to write $\mathbf{0}$ as a sum $v_1 + \cdots + v_m$, where each $v_k \in V_k$, is by taking each $v_k$ equal to $\mathbf{0}$. Now suppose that the only way to write $\mathbf{0}$ as a sum $v_1 + \cdots + v_m$, where each $v_k \in V_k$, is by taking each $v_k$ equal to $\mathbf{0}$. To show that $V_1 + \cdots + V_m$ is a direct sum, let $\mathbf{v} \in V_1 + \cdots + V_m$. We can write $\mathbf{v} = v_1 + \cdots + v_m$ for some $v_1 \in V_1,\ldots,v_m \in V_m$. To show that this representation is unique, suppose we also have $\mathbf{v} = u_1 + \cdots + u_m$, where $u_1 \in V_1,\ldots,u_m \in V_m$. Subtracting these two equations, we have \[ \mathbf{0} = (v_1 - u_1) + \cdots + (v_m - u_m). \] Because $v_1 - u_1 \in V_1,\ldots,v_m - u_m \in V_m$, the equation above implies that each $v_k - u_k$ equals $\mathbf{0}$. Thus $v_1 = u_1,\ldots,v_m=u_m$, as desired. \end{proof} 1.46 Theorem: direct sum of two subspaces \begin{theorem} Suppose $U$ and $W$ are subspaces of $V$. Then $U+W$ is a direct sum if and only if $U \cap W = \{\mathbf{0}\}$. \end{theorem} \begin{proof} First suppose that $U+W$ is a direct sum. If $\mathbf{v} \in U \cap W$, then \[ \mathbf{0} = \mathbf{v} + (-\mathbf{v}), \] where $\mathbf{v} \in U$ and $-\mathbf{v} \in W$. By the unique representation of $\mathbf{0}$ as the sum of a vector in $U$ and a vector in $W$, we have $\mathbf{v} = \mathbf{0}$. Thus $U \cap W = \{\mathbf{0}\}$, completing the proof in one direction. To prove the other direction, now suppose $U \cap W = \{\mathbf{0}\}$. To prove that $U+W$ is a direct sum, suppose $\mathbf{u} \in U$, $\mathbf{w} \in W$, and \[ \mathbf{0} = \mathbf{u} + \mathbf{w}. 
\] To complete the proof, we only need to show that $\mathbf{u} = \mathbf{w} = \mathbf{0}$ (by 1.45). The equation above implies that $\mathbf{u} = -\mathbf{w} \in W$. Thus $\mathbf{u} \in U \cap W$. Hence $\mathbf{u} = \mathbf{0}$, which by the equation above implies that $\mathbf{w} = \mathbf{0}$, completing the proof. \end{proof} 1.2 Example: complex arithmetic The product $(2+3i)(4+5i)$ can be evaluated by applying the distributive and commutative properties from 1.3: \begin{align*} (2+3i)(4+5i) &= 2 \cdot (4+5i) + (3i)(4+5i) \\ &= 2 \cdot 4 + 2 \cdot 5i + 3i \cdot 4 + (3i)(5i) \\ &= 8 + 10i + 12i - 15 \\ &= -7 + 22i. \end{align*} 1.4 Example: commutativity of complex multiplication To show that $\alpha\beta = \beta\alpha$ for all $\alpha,\beta \in \mathbb{C}$, suppose $\alpha = a+bi$ and $\beta = c+di$, where $a,b,c,d \in \mathbb{R}$. Then the definition of multiplication of complex numbers shows that \[ \alpha\beta = (a+bi)(c+di) = (ac-bd) + (ad+bc)i \] and \[ \beta\alpha = (c+di)(a+bi) = (ca-db) + (cb+da)i. \] The equations above and the commutativity of multiplication and addition of real numbers show that $\alpha\beta = \beta\alpha$. 1.7 Example: $\mathbb{R}^2$ and $\mathbb{R}^3$ \begin{itemize} \item The set $\mathbb{R}^2$, which you can think of as a plane, is the set of all ordered pairs of real numbers: \[ \mathbb{R}^2 = \{(x,y) : x,y \in \mathbb{R}\}. \] \item The set $\mathbb{R}^3$, which you can think of as ordinary space, is the set of all ordered triples of real numbers: \[ \mathbb{R}^3 = \{(x,y,z) : x,y,z \in \mathbb{R}\}. \] \end{itemize} 1.12 Example: $\mathbb{C}^4$ $\mathbb{C}^4$ is the set of all lists of four complex numbers: \[ \mathbb{C}^4 = \{(z_1,z_2,z_3,z_4) : z_1,z_2,z_3,z_4 \in \mathbb{C}\}. \] 1.16 Example: context determines which $\mathbf{0}$ is intended Consider the statement that $\mathbf{0}$ is an additive identity for $\mathbb{F}^n$: \[ \mathbf{x} + \mathbf{0} = \mathbf{x} \text{ for all } \mathbf{x} \in \mathbb{F}^n. 
\] Here the $\mathbf{0}$ above is the list defined in 1.15, not the number $0$, because we have not defined the sum of an element of $\mathbb{F}^n$ (namely, $\mathbf{x}$) and the number $0$. 1.23 Example: $\mathbb{F}^\infty$ $\mathbb{F}^\infty$ is defined to be the set of all sequences of elements of $\mathbb{F}$: \[ \mathbb{F}^\infty = \{(x_1,x_2,\ldots) : x_k \in \mathbb{F} \text{ for } k=1,2,\ldots\}. \] Addition and scalar multiplication on $\mathbb{F}^\infty$ are defined as expected: \begin{align*} (x_1,x_2,\ldots) + (y_1,y_2,\ldots) &= (x_1+y_1,x_2+y_2,\ldots), \\ \lambda(x_1,x_2,\ldots) &= (\lambda x_1,\lambda x_2,\ldots). \end{align*} With these definitions, $\mathbb{F}^\infty$ becomes a vector space over $\mathbb{F}$, as you should verify. The additive identity in this vector space is the sequence of all $0$s. 1.25 Example: $\mathbb{F}^S$ is a vector space \begin{itemize} \item If $S$ is a nonempty set, then $\mathbb{F}^S$ (with the operations of addition and scalar multiplication as defined above) is a vector space over $\mathbb{F}$. \item The additive identity of $\mathbb{F}^S$ is the function $\mathbf{0} \colon S \to \mathbb{F}$ defined by $\mathbf{0}(x) = 0$ for all $x \in S$. \item For $f \in \mathbb{F}^S$, the additive inverse of $f$ is the function $-f \colon S \to \mathbb{F}$ defined by $(-f)(x) = -f(x)$ for all $x \in S$. \end{itemize} 1.35 Example: subspaces \begin{itemize} \item[(a)] If $b \in \mathbb{F}$, then $\{(x_1,x_2,x_3,x_4) \in \mathbb{F}^4 : x_3 = 5x_4 + b\}$ is a subspace of $\mathbb{F}^4$ if and only if $b=0$. \item[(b)] The set of continuous real-valued functions on the interval $[0,1]$ is a subspace of $\mathbb{R}^{[0,1]}$. \item[(c)] The set of differentiable real-valued functions on $\mathbb{R}$ is a subspace of $\mathbb{R}^\mathbb{R}$. \item[(d)] The set of differentiable real-valued functions $f$ on the interval $(0,3)$ such that $f'(2) = b$ is a subspace of $\mathbb{R}^{(0,3)}$ if and only if $b=0$. 
\item[(e)] The set of all sequences of complex numbers with limit $0$ is a subspace of $\mathbb{C}^\infty$. \end{itemize} 1.37 Example: a sum of subspaces of $\mathbb{F}^3$ Suppose $U$ is the set of all elements of $\mathbb{F}^3$ whose second and third coordinates equal $0$, and $W$ is the set of all elements of $\mathbb{F}^3$ whose first and third coordinates equal $0$: \begin{align*} U &= \{(x,0,0) \in \mathbb{F}^3 : x \in \mathbb{F}\}, \\ W &= \{(0,y,0) \in \mathbb{F}^3 : y \in \mathbb{F}\}. \end{align*} Then \[ U+W = \{(x,y,0) \in \mathbb{F}^3 : x,y \in \mathbb{F}\}, \] as you should verify. 1.38 Example: a sum of subspaces of $\mathbb{F}^4$ Suppose \begin{align*} U &= \{(x,x,y,y) \in \mathbb{F}^4 : x,y \in \mathbb{F}\}, \\ W &= \{(x,x,x,y) \in \mathbb{F}^4 : x,y \in \mathbb{F}\}. \end{align*} Using words rather than symbols, we could say that $U$ is the set of elements of $\mathbb{F}^4$ whose first two coordinates equal each other and whose third and fourth coordinates equal each other. Similarly, $W$ is the set of elements of $\mathbb{F}^4$ whose first three coordinates equal each other. To find a description of $U+W$, consider a typical element $(a,a,b,b)$ of $U$ and a typical element $(c,c,c,d)$ of $W$, where $a,b,c,d \in \mathbb{F}$. We have \[ (a,a,b,b) + (c,c,c,d) = (a+c,a+c,b+c,b+d), \] which shows that every element of $U+W$ has its first two coordinates equal to each other. Thus \[ U+W \subseteq \{(x,x,y,z) \in \mathbb{F}^4 : x,y,z \in \mathbb{F}\}. \] To prove the inclusion in the other direction, suppose $x,y,z \in \mathbb{F}$. Then \[ (x,x,y,z) = (x,x,y,y) + (0,0,0,z-y), \] where the first vector on the right is in $U$ and the second vector on the right is in $W$. Thus $(x,x,y,z) \in U+W$, showing that the inclusion above also holds in the opposite direction. Hence \[ U+W = \{(x,x,y,z) \in \mathbb{F}^4 : x,y,z \in \mathbb{F}\}, \] which shows that $U+W$ is the set of elements of $\mathbb{F}^4$ whose first two coordinates equal each other. 
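The computation in Example 1.38 can be spot-checked numerically. The following Python sketch takes $\mathbb{F} = \mathbb{R}$, models vectors in $\mathbb{F}^4$ as tuples, and uses coordinatewise operations as in Definitions 1.13 and 1.18; the helper names `vadd` and `smul` are ours, not from the text.

```python
# Numeric sanity check of Example 1.38, with F = R and vectors as tuples.

def vadd(x, y):
    """Coordinatewise addition in F^n (Definition 1.13)."""
    return tuple(a + b for a, b in zip(x, y))

def smul(lam, x):
    """Scalar multiplication in F^n (Definition 1.18)."""
    return tuple(lam * a for a in x)

# Typical elements of U and W from Example 1.38.
a, b, c, d = 1, 2, 3, 4
u = (a, a, b, b)   # in U: first two coordinates agree, last two agree
w = (c, c, c, d)   # in W: first three coordinates agree
s = vadd(u, w)
assert s[0] == s[1]  # every element of U + W has equal first two coordinates

# U is closed under scalar multiplication (condition 1.34).
t = smul(2, u)
assert t[0] == t[1] and t[2] == t[3]

# Conversely, any (x, x, y, z) splits as (x, x, y, y) + (0, 0, 0, z - y),
# with the first summand in U and the second in W.
x, y, z = 5, 6, 7
assert vadd((x, x, y, y), (0, 0, 0, z - y)) == (x, x, y, z)
```

Such a check can only illustrate the set equality, of course; the inclusion proof in Example 1.38 is what establishes it for all of $\mathbb{F}^4$.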
1.42 Example: a direct sum of two subspaces Suppose $U$ is the subspace of $\mathbb{F}^3$ of those vectors whose last coordinate equals $0$, and $W$ is the subspace of $\mathbb{F}^3$ of those vectors whose first two coordinates equal $0$: \begin{align*} U &= \{ (x,y,0) \in \mathbb{F}^3 : x,y \in \mathbb{F} \}, \\ W &= \{ (0,0,z) \in \mathbb{F}^3 : z \in \mathbb{F} \}. \end{align*} Then $\mathbb{F}^3 = U \oplus W$, as you should verify. 1.43 Example: a direct sum of multiple subspaces Suppose $V_k$ is the subspace of $\mathbb{F}^n$ of those vectors whose coordinates are all $0$, except possibly in the $k$th slot; for example, \[ V_2 = \{(0,x,0,\ldots,0) \in \mathbb{F}^n : x \in \mathbb{F}\}. \] Then $\mathbb{F}^n = V_1 \oplus \cdots \oplus V_n$, as you should verify. 1.44 Example: a sum that is not a direct sum Suppose \begin{align*} V_1 &= \{(x,y,0) \in \mathbb{F}^3 : x,y \in \mathbb{F}\}, \\ V_2 &= \{(0,0,z) \in \mathbb{F}^3 : z \in \mathbb{F}\}, \\ V_3 &= \{(0,y,y) \in \mathbb{F}^3 : y \in \mathbb{F}\}. \end{align*} Then $\mathbb{F}^3 = V_1 + V_2 + V_3$ because every vector $(x,y,z) \in \mathbb{F}^3$ can be written as \[ (x,y,z) = (x,y,0) + (0,0,z) + (0,0,0), \] where the first vector on the right side is in $V_1$, the second vector is in $V_2$, and the third vector is in $V_3$. However, $\mathbb{F}^3$ does not equal the direct sum of $V_1,V_2,V_3$, because the vector $\mathbf{0}$ can be written in more than one way as a sum $v_1+v_2+v_3$, with each $v_k \in V_k$. Specifically, we have \[ \mathbf{0} = (0,1,0) + (0,0,1) + (0,-1,-1) \] and, of course, \[ \mathbf{0} = (0,0,0) + (0,0,0) + (0,0,0), \] where the first vector on the right side of each equation above is in $V_1$, the second vector is in $V_2$, and the third vector is in $V_3$. Thus the sum $V_1+V_2+V_3$ is not a direct sum.
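The failure of directness in Example 1.44 can likewise be verified by direct computation. The Python sketch below (again with $\mathbb{F} = \mathbb{R}$ and vectors as tuples; the helper name `vsum` is ours) confirms that the displayed nonzero decomposition really sums to $\mathbf{0}$, so by 1.45 the sum $V_1 + V_2 + V_3$ is not direct.

```python
# Checking Example 1.44: 0 has two different decompositions across
# V1, V2, V3, so V1 + V2 + V3 is not a direct sum (by 1.45).

def vsum(*vs):
    """Coordinatewise sum of several vectors in F^3."""
    return tuple(map(sum, zip(*vs)))

v1 = (0, 1, 0)    # in V1: third coordinate is 0
v2 = (0, 0, 1)    # in V2: first two coordinates are 0
v3 = (0, -1, -1)  # in V3: second and third coordinates are equal (y = -1)

assert vsum(v1, v2, v3) == (0, 0, 0)          # a nonzero decomposition of 0
assert (v1, v2, v3) != ((0, 0, 0),) * 3       # distinct from the trivial one
```

By contrast, for the pair $U, W$ of Example 1.42 one has $U \cap W = \{\mathbf{0}\}$, so 1.46 gives the direct sum $\mathbb{F}^3 = U \oplus W$; no such nonzero decomposition of $\mathbf{0}$ exists there.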