Lecture 19
We will cover Section 4.2 (part II) in this lecture.
Column Space
Definition: The column space of an \(m \times n\) matrix \(A\), denoted \(\operatorname{Col} A\), is the set of all linear combinations of the columns of \(A\). In other words, if \(A = [\mathbf{a}_1 \; \cdots \; \mathbf{a}_n]\) where each \(\mathbf{a}_i\) is a column of \(A\), then \[\operatorname{Col} A = \operatorname{Span}\{\mathbf{a}_1, \ldots, \mathbf{a}_n\}.\]
Theorem: The column space of an \(m \times n\) matrix \(A\) is a subspace of \(\mathbb{R}^m\).
The Matrix-Vector Product and Column Space
There is an important connection between matrix-vector multiplication and the column space. Let \(A\) be an \(m \times n\) matrix and \(\mathbf{x} \in \mathbb{R}^n\). Writing out the product \(A\mathbf{x}\) entry by entry yields
\[A\mathbf{x} = \begin{pmatrix} \sum_{i=1}^{n} a_{1i} x_i \\ \sum_{i=1}^{n} a_{2i} x_i \\ \vdots \\ \sum_{i=1}^{n} a_{mi} x_i \end{pmatrix} = \sum_{i=1}^{n} x_i \begin{pmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{mi} \end{pmatrix}.\]This shows that every product \(A\mathbf{x}\) is a linear combination of the columns of \(A\), with weights given by the entries of \(\mathbf{x}\). Consequently,
\[\operatorname{Col} A = \{\mathbf{y} \mid \mathbf{y} = A\mathbf{x} \text{ for some } \mathbf{x} \in \mathbb{R}^n\}.\]Important: The column space of an \(m \times n\) matrix \(A\) is all of \(\mathbb{R}^m\) if and only if the equation \(A\mathbf{x} = \mathbf{b}\) has a solution for every \(\mathbf{b} \in \mathbb{R}^m\).
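This column-wise view of the product can be spot-checked directly. The sketch below uses a small illustrative matrix and vector (assumptions chosen for the demonstration, not taken from the examples that follow) and plain Python lists:

```python
# Check that A x equals the linear combination of the columns of A
# weighted by the entries of x. A and x below are illustrative choices.
A = [[2, 4, 6],
     [1, 3, 2]]          # a 2x3 matrix, stored as a list of rows
x = [1, -1, 2]           # a vector in R^3

# Entry-by-entry product: (A x)_j = sum_i a_{ji} x_i
Ax = [sum(A[j][i] * x[i] for i in range(3)) for j in range(2)]

# The same result as a weighted sum of the columns a_1, a_2, a_3
cols = [[A[j][i] for j in range(2)] for i in range(3)]
combo = [sum(x[i] * cols[i][j] for i in range(3)) for j in range(2)]

print(Ax, combo)   # the two agree, so A x lies in Col A
```

Both computations produce the same vector, which is exactly the identity displayed above read in two directions.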
Example 1
Find a matrix \(A\) such that the following set equals \(\operatorname{Col} A\):
\[\left\{ \begin{pmatrix} 8a + 22b \\ -3b \\ a + 4b \end{pmatrix} \;\middle|\; a, b \in \mathbb{R} \right\}.\]Solution: We begin by decomposing the general vector in the set as a linear combination of two fixed vectors, separating the contributions from \(a\) and \(b\):
\[\begin{pmatrix} 8a + 22b \\ -3b \\ a + 4b \end{pmatrix} = a\begin{pmatrix} 8 \\ 0 \\ 1 \end{pmatrix} + b\begin{pmatrix} 22 \\ -3 \\ 4 \end{pmatrix}.\]Therefore, the set equals
\[\left\{ a\begin{pmatrix} 8 \\ 0 \\ 1 \end{pmatrix} + b\begin{pmatrix} 22 \\ -3 \\ 4 \end{pmatrix} \;\middle|\; a, b \in \mathbb{R} \right\} = \operatorname{Span}\left\{\begin{pmatrix}8\\0\\1\end{pmatrix},\begin{pmatrix}22\\-3\\4\end{pmatrix}\right\}.\]Since \(\operatorname{Col} A\) is defined as all linear combinations of the columns of \(A\), we use the two spanning vectors as columns. One valid choice is
\[A = \begin{pmatrix} 8 & 22 \\ 0 & -3 \\ 1 & 4 \end{pmatrix}.\]Note that the column order does not affect the span, so \(A = \begin{pmatrix} 22 & 8 \\ -3 & 0 \\ 4 & 1 \end{pmatrix}\) is equally valid.
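As a sanity check, one can sample a few values of \(a\) and \(b\) and confirm that the general vector of the set equals \(A\mathbf{x}\) for \(\mathbf{x} = (a, b)\). A minimal pure-Python sketch (the sample pairs are arbitrary):

```python
# Spot-check of Example 1: for sampled (a, b), the general vector of the
# set coincides with A x, where A is the matrix found above.
A = [[8, 22],
     [0, -3],
     [1, 4]]

def set_vector(a, b):
    # the general element of the given set
    return [8*a + 22*b, -3*b, a + 4*b]

def matvec(M, v):
    # matrix-vector product for list-of-rows matrices
    return [sum(M[j][i] * v[i] for i in range(len(v))) for j in range(len(M))]

for a, b in [(1, 0), (0, 1), (2, -3)]:
    assert set_vector(a, b) == matvec(A, [a, b])
print("each sampled vector of the set is A x with x = (a, b)")
```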
Row Space
Definition: If \(A\) is an \(m \times n\) matrix, each row has \(n\) entries and can therefore be identified with a vector in \(\mathbb{R}^n\), called a row vector.
Definition: The row space of an \(m \times n\) matrix \(A\), denoted \(\operatorname{Row} A\), is the set of all linear combinations of the row vectors of \(A\).
Theorem: The row space of an \(m \times n\) matrix \(A\) is a subspace of \(\mathbb{R}^n\).
Determining the Ambient Space for the Null Space and Column Space
For an \(m\times n\) matrix \(A\), both the null space and column space are subspaces of specific Euclidean spaces. The null space \(\operatorname{Nul} A\) consists of vectors \(\mathbf{x}\) satisfying \(A\mathbf{x} = \mathbf{0}\); since \(\mathbf{x} \in \mathbb{R}^n\), the null space lives in \(\mathbb{R}^n\). The column space \(\operatorname{Col} A\) consists of linear combinations of the columns, which are vectors in \(\mathbb{R}^m\), so it lives in \(\mathbb{R}^m\).
Example 2
Suppose \(A = \begin{pmatrix} 3 & 5 & 0 & 9 \\ 2 & 86 & 1 & 1 \end{pmatrix}\).
(a) Find \(k\) such that \(\operatorname{Nul} A\) is a subspace of \(\mathbb{R}^k\).
(b) Find \(k\) such that \(\operatorname{Col} A\) is a subspace of \(\mathbb{R}^k\).
Solution:
(a) Because \(A\) has 4 columns, the vectors \(\mathbf{x}\) satisfying \(A\mathbf{x} = \mathbf{0}\) belong to \(\mathbb{R}^4\), so \(k = 4\).
(b) Because \(A\) has 2 rows, the columns of \(A\) are vectors in \(\mathbb{R}^2\), so \(k = 2\).
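This bookkeeping can be read off mechanically from the shape of the matrix. A short sketch using the matrix of Example 2:

```python
# For an m x n matrix: Nul A is a subspace of R^n (n = number of columns)
# and Col A is a subspace of R^m (m = number of rows).
A = [[3, 5, 0, 9],
     [2, 86, 1, 1]]      # the matrix from Example 2
m, n = len(A), len(A[0])
print("Nul A is a subspace of R^%d" % n)   # k = 4
print("Col A is a subspace of R^%d" % m)   # k = 2
```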
Testing Membership in the Column Space and Null Space
Example 3
Suppose \(A = \begin{pmatrix} 2 & 4 & 6 \\ 1 & 3 & 2 \end{pmatrix}\).
(a) Is \(\mathbf{v} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}\) in \(\operatorname{Col} A\)? Can \(\mathbf{v}\) be in \(\operatorname{Nul} A\)?
(b) Is \(\mathbf{u} = \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}\) in \(\operatorname{Nul} A\)? Can \(\mathbf{u}\) be in \(\operatorname{Col} A\)?
Solution: (a) To test whether \(\mathbf{v} \in \operatorname{Col} A\), we check whether \(A\mathbf{x} = \mathbf{v}\) is consistent by row-reducing the augmented matrix \([A \mid \mathbf{v}]\):
\begin{align*}\begin{pmatrix} 2 & 4 & 6 & 2 \\ 1 & 3 & 2 & 1 \end{pmatrix} &\xrightarrow{r_1 \to r_1-r_2} \begin{pmatrix} 1 & 1 & 4 & 1 \\ 1 & 3 & 2 & 1 \end{pmatrix}\\ &\xrightarrow{r_2 \to r_2-r_1} \begin{pmatrix} 1 & 1 & 4 & 1 \\ 0 & 2 & -2 & 0 \end{pmatrix}\\ &\xrightarrow{r_2\to\frac{r_2}{2}} \begin{pmatrix} 1 & 1 & 4 & 1 \\ 0 & 1 & -1 & 0 \end{pmatrix}\\ &\xrightarrow{r_1\to r_1-r_2} \begin{pmatrix} 1 & 0 & 5 & 1 \\ 0 & 1 & -1 & 0 \end{pmatrix}.\end{align*}The system is consistent, so \(\mathbf{v} \in \operatorname{Col} A\). However, \(\mathbf{v}\) cannot belong to \(\operatorname{Nul} A\). The null space is a subspace of \(\mathbb{R}^3\) (since \(A\) has 3 columns), and \(\mathbf{v}\) has only 2 entries — a dimension mismatch makes membership impossible.
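From the reduced matrix we can read off a particular solution: the equations are \(x_1 + 5x_3 = 1\) and \(x_2 - x_3 = 0\), so taking the free variable \(x_3 = 0\) gives \(\mathbf{x} = (1, 0, 0)\). A quick pure-Python verification:

```python
# Verify the particular solution read off from the row reduction:
# with x = (1, 0, 0), A x should equal v, confirming v ∈ Col A.
A = [[2, 4, 6],
     [1, 3, 2]]
v = [2, 1]
x = [1, 0, 0]   # particular solution (free variable x3 set to 0)
Ax = [sum(A[j][i] * x[i] for i in range(3)) for j in range(2)]
print(Ax == v)   # True: v = A x, so v is in Col A
```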
(b) To test whether \(\mathbf{u} \in \operatorname{Nul} A\), we compute \(A\mathbf{u}\):
\[A\mathbf{u} = \begin{pmatrix} 2\cdot 1 + 4\cdot (-2) + 6\cdot 1 \\ 1\cdot 1 + 3\cdot (-2) + 2\cdot 1 \end{pmatrix} = \begin{pmatrix} 2 - 8 + 6 \\ 1 - 6 + 2 \end{pmatrix} = \begin{pmatrix} 0 \\ -3 \end{pmatrix} \neq \mathbf{0}.\]Since \(A\mathbf{u} \neq \mathbf{0}\), we have \(\mathbf{u} \notin \operatorname{Nul} A\). Furthermore, \(\mathbf{u}\) cannot belong to \(\operatorname{Col} A\). The column space is a subspace of \(\mathbb{R}^2\) (since \(A\) has 2 rows), and \(\mathbf{u}\) has 3 entries — again a dimension mismatch.
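The same computation in code, for comparison:

```python
# Direct check that u is not in Nul A: compute A u and compare with 0.
A = [[2, 4, 6],
     [1, 3, 2]]
u = [1, -2, 1]
Au = [sum(A[j][i] * u[i] for i in range(3)) for j in range(2)]
print(Au)   # [0, -3], which is not the zero vector, so u ∉ Nul A
```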
Warning: Suppose \(A\) is an \(m\times n\) matrix. Always check dimensional compatibility before testing membership. A vector in \(\mathbb{R}^m\) cannot lie in \(\operatorname{Nul} A\) (a subspace of \(\mathbb{R}^n\)) unless \(m = n\). Similarly, a vector in \(\mathbb{R}^n\) cannot lie in \(\operatorname{Col} A\) (a subspace of \(\mathbb{R}^m\)) unless \(m = n\).
Linear Transformations between Vector Spaces
The notions of null space and column space generalize naturally to the setting of abstract linear transformations between vector spaces.
Definition: A linear transformation \(T\) from a vector space \(V\) into a vector space \(W\) is a rule that assigns to each vector \(\mathbf{x} \in V\) a unique vector \(T(\mathbf{x}) \in W\), satisfying both of the following properties:
- \(T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})\) for all \(\mathbf{u}, \mathbf{v} \in V\), and
- \(T(c\mathbf{u}) = cT(\mathbf{u})\) for all \(\mathbf{u} \in V\) and all scalars \(c\).
Definition: The kernel of a linear transformation \(T : V \to W\) is the set of all vectors in \(V\) that map to the zero vector in \(W\), i.e.
\[\ker T = \{\mathbf{u} \in V \mid T(\mathbf{u}) = \mathbf{0}\}.\]Definition: The image (or range) of a linear transformation \(T : V \to W\) is the set of all vectors in \(W\) that are outputs of \(T\), i.e.
\[\operatorname{im} T = \{T(\mathbf{u}) \mid \mathbf{u} \in V\}.\]Important: When \(T\) is the matrix transformation \(T(\mathbf{x}) = A\mathbf{x}\), these abstract notions reduce to the familiar ones: \(\ker T = \operatorname{Nul} A\) and \(\operatorname{im} T = \operatorname{Col} A\). Furthermore, \(\ker T\) is always a subspace of \(V\), and \(\operatorname{im} T\) is always a subspace of \(W\).
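For the matrix transformation \(T(\mathbf{x}) = A\mathbf{x}\), the subspace property of \(\ker T = \operatorname{Nul} A\) can be observed concretely. The matrix and vectors below are illustrative choices (the matrix is built so its null space is nontrivial):

```python
# Sketch: for T(x) = A x, ker T = Nul A. Here u and w both solve A x = 0,
# and so does u + w, consistent with ker T being a subspace of V = R^3.
A = [[1, 2, -1],
     [2, 4, -2]]          # row 2 is twice row 1, so Nul A is nontrivial
def T(x):
    return [sum(A[j][i] * x[i] for i in range(3)) for j in range(2)]

u = [-2, 1, 0]            # T(u) = 0
w = [1, 0, 1]             # T(w) = 0
s = [u[i] + w[i] for i in range(3)]
print(T(u), T(w), T(s))   # all equal [0, 0]
```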
The Differentiation Operator as a Linear Transformation
Example 4
Let \(V\) be the vector space of all real-valued functions \(f\) defined on an interval \([a, b]\) such that \(f\) is differentiable and \(f'\) is continuous on \([a, b]\). Let \(W\) be the vector space of all continuous functions on \([a, b]\). Define \(T : V \to W\) by \(T(f) = f'\) for all \(f \in V\).
Let's first verify that \(T\) is a linear transformation. For additivity, the sum rule of differentiation gives \[T(f + g) = (f + g)' = f' + g' = T(f) + T(g)\] for all \(f, g \in V\). For homogeneity, the constant multiple rule gives \[T(cf) = (cf)' = cf' = cT(f)\] for all \(f \in V\) and scalars \(c\). Both properties hold, so \(T\) is indeed a linear transformation.
The kernel of \(T\) consists of all differentiable functions \(f\) on \([a, b]\) satisfying \(f' = 0\). A function with zero derivative everywhere on an interval must be constant. Therefore,
\[\ker T = \{f \in V \mid f \text{ is constant on } [a, b]\}.\]By the Fundamental Theorem of Calculus, every continuous function on \([a, b]\) is the derivative of some differentiable function (namely, any of its antiderivatives). Therefore \(\operatorname{im} T = W\); the differentiation operator is surjective onto the continuous functions.
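A finite-dimensional analogue of Example 4 can be checked in code: differentiation on polynomials, stored as coefficient lists \([c_0, c_1, c_2, \ldots]\) where \(c_k\) is the coefficient of \(t^k\), is linear, and the constant polynomials lie in its kernel. This representation is an illustrative choice, not the function spaces \(V\) and \(W\) of the example:

```python
# Differentiation on polynomial coefficient lists: d/dt sum c_k t^k
# has coefficients k * c_k shifted down one degree.
def deriv(p):
    return [k * p[k] for k in range(1, len(p))] or [0]

f = [1, 2, 3]        # 1 + 2t + 3t^2
g = [5, 0, -1]       # 5 - t^2
fg = [a + b for a, b in zip(f, g)]

# Additivity: T(f + g) = T(f) + T(g), matching the sum rule above
lhs = deriv(fg)
rhs = [a + b for a, b in zip(deriv(f), deriv(g))]
print(lhs == rhs)    # True

# Kernel: a constant polynomial differentiates to zero
print(deriv([7]))    # [0]
```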