
Linear operators: part 3


Compositions of operators correspond to ... of matrices

Does this look familiar?

$$(f \circ g)' = f' \circ g'.$$

Wrong!

Is this the Chain Rule?

No, not a composition! In the Chain Rule, the right-hand side is a product:

$(f \circ g)' = f' \cdot g'$

Wrong? Depends on the point of view.

These are wrong:

  • $(f \cdot g)' \neq f' \cdot g',$
  • $(\frac{f}{g})' \neq \frac{f'}{g'}.$

Our interpretation: the derivative is a linear operator.

And, composition of linear operators $=$ product of their matrices.

Then the "new" formula works!

Example: To clarify, suppose $f(x,y) = (x^2-y,xy)$. What is $f'$? It's made of partial derivatives:

$$f' = \left[ \begin{array}{} 2x & -1 \\ y & x \end{array} \right]$$
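Here is a quick numerical check of the "new" Chain Rule, written as a sketch in Python with NumPy (the second map $g$ and the sample point are made up purely for illustration): the Jacobian matrix of $f \circ g$ matches the product of the Jacobian matrices.

```python
import numpy as np

# Sketch: verify (f o g)' = f' . g' as matrices of partial derivatives,
# with f(x,y) = (x^2 - y, xy) from the example and an illustrative g.

def f(p):
    x, y = p
    return np.array([x**2 - y, x * y])

def g(p):
    u, v = p
    return np.array([u + v, u * v])      # an arbitrary smooth map R^2 -> R^2

def jacobian(h, p, eps=1e-6):
    """Numerical Jacobian of h at p (forward differences)."""
    p = np.asarray(p, dtype=float)
    h0 = h(p)
    J = np.zeros((len(h0), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (h(p + dp) - h0) / eps
    return J

p = np.array([0.7, -1.3])
lhs = jacobian(lambda q: f(g(q)), p)      # (f o g)'(p)
rhs = jacobian(f, g(p)) @ jacobian(g, p)  # f'(g(p)) . g'(p), a matrix product
print(np.allclose(lhs, rhs, atol=1e-4))   # True
```

Note that $f'$ is evaluated at $g(p)$, which is exactly what composing the two linear operators requires.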


Theorem. The composition of two linear operators corresponds to the product of their matrices.

Consider:

$$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} {\rm basis \hspace{3pt}}B & & {\rm basis \hspace{3pt}} D & & {\rm basis \hspace{3pt}}E \\ V & \ra{T} & U & \ra{S} & W \\ \da{}\ua{\varkappa_B} & & \ua{}\da{\varkappa_D} & & \ua{}\da{\varkappa_E} \\ {\bf R}^n & \ra{A_T} & {\bf R}^m & \ra{A_S} & {\bf R}^k \end{array} $$


Here $S \circ T$ corresponds under $\varkappa$ to $A_S \cdot A_T$, which is the product of their matrices.

Why? We use the Uniqueness Theorem for Linear Operators:

$$(S \circ T)(v_i) = \varkappa_E^{-1}(A_SA_T)\varkappa_B(v_i),$$

where $v_i \in B$. $\blacksquare$
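To illustrate the theorem numerically, here is a minimal sketch in Python with NumPy (the two matrices are chosen arbitrarily for the example): the matrix built from the values of $S \circ T$ on the standard basis equals the product $A_SA_T$.

```python
import numpy as np

# Sketch: the matrix of the composition S o T (columns (S o T)(e_i))
# equals the product A_S A_T. Matrices chosen for illustration only.

A_T = np.array([[1., 2.],
                [0., 1.]])     # matrix of T (a shear)
A_S = np.array([[0., -1.],
                [1.,  0.]])    # matrix of S (rotation through 90 degrees)

T = lambda v: A_T @ v
S = lambda v: A_S @ v

e = np.eye(2)
M = np.column_stack([S(T(e[:, i])) for i in range(2)])   # matrix of S o T
print(np.allclose(M, A_S @ A_T))                         # True
```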

Conclusion:

  • Facts about matrices can be turned into facts about linear operators.

For more see Vector calculus: course.

Theorem: $A_{T^{-1}} = A_T^{-1}$, provided $T \colon V \rightarrow U$ is invertible.

Note: "-1" on the left refers to the inverse of the operator while on the right it's about matrix inverse.

Note: This is a matrix equation.

Proof:

(Commutative diagram: half of the proof.)

To continue, recall the definition of inverse (both of them):

  • compositions: $T^{-1} \circ T = {\rm id}_V$, $T \circ T^{-1}={\rm id}_U$
  • products: $A^{-1}A=I_n$, $AA^{-1} = I_m$

Here $n={\rm dim \hspace{3pt}} V$, $m = {\rm dim \hspace{3pt}} U$.

Exercise: finish the proof. $\blacksquare$
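A numerical illustration of the theorem (a sketch in Python with NumPy; the matrix is an arbitrary invertible example, not anything from the text):

```python
import numpy as np

# Sketch: if A_T is the matrix of T, then the matrix inverse A_T^{-1}
# acts as the matrix of T^{-1}: it undoes T on every vector.

A_T = np.array([[2., 1.],
                [1., 1.]])                      # an invertible matrix, for illustration
A_T_inv = np.linalg.inv(A_T)

v = np.array([3., -5.])
w = A_T @ v                                     # w = T(v)
print(np.allclose(A_T_inv @ w, v))              # True: A_T^{-1} undoes T
print(np.allclose(A_T_inv @ A_T, np.eye(2)))    # True: A^{-1} A = I_n
```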

Key: matrices are linear operators (once bases are fixed).

Choosing a basis

As we discussed before, one basis is not enough. In particular, we can find a basis to help us understand a given linear operator.

Example: This is familiar:

$R = \left[ \begin{array}{} {\rm cos \hspace{3pt}} \alpha & -{\rm sin \hspace{3pt}} \alpha \\ {\rm sin \hspace{3pt}} \alpha & {\rm cos \hspace{3pt}} \alpha \end{array} \right]$ is the rotation of ${\bf R}^2$ through $\alpha$.

$${\rm matrix \hspace{3pt}} \longleftarrow {\rm linear \hspace{3pt} operator}$$

Example: But what about this one?

$A = \left[ \begin{array}{} {\rm cos \hspace{3pt}} \alpha & -{\rm sin \hspace{3pt}} \alpha & 0 \\ {\rm sin \hspace{3pt}} \alpha & {\rm cos \hspace{3pt}} \alpha & 0 \\ 0 & 0 & 1 \end{array} \right]$ What is it?

Answer: This is the rotation about the $z$-axis. Why?

$$A\left[ \begin{array}{} x \\ y \\ z \end{array} \right] = \left[ \begin{array}{} x\,{\rm cos \hspace{3pt}}\alpha - y\,{\rm sin \hspace{3pt}}\alpha \\ x\,{\rm sin \hspace{3pt}}\alpha + y\,{\rm cos \hspace{3pt}}\alpha \\ z \end{array} \right].$$

The last coordinate is untouched and the first two don't involve $z$. So, we only need to consider the $xy$-plane, and there we recognize the rotation $R$.

Here is another one:

$$B=\left[ \begin{array}{} {\rm cos \hspace{3pt}} \alpha & 0 & -{\rm sin \hspace{3pt}} \alpha \\ 0 & 1 & 0 \\ {\rm sin \hspace{3pt}} \alpha & 0 & {\rm cos \hspace{3pt}} \alpha \end{array} \right],$$

That's the rotation about the $y$-axis.
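Here is a quick check of both claims, as a Python/NumPy sketch (the angle is arbitrary): $A$ fixes the $z$-axis and $B$ fixes the $y$-axis, and both are orthogonal with determinant $1$, i.e., rotations.

```python
import numpy as np

# Sketch: A (rotation about the z-axis) fixes (0,0,1); B (rotation about
# the y-axis) fixes (0,1,0). Angle chosen arbitrarily.

def A(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.],
                     [s,  c, 0.],
                     [0., 0., 1.]])

def B(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[ c, 0., -s],
                     [0., 1., 0.],
                     [ s, 0.,  c]])

a = 0.8
print(np.allclose(A(a) @ [0., 0., 1.], [0., 0., 1.]))   # z-axis is fixed
print(np.allclose(B(a) @ [0., 1., 0.], [0., 1., 0.]))   # y-axis is fixed
# Orthogonal with determinant 1, i.e., a rigid rotation:
print(np.allclose(A(a).T @ A(a), np.eye(3)), np.isclose(np.linalg.det(A(a)), 1.0))
```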

Lesson: we can recognize rotation if we find an appropriate basis.

What if we rotate about the line along $(1,1,1)$?

Then the matrix of the rotation (in the standard basis) wouldn't be recognizable...

Plan: find the best basis.

In what sense?

Suppose $C=\left[ \begin{array}{} {\rm cos \hspace{3pt}} \alpha & -{\rm sin \hspace{3pt}} \alpha & 0 \\ {\rm sin \hspace{3pt}} \alpha & {\rm cos \hspace{3pt}} \alpha & 0 \\ 0 & 0 & 1 \end{array} \right]$.

What is the basis with respect to which the matrix of this rotation is $C$?

(Figure: a basis adapted to the rotation about the line along $(1,1,1)$.)

$v_1=?$, $v_2=?$, $v_3=(1,1,1)$.

Clearly, $v_1,v_2$ are in the plane perpendicular to $(1,1,1)$!
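A sketch of this construction in Python with NumPy (the angle and the choice of $v_1$ are arbitrary): take $v_3$ along $(1,1,1)$, pick $v_1,v_2$ orthonormal in the perpendicular plane, and the rotation about that line, unrecognizable in the standard basis, becomes the familiar matrix $C$ in the new basis.

```python
import numpy as np

# Sketch: build an orthonormal basis adapted to the axis (1,1,1) and verify
# that the rotation about that axis has matrix C with respect to it.

alpha = 0.9
v3 = np.array([1., 1., 1.])
v3 /= np.linalg.norm(v3)

v1 = np.array([1., 0., 0.])
v1 -= (v1 @ v3) * v3              # remove the component along the axis
v1 /= np.linalg.norm(v1)
v2 = np.cross(v3, v1)             # completes an orthonormal basis

P = np.column_stack([v1, v2, v3])       # new basis vectors as columns (orthogonal matrix)

c, s = np.cos(alpha), np.sin(alpha)
C = np.array([[c, -s, 0.],
              [s,  c, 0.],
              [0., 0., 1.]])

A = P @ C @ P.T                     # the rotation in the standard basis (P^{-1} = P^T here)
print(np.round(A, 3))               # not recognizable at a glance...
print(np.allclose(P.T @ A @ P, C))  # ...but with respect to v1, v2, v3 it is C
```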


Change of basis

As a start, given two bases, we want to be able to convert all vectors from one to the other.

Suppose:

  • $V$ is a vector space,
  • $\dim V=n$,
  • $B,D$ are two bases of $V$.

We want:

  • given a vector $v \in V$, and its coordinate vector $v^D \in V_D={\bf R}^n$ with respect to $D$,
  • find its coordinate vector $v^B \in V_B={\bf R}^n$ with respect to $B$.

This may be confusing: $$V_B = {\bf R}^n=V_D.$$ Are these the same space? To avoid the confusion, we prefer to think of them as two different spaces: $$V_B \stackrel{\varkappa_B}{=} {\bf R}^n_B$$ and $$V_D \stackrel{\varkappa_D}{=} {\bf R}^n_D,$$ as if two different copies of $V$ (and of ${\bf R}^n$) are created by our "coordinate operators".

The coordinate conversion will be carried out by a linear operator: $$P_{DB}:V_D\to V_B,$$ and its matrix, so that we can compute: $$v^B = P_{DB}v^D.$$ Let's find it.

We use the theorem about matrices of linear operators:

Theorem: Given $T \colon V \rightarrow U$, a linear operator, and fixed bases $v_1,\ldots,v_n$ of $V$ and $u_1,\ldots,u_m$ of $U$. Then the $i^{\rm th}$ column of the matrix $A_T$ of $T$ with respect to these bases is made of the coefficients of the linear combination of $T(v_i)$ (i.e., $T$'s values on the basis elements) in terms of $u_1,\ldots,u_m$.

It follows that to find $P_{DB}$ we only need its values on the coordinate vectors of $D$ (the standard basis of $V_D$); furthermore, these values, as columns, form its matrix. This follows from the restatement of the above theorem:

"$A_T$ is $m \times n$ matrix the $i^{\rm th}$ column of which is $T(e_i)$, $\{e_i\}$ is the standard basis of ${\bf R}^n$."

We use $v^B=P_{DB}v^D$ for the basis elements.

Let $B=\{v_1,\ldots,v_n\}$, $D = \{u_1,\ldots,u_n\}$, in $V$.

Note: $v_1^B=[1,0,0,\ldots,0]^T \in {\bf R}^n$ comes from $v_1=a_1v_1+\ldots+a_nv_n$.

Find $a_1,\ldots,a_n$: clearly $a_1=1$ and the rest are $0$. Similarly:

$$\begin{array}{} v_n^B &= [0,\ldots,0,1]^T \\ u_1^D &= [1,0,\ldots,0]^T \\ u_n^D &= [0,\ldots,0,1]^T \end{array}$$

Therefore, $B$ is the standard basis of $V_B$, and $D$ is the standard basis of $V_D$.

Problem: What are $u_1^B, \ldots, u_n^B$?

We need to rewrite basis $D$ in terms of basis $B$.

So, we need only:

$$u_i^B = P_{DB}u_i^D = P_{DB}e_i, \quad i=1,\ldots,n.$$

Hence,

Theorem: The matrix of $P_{DB}$ (with respect to $D$) is made of these columns $$u_1^B,u_2^B,\ldots,u_n^B.$$ Here $P_{DB} \colon V_D \rightarrow V_B$.

Theorem: $P_{BD}=P_{DB}^{-1}$. The proof is an exercise.

We can now convert vectors from $B$ to $D$ and back!
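Here is what the conversion looks like numerically, as a Python/NumPy sketch (the two bases of ${\bf R}^2$ are made up for the example): the columns of the matrix of $P_{DB}$ are the elements of $D$ written in $B$-coordinates, and $P_{BD}$ is its inverse.

```python
import numpy as np

# Sketch: build P_DB for two arbitrary bases B, D of R^2 and convert a vector.

MB = np.column_stack([[1., 1.], [1., -1.]])   # elements of B as columns (standard coords)
MD = np.column_stack([[2., 0.], [1., 3.]])    # elements of D as columns

# The i-th column of P_DB is u_i^B, i.e. the solution of MB x = u_i:
P_DB = np.linalg.solve(MB, MD)

vD = np.array([2., -1.])                      # coordinates of some v with respect to D
v  = MD @ vD                                  # the vector itself, in standard coordinates
vB = P_DB @ vD                                # its coordinates with respect to B
print(np.allclose(MB @ vB, v))                # True: same vector, different coordinates

P_BD = np.linalg.inv(P_DB)                    # P_BD = P_DB^{-1}, converts back
print(np.allclose(P_BD @ vB, vD))             # True
```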

What happens to the matrix?

Suppose $T \colon V \rightarrow V$ is a linear operator from $V$ to itself. What's its matrix?

Haven't we done this before? Yes, but in reality it was $T_B \colon V_B \rightarrow V_B$, for some given basis $B$.

What if we want $T$ with respect to another basis $D$?

This commutative diagram will help:

$$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} V_B & \ra{T_B} & V_B \\ \da{P_{BD}} & & \da{P_{BD}} \\ V_D & \ra{T_D} & V_D \end{array} $$


Problem:

  • Know $T_B$,
  • find $T_D$.

Solution: $T_D = P_{BD}T_BP_{BD}^{-1}$.
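Before the worked example, here is the solution formula as a Python/NumPy sketch (the matrices are random, purely for illustration): computing in $D$-coordinates with $T_D$ agrees with converting to $B$, applying $T_B$, and converting back.

```python
import numpy as np

# Sketch: T_D = P_BD T_B P_BD^{-1}, checked on a random vector.

rng = np.random.default_rng(0)
T_B  = rng.standard_normal((3, 3))        # matrix of T with respect to B
P_BD = rng.standard_normal((3, 3))        # change-of-basis matrix (invertible w.p. 1)

T_D = P_BD @ T_B @ np.linalg.inv(P_BD)

vB = rng.standard_normal(3)               # a vector in B-coordinates
vD = P_BD @ vB                            # the same vector in D-coordinates
print(np.allclose(T_D @ vD, P_BD @ (T_B @ vB)))   # True
```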


Example: $V={\bf R}^2$,

  • $B=\{e_1,e_2\}$ standard basis,
  • $D=\{(0,1),(-1,0)\}$, written in terms of $B$.

We mean: $(0,1)$ and $(-1,0)$ are written in terms of $B$, as columns.

Then, by the theorem above, $P_{DB} = \left[ \begin{array}{} 0 & -1 \\ 1 & 0 \end{array} \right]$: its columns are the elements of $D$ written in terms of $B$.

$$\begin{array}{} P_{DB}\,e_1 &= \left[ \begin{array}{} 0 & -1 \\ 1 & 0 \end{array} \right] \left[ \begin{array}{} 1 \\ 0 \end{array} \right] = \left[ \begin{array}{} 0 \\ 1 \end{array} \right] = u_1^B, \\ P_{DB}\,e_2 &= \left[ \begin{array}{} 0 & -1 \\ 1 & 0 \end{array} \right] \left[ \begin{array}{} 0 \\ 1 \end{array} \right] = \left[ \begin{array}{} -1 \\ 0 \end{array} \right] = u_2^B. \end{array}$$

Verify these two equations.

Now, find the matrix of rotation with respect to $D$.

First find $P_{DB}^{-1}=P_{BD}$:

$$\begin{array}{} & \left[ \begin{array}{cc|cc} 0 & -1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{array} \right] \\ R_1 \leftrightarrow R_2 & \left[ \begin{array}{cc|cc} 1 & 0 & 0 & 1 \\ 0 & -1 & 1 & 0 \end{array} \right] \\ -R_2 & \left[ \begin{array}{cc|cc} 1 & 0 & 0 & 1 \\ 0 & 1 & -1 & 0 \end{array} \right] \end{array}$$

So we have:

$P_{DB}^{-1} = \left[ \begin{array}{} 0 & 1 \\ -1 & 0 \end{array} \right] = P_{BD}$

Now compute:

$$\begin{array}{} T_D &= P_{BD}T_BP_{BD}^{-1} \\ &= \left[ \begin{array}{} 0 & 1 \\ -1 & 0 \end{array} \right]\left[ \begin{array}{} {\rm cos \hspace{3pt}}\alpha & -{\rm sin \hspace{3pt}}\alpha \\ {\rm sin \hspace{3pt}}\alpha & {\rm cos \hspace{3pt}}\alpha \end{array} \right]\left[ \begin{array}{} 0 & -1 \\ 1 & 0 \end{array} \right] \\ &= \left[ \begin{array}{} {\rm sin \hspace{3pt}}\alpha & {\rm cos \hspace{3pt}}\alpha \\ -{\rm cos \hspace{3pt}}\alpha & {\rm sin \hspace{3pt}}\alpha \end{array} \right] \left[ \begin{array}{} 0 & -1 \\ 1 & 0 \end{array} \right] \\ &=\left[ \begin{array}{} {\rm cos \hspace{3pt}}\alpha & -{\rm sin \hspace{3pt}}\alpha \\ {\rm sin \hspace{3pt}}\alpha & {\rm cos \hspace{3pt}}\alpha \end{array} \right] \end{array}$$

The same matrix as $T_B$, as expected: $D$ is just the standard basis rotated through $90^{\circ}$, so the rotation looks the same with respect to it.
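The same verification as a Python/NumPy sketch (the angle is arbitrary):

```python
import numpy as np

# Sketch: conjugating the rotation matrix by the change-of-basis matrix
# for D = {(0,1), (-1,0)} returns the same rotation matrix.

alpha = 0.6
c, s = np.cos(alpha), np.sin(alpha)
T_B = np.array([[c, -s],
                [s,  c]])

P_DB = np.array([[0., -1.],
                 [1.,  0.]])      # columns: elements of D in B-coordinates
P_BD = np.linalg.inv(P_DB)

T_D = P_BD @ T_B @ np.linalg.inv(P_BD)
print(np.allclose(T_D, T_B))      # True
```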


Exercise: Do the same for $D = \{(\frac{\sqrt{2} }{2}, \frac{\sqrt{2} }{2 }), (0,1) \}$.