Inner product


In order to develop a complete discrete calculus, we need to be able to compute the lengths of vectors and the angles between them. An inner product is how one adds geometry to a vector space.

Given a vector space $V$, an inner product on $V$ is a function that associates a number to each pair of vectors in $V$: $$<\cdot,\cdot>:V \times V \rightarrow {\bf R},$$ $$(u,v)\mapsto < u,v >,\ u,v \in V,$$ that satisfies these properties:

  • 1. $< u,u > \geq 0$ for all $u \in V$, and $< u,u > = 0$ if and only if $u=0$ (positive definiteness);
  • 2. $< u,v > = < v,u >$ for all $u,v \in V$ (symmetry);
  • 3. $< ru+su',v > = r< u,v > + s< u',v >$ for all $u,u',v \in V$ and $r,s \in {\bf R}$;
  • 4. $< u,rv+sv' > = r< u,v > + s< u,v' >$ for all $u,v,v' \in V$ and $r,s \in {\bf R}$.

Items 3. and 4. together make up bilinearity. Indeed, consider $p \colon V \times V \rightarrow {\bf R}$, where $p(u,v)= < u,v >$. Then $p$ is linear with respect to the first variable and with respect to the second variable, separately:

  • 1) fix $v=b$, then $p(u,b)$ is linear: $V \rightarrow {\bf R}$;
  • 2) fix $u=a$, then $p(a,v)$ is linear: $V \rightarrow {\bf R}$.

It is easy to verify these axioms for the dot product defined on ${\bf R}^n$: for $$u=(u_1,\ldots,u_n),\ v=(v_1,\ldots,v_n) \in {\bf R}^n,$$ we have $$< u , v>=u_1v_1 + u_2v_2 + \ldots + u_nv_n .$$ Moreover, a weighted dot product $$< u , v>=w_1u_1v_1 + w_2u_2v_2 + \ldots + w_nu_nv_n ,$$ where $w_i \in {\bf R}^+ ,\ i=1,\ldots,n$, are positive "weights", is also an inner product. The term is justified by applications in which some of the measurements used to evaluate the difference between complex entities, such as molecules, are more important than others.
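
To see these axioms in action, here is a minimal numerical check in Python with NumPy (the function name weighted_inner and the sample vectors and weights are illustrative, not from the original text):

```python
import numpy as np

def weighted_inner(u, v, w):
    """Weighted dot product: <u, v> = w_1 u_1 v_1 + ... + w_n u_n v_n."""
    return float(np.sum(w * u * v))

w = np.array([2.0, 1.0, 0.5])   # positive weights
u = np.array([1.0, -2.0, 3.0])
v = np.array([0.0, 4.0, -1.0])
x = np.array([5.0, 1.0, 2.0])
r, s = 2.0, -3.0

# Symmetry: <u, v> = <v, u>
assert np.isclose(weighted_inner(u, v, w), weighted_inner(v, u, w))

# Linearity in the first variable: <r u + s x, v> = r <u, v> + s <x, v>
assert np.isclose(weighted_inner(r * u + s * x, v, w),
                  r * weighted_inner(u, v, w) + s * weighted_inner(x, v, w))

# Positive definiteness: <u, u> > 0 for u != 0
assert weighted_inner(u, u, w) > 0
```

Setting all the weights to $1$ recovers the ordinary dot product.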

A vector space equipped with an inner product is called an inner product space.

One can now use this well-known formula $$\cos \alpha = \frac{< u, v >}{ \lVert u \rVert \cdot \lVert v \rVert }$$ to find the angle $\alpha$ between non-zero vectors $u,v$. The Cauchy-Schwarz inequality, $|< u,v >| \leq \lVert u \rVert \cdot \lVert v \rVert$, guarantees that the quotient lies in $[-1,1]$.

The norm of a vector $v$ in an inner product space $V$ is defined as $$\lVert v \rVert=\sqrt {<v,v>}.$$ It measures the length, or the magnitude, of the vector.
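
For instance, with the ordinary dot product on ${\bf R}^2$ (a quick sketch; the two vectors are arbitrary examples):

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

# The norm generated by the inner product: ||x|| = sqrt(<x, x>)
def norm(x):
    return np.sqrt(np.dot(x, x))

# The angle from cos(alpha) = <u, v> / (||u|| ||v||)
cos_alpha = np.dot(u, v) / (norm(u) * norm(v))
alpha = np.arccos(cos_alpha)
print(np.degrees(alpha))   # 45.0: the angle between (1,0) and (1,1)
```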

The norm satisfies certain properties that we can also use as axioms of a normed space. Given a vector space $V$, a norm on $V$ is a function $$\lVert\cdot \rVert:V \rightarrow {\bf R}$$ that satisfies

  • 1.
    • $\lVert v \rVert \geq 0$ for all $v \in V$,
    • $\lVert v \rVert = 0$ if and only if $v=0$;
  • 2. $\lVert rv \rVert = |r| \lVert v \rVert$ for all $v \in V, r\in {\bf R}$;
  • 3. $\lVert u + v \rVert \leq \lVert u \rVert + \lVert v \rVert$ for all $u, v \in V$.
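
These axioms, too, are easy to test numerically for the norm generated by the dot product (a minimal sketch with arbitrary sample vectors):

```python
import numpy as np

u = np.array([3.0, -1.0, 2.0])
v = np.array([-2.0, 4.0, 1.0])
r = -2.5

norm = np.linalg.norm   # the Euclidean norm, generated by the dot product

assert norm(u) >= 0                               # 1. non-negativity
assert np.isclose(norm(r * u), abs(r) * norm(u))  # 2. homogeneity
assert norm(u + v) <= norm(u) + norm(v)           # 3. triangle inequality
```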

Theorem. Any inner product $<\cdot, \cdot >$ on an $n$-dimensional vector space $V$ can be computed via matrix multiplication: $$< u, v >=u^T Q v,$$ where $Q$ is a positive definite, symmetric $n \times n$ matrix and $u,v$ are written as coordinate columns with respect to a fixed basis.

In particular, the dot product is represented by the identity matrix $I_n$, while the weighted dot product is represented by the diagonal matrix $$Q=\operatorname{diag}(w_1,\ldots,w_n).$$

To emphasize the source of the norm we may use this notation: $$\lVert v \rVert _Q =\sqrt {<v,v>} = \sqrt {v^T Q v}.$$
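
Concretely, the theorem and this notation can be tried out as follows (a sketch; the matrix $Q$ below is just one example of a symmetric positive definite matrix):

```python
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # symmetric and positive definite

u = np.array([1.0, -1.0])
v = np.array([2.0, 5.0])

inner = u @ Q @ v              # <u, v> = u^T Q v
norm_Q = np.sqrt(v @ Q @ v)    # ||v||_Q = sqrt(v^T Q v)
print(inner, norm_Q)
```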

If we look at the eigenvalues and eigenvectors of this matrix, we discover the following: if $v \neq 0$ is an eigenvector of $Q$ with eigenvalue $\lambda$, then $Qv=\lambda v$ and $$\lVert v \rVert _Q ^2 =v^T Q v = \lambda v^T v = \lambda \lVert v \rVert ^2,$$ where $\lVert v \rVert$ is the norm generated by the dot product. The left-hand side is positive for $v \neq 0$, and the eigenvalues of a symmetric matrix are real. It follows:

Theorem. The eigenvalues of a positive definite matrix are real and positive.
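
This is easy to confirm numerically (a sketch reusing the sample matrix from above; np.linalg.eigvalsh computes the eigenvalues of a symmetric matrix):

```python
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # symmetric and positive definite

print(np.linalg.eigvalsh(Q))   # [1.381..., 3.618...]: real and positive
```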

We know from linear algebra that a matrix with real, distinct eigenvalues is diagonalizable. As it turns out, distinct eigenvalues aren't required here: by the spectral theorem, a symmetric matrix is always diagonalizable, via an orthogonal change of basis. It follows:

Theorem. Any inner product $<\cdot, \cdot >$ on an $n$-dimensional vector space $V$ can be represented, via a choice of basis, by a diagonal $n \times n$ matrix $Q$ with positive elements on the diagonal. In other words, every inner product is a weighted dot product, in some basis.
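
The change of basis can be produced by an eigendecomposition (a sketch continuing the example above; the columns of $P$ are orthonormal eigenvectors of $Q$):

```python
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 3.0]])

w, P = np.linalg.eigh(Q)           # Q = P diag(w) P^T with P orthogonal
print(np.round(P.T @ Q @ P, 10))   # diagonal, with the eigenvalues w on the diagonal

# In the new basis, <u, v> = u^T Q v becomes the weighted dot product with weights w:
u = np.array([1.0, -1.0])
v = np.array([2.0, 5.0])
u_new, v_new = P.T @ u, P.T @ v    # coordinates with respect to the eigenvectors
assert np.isclose(u @ Q @ v, np.sum(w * u_new * v_new))
```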

Also, since the determinant $\det Q$ of $Q$ is the product of the eigenvalues of $Q$, it is positive as well, and we have

Theorem. Any matrix $Q$ that represents an inner product is invertible.
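
Numerically (continuing the same sketch):

```python
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 3.0]])

print(np.linalg.det(Q))   # 5.0 > 0, the product of the eigenvalues
Q_inv = np.linalg.inv(Q)  # exists, since det Q != 0
assert np.allclose(Q @ Q_inv, np.eye(2))
```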

In calculus, an inner product parametrized by location is called the metric tensor.
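
For example (an illustrative sketch, not from the original text): in polar coordinates on the plane, the metric tensor at a point with radial coordinate $r$ is $Q(r)=\operatorname{diag}(1, r^2)$, coming from $ds^2 = dr^2 + r^2 d\theta^2$, so the length of the same coordinate vector depends on where it is attached:

```python
import numpy as np

def metric(r):
    """Metric tensor of the plane in polar coordinates (dr, dtheta components)."""
    return np.diag([1.0, r ** 2])

u = np.array([1.0, 1.0])   # a tangent vector with components (dr, dtheta)
for r in (0.5, 1.0, 2.0):
    Q = metric(r)
    print(r, np.sqrt(u @ Q @ u))   # ||u||_Q grows with the location's radius
```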