# MATH2601 Higher Linear Algebra

#### InteGrand

##### Well-Known Member
I feel bad lol. I had the same idea as InteGrand, I just mucked up my MATLAB input when I went to check my answer.
I don't know if this was the reason why, but in the Q you typed above, there's a typo (top-right entry should have 6 rather than 5 in the square root).

#### leehuan

##### Well-Known Member
Oops. Nah I think that was just a typo as I typed it on the forums

#### leehuan

##### Well-Known Member
This one's a bit long...

$\bg_white \\B\in M_{n,n}(\mathbb{C}) \text{ satisfies}\\ \text{nullity}(B^{m-1})< n\text{ and nullity}(B^m) = n\\ \text{for some integer }m$

$\bg_white \text{Fix }\textbf{v}\in \mathbb{C}^n \backslash \ker(B^{m-1})$

Proven in i): 0 is the only eigenvalue of B (so B is nilpotent)

$\bg_white \text{ii) By considering the Jordan form of }B\text{, or otherwise, prove that }\\\det (B+I)=1$

$\bg_white \text{iii) Prove that for }k=1,\dots,m-1\\ B^k \textbf{v} \in \ker(B^{m-k})\backslash \ker(B^{m-k-1})$

#### InteGrand

##### Well-Known Member
$\bg_white \noindent For (ii), let B = PJP^{-1} where J is the Jordan form of B, then use what you know about the eigenvalues of B and the fact that B + I= P(J+I)P^{-1} to deduce the result.$

$\bg_white \noindent And I think (iii) is simpler than you (probably) think.$
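Fleshing out that hint a little (a sketch; it relies on part (i), that 0 is the only eigenvalue of B):

```latex
% J is the Jordan form of B. Since 0 is the only eigenvalue of B,
% J is strictly upper triangular, so J + I is upper triangular with
% every diagonal entry equal to 1. Hence
\det(B + I) = \det\!\left(P(J+I)P^{-1}\right)
            = \det(P)\,\det(J+I)\,\det(P)^{-1}
            = \det(J+I)
            = 1.
```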

#### leehuan

##### Well-Known Member
Oh of course. Once I drew out the Jordan chain again and looked carefully at what the question gave, (iii) made sense.
_________________________________________

$\bg_white \\\text{Suppose that }A\text{ is a }9\times 9\text{ matrix with only one eigenvalue }\lambda\text{, and that}\\ \text{nullity}(A-\lambda I)=4\text{ and nullity}\left((A-\lambda I)^2\right) = 7$

$\bg_white \text{Show that there exist constant }9\times 9\text{ matrices }M_0,M_1,M_2,M_3\text{ such that}\\ A^n = \lambda^nM_0+n\lambda^nM_1+n^2\lambda^nM_2+n^3 \lambda^nM_3\\\text{ for all }n$

Tools permitted if useful: Binomial theorem for matrices that commute in multiplication, Cayley-Hamilton theorem

Edit: Thanks IG I just saw where your reply was
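For what it's worth, one way the claimed form can drop out (a sketch, assuming $\bg_white \lambda \neq 0$): write $\bg_white N = A - \lambda I$, which is nilpotent since $\bg_white \lambda$ is the only eigenvalue; the nullities of successive powers of $\bg_white N$ climb from 4 to 7 to 9 in non-increasing jumps, so $\bg_white N^4 = 0$.

```latex
A^{n} = (\lambda I + N)^{n}
      = \sum_{k=0}^{n} \binom{n}{k} \lambda^{n-k} N^{k}
      = \lambda^{n} I + \binom{n}{1}\lambda^{n-1} N
        + \binom{n}{2}\lambda^{n-2} N^{2} + \binom{n}{3}\lambda^{n-3} N^{3},
% since N^k = 0 for k >= 4 (the binomial theorem applies because
% \lambda I and N commute). Each \binom{n}{k} is a polynomial in n
% of degree k, so absorbing the constants \lambda^{-k} into the
% matrices and regrouping by powers of n gives constant matrices
% M_0, M_1, M_2, M_3 with
A^{n} = \lambda^{n} M_{0} + n\lambda^{n} M_{1}
        + n^{2}\lambda^{n} M_{2} + n^{3}\lambda^{n} M_{3}.
```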


#### leehuan

##### Well-Known Member
No more questions for this sem after tomorrow.
__________________

$\bg_white \\\text{I forgot how to use my field axioms.}\\ \text{Prove that }a0=0\text{ for }a\in \mathbb{F}$

#### InteGrand

##### Well-Known Member
Hint: Use the axioms to show that a + a0 = a.
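Spelling that hint out (using only the multiplicative identity, distributivity, the additive identity, and additive inverses):

```latex
a + a\cdot 0 = a\cdot 1 + a\cdot 0   % multiplicative identity
             = a(1 + 0)              % distributivity
             = a\cdot 1              % additive identity
             = a.
% Adding the additive inverse -a to both sides (and using
% associativity of +) then gives a\cdot 0 = 0.
```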

#### leehuan

##### Well-Known Member
This is a highly open-ended question and everyone's opinion might be different.

What's the easiest proof (or would be a very easy proof) of the Cauchy-Schwarz inequality to memorise?

#### InteGrand

##### Well-Known Member
Well you wrote one up here before, so maybe you'd find that easiest to "memorise" for yourself:

$\bg_white \text{They showed one of the many proofs of it in my lecture last sem.}$

\bg_white \begin{align*}p(\lambda)&=|\textbf{a}-\lambda\textbf{b}|^2\\ &=(\textbf{a}-\lambda\textbf{b})\cdot (\textbf{a}-\lambda\textbf{b})\\ &= |\textbf{a}|^2-2\lambda \textbf{a}\cdot\textbf{b}+\lambda^2 |\textbf{b}|^2 \end{align*}\\ \text{And note that }p(\lambda)\ge 0

$\bg_white \\ \text{From 2U methods (assuming }\textbf{b}\neq \textbf{0}\text{), we see that }p\text{ is minimised when }\lambda = \frac{\textbf{a}\cdot\textbf{b}}{|\textbf{b}|^2}\\ \text{Substituting back in gives }\min_{\lambda \in \mathbb R}p(\lambda)=|\textbf{a}|^2-\frac{(\textbf{a}\cdot \textbf{b})^2}{|\textbf{b}|^2}$

$\bg_white \\\text{So from }\min_{\lambda\in\mathbb R}p(\lambda )\ge 0\text{, rearranging and taking square roots gives }|\textbf{a}\cdot \textbf{b}|\le |\textbf{a}||\textbf{b}|$

I did not even know that there was a sum form until doing past papers for 1251. Then I had to figure out why the sum and vector forms were equivalent.
Note that it needs to be adapted slightly to deal with the complex case, but it's not too big a deal.

You can also probably find many proofs online. There are twelve proofs here, but they seem to only be for the case of R^n: http://www.uni-miskolc.hu/~matsefi/Octogon/volumes/volume1/article1_19.pdf .
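Not a proof, of course, but the inequality (and the minimising λ from the quadratic argument above) is easy to spot-check numerically; here is a quick sketch with hypothetical random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(5)
b = rng.standard_normal(5)

# Cauchy-Schwarz: |a . b| <= |a| |b|
lhs = abs(a @ b)
rhs = np.linalg.norm(a) * np.linalg.norm(b)
assert lhs <= rhs

# The minimising lambda from p(lambda) = |a - lambda b|^2,
# and the resulting minimum value |a|^2 - (a . b)^2 / |b|^2.
lam = (a @ b) / (b @ b)
p_min = a @ a - (a @ b) ** 2 / (b @ b)
assert np.isclose(np.linalg.norm(a - lam * b) ** 2, p_min)
print("Cauchy-Schwarz holds:", lhs, "<=", rhs)
```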

#### leehuan

##### Well-Known Member
Completely forgot about that one.
_________________

$\bg_white \\\text{Suppose }Q\in M_{n,n}\text{ is unitary. Prove that all its eigenvalues }\lambda\text{ satisfy}\\ |\lambda|=1$


#### InteGrand

##### Well-Known Member
$\bg_white \noindent Note that if \lambda is an eigenvalue of Q with unit eigenvector \vec{v}, then$

\bg_white \begin{align*}\lambda &= \lambda\left\langle \vec{v},\vec{v}\right\rangle \\ &=\langle \lambda\vec{v},\vec{v}\rangle \\ &= \langle Q \vec{v},\vec{v}\rangle \\ &= \langle \vec{v},Q^{*} \vec{v}\rangle \\ &= \langle \vec{v}, Q^{-1}\vec{v}\rangle \\ &= \langle \vec{v}, \lambda^{-1} \vec{v}\rangle \\ &= \overline{\lambda^{-1}}\langle \vec{v},\vec{v}\rangle \\ &= \left(\overline{\lambda}\right)^{-1}.\end{align*}

$\bg_white \noindent So \lambda = \left( \overline{\lambda}\right)^{-1} \Rightarrow \lambda\overline{\lambda} = 1\Rightarrow |\lambda|^{2} = 1 \Rightarrow |\lambda| = 1.$

$\bg_white \noindent Facts used include:$

$\bg_white \noindent \bullet Q^{*} = Q^{-1} (as Q is unitary)$

$\bg_white \noindent \bullet As Q is unitary, it is invertible and so cannot have a zero eigenvalue, so \lambda \neq 0$

$\bg_white \noindent \bullet If A is an invertible square complex matrix and A\vec{u} = t \vec{u} for some vector \vec{u} and scalar t, then A^{-1}\vec{u} = t^{-1} \vec{u}$

$\bg_white \noindent \bullet Definition of eigenvalue and eigenvector, basic properties of adjoints and inner products, and \langle \vec{v}, \vec{v} \rangle = 1, since \vec{v} is a unit eigenvector.$
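A quick numerical illustration of the result (building a random unitary matrix from the QR factorisation of a complex Gaussian matrix, which is one standard way to get one):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Random unitary Q via QR factorisation of a complex matrix.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(M)

# Sanity check unitarity: Q* Q = I.
assert np.allclose(Q.conj().T @ Q, np.eye(n))

# Every eigenvalue should have modulus 1, as proved above.
eigvals = np.linalg.eigvals(Q)
assert np.allclose(np.abs(eigvals), 1.0)
print("eigenvalue moduli:", np.abs(eigvals))
```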


#### leehuan

##### Well-Known Member
This is just some personal fun

$\bg_white \\\text{For the V.S. }(\mathbb{R}^+, \oplus, \otimes, \mathbb{R})\\ \text{where }x\oplus y = x\times y\\ x\otimes y = y^x\\ \text{Does there exist an inner product we can define to make it an inner product space?}$


#### InteGrand

##### Well-Known Member
This is just some personal fun

$\bg_white \\\text{For the V.S. }(\mathbb{R}^+, \oplus, \otimes, \mathbb{R}^+)\\ \text{where }x\oplus y = x\times y\\ x\otimes y = y^x\\ \text{Does there exist an inner product we can define to make it an inner product space?}$
Yes. (I assume you meant the field to be R.)


#### InteGrand

##### Well-Known Member
$\bg_white \noindent Just in general, suppose V is a vector space over a field \mathbb{F}, and let \phi be a function and W a set such that \phi : V \to W is a bijection (so of course the inverse \phi^{-1} : W \to V is also a bijection). Then we can make W be a vector space over \mathbb{F} that is isomorphic to V, by \emph{defining} vector addition in W as$

$\bg_white w_{1} \oplus w_{2} = \phi \left(\phi^{-1}\left(w_{1}\right)+ \phi^{-1} \left(w_{2}\right)\right) for all w_{1}, w_{2} \in W,$

$\bg_white \noindent where the ``+'' in the RHS is the addition operation in the vector space V, and scalar multiplication by$

$\bg_white \alpha \otimes w = \phi\left(\alpha * \phi^{-1}(w)\right) for all \alpha \in \mathbb{F} and w\in W,$

$\bg_white \noindent where the ``*'' refers to scalar multiplication in V.$

$\bg_white \noindent Under these definitions, you can easily confirm that W is a vector space over \mathbb{F} and is isomorphic to V (with \phi^{-1} providing an isomorphism; in fact, this is the reason why we chose to define \oplus and \otimes in this way -- it essentially makes \phi^{-1} into a bijective \emph{linear} map, i.e. a vector space isomorphism. (Of course \phi will also provide an isomorphism.)).$

$\bg_white \noindent Now with these definitions, you can also easily show that if V is an \emph{inner product} space over \mathbb{F} (which is either \mathbb{R} or \mathbb{C}), then W is also an inner product space over \mathbb{F} (which we would expect, as it is isomorphic to V), with an inner product on W being$

$\bg_white \langle w_{1}, w_{2} \rangle_{W} \stackrel{\text{def}}{=} \langle \phi^{-1} \left(w_{1}\right), \phi^{-1}\left(w_{2}\right) \rangle_{V} for all w_{1}, w_{2} \in W,$

$\bg_white \noindent where \langle \cdot, \cdot \rangle_{V} is the inner product of the inner product space V.$

$\bg_white \noindent (The intuition behind this is that since W is isomorphic to V, it would be natural for it to have an inner product that is just that of V, but you just plug in the vectors to \langle \cdot, \cdot \rangle_{V} that are the ``re-labelled'' versions of w_{1}, w_{2} in V, namely \phi^{-1} \left(w_{1}\right) and \phi^{-1} \left(w_{2}\right). You can show as an exercise that this is indeed an inner product.)$

$\bg_white \noindent In the example you gave, the known vector space (and inner product space) was V = \mathbb{R} (with field \mathbb{F} = \mathbb{R}). The set W was W = \mathbb{R}^{+}, and the bijection was \phi : V \to W (i.e. \phi : \mathbb{R} \to \mathbb{R}^{+}) given by \phi (v) = e^{v} for all v \in V = \mathbb{R}. (The inverse mapping was \phi^{-1}: \mathbb{R}^{+} \to \mathbb{R}, given by \phi^{-1} (w) = \ln w for all w\in \mathbb{R}^{+} (i.e. just the inverse function of \phi, which is an exponential).).$

$\bg_white \noindent In your example, then, from the preceding discussion, we would want to define addition on W = \mathbb{R}^{+} as w_{1}\oplus w_{2} = \phi \left(\phi^{-1}\left(w_{1}\right)+ \phi^{-1} \left(w_{2}\right)\right) with \phi being the exponential function. This is indeed what was done, as$

\bg_white \begin{align*}\phi \left(\phi^{-1}\left(w_{1}\right)+ \phi^{-1} \left(w_{2}\right)\right) &= \exp \left(\ln w_{1} + \ln w_{2}\right) \\ &= \exp \left(\ln \left(w_{1} w_{2}\right) \right) \\ &= w_{1}w_{2},\end{align*}

$\bg_white \noindent which is how they defined the addition. Similarly, we would want scalar multiplication to be defined by \alpha \otimes w = \phi\left(\alpha * \phi^{-1}(w)\right), and indeed it is, since$

\bg_white \begin{align*}\phi\left(\alpha * \phi^{-1}(w)\right) &= \exp \left(\alpha \ln w\right)\\ &= \exp \left(\ln \left(w^{\alpha}\right)\right) \\ &= w^{\alpha}. \checkmark \end{align*}

$\bg_white \noindent In other words, assuming you've proved the assertions made earlier in this post, the vector space with its addition and scalar multiplication you gave is isomorphic to \mathbb{R}. Since \mathbb{R} is an inner product space with an inner product \langle v_{1}, v_{2}\rangle_{\mathbb{R}} = v_{1} v_{2} for v_{1}, v_{2} \in \mathbb{R} (just normal multiplication), then assuming you proved the assertion about being isomorphic to an inner product space, we have that W is also an inner product space over \mathbb{R} and has as an inner product$

\bg_white \begin{align*}\langle w_{1}, w_{2}\rangle_{W} &\stackrel{\text{def}}{=} \langle \phi^{-1} \left(w_{1}\right), \phi^{-1}\left(w_{2}\right) \rangle_{\mathbb{R}} \\ &\stackrel{\text{def}}{=} \phi^{-1}\left(w_{1}\right)\phi^{-1}\left(w_{2}\right) \\ &= \left(\ln w_{1}\right)\left(\ln w_{2}\right).\end{align*}

$\bg_white \noindent In other words, defining \langle w_{1}, w_{2}\rangle_{W} = \left(\ln w_{1}\right)\left(\ln w_{2}\right) (product of the logs) for all w_{1}, w_{2} \in \mathbb{R}^{+}, we have that W is an inner product space with this as an inner product.$
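The axioms for this candidate inner product are also easy to spot-check numerically against the exotic operations (a sketch: here `oplus` is the vector addition x ⊕ y = xy and `smul` is the scalar action α ⊗ w = w^α from the thread):

```python
import math

def oplus(x, y):
    """Vector addition in R+: x (+) y = x * y."""
    return x * y

def smul(alpha, w):
    """Scalar multiplication in R+: alpha (x) w = w ** alpha."""
    return w ** alpha

def ip(w1, w2):
    """Proposed inner product: (ln w1)(ln w2)."""
    return math.log(w1) * math.log(w2)

w1, w2, w3, alpha = 2.0, 5.0, 0.5, 3.0

# Additivity in the first slot: <w1 (+) w2, w3> = <w1, w3> + <w2, w3>
assert math.isclose(ip(oplus(w1, w2), w3), ip(w1, w3) + ip(w2, w3))
# Homogeneity: <alpha (x) w1, w2> = alpha * <w1, w2>
assert math.isclose(ip(smul(alpha, w1), w2), alpha * ip(w1, w2))
# Symmetry, and positivity: <w, w> = (ln w)^2 >= 0, with equality
# only at w = 1, the zero vector of this space.
assert ip(w1, w2) == ip(w2, w1)
assert ip(w1, w1) > 0 and ip(1.0, 1.0) == 0.0
print("inner product axioms spot-checked")
```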

#### turtlesnore

##### New Member
Hopefully you don't mind if i post a question here. (taking MATH2601 this semester)

Suppose that G is a group with precisely three distinct elements e (the identity), a and b.
a) Prove that ab = e (Hint: eliminate other possibilities).
b) Prove that a^2 = b.
c) Deduce that G = {e, a, a^2} and hence that G is isomorphic to the group.

(How do you get LaTeX to work here?)

#### InteGrand

##### Well-Known Member
What was your progress on the questions so far?

Also to use LaTeX on the forums here, you need to enclose TeX code in so-called "tex tags". (Using LaTeX here is a bit different to just using it on your own computer.) You need to enclose the TeX code in between: [tex.] [/tex.] (but leave out the red dots).

For example: typing

[tex.] y = x^{2}[/tex.]

(but deleting the red dots) gives

$\bg_white y = x^{2}$.

#### turtlesnore

##### New Member
I thought about using the identity and inverse axioms but I couldn't get anywhere with them. I thought it would be straightforward that $\bg_white ab = e$, since there are only 3 elements in G, so either $\bg_white a = b^{-1}$ or $\bg_white b = a^{-1}$. But I was confused when I saw that part b said $\bg_white a^2 = b$, because that seemed inconsistent with my result, and it implies that $\bg_white a^{-1}$ doesn't necessarily have to be listed explicitly among the elements of G to be in G.

Thanks for the LaTeX help.

#### InteGrand

##### Well-Known Member
$\bg_white \noindent So you essentially got out the first part?$

$\bg_white \noindent For the second part, it's fine for a^2 to equal b. In that case, in fact, a^2 would be the same as a^{-1}. (For example, remember from complex numbers how a cube root of unity w satisfies w^{2} = w^{-1}.)$
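A brute-force check of the three-element-group facts, using addition mod 3 as a concrete model (hypothetical labels: e = 0, a = 1, b = 2; this doesn't replace the abstract proof, it just shows the claims are consistent):

```python
# Model the (unique) 3-element group as Z_3 under addition mod 3.
e, a, b = 0, 1, 2

def op(x, y):
    """Group operation: addition modulo 3."""
    return (x + y) % 3

assert op(a, b) == e                    # part a): ab = e
assert op(a, a) == b                    # part b): a^2 = b
assert {e, a, op(a, a)} == {e, a, b}    # part c): G = {e, a, a^2}
assert op(a, op(a, a)) == e             # so a^3 = e, as for a cube root of unity
print("group facts check out")
```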

#### turtlesnore

##### New Member
Thanks a bunch InteGrand!

#### marxman

##### Member
By the same token, how do we prove $\bg_white a^{2} = b$ in the first place?