MATH2601 Higher Linear Algebra

davidgoes4wce

Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

I don't know much about Graph Theory and Group Theory but are they two different topics? Or different names but the same subject?

I can't be bothered Googling it

InteGrand

Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

I don't know much about Graph Theory and Group Theory but are they two different topics? Or different names but the same subject?

I can't be bothered Googling it
Two different topics.

leehuan

Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

I was wondering if given the trace and the determinant of a matrix could you write down a unique matrix satisfying these conditions, or a simple formula for the family of matrices satisfying it?

Mostly asking for the 2x2 case

InteGrand

Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

I was wondering if given the trace and the determinant of a matrix could you write down a unique matrix satisfying these conditions, or a simple formula for the family of matrices satisfying it?

Mostly asking for the 2x2 case
No; the trace and determinant of a real or complex matrix do not uniquely specify the matrix.

$\bg_white \noindent For 2\times 2, let A = \begin{bmatrix}a & b \\ c & d\end{bmatrix} and suppose the trace is given to be \alpha and the determinant \beta. This is equivalent to a + d = \alpha and ad - bc = \beta. So these are the conditions that make A have given trace \alpha and determinant \beta.$

(Note that for a 2x2 complex matrix, the trace and determinant will uniquely specify the eigenvalues of the matrix though.)
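Both claims are easy to check numerically. Below is an illustrative NumPy sketch (the two matrices are made-up examples, not from the thread): two different 2x2 matrices with the same trace and determinant, whose eigenvalues nevertheless agree.

```python
import numpy as np

# Two different matrices sharing trace 5 and determinant 6
# (hypothetical examples chosen for illustration).
A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[2.0, 1.0], [0.0, 3.0]])  # same diagonal, extra off-diagonal entry

assert not np.allclose(A, B)                          # genuinely different matrices
assert np.trace(A) == np.trace(B) == 5.0              # same trace
assert np.isclose(np.linalg.det(A), 6.0)
assert np.isclose(np.linalg.det(B), 6.0)              # same determinant

# But trace and determinant do pin down the eigenvalues:
# both spectra are the roots of t^2 - 5t + 6 = (t - 2)(t - 3).
eig_A = sorted(np.linalg.eigvals(A).real)
eig_B = sorted(np.linalg.eigvals(B).real)
assert np.allclose(eig_A, eig_B)
```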


leehuan

Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

$\bg_white A=\begin{bmatrix}3&1\\-2&0\end{bmatrix}\\ \text{Find all diagonalisable matrices }B\text{ such that }B^2=A$

I then realised that B may share the same eigenvectors as A, and have eigenvalues equal to the square root of those of A. But I'm not sure where to proceed from there.

InteGrand

Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

$\bg_white A=\begin{bmatrix}3&1\\-2&0\end{bmatrix}\\ \text{Find all diagonalisable matrices }B\text{ such that }B^2=A$

I then realised that B may share the same eigenvectors as A, and have eigenvalues equal to the square root of those of A. But I'm not sure where to proceed from there.
$\bg_white \noindent Here's some hints. If B is a diagonalisable matrix such that B^{2} = A, then write B = PDP^{-1} (so P is an invertible matrix whose columns are eigenvectors of B, with corresponding eigenvalues in the diagonal entries of the diagonal matrix D). Then B^{2} = PD^{2}P^{-1} = A. So the eigenvalues of A are the squares of those of B and the corresponding eigenvectors are the eigenvectors from B.$
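The hint can be checked numerically. A has eigenvalues 1 and 2, so one square root takes the principal square roots of both. This is a sketch assuming NumPy (the eigenvalue order from `np.linalg.eig` is not guaranteed, but the construction works regardless):

```python
import numpy as np

A = np.array([[3.0, 1.0], [-2.0, 0.0]])

evals, P = np.linalg.eig(A)        # columns of P are eigenvectors of A
D_sqrt = np.diag(np.sqrt(evals))   # square-root each eigenvalue (here 1 and 2)
B = P @ D_sqrt @ np.linalg.inv(P)  # B has A's eigenvectors, square-rooted eigenvalues

assert np.allclose(B @ B, A)       # B is indeed a square root of A
```

Choosing the negative square root of either eigenvalue gives the other diagonalisable solutions, so with two distinct eigenvalues there are four such B in total.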

leehuan

Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

$\bg_white \\\text{Let }T:V\to V\text{ be linear, }\\W_1\text{ a subspace of }V\\ W_2\text{ a subspace of }W_1$

$\bg_white \text{Suppose }W_1\text{ is invariant under }T.\\ \text{Must }W_2\text{ be invariant under }T\text{ and why/why not?}$

InteGrand

Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

$\bg_white \\\text{Let }T:V\to V\text{ be linear, }\\W_1\text{ a subspace of }V\\ W_2\text{ a subspace of }W_1$

$\bg_white \text{Suppose }W_1\text{ is invariant under }T.\\ \text{Must }W_2\text{ be invariant under }T\text{ and why/why not?}$
No, it need not be. Say V = R^2, W1 = R^2 (= V), and let W2 be the line {(t, 0)} (the x-axis). Take T to be rotation by 90 degrees counter-clockwise about the origin. Then T is a linear map from V to V, so T(W1) = T(V) is a subspace of W1 = V = R^2, and W2 is a subspace of W1, but clearly W2 is not invariant under T (e.g. the point (1, 0) in W2 gets mapped to (0, 1), which is not in W2).
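The counterexample is a one-liner to verify numerically (NumPy sketch):

```python
import numpy as np

# Rotation by 90 degrees counter-clockwise about the origin.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])

v = np.array([1.0, 0.0])            # a point of W2 (the x-axis)
assert np.allclose(T @ v, [0.0, 1.0])  # lands off the x-axis: W2 not invariant
```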


leehuan

Well-Known Member
$\bg_white V\text{ is a finite dimensional V.S. over }\mathbb{C}\text{ and }T:V\to V\text{ is linear}$

$\bg_white \\\text{Suppose }T^2 = T = T^*\text{ (i.e. idempotent and self-adjoint)}\\ \text{Prove that there exists a subspace }W\text{ such that }\\ T(\textbf{v}) = \text{proj}_{W}\textbf{v}$

My approach thus far: Write $\bg_white \textbf{v} = \textbf{x}+\textbf{y}, \quad \textbf{x}\in W\text{ and }\textbf{y} \in W^\perp$

$\bg_white \langle \textbf{x} \mid \textbf{y} \rangle = 0 \implies \textbf{x} \perp \textbf{v} - \textbf{x}$

Is this a dead end? Because I don't see how I can use what I know about T here


InteGrand

Well-Known Member
$\bg_white V\text{ is a finite dimensional V.S. over }\mathbb{C}\text{ and }T:V\to V\text{ is linear}$

$\bg_white \\\text{Suppose }T^2 = T = T^*\text{ (i.e. idempotent and self-adjoint)}\\ \text{Prove that there exists a subspace }W\text{ such that }\\ T(\textbf{v}) = \text{proj}_{W}\textbf{v}$

My approach thus far: Write $\bg_white \textbf{v} = \textbf{x}+\textbf{y}, \quad \textbf{x}\in W\text{ and }\textbf{y} \in W^\perp$

$\bg_white \langle \textbf{x} \mid \textbf{y} \rangle = 0 \implies \textbf{x} \perp \textbf{v} - \textbf{x}$

Is this a dead end? Because I don't see how I can use what I know about T here
Claim: W := im(T) is such a subspace.

Proof: Exercise.

leehuan

Well-Known Member
Claim: W := im(T) is such a subspace.

Proof: Exercise.
Where does the inspiration come from that it just happens to be the image that satisfies this criterion?

InteGrand

Well-Known Member
Where does the inspiration come from that it just happens to be the image that satisfies this criterion?
$\bg_white \noindent Well for one thing, if T(\mathbf{v}) = \mathrm{proj}_{W}(\mathbf{v}) for all \mathbf{v} \in V, we need that T(\mathbf{v})\in W for all \mathbf{v} \in V (because by definition \mathrm{proj}_{W}(\mathbf{v})\in W). Now as we know, a subspace of V that contains T(\mathbf{v}) for all \mathbf{v} \in V is \mathrm{im}(T) (in fact of course any subspace with this property must contain the image, i.e. the image of T is the ``smallest'' subspace with this property). So it would make sense to try the image of T. (And if there was any other subspace W that would work, since W would have to contain \mathrm{im}(T), it would have to be the case that \mathrm{im}(T) works too. So just try \mathrm{im}(T).)$
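In coordinates, the claim is easy to sanity-check: an orthogonal projection matrix is idempotent and self-adjoint, and it projects onto its image. A NumPy sketch (the spanning vector `w` and test vector `v` are arbitrary choices for illustration):

```python
import numpy as np

# Orthogonal projection matrix T onto W = span{w} in R^3.
w = np.array([[1.0], [2.0], [2.0]])   # hypothetical spanning vector
T = (w @ w.T) / (w.T @ w)             # T = w w^T / (w^T w)

assert np.allclose(T @ T, T)          # idempotent: T^2 = T
assert np.allclose(T, T.T)            # self-adjoint (real symmetric)

# T(v) agrees with the usual projection formula proj_W(v) = (<v, w>/<w, w>) w.
v = np.array([3.0, 0.0, 1.0])
u = w.flatten()
proj = (u @ v) / (u @ u) * u
assert np.allclose(T @ v, proj)
```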

leehuan

Well-Known Member
$\bg_white A=\begin{pmatrix}3 &-1&2\\ -1&3&2\\ 2&2&0\end{pmatrix}$

$\bg_white \text{Found in part iv): }A=QDQ^T\text{ where }\\ Q=\begin{pmatrix}\frac{1}{\sqrt3} & \frac{1}{\sqrt2} & -\frac{1}{\sqrt6}\\ \frac{1}{\sqrt3} & -\frac{1}{\sqrt2} & -\frac{1}{\sqrt 6}\\ \frac{1}{\sqrt 3}& 0 & \frac{2}{\sqrt6}\end{pmatrix}\\ D = \begin{pmatrix}4 &0&0\\ 0&4&0\\ 0&0&-2\end{pmatrix}$

$\bg_white \text{v) Write down an expression for a matrix }B\text{ such that }B^2=A$

He-Mann

Vexed?
$\bg_white (Q \sqrt{D} Q^T)(Q \sqrt{D} Q^T) = Q \sqrt{D} (Q^T Q) \sqrt{D} Q^T = Q D Q^T = A$

InteGrand

Well-Known Member
$\bg_white A=\begin{pmatrix}3 &-1&2\\ -1&3&2\\ 2&2&0\end{pmatrix}$

$\bg_white \text{Found in part iv): }A=QDQ^T\text{ where }\\ Q=\begin{pmatrix}\frac{1}{\sqrt3} & \frac{1}{\sqrt2} & -\frac{1}{\sqrt6}\\ \frac{1}{\sqrt3} & -\frac{1}{\sqrt2} & -\frac{1}{\sqrt 6}\\ \frac{1}{\sqrt 3}& 0 & \frac{2}{\sqrt6}\end{pmatrix}\\ D = \begin{pmatrix}4 &0&0\\ 0&4&0\\ 0&0&-2\end{pmatrix}$

$\bg_white \text{v) Write down an expression for a matrix }B\text{ such that }B^2=A$
$\bg_white \noindent If A = PDP^{-1} in general (D diagonal), then defining B= PD^{\frac{1}{2}}P^{-1}, we have B^{2} = A. Here D^{\frac{1}{2}} is a square root of D, which is a diagonal matrix with diagonal entries all being square roots of the entries in D. In your example, you can do this if you are willing to accept complex entries (a \sqrt{2}i for example). You can find out more about square roots of matrices here:$

https://en.wikipedia.org/wiki/Square_root_of_a_matrix#By_diagonalization
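For the A in the question this works out numerically as follows (a NumPy sketch; since A is symmetric, `np.linalg.eigh` gives an orthogonal Q directly, and the eigenvalue -2 forces the complex entry sqrt(2)i):

```python
import numpy as np

A = np.array([[ 3.0, -1.0, 2.0],
              [-1.0,  3.0, 2.0],
              [ 2.0,  2.0, 0.0]])

evals, Q = np.linalg.eigh(A)                      # eigenvalues -2, 4, 4; Q orthogonal
D_sqrt = np.diag(np.sqrt(evals.astype(complex)))  # sqrt(-2) = sqrt(2) i
B = Q @ D_sqrt @ Q.T                              # B = Q D^{1/2} Q^T

assert np.allclose(B @ B, A)                      # B^2 = A, with complex entries in B
```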

boredofstudiesuser1

Active Member
$\bg_white \noindent If A = PDP^{-1} in general (D diagonal), then defining B= PD^{\frac{1}{2}}P^{-1}, we have B^{2} = A. Here D^{\frac{1}{2}} is a square root of D, which is a diagonal matrix with diagonal entries all being square roots of the entries in D. In your example, you can do this if you are willing to accept complex entries (a \sqrt{2}i for example). You can find out more about square roots of matrices here:$

https://en.wikipedia.org/wiki/Square_root_of_a_matrix
Woah, how do you do it so fast?

leehuan

Well-Known Member
I feel bad lol. I had the same idea as InteGrand, I just mucked up my matlab input when I went to check my answer
_______________

$\bg_white \text{Is this identity true?}\\ \text{proj}_W\textbf{v} = \textbf{v} - \text{proj}_{W^\perp}\textbf{v}$

InteGrand

Well-Known Member
I feel bad lol. I had the same idea as InteGrand, I just mucked up my matlab input when I went to check my answer
_______________

$\bg_white \text{Is this identity true?}\\ \text{proj}_W\textbf{v} = \textbf{v} - \text{proj}_{W^\perp}\textbf{v}$
Yes
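It also checks out numerically. Here is a NumPy sketch in R^3, with W the span of an arbitrary vector w (chosen just for illustration), using that the projection onto W-perp is I minus the projection onto W:

```python
import numpy as np

# proj_W as a matrix for W = span{w}, and proj onto W-perp as its complement.
w = np.array([[1.0], [1.0], [0.0]])
P = (w @ w.T) / (w.T @ w)       # proj_W
P_perp = np.eye(3) - P          # proj_{W-perp}

v = np.array([2.0, 3.0, 4.0])
assert np.allclose(P @ v, v - P_perp @ v)   # proj_W v = v - proj_{W-perp} v
```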

boredofstudiesuser1

Active Member
I feel bad lol. I had the same idea as InteGrand, I just mucked up my matlab input when I went to check my answer
_______________

$\bg_white \text{Is this identity true?}\\ \text{proj}_W\textbf{v} = \textbf{v} - \text{proj}_{W^\perp}\textbf{v}$
It's ok, we'll call you a machine too if it makes you feel better.