2.7 MCQs: Diagonalization of Matrices


Diagonalization of Matrices

Eigenvalues and Eigenvectors

1. An eigenvector $v$ of a square matrix $A$ satisfies:

  1. $Av = 0$

  2. $Av = \lambda v$ for some scalar $\lambda$

  3. $Av = A^T v$

  4. $Av = v$


Answer: 2. $Av = \lambda v$ for some scalar $\lambda$

Explanation: An eigenvector of a square matrix $A$ is a non-zero vector $v$ such that when $A$ multiplies $v$, the result is a scalar multiple of $v$. The scalar $\lambda$ is called the eigenvalue corresponding to that eigenvector.

The defining equation is:

$$Av = \lambda v, \quad v \neq 0$$

This can be rewritten as:

$$(A - \lambda I)v = 0$$

where $I$ is the identity matrix.

2. The eigenvalues of a matrix $A$ are found by solving:

  1. $\det(A) = 0$

  2. $\det(A - \lambda I) = 0$

  3. $\det(A + \lambda I) = 0$

  4. $A - \lambda I = 0$


Answer: 2. $\det(A - \lambda I) = 0$

Explanation: The eigenvalues $\lambda$ are the roots of the characteristic polynomial, which is obtained from:

$$\det(A - \lambda I) = 0$$

This equation comes from rewriting the eigenvector equation $Av = \lambda v$ as $(A - \lambda I)v = 0$. For non-zero solutions $v$ to exist, the matrix $(A - \lambda I)$ must be singular, meaning its determinant must be zero.

For a $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the characteristic equation is:

$$\lambda^2 - (a + d)\lambda + (ad - bc) = 0$$
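As a numerical illustration, NumPy can recover the characteristic polynomial and its roots; a minimal sketch, assuming NumPy is installed:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.poly of a square matrix returns the coefficients of its
# characteristic polynomial: here lambda^2 - 7*lambda + 10
coeffs = np.poly(A)
print(coeffs)                 # [ 1. -7. 10.]

# The eigenvalues are the roots of the characteristic polynomial
print(np.roots(coeffs))       # [5. 2.]
print(np.linalg.eigvals(A))   # same values
```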

3. For matrix $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, the eigenvalues are:

  1. 1 and 2

  2. 1 and 3

  3. 2 and 3

  4. 3 and 4


Answer: 2. 1 and 3

Explanation: Find the eigenvalues by solving $\det(A - \lambda I) = 0$:

$$\det\begin{pmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{pmatrix} = (2 - \lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = 0$$

Solve the quadratic equation:

$$(\lambda - 1)(\lambda - 3) = 0$$

Thus, the eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 3$.
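This is easy to confirm numerically; a minimal sketch, assuming NumPy is installed:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigvalsh is the right routine here because A is symmetric;
# it returns the eigenvalues in ascending order
print(np.linalg.eigvalsh(A))   # [1. 3.]
```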

4. The sum of eigenvalues of a matrix equals:

  1. The product of diagonal elements

  2. The trace of the matrix

  3. The determinant of the matrix

  4. The rank of the matrix


Answer: 2. The trace of the matrix

Explanation: For an $n \times n$ matrix $A$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$:

  1. Sum of eigenvalues = Trace of $A$: $\lambda_1 + \lambda_2 + \cdots + \lambda_n = \text{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn}$

  2. Product of eigenvalues = Determinant of $A$: $\lambda_1 \lambda_2 \cdots \lambda_n = \det(A)$

For a $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$:

  • Trace = $a + d$ = sum of eigenvalues

  • Determinant = $ad - bc$ = product of eigenvalues
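Both identities are easy to spot-check; a minimal sketch, assuming NumPy is installed:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])     # eigenvalues 2 and 5
eigvals = np.linalg.eigvals(A)

# Sum of eigenvalues equals the trace (2 + 5 = 4 + 3)
print(np.isclose(eigvals.sum(), np.trace(A)))        # True

# Product of eigenvalues equals the determinant (2 * 5 = 12 - 2)
print(np.isclose(eigvals.prod(), np.linalg.det(A)))  # True
```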

5. If $\lambda$ is an eigenvalue of $A$, then $\lambda^k$ is an eigenvalue of:

  1. $A^k$

  2. $kA$

  3. $A + kI$

  4. $A^{-1}$


Answer: 1. $A^k$

Explanation: If $\lambda$ is an eigenvalue of $A$ with eigenvector $v$ (so $Av = \lambda v$), then for any positive integer $k$:

$$A^2 v = A(Av) = A(\lambda v) = \lambda(Av) = \lambda^2 v$$

Continuing this pattern:

$$A^k v = \lambda^k v$$

Thus, $\lambda^k$ is an eigenvalue of $A^k$ with the same eigenvector $v$.

Note: If $A$ is invertible and $\lambda \neq 0$, then $\lambda^{-1}$ is an eigenvalue of $A^{-1}$.
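A quick numerical confirmation of $A^k v = \lambda^k v$; a sketch assuming NumPy is installed:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])    # eigenvector of A with eigenvalue 3
k = 4

lhs = np.linalg.matrix_power(A, k) @ v
rhs = 3.0**k * v
print(np.allclose(lhs, rhs))  # True: A^4 v = 3^4 v = 81 v
```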

Diagonalizability

6. A square matrix $A$ is diagonalizable if:

  1. It has distinct eigenvalues

  2. It can be written as $A = PDP^{-1}$ where $D$ is diagonal

  3. It is symmetric

  4. All of the above


Answer: 2. It can be written as $A = PDP^{-1}$ where $D$ is diagonal

Explanation: A matrix $A$ is diagonalizable if there exists an invertible matrix $P$ and a diagonal matrix $D$ such that:

$$A = PDP^{-1}$$

This is called the diagonalization of $A$. The matrix $P$ contains the eigenvectors of $A$ as its columns, and $D$ contains the corresponding eigenvalues on its diagonal.

The condition for diagonalizability is that $A$ must have $n$ linearly independent eigenvectors, where $n$ is the size of the matrix.

7. A necessary condition for a matrix to be diagonalizable is:

  1. It must be symmetric

  2. It must have distinct eigenvalues

  3. It must be invertible

  4. It must have a complete set of linearly independent eigenvectors


Answer: 4. It must have a complete set of linearly independent eigenvectors

Explanation: A matrix $A$ of size $n \times n$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors. These eigenvectors form the columns of the matrix $P$ in the diagonalization $A = PDP^{-1}$.

While having distinct eigenvalues is a sufficient condition for diagonalizability (it guarantees $n$ linearly independent eigenvectors), it is not necessary. Some matrices with repeated eigenvalues are still diagonalizable if they have enough independent eigenvectors.

Example: The identity matrix $I$ has eigenvalue 1 repeated $n$ times, but it is already diagonal and thus diagonalizable.

8. A symmetric matrix is always:

  1. Diagonalizable with real eigenvalues

  2. Orthogonally diagonalizable

  3. Both 1 and 2

  4. Neither 1 nor 2


Answer: 3. Both 1 and 2

Explanation: The Spectral Theorem states that every real symmetric matrix:

  1. Has all real eigenvalues

  2. Is orthogonally diagonalizable

This means a symmetric matrix $A$ can be diagonalized as:

$$A = QDQ^T$$

where $Q$ is an orthogonal matrix ($Q^T = Q^{-1}$) whose columns are orthonormal eigenvectors of $A$, and $D$ is a diagonal matrix containing the eigenvalues of $A$.

This is a stronger result than general diagonalizability, since $P^{-1}$ is simply $Q^T$.
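NumPy's `eigh` routine targets exactly this case; a minimal sketch of orthogonal diagonalization, assuming NumPy is installed:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # real symmetric

eigvals, Q = np.linalg.eigh(A)      # eigh: for symmetric/Hermitian matrices
D = np.diag(eigvals)

# Q is orthogonal: Q^T Q = I
print(np.allclose(Q.T @ Q, np.eye(2)))   # True

# A = Q D Q^T
print(np.allclose(Q @ D @ Q.T, A))       # True
```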

9. The matrix $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is:

  1. Diagonalizable

  2. Not diagonalizable

  3. Orthogonally diagonalizable

  4. Both diagonalizable and symmetric


Answer: 2. Not diagonalizable

Explanation: Find the eigenvalues:

$$\det(A - \lambda I) = (1 - \lambda)^2 = 0$$

So $\lambda = 1$ is the only eigenvalue (repeated twice).

Now find the eigenvectors by solving $(A - I)v = 0$:

The equation $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$ gives $y = 0$.

All eigenvectors are of the form $\begin{pmatrix} x \\ 0 \end{pmatrix}$, i.e., multiples of $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

We have only one linearly independent eigenvector, but a $2 \times 2$ matrix needs two to be diagonalizable. Therefore, this matrix is not diagonalizable.

This is a classic example of a defective matrix (a matrix that is not diagonalizable).
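Numerically, the defect shows up as an eigenvector matrix whose columns are not independent; a sketch assuming NumPy is installed:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigvals, V = np.linalg.eig(A)
print(eigvals)                    # [1. 1.] -- repeated eigenvalue

# The columns of V should be two independent eigenvectors, but here
# they span only a 1-dimensional space (rank 1, up to numerical tolerance)
print(np.linalg.matrix_rank(V))   # 1, so A is defective
```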

Diagonalization Process

10. In the diagonalization $A = PDP^{-1}$, the matrix $D$ contains:

  1. The eigenvectors of $A$

  2. The eigenvalues of $A$ on its diagonal

  3. The inverse of eigenvalues

  4. The trace and determinant of $A$


Answer: 2. The eigenvalues of $A$ on its diagonal

Explanation: In the diagonalization $A = PDP^{-1}$:

  • $P$ is the matrix whose columns are the eigenvectors of $A$ (in the same order as the eigenvalues in $D$)

  • $D$ is the diagonal matrix with the eigenvalues of $A$ on its diagonal:

    $$D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$

  • $P^{-1}$ is the inverse of the eigenvector matrix

The eigenvalues appear on the diagonal of $D$ in the same order as their corresponding eigenvectors appear as columns in $P$.

11. To diagonalize a matrix $A$, the correct order of steps is:

  1. Find eigenvalues, find eigenvectors, form $P$ and $D$

  2. Find eigenvectors, find eigenvalues, form $P$ and $D$

  3. Compute $P^{-1}$, find eigenvalues, find eigenvectors

  4. Form $D$, find eigenvalues, find eigenvectors


Answer: 1. Find eigenvalues, find eigenvectors, form $P$ and $D$

Explanation: The step-by-step procedure for diagonalizing a matrix $A$ is:

  1. Find the eigenvalues of $A$ by solving $\det(A - \lambda I) = 0$.

  2. Find the eigenvectors for each eigenvalue by solving $(A - \lambda I)v = 0$.

  3. Form matrix $P$ by taking the eigenvectors as columns (ensure they are linearly independent).

  4. Form diagonal matrix $D$ with the corresponding eigenvalues on the diagonal, in the same order as the eigenvectors in $P$.

  5. Verify that $A = PDP^{-1}$, or equivalently $AP = PD$.

The verification step ensures the diagonalization is correct: multiply $A$ by each column of $P$ and check that it equals the corresponding eigenvalue times that column.
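The procedure translates directly into a few lines of NumPy; a minimal sketch, assuming NumPy is installed:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: eig returns eigenvalues and eigenvectors together;
# the columns of P are the eigenvectors
eigvals, P = np.linalg.eig(A)

# Steps 3-4: D carries the eigenvalues on its diagonal, in the same order
D = np.diag(eigvals)

# Step 5: verify A = P D P^{-1}, or equivalently A P = P D
print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True
print(np.allclose(A @ P, P @ D))                 # True
```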

12. For the matrix $A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}$, one eigenvector corresponding to eigenvalue $\lambda = 5$ is:

  1. $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$

  2. $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$

  3. $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$

  4. $\begin{pmatrix} 2 \\ 1 \end{pmatrix}$


Answer: 1. $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$

Explanation: First, verify that $\lambda = 5$ is an eigenvalue:

$$\det(A - \lambda I) = (4 - \lambda)(3 - \lambda) - 2 = \lambda^2 - 7\lambda + 10 = (\lambda - 2)(\lambda - 5)$$

so the eigenvalues are 2 and 5.

Now find an eigenvector for $\lambda = 5$ by solving $(A - 5I)v = 0$:

$$\begin{pmatrix} -1 & 1 \\ 2 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

This gives the equation $-x + y = 0$, or $x = y$.

So the eigenvectors are of the form $\begin{pmatrix} t \\ t \end{pmatrix}$ for $t \neq 0$.

Choosing $t = 1$ gives the eigenvector $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$.
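Checking the answer is a single matrix-vector product; with NumPy (assumed installed):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
v = np.array([1.0, 1.0])

print(A @ v)                      # [5. 5.]
print(np.allclose(A @ v, 5 * v))  # True: Av = 5v
```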

Properties and Applications

13. If $A$ is diagonalizable, then $A^k$ equals:

  1. $P^k D^k (P^{-1})^k$

  2. $PD^kP^{-1}$

  3. $P^k D P^{-k}$

  4. $D^k$


Answer: 2. $PD^kP^{-1}$

Explanation: If $A = PDP^{-1}$, then powers of $A$ are easy to compute:

$$A^2 = (PDP^{-1})(PDP^{-1}) = PD(P^{-1}P)DP^{-1} = PD^2P^{-1}$$

In general:

$$A^k = PD^kP^{-1}$$

Since $D$ is diagonal, $D^k$ is just the diagonal matrix with each diagonal element raised to the $k$-th power.

This is one of the main computational advantages of diagonalization.

14. The matrix exponential $e^A$ for a diagonalizable matrix $A$ is:

  1. $Pe^D P^{-1}$

  2. $e^P D e^{P^{-1}}$

  3. $e^{PDP^{-1}}$

  4. $P^{-1}e^D P$


Answer: 1. $Pe^D P^{-1}$

Explanation: For a diagonalizable matrix $A = PDP^{-1}$, the matrix exponential is:

$$e^A = Pe^D P^{-1}$$

This works because the matrix exponential is defined by the power series:

$$e^A = \sum_{k=0}^{\infty} \frac{A^k}{k!}$$

If $A = PDP^{-1}$, then $A^k = PD^kP^{-1}$, so:

$$e^A = \sum_{k=0}^{\infty} \frac{PD^kP^{-1}}{k!} = P\left(\sum_{k=0}^{\infty} \frac{D^k}{k!}\right)P^{-1} = Pe^D P^{-1}$$

Since $D$ is diagonal, $e^D$ is simply the diagonal matrix with $e^{\lambda_i}$ on the diagonal, where $\lambda_i$ are the eigenvalues of $A$.
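SciPy's `expm` computes the matrix exponential directly, which makes the formula easy to check; a sketch assuming NumPy and SciPy are installed:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)
eD = np.diag(np.exp(eigvals))            # e^D: exponentiate each eigenvalue

via_diagonalization = P @ eD @ np.linalg.inv(P)
print(np.allclose(via_diagonalization, expm(A)))  # True
```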

15. Similar matrices have:

  1. The same eigenvalues

  2. The same eigenvectors

  3. The same determinant and trace

  4. Both 1 and 3


Answer: 4. Both 1 and 3

Explanation: Two matrices $A$ and $B$ are similar if there exists an invertible matrix $P$ such that $B = P^{-1}AP$.

Similar matrices share many properties:

  1. Same eigenvalues (with the same multiplicities)

  2. Same determinant: $\det(B) = \det(P^{-1}AP) = \det(P^{-1})\det(A)\det(P) = \det(A)$

  3. Same trace: $\text{tr}(B) = \text{tr}(P^{-1}AP) = \text{tr}(APP^{-1}) = \text{tr}(A)$

  4. Same characteristic polynomial

  5. Same rank

However, they do not necessarily have the same eigenvectors. If $v$ is an eigenvector of $A$, then $P^{-1}v$ is an eigenvector of $B$.
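These invariants can be spot-checked with a random change of basis; a sketch assuming NumPy is installed:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
P = rng.standard_normal((2, 2))     # almost surely invertible
B = np.linalg.inv(P) @ A @ P        # B is similar to A

print(np.allclose(np.sort(np.linalg.eigvals(B)),
                  np.sort(np.linalg.eigvals(A))))        # same eigenvalues
print(np.isclose(np.trace(B), np.trace(A)))              # same trace
print(np.isclose(np.linalg.det(B), np.linalg.det(A)))    # same determinant
```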

Special Cases and Conditions

16. A matrix with all distinct eigenvalues is:

  1. Always diagonalizable

  2. Always invertible

  3. Always symmetric

  4. Both 1 and 2


Answer: 1. Always diagonalizable

Explanation: If an $n \times n$ matrix has $n$ distinct eigenvalues, then the corresponding eigenvectors are linearly independent. This provides a complete set of $n$ linearly independent eigenvectors, which is exactly the condition needed for diagonalizability.

However, having distinct eigenvalues does not guarantee invertibility. A matrix is invertible if and only if 0 is not an eigenvalue. A matrix with distinct eigenvalues could have 0 as one of its eigenvalues, in which case it would not be invertible.

Example: $A = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ has distinct eigenvalues 0 and 1, so it is diagonalizable but not invertible.

17. A real matrix with complex eigenvalues:

  1. Cannot be diagonalized over the real numbers

  2. Can always be diagonalized over the complex numbers

  3. Is never diagonalizable

  4. Both 1 and 2


Answer: 4. Both 1 and 2

Explanation: If a real matrix has complex eigenvalues, they occur in conjugate pairs. Such a matrix cannot be diagonalized using only real numbers because the eigenvectors corresponding to complex eigenvalues will have complex entries.

However, if we allow complex numbers, then:

  1. The matrix can be diagonalized over the complex numbers (assuming it has a complete set of eigenvectors)

  2. The diagonal matrix $D$ will have complex entries (the eigenvalues)

  3. The matrix $P$ will have complex entries (the eigenvectors)

For example, the rotation matrix $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ has complex eigenvalues $e^{\pm i\theta}$ and cannot be diagonalized over the reals.
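The rotation matrix makes a good concrete test; a sketch assuming NumPy is installed:

```python
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigvals = np.linalg.eigvals(R)   # complex conjugate pair 0.5 +/- 0.866j
print(np.allclose(np.sort(eigvals),
                  np.sort([np.exp(1j * theta), np.exp(-1j * theta)])))  # True
```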

18. The minimal polynomial of a diagonalizable matrix:

  1. Has only linear factors with no repeated roots

  2. Is the same as the characteristic polynomial

  3. Has degree equal to the matrix size

  4. Cannot be determined from eigenvalues


Answer: 1. Has only linear factors with no repeated roots

Explanation: The minimal polynomial of a matrix $A$ is the monic polynomial $m$ of smallest degree such that $m(A) = 0$.

For a diagonalizable matrix:

  • The minimal polynomial has the form $m(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_k)$ where $\lambda_1, \lambda_2, \ldots, \lambda_k$ are the distinct eigenvalues.

  • Each linear factor appears only once (no repeated roots).

  • The degree of the minimal polynomial equals the number of distinct eigenvalues.

This is actually an equivalent condition: A matrix is diagonalizable if and only if its minimal polynomial has no repeated roots.

For non-diagonalizable matrices, the minimal polynomial has repeated roots.

19. For an orthogonal diagonalization $A = QDQ^T$, the columns of $Q$ are:

  1. Eigenvectors of $A$

  2. Orthonormal vectors

  3. Both 1 and 2

  4. Neither 1 nor 2


Answer: 3. Both 1 and 2

Explanation: In the orthogonal diagonalization $A = QDQ^T$, which is possible for real symmetric matrices:

  • $Q$ is an orthogonal matrix: $Q^T = Q^{-1}$

  • The columns of $Q$ are orthonormal eigenvectors of $A$

  • $D$ is a diagonal matrix with the eigenvalues of $A$ on its diagonal

The orthonormality means:

  1. Each column has unit length: $\|\mathbf{q}_i\| = 1$ for all $i$

  2. The columns are mutually orthogonal: $\mathbf{q}_i \cdot \mathbf{q}_j = 0$ for $i \neq j$

This is a special case of diagonalization where $P^{-1}$ is simply $Q^T$.

Computational Aspects

20. To compute $A^{100}$ efficiently for a diagonalizable matrix $A$:

  1. Multiply $A$ by itself 100 times

  2. Use the formula $A^{100} = PD^{100}P^{-1}$

  3. Find the eigenvalues and raise them to the 100th power

  4. Both 2 and 3


Answer: 4. Both 2 and 3

Explanation: For a diagonalizable matrix $A = PDP^{-1}$, computing high powers is efficient:

  1. Diagonalize $A$ to get $P$ and $D$

  2. Compute $D^{100}$ by raising each diagonal element (eigenvalue) to the 100th power

  3. Compute $A^{100} = PD^{100}P^{-1}$

This is much more efficient than multiplying $A$ by itself 100 times, especially for large matrices.

Example: If $A$ has eigenvalues $\lambda_1$ and $\lambda_2$, then:

$$D^{100} = \begin{pmatrix} \lambda_1^{100} & 0 \\ 0 & \lambda_2^{100} \end{pmatrix}$$

and $A^{100} = P\begin{pmatrix} \lambda_1^{100} & 0 \\ 0 & \lambda_2^{100} \end{pmatrix}P^{-1}$.
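A direct comparison of the two approaches; a sketch assuming NumPy is installed:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)
D100 = np.diag(eigvals**100)              # each eigenvalue to the 100th power
via_diag = P @ D100 @ np.linalg.inv(P)

direct = np.linalg.matrix_power(A, 100)   # repeated squaring, for comparison
print(np.allclose(via_diag, direct))      # True
```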

21. The determinant of a diagonalizable matrix $A = PDP^{-1}$ equals:

  1. The product of the diagonal elements of $D$

  2. The product of the eigenvalues of $A$

  3. Both 1 and 2

  4. The sum of the eigenvalues of $A$


Answer: 3. Both 1 and 2

Explanation: For a diagonalizable matrix $A = PDP^{-1}$:

$$\det(A) = \det(P)\det(D)\det(P^{-1}) = \det(D)$$

Since $D$ is diagonal, $\det(D)$ is the product of its diagonal elements, which are the eigenvalues of $A$.

Thus:

$$\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n$$

This is true for all square matrices (not just diagonalizable ones), but for diagonalizable matrices it is particularly obvious from the diagonalization.

22. If $A$ is diagonalizable and all its eigenvalues are positive, then:

  1. $A$ is positive definite

  2. $A$ is invertible

  3. $A$ is symmetric

  4. All of the above


Answer: 2. $A$ is invertible

Explanation: If all eigenvalues are positive, then:

  1. No eigenvalue is 0, so $\det(A) = \prod \lambda_i \neq 0$; thus $A$ is invertible.

  2. However, $A$ being positive definite requires not only positive eigenvalues but also that $A$ is symmetric. A diagonalizable matrix with positive eigenvalues is not necessarily symmetric.

  3. For example, $A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$ has eigenvalues 2 and 3 (both positive) and is diagonalizable but not symmetric.

So only statement 2 is necessarily true.

Defective Matrices and Jordan Form

23. A defective matrix is one that:

  1. Has all eigenvalues equal to zero

  2. Is not diagonalizable

  3. Has determinant zero

  4. Is not invertible


Answer: 2. Is not diagonalizable

Explanation: A defective matrix is a square matrix that does not have a complete set of linearly independent eigenvectors. In other words, it is not diagonalizable.

This occurs when the geometric multiplicity (number of linearly independent eigenvectors for an eigenvalue) is less than the algebraic multiplicity (multiplicity of the eigenvalue as a root of the characteristic polynomial).

All diagonalizable matrices are non-defective. All defective matrices are non-diagonalizable.

Example: $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is defective because the eigenvalue 1 has algebraic multiplicity 2 but geometric multiplicity 1.

24. Every square matrix can be written in:

  1. Diagonal form

  2. Jordan canonical form

  3. Orthogonal form

  4. Symmetric form


Answer: 2. Jordan canonical form

Explanation: While not every matrix is diagonalizable, every square matrix (over an algebraically closed field like the complex numbers) can be written in Jordan canonical form:

$$A = PJP^{-1}$$

where $J$ is a block diagonal matrix called the Jordan normal form. Each block (Jordan block) has an eigenvalue on the diagonal and 1s on the superdiagonal:

$$J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix}$$

The Jordan form generalizes diagonalization: diagonalizable matrices have Jordan blocks of size 1, while defective matrices have at least one Jordan block of size greater than 1.
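SymPy can compute the Jordan form exactly; a sketch using the defective matrix from question 9, assuming SymPy is installed:

```python
from sympy import Matrix

A = Matrix([[1, 1],
            [0, 1]])

# jordan_form returns (P, J) with A = P J P^{-1}
P, J = A.jordan_form()
print(J)                      # Matrix([[1, 1], [0, 1]]) -- one 2x2 Jordan block
print(A == P * J * P.inv())   # True
```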

Applications to Systems of Differential Equations

25. The solution to the system $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$, where $A$ is diagonalizable, is:

  1. $\mathbf{x}(t) = e^{At}\mathbf{x}(0)$

  2. $\mathbf{x}(t) = Pe^{Dt}P^{-1}\mathbf{x}(0)$

  3. $\mathbf{x}(t) = \sum c_i e^{\lambda_i t}\mathbf{v}_i$ where $\mathbf{v}_i$ are eigenvectors

  4. All of the above


Answer: 4. All of the above

Explanation: For the system of linear differential equations $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$ with initial condition $\mathbf{x}(0)$:

  1. The general solution is $\mathbf{x}(t) = e^{At}\mathbf{x}(0)$.

  2. If $A = PDP^{-1}$ is diagonalizable, then $e^{At} = Pe^{Dt}P^{-1}$, so $\mathbf{x}(t) = Pe^{Dt}P^{-1}\mathbf{x}(0)$.

  3. Alternatively, we can write the solution as a linear combination: $\mathbf{x}(t) = c_1 e^{\lambda_1 t}\mathbf{v}_1 + c_2 e^{\lambda_2 t}\mathbf{v}_2 + \cdots + c_n e^{\lambda_n t}\mathbf{v}_n$, where $\lambda_i$ are the eigenvalues, $\mathbf{v}_i$ are the corresponding eigenvectors, and the constants $c_i$ are determined from $\mathbf{x}(0)$.

This is one of the most important applications of diagonalization in applied mathematics.
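A numerical check that all three forms agree; a sketch assuming NumPy and SciPy are installed:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x0 = np.array([1.0, 0.0])
t = 0.5

# Form 1: x(t) = e^{At} x(0)
x1 = expm(A * t) @ x0

# Form 2: x(t) = P e^{Dt} P^{-1} x(0)
eigvals, P = np.linalg.eig(A)
x2 = P @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(P) @ x0

# Form 3: x(t) = sum_i c_i e^{lambda_i t} v_i, with c solved from x(0) = P c
c = np.linalg.solve(P, x0)
x3 = sum(c[i] * np.exp(eigvals[i] * t) * P[:, i] for i in range(2))

print(np.allclose(x1, x2), np.allclose(x1, x3))  # True True
```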

26. For a diagonalizable matrix $A$, the system $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$ is stable if:

  1. All eigenvalues have negative real parts

  2. All eigenvalues are real and negative

  3. The determinant of $A$ is negative

  4. The trace of $A$ is negative


Answer: 1. All eigenvalues have negative real parts

Explanation: For the system $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$, stability is determined by the eigenvalues of $A$:

  • If all eigenvalues have negative real parts, the system is asymptotically stable (solutions decay to zero).

  • If any eigenvalue has a positive real part, the system is unstable (some solutions grow without bound).

  • If eigenvalues have zero real parts (purely imaginary), the system is marginally stable (oscillations but no growth or decay).

For diagonalizable $A$, the solution is a linear combination of terms $e^{\lambda_i t}\mathbf{v}_i$. The term $e^{\lambda_i t}$ decays to zero if and only if $\text{Re}(\lambda_i) < 0$.

Quadratic Forms and Diagonalization

27. A quadratic form $Q(\mathbf{x}) = \mathbf{x}^T A \mathbf{x}$ can be diagonalized to:

  1. $\mathbf{y}^T D \mathbf{y}$ where $\mathbf{y} = P^{-1}\mathbf{x}$

  2. $\mathbf{y}^T D \mathbf{y}$ where $\mathbf{y} = P^T\mathbf{x}$

  3. $\mathbf{y}^T D \mathbf{y}$ where $D$ contains the eigenvalues of $A$

  4. Both 1 and 3


Answer: 4. Both 1 and 3

Explanation: For a symmetric matrix $A$ (which is always diagonalizable), the quadratic form $Q(\mathbf{x}) = \mathbf{x}^T A \mathbf{x}$ can be simplified by diagonalization.

If $A = PDP^T$ (orthogonal diagonalization), then:

$$Q(\mathbf{x}) = \mathbf{x}^T PDP^T \mathbf{x} = (P^T\mathbf{x})^T D (P^T\mathbf{x}) = \mathbf{y}^T D \mathbf{y}$$

where $\mathbf{y} = P^T \mathbf{x}$ (and note that $P^{-1} = P^T$ for orthogonal $P$).

The diagonal matrix $D$ contains the eigenvalues of $A$, so:

$$Q = \lambda_1 y_1^2 + \lambda_2 y_2^2 + \cdots + \lambda_n y_n^2$$

This is called the principal axes form of the quadratic form.
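A sketch of the change of variables with NumPy's `eigh`, which returns the orthogonal matrix directly (NumPy assumed installed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric, eigenvalues 1 and 3

eigvals, P = np.linalg.eigh(A)    # columns of P: orthonormal eigenvectors
x = np.array([1.0, 2.0])

y = P.T @ x                       # change of variables y = P^T x
q_original = x @ A @ x            # x^T A x
q_principal = eigvals @ y**2      # lambda_1 y_1^2 + lambda_2 y_2^2

print(np.isclose(q_original, q_principal))  # True
```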

28. The type of conic section represented by $ax^2 + bxy + cy^2 = 1$ can be determined by:

  1. Diagonalizing the corresponding symmetric matrix

  2. Looking at the signs of the eigenvalues

  3. Both 1 and 2

  4. Neither 1 nor 2


Answer: 3. Both 1 and 2

Explanation: The quadratic form $Q(x,y) = ax^2 + bxy + cy^2$ corresponds to the symmetric matrix:

$$A = \begin{pmatrix} a & b/2 \\ b/2 & c \end{pmatrix}$$

By diagonalizing $A$, we get $Q(x,y) = \lambda_1 x'^2 + \lambda_2 y'^2$ in rotated coordinates.

The type of conic section $ax^2 + bxy + cy^2 = 1$ is determined by the eigenvalues $\lambda_1$ and $\lambda_2$:

  • Ellipse: Both eigenvalues positive (or both negative)

  • Hyperbola: One positive and one negative eigenvalue

  • Parabola: One eigenvalue zero (degenerate case)

  • Circle: Both eigenvalues equal and positive

This is a classic application of diagonalization in geometry.

Computational Linear Algebra

29. The condition number of a diagonalizable matrix is related to:

  1. The ratio of largest to smallest eigenvalue (in absolute value)

  2. The spread of eigenvalues

  3. Both 1 and 2

  4. Neither 1 nor 2


Answer: 3. Both 1 and 2

Explanation: For a diagonalizable matrix $A$, the condition number $\kappa(A)$ (with respect to the 2-norm) is:

$$\kappa(A) = \frac{\sigma_{\text{max}}}{\sigma_{\text{min}}}$$

where $\sigma_{\text{max}}$ and $\sigma_{\text{min}}$ are the largest and smallest singular values.

For normal matrices (those with a complete set of orthogonal eigenvectors, which includes all symmetric matrices), the singular values are the absolute values of the eigenvalues, so:

$$\kappa(A) = \frac{|\lambda_{\text{max}}|}{|\lambda_{\text{min}}|}$$

A large condition number (a wide spread of eigenvalues) indicates that the matrix is ill-conditioned: small changes in the input can cause large changes in the output when solving linear systems.
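NumPy exposes the 2-norm condition number directly; a sketch comparing it with the eigenvalue ratio for a symmetric matrix, assuming NumPy is installed:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # symmetric, eigenvalues 1 and 3

kappa = np.linalg.cond(A, 2)            # sigma_max / sigma_min
eigvals = np.abs(np.linalg.eigvalsh(A))
print(kappa)                            # 3.0
print(eigvals.max() / eigvals.min())    # 3.0 -- matches for symmetric A
```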

30. The power method for finding the dominant eigenvalue of a matrix works best when:

  1. The matrix is diagonalizable

  2. The dominant eigenvalue is well-separated from others

  3. The matrix is large and sparse

  4. All of the above


Answer: 4. All of the above

Explanation: The power method is an iterative algorithm for finding the eigenvalue of largest magnitude (the dominant eigenvalue). It works by repeatedly multiplying a vector by the matrix $A$.

The method works best when:

  1. The matrix is diagonalizable: it then has a complete set of eigenvectors.

  2. The dominant eigenvalue is well-separated: $|\lambda_1| > |\lambda_2| \geq |\lambda_3| \geq \cdots$. The larger the gap, the faster the convergence.

  3. The matrix is large and sparse: The power method only requires matrix-vector multiplication, which is efficient for sparse matrices.

The method converges to the eigenvector corresponding to the dominant eigenvalue, and the eigenvalue can be estimated using the Rayleigh quotient.
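A minimal power-iteration sketch (the function name and parameters are illustrative, not from a library), assuming NumPy is installed:

```python
import numpy as np

def power_method(A, num_iters=100, seed=0):
    """Estimate the dominant eigenvalue and eigenvector of A by power iteration."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])   # random start vector
    for _ in range(num_iters):
        x = A @ x
        x /= np.linalg.norm(x)            # normalize to avoid overflow
    # Rayleigh quotient x^T A x (x already has unit length)
    return x @ A @ x, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_method(A)
print(lam)   # ~3.0, the dominant eigenvalue
print(v)     # ~[0.707, 0.707] (up to sign), the unit eigenvector
```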
