2.7 MCQs-Diagonalization of Matrices
Eigenvalues and Eigenvectors
1. An eigenvector v of a square matrix A satisfies:
Av=0
Av=λv for some scalar λ
Av=Aᵀv
Av=v
Show me the answer
Answer: 2. Av=λv for some scalar λ
Explanation: An eigenvector of a square matrix A is a non-zero vector v such that when A multiplies v, the result is a scalar multiple of v. The scalar λ is called the eigenvalue corresponding to that eigenvector.
The defining equation is:
Av=λv, with v≠0
This can be rewritten as:
(A−λI)v=0
where I is the identity matrix.
2. The eigenvalues of a matrix A are found by solving:
det(A)=0
det(A−λI)=0
det(A+λI)=0
A−λI=0
Show me the answer
Answer: 2. det(A−λI)=0
Explanation: The eigenvalues λ are the roots of the characteristic polynomial, which is obtained from:
det(A−λI)=0
This equation comes from rewriting the eigenvector equation Av=λv as (A−λI)v=0. For non-zero solutions v to exist, the matrix (A−λI) must be singular, meaning its determinant must be zero.
For a 2×2 matrix A=(a b; c d):
det(A−λI)=(a−λ)(d−λ)−bc=λ²−(a+d)λ+(ad−bc)
3. For matrix A=(2 1; 1 2), the eigenvalues are:
1 and 2
1 and 3
2 and 3
3 and 4
Show me the answer
Answer: 2. 1 and 3
Explanation: Find eigenvalues by solving det(A−λI)=0:
det(A−λI)=(2−λ)(2−λ)−(1)(1)=λ²−4λ+3
Solve the quadratic equation:
λ²−4λ+3=(λ−1)(λ−3)=0
Thus, eigenvalues are λ1=1 and λ2=3.
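As a quick numerical check, the same eigenvalues can be computed with NumPy (a sketch, assuming NumPy is available):

```python
import numpy as np

# The matrix from question 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues are the roots of det(A - lambda*I) = lambda^2 - 4*lambda + 3
eigenvalues = np.sort(np.linalg.eigvals(A))
print(eigenvalues)  # approximately [1. 3.]
```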
4. The sum of eigenvalues of a matrix equals:
The product of diagonal elements
The trace of the matrix
The determinant of the matrix
The rank of the matrix
Show me the answer
Answer: 2. The trace of the matrix
Explanation: For an n×n matrix A with eigenvalues λ1,λ2,…,λn:
Sum of eigenvalues = Trace of A: λ1+λ2+⋯+λn=tr(A)=a11+a22+⋯+ann
Product of eigenvalues = Determinant of A: λ1λ2⋯λn=det(A)
For a 2×2 matrix A=(a b; c d):
Trace = a+d = sum of eigenvalues
Determinant = ad−bc = product of eigenvalues
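Both identities are easy to verify numerically (a sketch, assuming NumPy is available):

```python
import numpy as np

# Any square matrix works; this one is the matrix from question 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues = np.linalg.eigvals(A)

# Sum of eigenvalues equals the trace; product equals the determinant
assert np.isclose(eigenvalues.sum(), np.trace(A))        # 1 + 3 == 2 + 2
assert np.isclose(eigenvalues.prod(), np.linalg.det(A))  # 1 * 3 == 2*2 - 1*1
```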
5. If λ is an eigenvalue of A, then λᵏ is an eigenvalue of:
Aᵏ
kA
A+kI
A⁻¹
Show me the answer
Answer: 1. Aᵏ
Explanation: If λ is an eigenvalue of A with eigenvector v (so Av=λv), then for any positive integer k:
A²v=A(Av)=A(λv)=λ(Av)=λ²v
Continuing this pattern:
Aᵏv=λᵏv
Thus, λᵏ is an eigenvalue of Aᵏ with the same eigenvector v.
Note: If A is invertible, then λ≠0 and λ⁻¹ is an eigenvalue of A⁻¹.
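Both facts can be checked numerically. A sketch (assuming NumPy is available), using the matrix from question 12, whose eigenvalues are 2 and 5:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # eigenvalues 2 and 5 (see question 12)
k = 3

eig_A = np.sort(np.linalg.eigvals(A))
eig_Ak = np.sort(np.linalg.eigvals(np.linalg.matrix_power(A, k)))

# Eigenvalues of A^k are the k-th powers of the eigenvalues of A
assert np.allclose(eig_A ** k, eig_Ak)

# Eigenvalues of A^-1 are the reciprocals (A is invertible: no zero eigenvalue)
eig_Ainv = np.sort(np.linalg.eigvals(np.linalg.inv(A)))
assert np.allclose(np.sort(1.0 / eig_A), eig_Ainv)
```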
Diagonalizability
6. A square matrix A is diagonalizable if:
It has distinct eigenvalues
It can be written as A=PDP⁻¹ where D is diagonal
It is symmetric
All of the above
Show me the answer
Answer: 2. It can be written as A=PDP⁻¹ where D is diagonal
Explanation: A matrix A is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that:
A=PDP⁻¹
This is called the diagonalization of A. The matrix P contains the eigenvectors of A as its columns, and D contains the corresponding eigenvalues on its diagonal.
The condition for diagonalizability is that A must have n linearly independent eigenvectors, where n is the size of the matrix.
7. A necessary condition for a matrix to be diagonalizable is:
It must be symmetric
It must have distinct eigenvalues
It must be invertible
It must have a complete set of linearly independent eigenvectors
Show me the answer
Answer: 4. It must have a complete set of linearly independent eigenvectors
Explanation: A matrix A of size n×n is diagonalizable if and only if it has n linearly independent eigenvectors. These eigenvectors form the columns of the matrix P in the diagonalization A=PDP⁻¹.
While having distinct eigenvalues is a sufficient condition for diagonalizability (it guarantees n linearly independent eigenvectors), it is not necessary. Some matrices with repeated eigenvalues can still be diagonalizable if they have enough independent eigenvectors.
Example: The identity matrix I has eigenvalue 1 repeated n times, but it's already diagonal and thus diagonalizable.
8. A symmetric matrix is always:
Diagonalizable with real eigenvalues
Orthogonally diagonalizable
Both 1 and 2
Neither 1 nor 2
Show me the answer
Answer: 3. Both 1 and 2
Explanation: The Spectral Theorem states that every real symmetric matrix:
Has all real eigenvalues
Is orthogonally diagonalizable
This means a symmetric matrix A can be diagonalized as:
A=QDQᵀ
where Q is an orthogonal matrix (Qᵀ=Q⁻¹) whose columns are orthonormal eigenvectors of A, and D is a diagonal matrix containing the eigenvalues of A.
This is a stronger result than general diagonalizability, as P⁻¹ is simply Qᵀ.
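NumPy's eigh routine exploits exactly this structure. A sketch of orthogonal diagonalization (assuming NumPy is available):

```python
import numpy as np

# A real symmetric matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is specialized for symmetric/Hermitian matrices: it returns real
# eigenvalues and an orthogonal matrix Q of eigenvectors
eigenvalues, Q = np.linalg.eigh(A)
D = np.diag(eigenvalues)

assert np.allclose(Q.T @ Q, np.eye(2))   # Q is orthogonal: Q^T Q = I
assert np.allclose(Q @ D @ Q.T, A)       # A = Q D Q^T
```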
9. The matrix A=(1 1; 0 1) is:
Diagonalizable
Not diagonalizable
Orthogonally diagonalizable
Both diagonalizable and symmetric
Show me the answer
Answer: 2. Not diagonalizable
Explanation: Find eigenvalues:
det(A−λI)=(1−λ)²=0
So λ=1 is the only eigenvalue (repeated twice).
Now find eigenvectors by solving (A−I)v=0:
The equation (0 1; 0 0)(x, y)=(0, 0) gives y=0.
All eigenvectors are of the form (x, 0), i.e., multiples of (1, 0).
We only have one linearly independent eigenvector, but we need two for a 2×2 matrix to be diagonalizable. Therefore, this matrix is not diagonalizable.
This is a classic example of a defective matrix (a matrix that is not diagonalizable).
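Defectiveness shows up numerically too: np.linalg.eig still returns two eigenvector columns, but they are (numerically) parallel. A sketch, assuming NumPy is available:

```python
import numpy as np

# The shear matrix from question 9: eigenvalue 1 repeated, one eigenvector
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, V = np.linalg.eig(A)
assert np.allclose(eigenvalues, [1.0, 1.0])   # algebraic multiplicity 2

# The two returned eigenvector columns are numerically linearly dependent,
# so no invertible P of eigenvectors exists and A != P D P^-1
assert abs(np.linalg.det(V)) < 1e-8
```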
Diagonalization Process
10. In the diagonalization A=PDP⁻¹, the matrix D contains:
The eigenvectors of A
The eigenvalues of A on its diagonal
The inverse of eigenvalues
The trace and determinant of A
Show me the answer
Answer: 2. The eigenvalues of A on its diagonal
Explanation: In the diagonalization A=PDP⁻¹:
P is the matrix whose columns are the eigenvectors of A (in the same order as the eigenvalues in D)
D is the diagonal matrix with the eigenvalues of A on its diagonal: D=diag(λ1, λ2, …, λn)
P⁻¹ is the inverse of the eigenvector matrix
The eigenvalues appear on the diagonal of D in the same order as their corresponding eigenvectors appear as columns in P.
11. To diagonalize a matrix A, the correct order of steps is:
Find eigenvalues, find eigenvectors, form P and D
Find eigenvectors, find eigenvalues, form P and D
Compute P⁻¹, find eigenvalues, find eigenvectors
Form D, find eigenvalues, find eigenvectors
Show me the answer
Answer: 1. Find eigenvalues, find eigenvectors, form P and D
Explanation: The step-by-step procedure for diagonalizing a matrix A is:
Find the eigenvalues of A by solving det(A−λI)=0.
Find the eigenvectors for each eigenvalue by solving (A−λI)v=0.
Form matrix P by taking the eigenvectors as columns (ensure they are linearly independent).
Form diagonal matrix D with the corresponding eigenvalues on the diagonal, in the same order as the eigenvectors in P.
Verify that A=PDP⁻¹ or equivalently AP=PD.
The verification step ensures the diagonalization is correct: multiply A by each column of P and check that it equals the corresponding eigenvalue times that column.
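The steps above can be sketched with NumPy (np.linalg.eig performs steps 1 and 2 in one call; this assumes A is diagonalizable and NumPy is available):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: eig returns the eigenvalues and, as columns of P, the eigenvectors
eigenvalues, P = np.linalg.eig(A)

# Steps 3-4: form D from the eigenvalues, in the same order as the columns of P
D = np.diag(eigenvalues)

# Step 5: verify A P = P D, and equivalently A = P D P^-1
assert np.allclose(A @ P, P @ D)
assert np.allclose(P @ D @ np.linalg.inv(P), A)
```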
12. For the matrix A=(4 1; 2 3), one eigenvector corresponding to eigenvalue λ=5 is:
(1, 1)
(1, −1)
(1, 2)
(2, 1)
Show me the answer
Answer: 1. (1, 1)
Explanation: First, verify that λ=5 is an eigenvalue:
det(A−5I)=(4−5)(3−5)−(1)(2)=2−2=0
Now find the eigenvector for λ=5 by solving (A−5I)v=0:
A−5I=(−1 1; 2 −2)
This gives the equation −x+y=0, or x=y.
So eigenvectors are of the form (t, t) for t≠0.
Choosing t=1 gives the eigenvector (1, 1).
Properties and Applications
13. If A is diagonalizable, then Aᵏ equals:
PᵏDᵏ(P⁻¹)ᵏ
PDᵏP⁻¹
PᵏDP⁻ᵏ
Dᵏ
Show me the answer
Answer: 2. PDᵏP⁻¹
Explanation: If A=PDP⁻¹, then powers of A are easy to compute:
A²=(PDP⁻¹)(PDP⁻¹)=PD(P⁻¹P)DP⁻¹=PD²P⁻¹
In general:
Aᵏ=PDᵏP⁻¹
Since D is diagonal, Dᵏ is just the diagonal matrix with each diagonal element raised to the k-th power.
This is one of the main computational advantages of diagonalization.
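A sketch of this shortcut (assuming NumPy is available), compared against direct repeated multiplication:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
k = 10

eigenvalues, P = np.linalg.eig(A)

# D^k is just the diagonal matrix of lambda_i^k, so A^k = P D^k P^-1
Dk = np.diag(eigenvalues ** k)
Ak = P @ Dk @ np.linalg.inv(P)

assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```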
14. The matrix exponential e^A for a diagonalizable matrix A is:
P e^D P⁻¹
e^P D e^(P⁻¹)
e^P D P⁻¹
P⁻¹ e^D P
Show me the answer
Answer: 1. P e^D P⁻¹
Explanation: For a diagonalizable matrix A=PDP⁻¹, the matrix exponential is:
e^A=P e^D P⁻¹
This works because the matrix exponential is defined by the power series:
e^A=I+A+A²/2!+A³/3!+⋯
If A=PDP⁻¹, then Aᵏ=PDᵏP⁻¹ for every k, so:
e^A=P(I+D+D²/2!+⋯)P⁻¹=P e^D P⁻¹
Since D is diagonal, e^D is simply the diagonal matrix with e^(λi) on the diagonal, where λi are the eigenvalues of A.
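A sketch (assuming NumPy is available) that computes e^A via the eigendecomposition and checks it against a truncated power series:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)

# e^D is diagonal with e^(lambda_i) on the diagonal, so e^A = P e^D P^-1
expA = P @ np.diag(np.exp(eigenvalues)) @ np.linalg.inv(P)

# Compare against the truncated power series I + A + A^2/2! + ...
series = np.zeros_like(A)
term = np.eye(2)
for n in range(1, 30):
    series += term          # add A^(n-1)/(n-1)!
    term = term @ A / n     # next term A^n/n!

assert np.allclose(expA, series)
```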
15. Similar matrices have:
The same eigenvalues
The same eigenvectors
The same determinant and trace
Both 1 and 3
Show me the answer
Answer: 4. Both 1 and 3
Explanation: Two matrices A and B are similar if there exists an invertible matrix P such that B=P⁻¹AP.
Similar matrices share many properties:
Same eigenvalues (with same multiplicities)
Same determinant: det(B)=det(P⁻¹AP)=det(P⁻¹)det(A)det(P)=det(A)
Same trace: tr(B)=tr(P⁻¹AP)=tr(APP⁻¹)=tr(A)
Same characteristic polynomial
Same rank
However, they do not necessarily have the same eigenvectors. If v is an eigenvector of A, then P⁻¹v is an eigenvector of B.
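These shared invariants are easy to confirm numerically. A sketch (assuming NumPy is available) with an arbitrarily chosen invertible P:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Build a similar matrix B = P^-1 A P from a random (invertible) P;
# the shift by 3I keeps P safely away from singularity
P = rng.standard_normal((2, 2)) + 3 * np.eye(2)
B = np.linalg.inv(P) @ A @ P

# Similar matrices: same eigenvalues, trace, and determinant
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))
assert np.isclose(np.trace(A), np.trace(B))
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
```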
Special Cases and Conditions
16. A matrix with all distinct eigenvalues is:
Always diagonalizable
Always invertible
Always symmetric
Both 1 and 2
Show me the answer
Answer: 1. Always diagonalizable
Explanation: If an n×n matrix has n distinct eigenvalues, then the corresponding eigenvectors are linearly independent. This provides a complete set of n linearly independent eigenvectors, which is the condition needed for diagonalizability.
However, having distinct eigenvalues does not guarantee invertibility. A matrix is invertible if and only if 0 is not an eigenvalue. A matrix with distinct eigenvalues could have 0 as one of its eigenvalues, in which case it would not be invertible.
Example: A=(0 0; 0 1) has distinct eigenvalues 0 and 1, so it's diagonalizable but not invertible.
17. A real matrix with complex eigenvalues:
Cannot be diagonalized over the real numbers
Can always be diagonalized over the complex numbers
Is never diagonalizable
Both 1 and 2
Show me the answer
Answer: 4. Both 1 and 2
Explanation: If a real matrix has complex eigenvalues, they occur in conjugate pairs. Such a matrix cannot be diagonalized using only real numbers because the eigenvectors corresponding to complex eigenvalues will have complex entries.
However, if we allow complex numbers, then:
The matrix can be diagonalized over the complex numbers (assuming it has a complete set of eigenvectors)
The diagonal matrix D will have complex entries (the eigenvalues)
The matrix P will have complex entries (the eigenvectors)
For example, the rotation matrix (cos θ −sin θ; sin θ cos θ) has complex eigenvalues e^(±iθ) and, for θ not a multiple of π, cannot be diagonalized over the reals.
18. The minimal polynomial of a diagonalizable matrix:
Has only linear factors with no repeated roots
Is the same as the characteristic polynomial
Has degree equal to the matrix size
Cannot be determined from eigenvalues
Show me the answer
Answer: 1. Has only linear factors with no repeated roots
Explanation: The minimal polynomial of a matrix A is the monic polynomial of smallest degree such that m(A)=0.
For a diagonalizable matrix:
The minimal polynomial has the form m(λ)=(λ−λ1)(λ−λ2)⋯(λ−λk) where λ1,λ2,…,λk are the distinct eigenvalues.
Each linear factor appears only once (no repeated roots).
The degree of the minimal polynomial equals the number of distinct eigenvalues.
This is actually an equivalent condition: A matrix is diagonalizable if and only if its minimal polynomial has no repeated roots.
For non-diagonalizable matrices, the minimal polynomial has repeated roots.
19. For an orthogonal diagonalization A=QDQᵀ, the columns of Q are:
Eigenvectors of A
Orthonormal vectors
Both 1 and 2
Neither 1 nor 2
Show me the answer
Answer: 3. Both 1 and 2
Explanation: In orthogonal diagonalization A=QDQᵀ, which is possible for real symmetric matrices:
Q is an orthogonal matrix: Qᵀ=Q⁻¹
The columns of Q are orthonormal eigenvectors of A
D is a diagonal matrix with the eigenvalues of A on its diagonal
The orthonormality means:
Each column has unit length: ∥qi∥=1 for all i
Columns are mutually orthogonal: qi⋅qj=0 for i≠j
This is a special case of diagonalization where P⁻¹ is simply Qᵀ.
Computational Aspects
20. To compute A¹⁰⁰ efficiently for a diagonalizable matrix A:
Multiply A by itself 100 times
Use the formula A¹⁰⁰=PD¹⁰⁰P⁻¹
Find eigenvalues and raise them to the 100th power
Both 2 and 3
Show me the answer
Answer: 4. Both 2 and 3
Explanation: For a diagonalizable matrix A=PDP⁻¹, computing high powers is efficient:
Diagonalize A to get P and D
Compute D¹⁰⁰ by raising each diagonal element (eigenvalue) to the 100th power
Compute A¹⁰⁰=PD¹⁰⁰P⁻¹
This is much more efficient than multiplying A by itself 100 times, especially for large matrices.
Example: If A has eigenvalues λ1 and λ2, then:
D¹⁰⁰=diag(λ1¹⁰⁰, λ2¹⁰⁰)
And A¹⁰⁰=P diag(λ1¹⁰⁰, λ2¹⁰⁰) P⁻¹.
21. The determinant of a diagonalizable matrix A=PDP⁻¹ equals:
The product of diagonal elements of D
The product of eigenvalues of A
Both 1 and 2
The sum of eigenvalues of A
Show me the answer
Answer: 3. Both 1 and 2
Explanation: For a diagonalizable matrix A=PDP⁻¹:
det(A)=det(P)det(D)det(P⁻¹)=det(D)
Since D is diagonal, det(D) is the product of its diagonal elements, which are the eigenvalues of A.
Thus:
det(A)=λ1λ2⋯λn
This is true for all square matrices (not just diagonalizable ones), but for diagonalizable matrices it's particularly obvious from the diagonalization.
22. If A is diagonalizable and all its eigenvalues are positive, then:
A is positive definite
A is invertible
A is symmetric
All of the above
Show me the answer
Answer: 2. A is invertible
Explanation: If all eigenvalues are positive, then:
No eigenvalue is 0, so det(A)=∏λi≠0, thus A is invertible.
However, A being positive definite requires not only positive eigenvalues but also that A is symmetric. A diagonalizable matrix with positive eigenvalues is not necessarily symmetric.
For example, A=(2 1; 0 3) has eigenvalues 2 and 3 (both positive) and is diagonalizable but not symmetric.
So only statement 2 is necessarily true.
Defective Matrices and Jordan Form
23. A defective matrix is one that:
Has all eigenvalues equal to zero
Is not diagonalizable
Has determinant zero
Is not invertible
Show me the answer
Answer: 2. Is not diagonalizable
Explanation: A defective matrix is a square matrix that does not have a complete set of linearly independent eigenvectors. In other words, it is not diagonalizable.
This occurs when the geometric multiplicity (number of linearly independent eigenvectors for an eigenvalue) is less than the algebraic multiplicity (multiplicity of the eigenvalue as a root of the characteristic polynomial).
All diagonalizable matrices are non-defective. All defective matrices are non-diagonalizable.
Example: A=(1 1; 0 1) is defective because eigenvalue 1 has algebraic multiplicity 2 but geometric multiplicity 1.
24. Every square matrix can be written in:
Diagonal form
Jordan canonical form
Orthogonal form
Symmetric form
Show me the answer
Answer: 2. Jordan canonical form
Explanation: While not every matrix is diagonalizable, every square matrix (over an algebraically closed field like the complex numbers) can be written in Jordan canonical form:
A=PJP⁻¹
where J is a block diagonal matrix called the Jordan normal form. Each block (Jordan block) has a single eigenvalue λ repeated on its diagonal, 1s on the superdiagonal, and 0s elsewhere; for example, a 2×2 Jordan block is (λ 1; 0 λ).
The Jordan form generalizes diagonalization: diagonalizable matrices have Jordan blocks of size 1, while defective matrices have at least one Jordan block of size greater than 1.
Applications to Systems of Differential Equations
25. The solution to the system dx/dt=Ax, where A is diagonalizable, is:
x(t)=e^(At)x(0)
x(t)=P e^(Dt) P⁻¹x(0)
x(t)=∑ ci e^(λi t) vi, where vi are eigenvectors
All of the above
Show me the answer
Answer: 4. All of the above
Explanation: For the system of linear differential equations dx/dt=Ax with initial condition x(0):
The general solution is x(t)=e^(At)x(0).
If A=PDP⁻¹ is diagonalizable, then e^(At)=P e^(Dt) P⁻¹, so x(t)=P e^(Dt) P⁻¹x(0).
Alternatively, we can write the solution as a linear combination: x(t)=c1 e^(λ1 t)v1+c2 e^(λ2 t)v2+⋯+cn e^(λn t)vn, where λi are eigenvalues, vi are corresponding eigenvectors, and ci are constants determined from x(0).
This is one of the most important applications of diagonalization in applied mathematics.
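A sketch (assuming NumPy is available) for a hypothetical 2×2 system: build x(t)=P e^(Dt) P⁻¹x(0) from the eigendecomposition and confirm it satisfies dx/dt=Ax with a finite-difference check:

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])   # eigenvalues -1 and -3: a stable system
x0 = np.array([1.0, 0.0])

eigenvalues, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)

def x(t):
    # x(t) = P e^(Dt) P^-1 x(0); e^(Dt) is diagonal with e^(lambda_i t)
    return P @ np.diag(np.exp(eigenvalues * t)) @ Pinv @ x0

# Check dx/dt = A x at t = 0.5 with a centered finite difference
t, h = 0.5, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(dxdt, A @ x(t), atol=1e-6)
```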
26. For a diagonalizable matrix A, the system dx/dt=Ax is stable if:
All eigenvalues have negative real parts
All eigenvalues are real and negative
The determinant of A is negative
The trace of A is negative
Show me the answer
Answer: 1. All eigenvalues have negative real parts
Explanation: For the system dx/dt=Ax, the stability is determined by the eigenvalues of A:
If all eigenvalues have negative real parts, the system is asymptotically stable (solutions decay to zero).
If any eigenvalue has positive real part, the system is unstable (some solutions grow without bound).
If eigenvalues have zero real parts (purely imaginary), the system is marginally stable (oscillations but no growth or decay).
For diagonalizable A, the solution is a linear combination of terms e^(λi t)vi. The term e^(λi t) decays to zero if and only if Re(λi)<0.
Quadratic Forms and Diagonalization
27. A quadratic form Q(x)=xᵀAx can be diagonalized to:
yᵀDy where y=P⁻¹x
yᵀDy where y=Pᵀx
yᵀDy where D contains eigenvalues of A
Both 1 and 3
Show me the answer
Answer: 4. Both 1 and 3
Explanation: For a symmetric matrix A (which is always diagonalizable), the quadratic form Q(x)=xᵀAx can be simplified by diagonalization.
If A=PDPᵀ (orthogonal diagonalization), then:
Q(x)=xᵀAx=xᵀPDPᵀx=yᵀDy
where y=Pᵀx (and note that P⁻¹=Pᵀ for orthogonal P).
The diagonal matrix D contains the eigenvalues of A, so:
Q=λ1y1²+λ2y2²+⋯+λnyn²
This is called the principal axes form of the quadratic form.
28. The type of conic section represented by ax²+bxy+cy²=1 can be determined by:
Diagonalizing the corresponding symmetric matrix
Looking at the signs of the eigenvalues
Both 1 and 2
Neither 1 nor 2
Show me the answer
Answer: 3. Both 1 and 2
Explanation: The quadratic form Q(x,y)=ax²+bxy+cy² corresponds to the symmetric matrix:
A=(a b/2; b/2 c)
By diagonalizing A, we get Q(x,y)=λ1x′²+λ2y′² in rotated coordinates.
The type of conic section ax²+bxy+cy²=1 is determined by the eigenvalues λ1 and λ2:
Ellipse: Both eigenvalues positive (or both negative)
Hyperbola: One positive and one negative eigenvalue
Parabola: One eigenvalue zero (degenerate case)
Circle: Both eigenvalues equal and positive
This is a classic application of diagonalization in geometry.
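The classification rules above can be sketched as a small helper (assuming NumPy is available; classify_conic is a hypothetical name for illustration):

```python
import numpy as np

def classify_conic(a, b, c):
    """Classify a*x^2 + b*x*y + c*y^2 = 1 via the eigenvalues of
    the symmetric matrix (a b/2; b/2 c)."""
    A = np.array([[a, b / 2],
                  [b / 2, c]])
    l1, l2 = np.linalg.eigvalsh(A)   # symmetric input, sorted real eigenvalues
    if np.isclose(l1 * l2, 0):
        return "parabola"            # a zero eigenvalue: degenerate case
    if l1 * l2 < 0:
        return "hyperbola"           # opposite signs
    return "circle" if np.isclose(l1, l2) else "ellipse"   # same sign

print(classify_conic(1, 0, 1))   # circle
print(classify_conic(2, 1, 2))   # ellipse
print(classify_conic(1, 4, 1))   # hyperbola
```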
Computational Linear Algebra
29. The condition number of a diagonalizable matrix is related to:
The ratio of largest to smallest eigenvalue (in absolute value)
The spread of eigenvalues
Both 1 and 2
Neither 1 nor 2
Show me the answer
Answer: 3. Both 1 and 2
Explanation: For a diagonalizable matrix A, the condition number κ(A) (with respect to the 2-norm) is:
κ(A)=σmax/σmin
where σmax and σmin are the largest and smallest singular values.
For normal matrices (which include symmetric matrices and, more generally, matrices with orthogonal eigenvectors), the singular values are the absolute values of the eigenvalues, so:
κ(A)=|λ|max/|λ|min
A large condition number (large spread of eigenvalues) indicates that the matrix is ill-conditioned, meaning small changes in input can cause large changes in output when solving linear systems.
30. The power method for finding the dominant eigenvalue of a matrix works best when:
The matrix is diagonalizable
The dominant eigenvalue is well-separated from others
The matrix is large and sparse
All of the above
Show me the answer
Answer: 4. All of the above
Explanation: The power method is an iterative algorithm to find the eigenvalue with the largest magnitude (dominant eigenvalue). It works by repeatedly multiplying a vector by the matrix A.
The method works best when:
The matrix is diagonalizable: So it has a complete set of eigenvectors.
The dominant eigenvalue is well-separated: |λ1|>|λ2|≥|λ3|≥⋯. The larger the gap, the faster the convergence.
The matrix is large and sparse: The power method only requires matrix-vector multiplication, which is efficient for sparse matrices.
The method converges to the eigenvector corresponding to the dominant eigenvalue, and the eigenvalue can be estimated using the Rayleigh quotient.
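A minimal sketch of the power method with a Rayleigh-quotient estimate (assuming NumPy is available; power_method is a hypothetical name for illustration):

```python
import numpy as np

def power_method(A, num_iters=100):
    """Estimate the dominant eigenvalue/eigenvector of A by repeatedly
    multiplying a vector by A and normalizing."""
    v = np.ones(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v = v / np.linalg.norm(v)
    # Rayleigh quotient v.A.v / v.v estimates the eigenvalue (v is unit-length)
    eigenvalue = v @ A @ v
    return eigenvalue, v

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # eigenvalues 5 and 2: well-separated
eigenvalue, v = power_method(A)
assert np.isclose(eigenvalue, 5.0)
assert np.allclose(A @ v, eigenvalue * v)
```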