Algebraic vs. Geometric Multiplicity: The Dimensions of Eigenvalues
In the world of linear algebra, eigenvalues and eigenvectors represent the fundamental 'axes' of a linear transformation—the directions that remain unchanged, only scaled. But what happens when an eigenvalue is 'repeated'? This repetition isn't a simple concept; it has two distinct, crucial measures: Algebraic Multiplicity and Geometric Multiplicity. Understanding the difference between them unlocks a deeper insight into the nature of matrices and their transformations.
A Quick Eigen-Recap: The Foundation
Before diving into multiplicities, let's recall the core idea. For a square matrix $$A$$, an eigenvector $$\vec{v}$$ and its corresponding eigenvalue $$\lambda$$ satisfy the equation:
$$A\vec{v} = \lambda\vec{v}$$
This means that when the transformation $$A$$ is applied to the vector $$\vec{v}$$, the result is the same vector, just scaled by the factor $$\lambda$$.
To find these eigenvalues, we solve the characteristic equation, which is derived from $$(A - \lambda I)\vec{v} = \vec{0}$$ and requires finding the roots of the characteristic polynomial:
$$\det(A - \lambda I) = 0$$
The roots of this polynomial are our eigenvalues. The nature of these roots is where our story of multiplicity begins.
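The defining relation $$A\vec{v} = \lambda\vec{v}$$ is easy to verify numerically. A minimal sketch using NumPy, with an illustrative matrix `A` that is not from the text:

```python
import numpy as np

# An illustrative 2x2 matrix: scale the x-axis by 3 and the y-axis by 2.
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

lam = eigenvalues[0]
v = eigenvectors[:, 0]
ok = np.allclose(A @ v, lam * v)   # A v equals lambda v, up to float tolerance
```

Applying `A` to an eigenvector returns the same vector scaled by its eigenvalue, exactly as the equation promises.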
Understanding Algebraic Multiplicity (AM): The Polynomial's Story
The Algebraic Multiplicity (AM) of an eigenvalue is perhaps the more straightforward of the two concepts. It's all about the characteristic polynomial.
Definition: Algebraic Multiplicity (AM)
The algebraic multiplicity of an eigenvalue $$\lambda$$ is the number of times it appears as a root in the characteristic polynomial. In other words, it's the power of the factor $$(x - \lambda)$$ in the factored polynomial.
Analogy: Counting Repeated Roots
Imagine you have a simple polynomial equation like $$(x-5)(x-5)(x-5)(x+2) = 0$$, or more compactly, $$(x-5)^3(x+2)^1 = 0$$. The roots are $$x=5$$ and $$x=-2$$. The root $$x=5$$ appears three times, so its multiplicity is 3. The root $$x=-2$$ appears once, so its multiplicity is 1. Algebraic multiplicity for eigenvalues works exactly the same way.
Example: Calculating Algebraic Multiplicity
Let's consider the matrix $$B$$:
$$ B = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} $$
1. Find the characteristic polynomial, $$\det(B - \lambda I)$$:
$$ \det\left( \begin{pmatrix} 2-\lambda & 1 & 0 \\ 0 & 2-\lambda & 0 \\ 0 & 0 & 1-\lambda \end{pmatrix} \right) = (2-\lambda) \cdot \det\left( \begin{pmatrix} 2-\lambda & 0 \\ 0 & 1-\lambda \end{pmatrix} \right) $$
$$ = (2-\lambda)(2-\lambda)(1-\lambda) = (2-\lambda)^2(1-\lambda) $$
2. Identify the roots and their multiplicities:
- The root $$\lambda_1 = 2$$ appears twice. Therefore, the Algebraic Multiplicity of $$\lambda=2$$ is 2.
- The root $$\lambda_2 = 1$$ appears once. Therefore, the Algebraic Multiplicity of $$\lambda=1$$ is 1.
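This calculation can be confirmed symbolically. A short sketch using SymPy, whose `eigenvals()` method returns each eigenvalue together with its algebraic multiplicity:

```python
from sympy import Matrix, symbols, factor

B = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 1]])

# The characteristic polynomial, factored to expose repeated roots.
lam = symbols('lambda')
char_poly = factor(B.charpoly(lam).as_expr())   # (lambda - 2)**2 * (lambda - 1)

# eigenvals() maps each eigenvalue to its algebraic multiplicity.
am = B.eigenvals()   # {2: 2, 1: 1}
```

The factored polynomial and the multiplicity dictionary both report AM = 2 for $$\lambda=2$$ and AM = 1 for $$\lambda=1$$.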
Exploring Geometric Multiplicity (GM): The Vector's Story
Geometric Multiplicity (GM) moves beyond the polynomial and looks at the eigenvectors themselves. It answers a fundamentally geometric question: for a given eigenvalue, how rich is its corresponding space of eigenvectors?
Definition: Geometric Multiplicity (GM)
The geometric multiplicity of an eigenvalue $$\lambda$$ is the number of linearly independent eigenvectors associated with it. This is equivalent to the dimension of the eigenspace for $$\lambda$$.
Analogy: Independent Avenues of Scaling
Suppose a transformation has the eigenvalue $$\lambda=3$$: there's a direction where vectors are simply stretched to 3 times their original length. The geometric multiplicity tells you how many *independent* directions share this same scaling behavior. If GM=2 for $$\lambda=3$$, there's a whole *plane* of vectors where every vector gets scaled by 3. If GM=1, there's only a single *line* of vectors that get scaled by 3.
How to Calculate Geometric Multiplicity: A Step-by-Step Guide
- For a given eigenvalue $$\lambda$$, set up the equation $$(B - \lambda I)\vec{x} = \vec{0}$$. This is the task of finding the null space (or kernel) of the matrix $$(B - \lambda I)$$.
- Use Gaussian elimination to reduce the matrix $$(B - \lambda I)$$ to its Row-Echelon Form.
- Count the number of 'free variables' in your system of equations. A free variable corresponds to a column without a pivot (a leading entry).
- The number of free variables is the dimension of the null space, which is exactly the geometric multiplicity of $$\lambda$$.
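The steps above can be sketched in SymPy, which computes a basis for the null space directly; the rank-nullity theorem gives the same count without listing the basis:

```python
from sympy import Matrix, eye

B = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 1]])
lam = 2

M = B - lam * eye(3)     # the matrix (B - lambda*I)
basis = M.nullspace()    # a basis for the null space (the eigenspace)
gm = len(basis)          # geometric multiplicity = dim of the null space

# Rank-nullity cross-check: dim(null space) = n - rank
same = (gm == 3 - M.rank())
```

Each basis vector returned by `nullspace()` corresponds to one free variable of the reduced system.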
Example: Calculating Geometric Multiplicity
Let's use our same matrix $$B$$ and its eigenvalues $$\lambda=2$$ and $$\lambda=1$$.
Case 1: Eigenvalue $$\lambda_1 = 2$$ (AM = 2)
1. Set up $$(B - 2I)\vec{x} = \vec{0}$$:
$$ (B - 2I) = \begin{pmatrix} 2-2 & 1 & 0 \\ 0 & 2-2 & 0 \\ 0 & 0 & 1-2 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix} $$
2. Solve the system:
$$ \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} $$
This gives us the equations $$x_2 = 0$$ and $$-x_3 = 0$$ (so $$x_3=0$$). Notice that there are no constraints on $$x_1$$. It can be any value. Thus, $$x_1$$ is our one and only free variable. The solution vector is:
$$ \vec{x} = \begin{pmatrix} x_1 \\ 0 \\ 0 \end{pmatrix} = x_1 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} $$
3. Find the GM: Since there is only one free variable ($$x_1$$), there is only one linearly independent eigenvector for this eigenvalue. Therefore, the Geometric Multiplicity of $$\lambda=2$$ is 1.
Case 2: Eigenvalue $$\lambda_2 = 1$$ (AM = 1)
We find the eigenspace for $$\lambda=1$$. Here $$(B-I)\vec{x}=\vec{0}$$ gives the equations $$x_1+x_2=0$$ and $$x_2=0$$, so $$x_1=x_2=0$$ while $$x_3$$ is unconstrained. This has one free variable ($$x_3$$), so the basis for the eigenspace is the single vector $$\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$. The Geometric Multiplicity of $$\lambda=1$$ is 1.
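Both cases can be checked at once with SymPy's `eigenvects()`, which returns each eigenvalue alongside its algebraic multiplicity and a basis for its eigenspace:

```python
from sympy import Matrix

B = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 1]])

# eigenvects() yields (eigenvalue, AM, eigenspace basis) triples;
# GM is the number of basis vectors in the eigenspace.
results = {lam: (am, len(basis)) for lam, am, basis in B.eigenvects()}
# results[2] == (2, 1): AM = 2 but GM = 1
# results[1] == (1, 1): AM = 1 and GM = 1
```

This matches the hand computation: $$\lambda=2$$ has a deficit (GM below AM), while $$\lambda=1$$ does not.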
The Crucial Relationship: Connecting AM and GM
Now we can compare the multiplicities for our matrix $$B$$:
- For $$\lambda=2$$: AM = 2, but GM = 1.
- For $$\lambda=1$$: AM = 1 and GM = 1.
This reveals a fundamental theorem of linear algebra.
The Golden Rule of Multiplicities
For any eigenvalue $$\lambda$$ of a matrix $$A$$, its geometric and algebraic multiplicities are related by the following inequality:
$$ 1 \le GM(\lambda) \le AM(\lambda) $$
The geometric multiplicity can never be greater than the algebraic multiplicity. It can be equal, or it can be less, but never more.
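A quick sketch verifying the inequality $$1 \le GM \le AM$$ holds for every eigenvalue of our matrix $$B$$:

```python
from sympy import Matrix

B = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 1]])

# eigenvects() yields (eigenvalue, algebraic multiplicity, eigenspace basis);
# the inequality 1 <= GM <= AM must hold for each eigenvalue.
all_hold = all(1 <= len(basis) <= am
               for _lam, am, basis in B.eigenvects())
```

Of course, the theorem guarantees this for any square matrix, not just $$B$$.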
Defective vs. Diagonalizable Matrices
This relationship is not just a mathematical curiosity; it's the defining characteristic that separates matrices into two important classes.
- Defective: An eigenvalue is called 'defective' if its Geometric Multiplicity is strictly less than its Algebraic Multiplicity ($$GM < AM$$). A matrix with at least one defective eigenvalue is called a defective matrix. Our matrix $$B$$ is defective because for $$\lambda=2$$, its GM (1) is less than its AM (2).
- Non-Defective (Diagonalizable): An eigenvalue is 'non-defective' if $$GM = AM$$. A matrix is non-defective or, more commonly, diagonalizable, if and only if all of its eigenvalues are non-defective.
Why This Distinction Matters: The Big Picture
The gap between AM and GM tells you something profound about the geometry of the transformation.
- Algebraic Multiplicity (AM) is the 'potential' or 'expected' dimension of the eigenspace based on the matrix's characteristic DNA.
- Geometric Multiplicity (GM) is the 'actual' dimension of the eigenspace that truly exists.
When $$GM < AM$$, it means the matrix transformation is 'deficient' in some way. It doesn't have enough independent directions of pure scaling to form a complete basis for the vector space. The transformation involves a 'shear' component in addition to scaling, which collapses dimensions and prevents a full set of eigenvectors from emerging.
The Key to Diagonalization
A square matrix $$A$$ is diagonalizable if it can be written as $$A = PDP^{-1}$$, where $$D$$ is a diagonal matrix of eigenvalues and $$P$$ is an invertible matrix whose columns are the corresponding eigenvectors.
This powerful decomposition is only possible if you can find a basis for the entire vector space that consists entirely of eigenvectors. For an $$n \times n$$ matrix, this requires finding $$n$$ linearly independent eigenvectors.
The condition for this is simple: A matrix is diagonalizable if and only if the geometric multiplicity of every eigenvalue equals its algebraic multiplicity.
Since our matrix $$B$$ is defective ($$GM(2) < AM(2)$$), it is not diagonalizable. We simply cannot find enough linearly independent eigenvectors to form the matrix $$P$$.
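SymPy can confirm this verdict directly via `is_diagonalizable()`. For contrast, the sketch below also checks a hypothetical comparison matrix (not from the article) with the same eigenvalues but no defect:

```python
from sympy import Matrix

# The defective matrix from the article: GM(2) = 1 < AM(2) = 2.
B = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 1]])
b_diag = B.is_diagonalizable()      # False

# Hypothetical comparison: same eigenvalues, but already diagonal,
# so every eigenvalue has GM = AM.
C = Matrix([[2, 0, 0],
            [0, 2, 0],
            [0, 0, 1]])
c_diag = C.is_diagonalizable()      # True
```

The off-diagonal 1 in $$B$$ is exactly the shear component that robs $$\lambda=2$$ of its second independent eigenvector.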
Conclusion
Algebraic and geometric multiplicity provide two different lenses for viewing the same phenomenon. Algebraic multiplicity is a count from a polynomial, giving the 'expected' number of dimensions for an eigenspace. Geometric multiplicity is a count of actual, linearly independent eigenvectors, giving the 'true' dimension of that eigenspace.
The relationship $$1 \le GM \le AM$$ is a fundamental constraint, and the question of whether $$GM=AM$$ for all eigenvalues is the ultimate test for whether a matrix can be simplified to its purest form—a diagonal matrix. This distinction is not just academic; it underpins countless applications, from solving systems of differential equations to modeling population growth and analyzing quantum states.