In [[mathematics]], the '''dimension theorem for vector spaces''' states that all [[Basis (linear algebra)|bases]] of a [[vector space]] have equally many elements. This number of elements may be finite, or given by an infinite [[cardinal number]], and defines the [[Dimension (vector space)|dimension]] of the space.
Formally, the '''dimension theorem for vector spaces''' states that

:Given a [[vector space]] ''V'', any two [[linearly independent]] [[generating set]]s (in other words, any two bases) have the same [[cardinality]].

If ''V'' is [[finitely generated module|finitely generated]], then it has a finite basis, and the result says that any two bases have the same number of elements.
While the proof of the existence of a basis for any vector space in the general case requires [[Zorn's lemma]] and is in fact equivalent to the [[axiom of choice]], the uniqueness of the cardinality of the basis requires only the [[ultrafilter lemma]],<ref>Howard, P., Rubin, J.: "Consequences of the axiom of choice" - Mathematical Surveys and Monographs, vol 59 (1998) ISSN 0076-5376.</ref> which is strictly weaker (the proof given below, however, assumes [[trichotomy (mathematics)|trichotomy]], i.e., that all [[cardinal number]]s are comparable, a statement which is also equivalent to the axiom of choice). The theorem can be generalized to arbitrary [[module (mathematics)|''R''-modules]] for rings ''R'' having [[invariant basis number]].

The theorem for the finitely generated case can be proved with elementary arguments of [[linear algebra]], and requires no form of the axiom of choice.
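As a numerical illustration of the finite case (not part of the theorem's proof; this sketch assumes NumPy and uses floating-point rank computations, and the two bases are made-up examples), one can check that two different bases of '''R'''<sup>3</sup> each consist of exactly three vectors:

```python
import numpy as np

# Two candidate bases of R^3, written as the rows of a matrix.
basis_1 = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])   # the standard basis
basis_2 = np.array([[1.0, 1.0, 0.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0]])   # another basis

for basis in (basis_1, basis_2):
    # A square matrix of full rank has linearly independent rows
    # that also span R^3, so its rows form a basis.
    assert np.linalg.matrix_rank(basis) == basis.shape[0]

# The dimension theorem: both bases have the same number of elements.
assert len(basis_1) == len(basis_2) == 3
```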
==Proof==
Assume that { ''a''<sub>''i''</sub>: ''i'' ∈ ''I'' } and { ''b''<sub>''j''</sub>: ''j'' ∈ ''J'' } are both bases, with the cardinality of ''I'' bigger than the cardinality of ''J''. From this assumption we will derive a contradiction.

===Case 1===
Assume that ''I'' is infinite.
Every ''b''<sub>''j''</sub> can be written as a finite sum
:<math>b_j = \sum_{i\in E_j} \lambda_{i,j} a_i </math>, where <math>E_j</math> is a finite subset of <math>I</math>.

Since the cardinality of ''I'' is greater than that of ''J'' and the ''E''<sub>''j''</sub>'s are finite subsets of ''I'', the cardinality of ''I'' is also bigger than the cardinality of <math>\bigcup_{j\in J} E_j</math>. (Note that this argument works ''only'' for infinite ''I''.) So there is some <math>i_0\in I</math> which does not appear in any <math>E_j</math>. The corresponding <math>a_{i_0}</math> can be expressed as a finite linear combination of <math>b_j</math>'s, which in turn can be expressed as a finite linear combination of <math>a_i</math>'s not involving <math>a_{i_0}</math>. Hence <math>a_{i_0}</math> is linearly dependent on the other <math>a_i</math>'s, which contradicts the linear independence of the basis { ''a''<sub>''i''</sub>: ''i'' ∈ ''I'' }.
===Case 2===
Now assume that ''I'' is finite and of cardinality bigger than the cardinality of ''J''. Write ''m'' and ''n'' for the cardinalities of ''I'' and ''J'', respectively.

Every ''a''<sub>''i''</sub> can be written as a sum
:<math>a_i = \sum_{j\in J} \mu_{i,j} b_j </math>

The matrix <math> (\mu_{i,j}: i\in I, j\in J)</math> has ''n'' columns (the ''j''-th column is the ''m''-tuple <math> (\mu_{i,j}: i\in I)</math>), so it has rank at most ''n''. Since ''m'' > ''n'', this means that [[Rank (linear algebra)#Proofs that column rank = row rank|its ''m'' rows cannot be linearly independent]]. Write <math>r_i = (\mu_{i,j}: j\in J)</math> for the ''i''-th row; then there is a nontrivial linear combination
:<math> \sum_{i\in I} \nu_i r_i = 0.</math>

But then also
:<math>\sum_{i\in I} \nu_i a_i = \sum_{i\in I} \nu_i \sum_{j\in J} \mu_{i,j} b_j = \sum_{j\in J} \biggl(\sum_{i\in I} \nu_i\mu_{i,j} \biggr) b_j = 0, </math>
so the <math>a_i</math> are linearly dependent, contradicting the assumption that they form a basis.
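The rank argument in Case 2 can be traced numerically. The sketch below (an illustration only; it assumes NumPy, and the coefficient matrix <code>mu</code> is a made-up example) takes ''m'' = 3 vectors expressed in terms of ''n'' = 2 spanning vectors and recovers a nontrivial left null vector ν of the coefficient matrix:

```python
import numpy as np

# mu[i, j] is the coefficient of b_j in the expansion of a_i.
mu = np.array([[1.0, 2.0],
               [3.0, 4.0],
               [5.0, 6.0]])            # shape (m, n) with m > n

m, n = mu.shape
assert np.linalg.matrix_rank(mu) <= n  # rank is at most n < m

# Because rank < m, the rows r_i are linearly dependent: find a
# nontrivial nu with nu @ mu = 0, i.e. a null vector of mu^T.
_, s, vt = np.linalg.svd(mu.T)
null_space = vt[np.linalg.matrix_rank(mu):]  # rows spanning ker(mu^T)
nu = null_space[0]

assert np.linalg.norm(nu) > 0.9        # nontrivial (a unit vector)
assert np.allclose(nu @ mu, 0.0)       # the combination of rows vanishes
```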
====Alternative proof====
The proof above uses several non-trivial results. If these results are not carefully established in advance, the proof may give rise to circular reasoning. Here is a proof of the finite case which requires less prior development.

'''Theorem 1:''' If <math>A = (a_1,\dots,a_n) \subseteq V</math> is a linearly independent [[tuple]] in a vector space <math>V</math>, and <math>B_0 = (b_1,\dots,b_r)</math> is a tuple that [[spanning set|spans]] <math>V</math>, then <math>n\leq r</math>.<ref>S. Axler, "Linear Algebra Done Right," Springer, 2000.</ref> The argument is as follows:
Since <math>B_0</math> spans <math>V</math>, the tuple <math>(a_1,b_1,\dots,b_r)</math> also spans, and it is linearly dependent, because <math>a_1\neq 0</math> (as <math>A</math> is linearly independent) is a linear combination of the <math>b_i</math>'s. Hence there is at least one <math>t \in \{1,\ldots,r\}</math> such that <math>b_{t}</math> can be written as a linear combination of the remaining vectors. Thus, <math>B_1 = (a_1,b_1,\dots,b_{t-1}, b_{t+1},\dots, b_r)</math> is again a [[spanning set|spanning tuple]], and its length is the same as <math>B_0</math>'s.

Repeat this process, prepending the next element of <math>A</math> and removing one of the remaining <math>b_i</math>'s. We can always remove a <math>b_i</math> rather than one of the previously prepended <math>a_j</math>'s: if the newly added element were a linear combination of the previously added <math>a_j</math>'s alone, then <math>A</math> would be linearly dependent, so some <math>b_i</math> must carry a nonzero coefficient in the dependency relation. Thus, after <math>n</math> iterations, the result is a spanning tuple <math>B_n = (a_1, \ldots, a_n, b_{m_1}, \ldots, b_{m_k})</math> (possibly with <math>k=0</math>) of length <math>r</math>. In particular, <math>A \subseteq B_n</math>, so <math>|A| \leq |B_n|</math>, i.e., <math>n \leq r</math>.
To prove the finite case of the dimension theorem from this, suppose that <math>V</math> is a vector space and <math>S = \{v_1, \ldots, v_n\}</math> and <math>T = \{w_1, \ldots, w_m\}</math> are both bases of <math>V</math>. Since <math>S</math> is linearly independent and <math>T</math> spans, we can apply Theorem 1 to get <math>m \geq n</math>. And since <math>T</math> is linearly independent and <math>S</math> spans, we get <math>n \geq m</math>. From these, we get <math>m=n</math>.
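The exchange process of Theorem 1 can be sketched as a small procedure. This is an illustrative implementation, not the article's proof: it assumes NumPy, finds each dependency relation with a floating-point SVD, and the function name <code>exchange</code> is made up for this sketch:

```python
import numpy as np

def exchange(independent, spanning):
    """Steinitz exchange sketch: repeatedly prepend a vector of the
    independent tuple and remove one of the original spanning vectors,
    so the length of the spanning tuple never changes."""
    current = [(False, b) for b in spanning]   # False marks an original b
    for a in reversed(independent):
        current.insert(0, (True, a))           # prepend the next a
        # The enlarged tuple is linearly dependent: a null vector of
        # the matrix whose columns are the current vectors gives a
        # dependency relation among them.
        mat = np.column_stack([v for _, v in current])
        _, _, vt = np.linalg.svd(mat)
        coeffs = vt[-1]                        # a unit null vector
        # Remove a b (never an a) with a nonzero coefficient; one must
        # exist, since the a's alone are linearly independent.
        t = max((i for i, (is_a, _) in enumerate(current) if not is_a),
                key=lambda i: abs(coeffs[i]))
        assert abs(coeffs[t]) > 1e-9
        current.pop(t)
    return [v for _, v in current]

# Example: 2 independent vectors and 3 spanning vectors in R^3.
A = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
B = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0]),
     np.array([1.0, 0.0, 1.0])]
result = exchange(A, B)
assert len(result) == len(B)   # the length r is preserved, so n <= r
```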
==Kernel extension theorem for vector spaces==
This application of the dimension theorem is sometimes itself called the ''dimension theorem''. Let

:''T'': ''U'' → ''V''

be a [[linear transformation]]. Then

:''dim''(''range''(''T'')) + ''dim''(''kernel''(''T'')) = ''dim''(''U''),

that is, the dimension of ''U'' is equal to the dimension of the transformation's [[Range (mathematics)|range]] plus the dimension of the [[Kernel (algebra)|kernel]]. See [[rank-nullity theorem]] for a fuller discussion.
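This identity can be checked numerically for a concrete map. The sketch below (an illustration assuming NumPy; the matrix <code>T</code> is an arbitrary made-up example) computes ''dim''(''range''(''T'')) as the matrix rank and extracts a kernel basis from the SVD:

```python
import numpy as np

# A linear transformation T : U -> V with U = R^4, V = R^3,
# given by a matrix acting on column vectors.
T = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])   # third row = first + second

dim_U = T.shape[1]                      # dimension of the domain U
rank = np.linalg.matrix_rank(T)         # dim(range(T))

# A basis of the kernel: the rows of V^T beyond the rank span ker(T).
_, s, vt = np.linalg.svd(T)
kernel_basis = vt[np.count_nonzero(s > 1e-10):]
for k in kernel_basis:
    assert np.allclose(T @ k, 0.0)      # each really lies in ker(T)

# dim(range(T)) + dim(kernel(T)) = dim(U)
assert rank + len(kernel_basis) == dim_U
```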
==References==
<references />

{{DEFAULTSORT:Dimension Theorem For Vector Spaces}}
[[Category:Theorems in abstract algebra]]
[[Category:Theorems in linear algebra]]
[[Category:Articles containing proofs]]