Vector Spaces in Abstract Algebra

To put it plainly, "this is kinda a mind fuck". Class notes are accessible at April 25th. This is needed to understand Fields, specifically Extension Fields and Splitting Fields.

How to think about it

If you have done linear algebra or multivariable calc, it might work against you here. We are used to two- or three-dimensional vector spaces, such as $(x, y)$ or $(x, y, z)$, which can be added together or multiplied by a scalar.

However, in abstract algebra, the goal is to strip out the physical attributes and think more abstractly. Then you notice that we can define vectors in $n$ dimensions with some simple definitions, and many things we usually do not think of as vectors become vectors, such as sets in cryptography, or abstract algebra for ML where you might have attributes in $10!+$ dimensions.

Many examples from the book were done on paper.

Definition of a Vector Space

A vector space $V$ over a field $F$ is an abelian group with a scalar product $\alpha\nu$ defined for all $\alpha \in F$ and all $\nu \in V$, which satisfies the following axioms, given $\alpha, \beta \in F$ (scalars) and $\mu, \nu \in V$ (vectors):

  1. $\alpha(\beta\nu) = (\alpha\beta)\nu$ (associativity of scalar multiplication)
  2. $(\alpha + \beta)\nu = \alpha\nu + \beta\nu$ (distributivity of scalar addition over a vector)
  3. $\alpha(\nu + \mu) = \alpha\nu + \alpha\mu$ (distributivity of a scalar over vector addition)
  4. $1\nu = \nu$
    Notice that most of the time two vectors cannot be multiplied; it is always scalar times vector. They can, however, be added. I'll differentiate the additive scalar identity and the additive vector identity by writing $0_s$ and $0_v$.

Examples:

$\mathbb{R}^n$ over $\mathbb{R}$ is a vector space.

Clearly, you can multiply a scalar from $\mathbb{R}$ with a vector from $\mathbb{R}^n$. We define scalar-times-vector multiplication componentwise:

$$s\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} sa \\ sb \end{pmatrix}$$

Let's show the following axioms within $\mathbb{R}^2$.

  1. Associativity: $s_1\left(s_2\begin{pmatrix} a \\ b \end{pmatrix}\right) = s_1\begin{pmatrix} s_2 a \\ s_2 b \end{pmatrix} = \begin{pmatrix} s_1 s_2 a \\ s_1 s_2 b \end{pmatrix} = (s_1 s_2)\begin{pmatrix} a \\ b \end{pmatrix}$
  2. First distributivity: $(s_1 + s_2)v_1 = \begin{pmatrix} (s_1 + s_2)a \\ (s_1 + s_2)b \end{pmatrix} = \begin{pmatrix} s_1 a + s_2 a \\ s_1 b + s_2 b \end{pmatrix} = \begin{pmatrix} s_1 a \\ s_1 b \end{pmatrix} + \begin{pmatrix} s_2 a \\ s_2 b \end{pmatrix} = s_1 v_1 + s_2 v_1$
  3. Second distributivity: $s_1(v_1 + v_2) = s_1\left(\begin{pmatrix} a \\ b \end{pmatrix} + \begin{pmatrix} c \\ d \end{pmatrix}\right) = s_1\begin{pmatrix} a + c \\ b + d \end{pmatrix} = \begin{pmatrix} s_1 a + s_1 c \\ s_1 b + s_1 d \end{pmatrix} = \begin{pmatrix} s_1 a \\ s_1 b \end{pmatrix} + \begin{pmatrix} s_1 c \\ s_1 d \end{pmatrix} = s_1 v_1 + s_1 v_2$
  4. Identity: $1\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 1a \\ 1b \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}$

Clearly $\mathbb{R}^2$ is a vector space over $\mathbb{R}$. Going forward I will be less rigorous with examples unless absolutely needed, since we have built more intuition.
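To build intuition, here is a minimal Python sketch (my own illustration, not from the book) that spot-checks all four axioms on random vectors, using exact `Fraction` arithmetic so the equality checks are not thrown off by floating-point round-off. The helper names `rand_scalar`, `scale`, and `add` are mine:

```python
from fractions import Fraction
import random

def rand_scalar():
    # random rational scalar; exact arithmetic avoids float round-off
    return Fraction(random.randint(-9, 9), random.randint(1, 9))

def scale(s, v):
    # scalar multiplication: s(a, b) = (sa, sb)
    return (s * v[0], s * v[1])

def add(u, v):
    # vector addition: (a, b) + (c, d) = (a + c, b + d)
    return (u[0] + v[0], u[1] + v[1])

for _ in range(1000):
    s1, s2 = rand_scalar(), rand_scalar()
    u = (rand_scalar(), rand_scalar())
    v = (rand_scalar(), rand_scalar())
    assert scale(s1, scale(s2, v)) == scale(s1 * s2, v)             # axiom 1
    assert add(scale(s1, v), scale(s2, v)) == scale(s1 + s2, v)     # axiom 2
    assert scale(s1, add(u, v)) == add(scale(s1, u), scale(s1, v))  # axiom 3
    assert scale(Fraction(1), v) == v                               # axiom 4
print("all four axioms held on 1000 random samples")
```

Of course, random sampling is evidence, not proof; the algebra above is the actual argument.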

$F[x]$ forms a vector space over a field $F$

Def:

Vector addition is just polynomial addition, and scalar multiplication is just multiplying each coefficient of the polynomial by an element of $F$.
This is easy to see, so I will skip the proof.
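As a rough sketch of what this looks like concretely, representing a polynomial as its list of coefficients (constant term first; the helper names `poly_add` and `poly_scale` are my own):

```python
from fractions import Fraction

def poly_add(p, q):
    # vector addition in F[x]: add coefficients term by term
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(s, p):
    # scalar multiplication in F[x]: multiply every coefficient by s
    return [s * c for c in p]

p = [Fraction(1), Fraction(2)]               # 1 + 2x
q = [Fraction(0), Fraction(0), Fraction(3)]  # 3x^2
print(poly_add(p, q))              # coefficients of 1 + 2x + 3x^2
print(poly_scale(Fraction(5), p))  # coefficients of 5 + 10x
```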

The set of all continuous real-valued functions on a closed interval $[a, b]$ is a vector space over $\mathbb{R}$.
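Here the "vectors" are functions, and both operations are defined pointwise. A tiny Python sketch of the idea (the helper names are mine):

```python
import math

def f_add(f, g):
    # vector addition of functions: (f + g)(x) = f(x) + g(x)
    return lambda x: f(x) + g(x)

def f_scale(s, f):
    # scalar multiplication: (s * f)(x) = s * f(x)
    return lambda x: s * f(x)

# the zero vector is the zero function x -> 0
h = f_add(math.sin, f_scale(2.0, math.cos))  # h(x) = sin(x) + 2cos(x)
print(h(0.0))  # 2.0
```

Sums and scalar multiples of continuous functions are continuous, which is exactly the closure we need.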

$\mathbb{Q}(\sqrt{2})$ is a vector space over $\mathbb{Q}$.

Let's first define vector addition and scalar multiplication; then it is easy to prove the axioms hold. I did the first few here and the rest on paper, but it is clear they hold.

Definition:
Vector addition:

For $v_1, v_2 \in \mathbb{Q}[\sqrt{2}]$, let $v_1 = a + b\sqrt{2}$ and $v_2 = c + d\sqrt{2}$ for $a, b, c, d \in \mathbb{Q}$. Then we define $v_1 + v_2$ as

$$v_1 + v_2 = (a + c) + (b + d)\sqrt{2}$$
Scalar multiplication:

For $s_1 \in \mathbb{Q}$ and the same $v_1$ we used above,

$$s_1(a + b\sqrt{2}) = s_1 a + s_1 b\sqrt{2}$$
Proof (of the first two):

Let's prove that $\mathbb{Q}[\sqrt{2}]$ is a vector space over $\mathbb{Q}$.

  1. Associativity: $s_1(s_2(a + b\sqrt{2})) = s_1(s_2 a + s_2 b\sqrt{2}) = s_1 s_2 a + s_1 s_2 b\sqrt{2} = (s_1 s_2)(a + b\sqrt{2})$
  2. First distributive property: $(v_1 + v_2)s_1 = ((a + c) + (b + d)\sqrt{2})s_1 = s_1(a + c) + s_1(b + d)\sqrt{2} = (s_1 a + s_1 b\sqrt{2}) + (s_1 c + s_1 d\sqrt{2}) = s_1 v_1 + s_1 v_2$
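A small Python sketch of this vector space (the class `QSqrt2` is my own illustration), storing the rational pair $(a, b)$ and spot-checking the two axioms proved above:

```python
from fractions import Fraction

class QSqrt2:
    """a + b*sqrt(2) with a, b rational, stored as the pair (a, b)."""

    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        # (a + b sqrt2) + (c + d sqrt2) = (a + c) + (b + d) sqrt2
        return QSqrt2(self.a + other.a, self.b + other.b)

    def scale(self, s):
        # scalar multiplication by s in Q
        return QSqrt2(Fraction(s) * self.a, Fraction(s) * self.b)

    def __eq__(self, other):
        return self.a == other.a and self.b == other.b

    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt(2)"

v1, v2 = QSqrt2(1, 2), QSqrt2(3, -1)
s1, s2 = Fraction(2), Fraction(5, 3)
assert v1.scale(s2).scale(s1) == v1.scale(s1 * s2)         # associativity
assert (v1 + v2).scale(s1) == v1.scale(s1) + v2.scale(s1)  # distributivity
print(v1 + v2)  # 4 + 1*sqrt(2)
```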

Properties:

If $V$ is a vector space over $F$, then the following properties hold:

  1. $0_s v = 0_v$ for all $v \in V$
  2. $s 0_v = 0_v$ for all $s \in F$
  3. $sv = 0_v \implies s = 0_s$ or $v = 0_v$
  4. $(-1)v = -v$ for all $v \in V$
  5. $-(sv) = (-s)v = s(-v)$ for all $s \in F$ and $v \in V$
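As a sanity check on where these come from, here is the standard one-line argument for property 1 (the others are similar):

$$0_s v = (0_s + 0_s)v = 0_s v + 0_s v$$

and cancelling $0_s v$ from both sides (using the abelian group structure of $V$) leaves $0_v = 0_s v$.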

Subspaces

Similar to subgroups for groups and subrings for rings, we have subspaces for vector spaces.

Definition of a Subspace

Let $V$ be a vector space over $F$, and let $W$ be a nonempty subset of $V$. Then $W$ is said to be a subspace of $V$ iff it is:

  1. Closed under vector addition and scalar multiplication, i.e. $w_1 + w_2 \in W$ and $sw \in W$ for all $w_1, w_2, w \in W$ and all scalars $s \in F$.

Example

A quick example is the subset $W$ of $F[x]$ where polynomials in $W$ have no odd-powered terms. (Remember $F[x]$ is a vector space over $F$.)
Clearly $w_1 + w_2 \in W$, as the odd terms will always have coefficient $0$. Similarly, for any $s \in F$, $sw \in W$, as again the coefficients of the odd terms are $0$.
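A quick Python spot-check of this closure on one concrete instance (helper names are mine):

```python
def is_even_poly(p):
    # True if every odd-powered coefficient is zero
    # (coefficients listed from the constant term up)
    return all(c == 0 for c in p[1::2])

def poly_add(p, q):
    # add coefficients term by term
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

w1 = [1, 0, 3]        # 1 + 3x^2
w2 = [0, 0, 5, 0, 2]  # 5x^2 + 2x^4
assert is_even_poly(poly_add(w1, w2))     # closed under addition
assert is_even_poly([7 * c for c in w1])  # closed under scalar multiplication
```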

Linear Combination

Definition of Linear Combination

$$w = \sum_{i=1}^{n} s_i v_i = s_1 v_1 + s_2 v_2 + \cdots + s_n v_n$$

$w$ is a linear combination of the vectors $v_1, v_2, \ldots, v_n$.

Definition of Spanning set

The spanning set of vectors $v_1, v_2, \ldots, v_n$ is the set of all possible linear combinations of them.

If a set $W$ is the spanning set of vectors $v_1, v_2, \ldots, v_n$, we say $W$ is spanned by $v_1, v_2, \ldots, v_n$.

If $S = \{v_1, v_2, \ldots, v_n\}$ is a subset of a vector space $V$, then the span of $S$ is a subspace of $V$.

Firstly, the span of $S$ is all the vectors of the form $c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$.

Vector addition closure:

For two elements $s_1, s_2$ in $\operatorname{span}(S)$, we have

$$s_1 = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n \qquad s_2 = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$$

Then

$$s_1 + s_2 = (a_1 + c_1)v_1 + (a_2 + c_2)v_2 + \cdots + (a_n + c_n)v_n$$

which is also in $\operatorname{span}(S)$.

Scalar multiplication closure:

For any scalar $t$, this obviously holds, as

$$t s_1 = t c_1 v_1 + t c_2 v_2 + \cdots + t c_n v_n \in \operatorname{span}(S)$$
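In coordinates, checking whether a vector $w$ lies in $\operatorname{span}(S)$ amounts to asking whether appending $w$ as a column to the matrix whose columns are the $v_i$ raises its rank. A small numpy sketch (the function `in_span` is my own):

```python
import numpy as np

def in_span(vectors, w):
    # w is in span(vectors) iff appending w as a column
    # does not increase the rank of the matrix
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([A, w])) == np.linalg.matrix_rank(A)

v1, v2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
print(in_span([v1, v2], np.array([2.0, 3.0, 5.0])))  # True: 2*v1 + 3*v2
print(in_span([v1, v2], np.array([0.0, 0.0, 1.0])))  # False
```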

Linear Independence (in Abstract Algebra)

Like in linear algebra, a set of vectors is said to be linearly dependent if there exist scalars, not all zero, such that the linear combination is $0$. Otherwise, it is linearly independent.

I.e., a set is linearly independent if

$$c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0 \implies c_1 = c_2 = \cdots = c_n = 0$$

Another way to look at it: a set is linearly independent if no vector can be written as a linear combination of the other vectors.
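In coordinates, this is a rank condition: $n$ vectors are linearly independent iff the matrix with those vectors as columns has rank $n$. A quick numpy sketch (the helper `is_independent` is mine):

```python
import numpy as np

def is_independent(vectors):
    # independent iff the matrix with these columns has full column rank
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_independent([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))  # True
print(is_independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))  # False: v2 = 2*v1
```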

Byproducts of linear independence:

If $\{v_1, v_2, \ldots, v_n\}$ is a set of linearly independent vectors, and $v = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = b_1 v_1 + b_2 v_2 + \cdots + b_n v_n$, then we know $c_1 = b_1, \ldots, c_n = b_n$.

This follows as

$$v = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = b_1 v_1 + b_2 v_2 + \cdots + b_n v_n$$

means

$$(c_1 v_1 + c_2 v_2 + \cdots + c_n v_n) - (b_1 v_1 + b_2 v_2 + \cdots + b_n v_n) = 0 \implies (c_1 - b_1)v_1 + (c_2 - b_2)v_2 + \cdots + (c_n - b_n)v_n = 0$$

Thus, by linear independence, every $c_i - b_i = 0$, i.e. $c_i = b_i$.

A set of vectors in a vector space $V$ is linearly dependent iff one of the $v_i$ is a linear combination of the rest.

Essentially, take the following example of linear dependence, where at least one scalar (say $c_2$) is not zero:

$$c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0$$

Then we can write $v_2$ as the following:

$$v_2 = -\frac{c_1}{c_2}v_1 - \frac{c_3}{c_2}v_3 - \cdots - \frac{c_n}{c_2}v_n$$

This works in any $V$ whenever the set is linearly dependent.

Suppose that $V$ is spanned by $n$ vectors. If $m > n$, then any set of $m$ vectors in $V$ must be linearly dependent.

  1. We are saying that $V$ is all the possible linear combinations of $n$ vectors. Any larger set of vectors must be linearly dependent.

Basis:

Think of a basis as the generators for a vector space: the vector space is the span of the basis vectors, and the basis has to be linearly independent.

Basis for $\mathbb{R}^3$

A basis of $\mathbb{R}^3$ is $\{(1,0,0), (0,1,0), (0,0,1)\}$, where the entire space can be generated as linear combinations of these linearly independent vectors.

Note, one vector space can have multiple bases; for example, $\{(3,2,1), (3,2,0), (1,1,1)\}$ is also a basis for $\mathbb{R}^3$.

In general, there is no unique basis for a vector space. In the example above there are actually infinitely many bases.
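For $n$ vectors in $\mathbb{R}^n$, being a basis is equivalent to the matrix with those vectors as columns having nonzero determinant. A quick numpy check of the example above (my own illustration):

```python
import numpy as np

# n vectors in R^n form a basis iff the matrix with those vectors
# as columns is invertible, i.e. has nonzero determinant
B = np.column_stack([[3, 2, 1], [3, 2, 0], [1, 1, 1]]).astype(float)
print(np.linalg.det(B))  # ~1.0, nonzero, so this is a basis of R^3
```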

Basis for $\mathbb{Q}[\sqrt{2}]$

Remember:

$$\mathbb{Q}[\sqrt{2}] = \{a + b\sqrt{2} \mid a, b \in \mathbb{Q}\}$$

The sets $\{1, \sqrt{2}\}$ and $\{1 + \sqrt{2}, 1 - \sqrt{2}\}$ are both bases for this vector space.
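To see why the second set works, solve for the coordinates of a generic element $a + b\sqrt{2}$ in that basis:

$$a + b\sqrt{2} = c_1(1 + \sqrt{2}) + c_2(1 - \sqrt{2}) \implies c_1 + c_2 = a, \quad c_1 - c_2 = b$$

so $c_1 = \frac{a+b}{2}$ and $c_2 = \frac{a-b}{2}$, which are rational. Hence the set spans, and it is easy to check it is linearly independent.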

All bases for a vector space have the same length.

This length of the basis is called the dimension of the vector space.

Ending theorems and notes

Let $V$ be a vector space of dimension $n$. Then we have the following:

  1. Any set of $n$ linearly independent vectors is a basis for $V$.
  2. Any set of $n$ vectors that spans $V$ is a basis for $V$.
  3. For any $k < n$, every set of $k$ linearly independent vectors can be joined with some set of $n - k$ vectors to create a basis for $V$ (which has to have length $n$); see the sketch after this list.
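Item 3 can be made constructive: greedily adjoin standard basis vectors whenever doing so increases the rank. A small numpy sketch of this idea (the function `extend_to_basis` is my own, and it works over $\mathbb{R}^n$ specifically):

```python
import numpy as np

def extend_to_basis(vectors, n):
    # assumes the given vectors are linearly independent in R^n;
    # greedily adjoin standard basis vectors whenever doing so
    # increases the rank, until we have n vectors total
    basis = list(vectors)
    for i in range(n):
        if len(basis) == n:
            break
        e = np.zeros(n)
        e[i] = 1.0
        if np.linalg.matrix_rank(np.column_stack(basis + [e])) > len(basis):
            basis.append(e)
    return basis

# extend one independent vector in R^3 to a full basis of R^3
for b in extend_to_basis([np.array([1.0, 1.0, 0.0])], 3):
    print(b)
```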

Linear Transformations in Abstract Algebra

For some vector spaces $V$ and $W$ over $F$, we can define a linear transformation as the following map $\phi: V \to W$ preserving scalar multiplication and vector addition:

$$\phi(v_1 + v_2) = \phi(v_1) + \phi(v_2) \qquad \phi(s_1 v_1) = s_1 \phi(v_1)$$

This is a linear transformation from $V$ to $W$. Note: $v_1, v_2 \in V$ and $s_1 \in F$.

Notice that this is just a homomorphism preserving the structure of vector spaces.

Kernel:

The kernel in linear algebra is just the null space: it is all the vectors $v \in V$ such that $\phi(v) = 0$.

The kernel is always a subspace of $V$.

Proof:

  1. Closure under vector addition
    1. Let $v_1, v_2$ both be in $\ker(\phi)$. Then $\phi(v_1 + v_2) = \phi(v_1) + \phi(v_2) = 0 + 0 = 0$, which means $v_1 + v_2$ is clearly also in the kernel.
  2. Closure under scalar multiplication
    1. We have to show that if $v_1$ is in the kernel then $sv_1$ is also in the kernel. Assume $v_1$ is in the kernel; then $\phi(sv_1) = s\phi(v_1) = s0 = 0$.
    2. So clearly $sv_1$ is also in the kernel.

Also, remember that a homomorphism is injective if and only if the kernel is trivial.

Example

Let's define a linear transformation with the following:

$$\phi: \mathbb{R}^2 \to \mathbb{R}^3 \qquad \phi\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} b \\ a + b \\ a - b \end{pmatrix}$$

Now, this is a linear transformation, as the following hold for any two vectors.
First, let's notice that it preserves vector addition:

$$\phi\left(\begin{pmatrix} a_1 \\ b_1 \end{pmatrix} + \begin{pmatrix} a_2 \\ b_2 \end{pmatrix}\right) = \phi\begin{pmatrix} a_1 + a_2 \\ b_1 + b_2 \end{pmatrix} = \begin{pmatrix} b_1 + b_2 \\ (a_1 + a_2) + (b_1 + b_2) \\ (a_1 + a_2) - (b_1 + b_2) \end{pmatrix} = \begin{pmatrix} b_1 + b_2 \\ (a_1 + b_1) + (a_2 + b_2) \\ (a_1 - b_1) + (a_2 - b_2) \end{pmatrix} = \begin{pmatrix} b_1 \\ a_1 + b_1 \\ a_1 - b_1 \end{pmatrix} + \begin{pmatrix} b_2 \\ a_2 + b_2 \\ a_2 - b_2 \end{pmatrix} = \phi\begin{pmatrix} a_1 \\ b_1 \end{pmatrix} + \phi\begin{pmatrix} a_2 \\ b_2 \end{pmatrix}$$

Now let's notice that it preserves scalar multiplication:

$$\phi\left(s\begin{pmatrix} a \\ b \end{pmatrix}\right) = \phi\begin{pmatrix} sa \\ sb \end{pmatrix} = \begin{pmatrix} sb \\ sa + sb \\ sa - sb \end{pmatrix} = s\begin{pmatrix} b \\ a + b \\ a - b \end{pmatrix} = s\phi\begin{pmatrix} a \\ b \end{pmatrix}$$

The kernel of this linear transformation is also easy to compute: we just need all $\begin{pmatrix} a \\ b \end{pmatrix}$ such that $\begin{pmatrix} b \\ a + b \\ a - b \end{pmatrix} = 0$.

Clearly this happens when the following system of equations holds:

$$b = 0, \quad a + b = 0, \quad a - b = 0$$

This only happens when $a$ and $b$ are both zero, so the kernel has to be $\left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} \right\}$, i.e. trivial.
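As a double-check, here is a quick sympy computation (my own, not from the notes): $\phi$ is multiplication by a $3 \times 2$ matrix, and its kernel is that matrix's null space.

```python
from sympy import Matrix

# phi(a, b) = (b, a + b, a - b) is multiplication by this matrix
A = Matrix([[0, 1],
            [1, 1],
            [1, -1]])
print(A.nullspace())  # [] -> only the zero vector, so ker(phi) is trivial
```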