Hello people!! I am Mr. Geek, and as promised, here is my first post on vectors. Here I am trying to approach the basics of linear algebra (specifically vectors and matrices) from a beginner level, so I guess any high school student can follow along. So here I go…

First, let me ask those of you who are acquainted with graphs: what is the dimension of a space? Or, let’s say, what do we mean when we say 3D or 2D space?

Many of you will answer that we can measure a point, a line, or anything with three different quantities: length, breadth and height in 3D, or only length and breadth in 2D… and some of you might come up with an answer like: we can draw at most 3 mutually perpendicular axes (X-Y-Z) and locate each point with respect to them in 3D, and similarly 2 axes in the case of 2D. The second answer, with the axes and all, may be preferable to some, but theoretically this is NOT the case.

Now let us come to the readers here who are in high school and have just had matrices and determinants in their syllabus for, like, the first time. Many of you have heard about the RANK of a matrix, right? You might also know how to find it, with those echelon and row-echelon forms… blah blah… but (from what I have observed), can you tell me what exactly a rank is?? So let’s answer these basic questions that many are seen to stumble upon….

Now, what is a vector? A vector is a quantity which has magnitude as well as direction….. duh!! Everyone knows that….. now in a space, say a 2D space, how is this vector represented? Many of you will say that each vector is a point…. as written in many books… WRONG!!!! Though a point and a vector are related, they are not the same. A vector is essentially the difference between two points. So, this might raise questions like: say a point P(2, 3) in 2D space is said to give a vector, say

p = (2 − 0, 3 − 0) = (2, 3).

Now, essentially this is the vector from the origin (0, 0) to P. Therefore, a vector stays the same as long as the difference between its two points is equal. Hence, let us consider a vector u (I am dropping the arrow-head, as typing it is quite difficult); now let,

u = (9, 4),

therefore, this vector is the same whether you draw it from the origin to the point (9, 4)…. or from a point, say (12, 4), to the point (21, 8); as long as the difference is equal, the vector is the same.
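The point-pair picture above can be sketched in a few lines of plain Python (no libraries; the helper name `vector_between` is my own, just for illustration):

```python
# A vector is the difference between two points, so the same vector
# can be drawn starting from many different points.
def vector_between(p, q):
    """Return the vector from point p to point q as a tuple of differences."""
    return tuple(b - a for a, b in zip(p, q))

u1 = vector_between((0, 0), (9, 4))    # drawn from the origin
u2 = vector_between((12, 4), (21, 8))  # drawn from (12, 4)
print(u1 == u2)  # True: both represent the vector (9, 4)
```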

Note that, in high school physics, we were taught that two parallel vectors of equal length (pointing the same way) are equal. Now let’s try to map what we learnt just now onto that. Let us consider a vector from (x_{1}, y_{1}) to the point (x_{2}, y_{2}); therefore the vector becomes (call it v)

v = (x_{2} − x_{1}, y_{2} − y_{1}).

Now, if this vector is drawn as a line segment, its length stands as

|v| = √((x_{2} − x_{1})^{2} + (y_{2} − y_{1})^{2}),

therefore, take any two points keeping the difference constant, and the vector’s magnitude remains the same.

Now coming to the parallelism: the positive angle θ that the line makes with the X-axis satisfies

tan θ = (y_{2} − y_{1})/(x_{2} − x_{1}),

which evidently depends only upon the difference. So, the angle remains constant as long as the difference is constant, i.e., the vectors are parallel. I guess this makes both the concepts quite clear in your mind.
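Both invariances can be checked numerically; here is a small sketch using only the standard library (the helper names `magnitude` and `slope` are mine):

```python
import math

def magnitude(p, q):
    """Length of the vector from point p to point q."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def slope(p, q):
    """tan of the angle the vector from p to q makes with the X-axis."""
    return (q[1] - p[1]) / (q[0] - p[0])

# Two placements of the same vector (9, 4): same length, same slope.
print(magnitude((0, 0), (9, 4)), magnitude((12, 4), (21, 8)))
print(slope((0, 0), (9, 4)), slope((12, 4), (21, 8)))
```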

Now, to answer the first question that I asked at the very start of this blog post, about dimensions, I need to define a few basic terms. Keep up with me: if a single point gets even slightly blurred, verify whether the example provided explains it to you, or re-read that part again, as the rest of the post (and some of the following ones, and probably the rest of your career) requires knowledge of these definitions.

Note: A vector can be written in matrix form, as either a column or a row matrix, with the component along each axis as an element of the matrix. For example, consider the vector u discussed above; its matrix form (as a column matrix) will be:

u = [9, 4]^{T}, i.e., the 2 × 1 column matrix with entries 9 and 4.

I will prefer the column matrix, but it’s the reader’s wish to choose either.

Consider n scalars α_{1}, α_{2}, α_{3} … α_{n}, and n non-zero vectors x̰_{1}, x̰_{2}, x̰_{3} … x̰_{n}, such that,

α_{1}x̰_{1} + α_{2}x̰_{2} + … + α_{n}x̰_{n} = 0 implies that α_{1} = α_{2} = … = α_{n} = 0, then the vectors x̰_{1}, x̰_{2}, x̰_{3} … x̰_{n} are called __linearly independent__; otherwise they are linearly dependent, which means there exists a scalar α_{i} ≠ 0 for which the sum is still 0, so they are not all independent. In plain terms, none of those n linearly independent vectors can ever be produced from the others by scalar multiplication and addition.
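One practical way to test this condition (a sketch, assuming NumPy is available): stack the vectors as columns of a matrix; they are linearly independent exactly when that matrix has full column rank.

```python
import numpy as np

def linearly_independent(*vectors):
    """True when the only solution of a1*x1 + ... + an*xn = 0 is all ai = 0,
    i.e. when the matrix with the vectors as columns has rank n."""
    m = np.column_stack(vectors)
    return bool(np.linalg.matrix_rank(m) == len(vectors))

print(linearly_independent([9, 4], [2, 3]))   # True
print(linearly_independent([9, 4], [18, 8]))  # False: (18, 8) = 2 * (9, 4)
```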

For example, consider the two vectors u and p from the previous discussion; they are linearly independent. Notice that one can never produce p from u, or vice-versa, by any scalar multiplication.

Now, if I consider a vector, say x̰, such that it can be written as:

x̰ = α_{1}x̰_{1} + α_{2}x̰_{2} + … + α_{n}x̰_{n},

where the α_{i}‘s are scalars and the x̰_{i}‘s are vectors, then x̰ is called a __LINEAR COMBINATION__ of the set of vectors {x̰_{1}, x̰_{2} … x̰_{n}}.
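A concrete linear combination in ℝ^{2}, using the two vectors from earlier (the scalars 2 and −1 are my arbitrary choice):

```python
import numpy as np

# x = a1*x1 + a2*x2 with a1 = 2, a2 = -1.
x1 = np.array([9, 4])
x2 = np.array([2, 3])
x = 2 * x1 + (-1) * x2
print(x)  # [16  5]
```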

Now, I want to remind you people that all the vectors I am working with here are real, i.e., if I talk about n-dimensional vectors (I will get back to this soon), I will be talking about vectors that belong to ℝ^{n}, the set of all real n-dimensional vectors, or, in matrix form, the set of all n × 1 column matrices with real-valued elements.

So, let V be a non-empty subset of ℝ^{n}, such that if vectors a̰ and b̰ belong to V, then a̰ + b̰, αa̰ and αb̰ also belong to V for every scalar α, i.e., V is closed under addition and scalar multiplication; then V is called a __SUBSPACE__.

The set of all possible linear combinations of a set of vectors is called the __SPAN__ of that set. For example, let {x̰_{1}, x̰_{2}, x̰_{3} … x̰_{n}} be a set of vectors; then span({x̰_{1}, x̰_{2}, x̰_{3} … x̰_{n}}) is the set of all possible linear combinations of this set.
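Span membership can also be tested numerically: a vector b lies in span({x̰_{1} … x̰_{n}}) exactly when the system with the x̰_{i}’s as columns has a solution for the coefficients. A sketch assuming NumPy (the helper name `in_span` is mine):

```python
import numpy as np

def in_span(vectors, b):
    """True when b is some linear combination of the given vectors."""
    m = np.column_stack(vectors)
    # Least-squares gives the best coefficients; b is in the span
    # exactly when those coefficients reproduce b.
    a, *_ = np.linalg.lstsq(m, b, rcond=None)
    return bool(np.allclose(m @ a, b))

print(in_span([[1, 0, 0], [0, 1, 0]], np.array([3, 5, 0])))  # True: in the xy-plane
print(in_span([[1, 0, 0], [0, 1, 0]], np.array([3, 5, 7])))  # False: needs a z-component
```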

Now, onto the first question (FINALLY), what is dimension?

Alright, now from what we have discussed so far, if I take a set of m linearly independent vectors {x̰_{1}, x̰_{2}, x̰_{3} … x̰_{m}}, which is a subset of a subspace, say V, such that span({x̰_{1}, x̰_{2}, x̰_{3} … x̰_{m}}) = V, then the set {x̰_{1}, x̰_{2}, x̰_{3} … x̰_{m}} is called a __BASIS__ of the subspace V.

Now, keep up with me, and try mapping this onto the 2D or 3D graphs you have learnt in your co-ordinate geometry. Here, the space on which you are considering your points (in 2D, it is ℝ^{2}) is the subspace, so your basis would be the unit vectors along the axes (î and ĵ in 2D). Note that neither of the axes’ unit vectors depends linearly upon the other, which proves their linear independence: you cannot produce î from ĵ, or vice-versa, by any addition or scalar multiplication. In fact, these are called the natural, or standard, basis of the subspace ℝ^{2}.
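In the standard basis, the coordinates of a vector are simply its components, which is why the 2D picture feels so natural:

```python
import numpy as np

# The standard basis of R^2, and the vector p = (2, 3) written in it:
i_hat = np.array([1, 0])
j_hat = np.array([0, 1])
p = 2 * i_hat + 3 * j_hat  # p = 2*i_hat + 3*j_hat
print(p)  # [2 3]
```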

Actually, this is just one of the many bases possible for this subspace. So I guess some of you have probably guessed what the proper definition of dimension would be.

So, from vector theory, any n linearly independent vectors of ℝ^{n} will produce (or rather span) the subspace ℝ^{n}. Hence, the definition of dimension stands:

- The minimum number of linearly independent vectors that span the subspace, i.e., whose set of all possible linear combinations produces every vector possible in the subspace, is the __DIMENSION__ of that space.

So, you see, what you have been learning all these days is merely a trivial, or rather a special, case. You will now realise that not just the 2 axes, or the two standard unit vectors, but any two linearly independent vectors of ℝ^{2} can produce every vector possible in ℝ^{2} by their linear combination.
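This last claim can be verified directly: pick any two linearly independent vectors of ℝ^{2} as a basis, and solve for the coefficients of any target vector (here I reuse u = (9, 4) and p = (2, 3); the target (7, −1) is an arbitrary choice of mine):

```python
import numpy as np

# Any two linearly independent vectors of R^2 form a basis: every vector
# is a unique linear combination of them.
B = np.column_stack([[9, 4], [2, 3]])   # basis vectors as columns
target = np.array([7, -1])              # an arbitrary vector
coeffs = np.linalg.solve(B, target)     # coefficients in this basis
print(np.allclose(B @ coeffs, target))  # True: the combination reproduces it
```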

… TO BE CONTINUED

That’s all for today; the next question will be answered in the next post, which is a continuation of this one. So make sure you understand all of this; you can comment here, or post any query of yours on the Facebook page.

And if you like it, share this. How difficult is it TO SHARE THIS PAGE!!! IT’S FREE!!

Until next time, bye bye geeks!!