
I recently learned that a vector in mathematics (an element of a vector space) is not necessarily a vector in physics. In physics, we also require that the components of the vector change under a coordinate transformation in the same way as the components of the displacement vector do. So, if my understanding is correct, if $\mathbf{c}_1, \mathbf{c}_2, \mathbf{c}_3,\, \ldots \,, \mathbf{c}_n$ are the components of a vector $\mathbf{A}$ and $f$ is the coordinate transformation (change of basis), then $$f(\mathbf{A}) = \sum_{i=1}^n{f(\mathbf{c}_i)},$$ where $\mathbf{A} = \sum_{i=1}^n\mathbf{c}_i$.

That is to say, the vector obtained by applying $f$ to $\mathbf{A}$ should equal the vector formed from the vector components that have themselves been transformed by $f$.
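As a minimal numerical sketch of this claim (not part of the original question), here is a check with numpy, assuming the transformation $f$ is a linear map such as a rotation. The specific angle and component vectors are illustrative choices:

```python
import numpy as np

# A 2D rotation by 30 degrees plays the role of the transformation f;
# any linear map would do.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Vector components (as vectors, not scalars): A = c1 + c2
c1 = np.array([3.0, 0.0])   # component along x
c2 = np.array([0.0, 4.0])   # component along y
A = c1 + c2

# f(A) equals the sum of the transformed components f(c1) + f(c2)
lhs = R @ A
rhs = R @ c1 + R @ c2
print(np.allclose(lhs, rhs))  # True, by linearity of R
```

The equality holds precisely because $R$ is linear; for a nonlinear map between coordinates it would generally fail.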

Am I correct?

4 Answers


> I recently learned that a vector in mathematics (an element of vector space) is not necessarily a vector in physics

A vector in "physics" is exactly the same thing as you have defined it in "mathematics".

Every vector space admits a basis $\{e_i\}$, in terms of which each element can be expanded as $$ v = \sum_k v^k e_k. $$ When the basis vectors are defined as tangent vectors to a set of coordinate curves, one can show that they must transform in a certain way, say with a transformation matrix $\Lambda$. Since the vector $v$ must be independent of the representation, if the basis transforms with $\Lambda$, then the components must transform with the inverse matrix $\Lambda^{-1}$.

$\Lambda$ and $\Lambda^{-1}$ give what physicists refer to as the covariant and contravariant transformation laws, for the basis vectors and the vector components respectively.

The same holds, mutatis mutandis, for dual forms and tensors.
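The invariance argument above can be sketched numerically (this is an illustration I am adding, not part of the original answer; the matrices and components are arbitrary choices). If the basis transforms with $\Lambda$ and the components with $\Lambda^{-1}$, the vector itself is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Old basis stored as columns of E: e_k = E[:, k]
E = np.eye(3)
v_comp = np.array([1.0, 2.0, 3.0])   # components v^k
v = E @ v_comp                        # the vector itself

# An invertible change-of-basis matrix Lambda (a random example,
# shifted by 3*I to keep it safely invertible)
Lam = rng.normal(size=(3, 3)) + 3 * np.eye(3)

# Basis transforms covariantly: e'_j = sum_i Lambda[i, j] e_i
E_new = E @ Lam
# Components transform contravariantly, with the inverse matrix
v_comp_new = np.linalg.solve(Lam, v_comp)   # Lambda^{-1} v

# The vector itself is representation-independent
print(np.allclose(E_new @ v_comp_new, v))   # True
```

Algebraically this is just $E \Lambda \Lambda^{-1} v = E v$: the two transformations cancel, which is the whole point of the covariant/contravariant pairing.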

gented

You are correct in saying they are different. Physics vectors are mathematical vectors, but not necessarily vice versa.

For example, Birkhoff and Mac Lane, *A Survey of Modern Algebra*, p. 162 of the 1953 edition:


> A vector space $V$ over a field $F$ is a set of elements, called vectors, such that any two elements $\alpha$ and $\beta$ of $V$ determine a (unique) vector $\alpha+\beta$ as sum, and that any vector $\alpha$ from $V$ and any scalar $c$ from $F$ determine a scalar product $c.\alpha$ in $V$, with the properties
>
> 1. $V$ is an Abelian group under addition
> 2. $c.(\alpha+\beta)=c.\alpha+c.\beta, \qquad (c+c').\alpha=c.\alpha+c'.\alpha$ (Distributive laws)
> 3. $(cc').\alpha=c.(c'.\alpha),\qquad 1.\alpha=\alpha$


Hence sets of functions form a vector space. So do simple shopping lists. This brings in the dual space, dimensionality and the basis, but there is nothing about physical space, tangents, pointing arrows or any of the familiar properties we physicists ascribe to a vector.

Physics vectors have the additional property that they can be transformed (by a rotation, say). Vector equations must remain valid when they are transformed, so if ${\bf A}={\bf B} + {\bf C}$ then ${\bf f(A)=f(B)+f(C)}$: the transformation must be linear. Your result follows, and shows that if you have a basis $\{{\bf e}_i\}$ and write ${\bf A}=\sum_i c_i {\bf e}_i$, then the function can be written as a matrix multiplication.
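The final point, that a linear transformation acts as a matrix, can be sketched as follows (an illustration I am adding, with an arbitrary example map): the columns of the matrix are just the images of the basis vectors under $f$.

```python
import numpy as np

def f(x):
    # An example linear map: rotation by 90 degrees combined with
    # a scaling by 2. Any linear f would work the same way.
    return np.array([-2.0 * x[1], 2.0 * x[0]])

# Build the matrix of f column by column from its action on a basis
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
M = np.column_stack([f(e) for e in basis])

# f now acts on any vector as a matrix multiplication
A = np.array([3.0, -1.0])
print(np.allclose(f(A), M @ A))   # True
```

This works because linearity means $f({\bf A}) = \sum_i c_i f({\bf e}_i)$, which is exactly what the matrix-vector product computes.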

RogerJBarlow

Yes, your statement is correct; however, I haven't seen your use of "component" in a long time. Yours is the strictly correct meaning: components are vectors. But the term is often used to mean the *coordinates* of a vector. That is, in $\vec{v}=v_{x}\hat{i}+v_{y}\hat{j}+v_{z}\hat{k}$ the actual components are $v_{x}\hat{i}, v_{y}\hat{j}$ and $v_{z}\hat{k}$, but people almost always mean $v_{x}, v_{y}$ and $v_{z}$ when they say "components." And $f\left(v_{y}\hat{j}\right)$ (vector argument) and $f\left(v_{y}\right)$ (scalar argument) are not the same thing; in most situations a function (transformation) taking a vector argument will not even be defined for a scalar argument.
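The distinction between a component (a vector) and a coordinate (a scalar) can be made concrete with a short numpy sketch (my own illustration, not from the original answer; the rotation angle is an arbitrary choice):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])          # coordinates v_x, v_y, v_z
jhat = np.array([0.0, 1.0, 0.0])

# The component along jhat is itself a vector: v_y * jhat
comp_y = v[1] * jhat                    # array([0., 2., 0.])

# A rotation about the x-axis: defined for vectors, not for scalars
theta = np.pi / 2
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(theta), -np.sin(theta)],
               [0.0, np.sin(theta),  np.cos(theta)]])

print(Rx @ comp_y)    # the transformed vector component
# Rx @ v[1] would raise an error: the scalar coordinate 2.0 is not
# a valid argument for the transformation, matching the point above.
```

Rotating the $y$-component of this vector by 90 degrees about $x$ sends it along $\hat{k}$, while the bare coordinate $v_y = 2$ has no meaningful image under the rotation at all.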

Steven Thomas Hatton

I found out that this answer by user joshphysics answers my question satisfactorily.