
In introductory courses, vectors are defined as objects with direction and magnitude. I guess everyone has arrows in mind when talking about vectors and that's probably the most intuitive description, but I was wondering if mathematical physicists have found more well-founded ways of describing vectors. I am not talking about arbitrary vectors (elements of a vector space), but about three dimensional vector quantities like velocity, current density, electric field and so on.

I am aware of two concepts that seem promising: The translation space of an affine space (1.) and the tangent spaces of a manifold (2.). But there are probably other concepts I don't even know of.

My thoughts:

  1. Once we have chosen a system (not necessarily a frame) of reference, we can model space as a three-dimensional affine space. So one guess would be that vectors are elements of the translation space.

  2. Since maps like the electric or magnetic field (velocity/force fields) are called vector fields in introductory courses, one might expect that there is a connection to the vector fields of differential geometry. The $3$-dim. affine space mentioned above has an associated $3$-dim. manifold, so it seems plausible that they can be modeled as vector fields on that manifold.$^1$

These are really just some ideas - I haven't found a reference yet. If this goes in the right direction, I'd be happy about a confirmation/elaboration.

Any thoughts and comments are much appreciated!

$^1$As far as I know, the electric field can't be modeled as a vector field on spacetime, because it is only defined with respect to a specific frame of reference. (I am aware of the formulation of electrodynamics using the EM field tensor $F\colon M\to \Lambda^2T^*M$.)

Filippo

5 Answers

---

The notion of vector (not just in the sense of a point in a vector space) can be and is generalized in countless different ways. Here I just mention one that I personally find very intriguing, especially in relation to electromagnetism, and give some personal thoughts. References are given throughout the answer and at the end.

I think I have heard of theories that use differentiable manifolds (e.g. to model spacetime) equipped with a bundle of affine spaces, rather than vector spaces. Curtis & Miller mention such bundles in chap. 16:

Personally I see affine space as a particular case of a differentiable manifold: one in which there is a flat affine connection. From this viewpoint the notion of a vector on an affine space is just the same as that on a differentiable manifold (a tangent vector), the only difference being that affine space has a "friendlier" (but less interesting) parallel transport.

At the same time, there's also an alternative way of seeing vectors (and their generalizations discussed below) in an affine space. It appears in the geometric calculus of Grassmann and Peano:

In geometric calculus, vectors aren't introduced as translations (even though they can also be used that way there), but as a sort of point at infinity. This leads to a framework in which one works simultaneously on affine space, projective space, and (multi)vector space, yet keeps them somewhat distinct. I think it has great pedagogic potential for introducing and teaching these kinds of spaces.

There are a couple of brilliant and clear articles by Goldman that explain this "dialogue" between these three spaces in geometric calculus (which can only be glimpsed in Peano, although it's there on closer inspection). Goldman uses it for computer graphics:

I think it would be great if this viewpoint could be brought to differential geometry, but don't know of any works that have done that.


Ideas and objects extremely close to those of geometric calculus can also be constructed on a vector space, however, and from there they're naturally brought to differential geometry. This is an old and well-developed framework, albeit not yet as widely known as it deserves to be. The basic idea is very intuitive, and it's the same as in the geometric calculus:

A vector is usually identified by: (1) a direction, in the sense of a specific straight line; (2) an orientation on that line, with two possible choices; (3) a magnitude on that line, in the sense of a chosen segment there. All these three characteristics can be generalized, in an ambient space of dimension $D$:

  1. Instead of a line we can take any $k$-dimensional flat subset with $k\le D$ (plane, hyperplane, etc.).

  2. The orientation can be chosen on the $k$-dimensional flat subset itself (two choices), or on its complement, that is, the equivalence class of all subsets parallel to the specific subset. Figuratively, this corresponds to taking the orientation "around" the subset, rather than on it.

  3. The magnitude ($k$-volume) can likewise be taken on the flat subset, or on its complement. (This notion of magnitude doesn't require a metric; see, e.g., *Affine and convex spaces: blending the analytic and geometric viewpoints* for an explanation.)

The freedom in these three choices, which can be combined, leads to "generalized vectors" that also have intuitive visual representations in 2D and 3D. Here are the drawings from Schouten's book, for the case of a 3D space:

*[Drawings from Schouten's book: generalized vectors in a 3D space]*

The first kind of generalization leads to multivectors, which represent oriented areas, volumes, and so on (in differential geometry, tangent areas, volumes, and so on). The second kind leads to twisted vectors, which can represent notions such as rotational momentum or the direction of flux through an area. The third kind leads to (multi)covectors (forms, in differential geometry), objects that act as basic measurement operations or, in differential geometry, are meant to be integrated.

So what we obtain are none other than the multivectors and multicovectors (that is, elements of the dual space) of linear algebra, with so-called "twisted" or "straight" orientations, or "odd" and "even" as de Rham called them. They also correspond to completely antisymmetric, fully contravariant or fully covariant tensors, and to the objects treated by the geometric calculus of Grassmann and Peano.
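As a small illustration of the correspondence with antisymmetric tensors, here is a sketch (Python/NumPy; the function names and sample vectors are my own illustrative choices) building a bivector $a\wedge b$ as a fully antisymmetric contravariant tensor, and noting how its components in 3D relate to the familiar cross product:

```python
import numpy as np

# Sketch: a bivector (2-vector) in R^3, represented as the fully
# antisymmetric tensor with components a^i b^j - a^j b^i.
def wedge(a, b):
    """Wedge product of two vectors, as an antisymmetric matrix."""
    return np.outer(a, b) - np.outer(b, a)

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])
B = wedge(a, b)

# Antisymmetry: B^{ij} = -B^{ji}
assert np.allclose(B, -B.T)

# In 3D the three independent components of a ∧ b coincide with the
# familiar cross product - which is really a twisted vector
# ("pseudovector") in disguise.
cross = np.array([B[1, 2], B[2, 0], B[0, 1]])
assert np.allclose(cross, np.cross(a, b))
```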

These objects represent very well many physical quantities, such as densities, forces, and especially the objects in electromagnetism. They fully express the symmetries and operational meaning of those objects.

Just two examples:

– Charge density is something that's meant to be integrated over a region of space, to give the total charge in that region. Such integration must not depend on distances or a metric, since it's a purely topological notion: we choose a closed surface in space and ask for the total charge inside. This notion and operational meaning are fully expressed by one of the objects above: a twisted 3-covector.
– The electric field corresponds to a straight 1-covector, and again this stems from its operational meaning: integrated along a curve (again, purely topologically), it yields a potential difference.
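To make the operational meaning of a 1-covector concrete, here is a numerical sketch (Python/NumPy; the potential $V$ and all names are my own toy choices): an electrostatic field $E=-\operatorname{grad}V$, integrated along two different curves between the same endpoints, yields the same potential difference either way:

```python
import numpy as np

# Toy potential V(x, y, z) = x*y and the corresponding field
# E = -grad V, a straight 1-covector field.
def V(p):
    return p[0] * p[1]

def E(p):
    # E = -grad V = (-y, -x, 0)
    return np.array([-p[1], -p[0], 0.0])

def line_integral(field, curve, n=2000):
    """Midpoint-rule approximation of the line integral of `field`
    along `curve`: [0, 1] -> R^3."""
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([curve(s) for s in t])
    mids = 0.5 * (pts[1:] + pts[:-1])
    dl = pts[1:] - pts[:-1]
    return sum(float(np.dot(field(m), d)) for m, d in zip(mids, dl))

p, q = np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 0.0])
straight = lambda s: (1 - s) * p + s * q
wiggly = lambda s: (1 - s) * p + s * q + np.array([0.0, 0.0, np.sin(np.pi * s)])

# Both paths give the same potential difference V(p) - V(q) = -2.
assert abs(line_integral(E, straight) - (-2.0)) < 1e-6
assert abs(line_integral(E, wiggly) - (-2.0)) < 1e-6
```

(Path-independence here is a feature of this particular electrostatic example; the general point is only that a 1-covector is the kind of object one integrates along a curve.)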

...And many other physical notions: all of electromagnetism and most of mechanics. The most fascinating aspect is that these notions allow you to interpret the electromagnetic (Faraday) field as the motion of a continuum of strings (1D objects) in spacetime, just as classical mechanics treats a continuum of particles (0D objects).

One object that still has some mystery from this point of view is the stress-energy-momentum tensor.

I recommend the works of Bossavit and of Hehl & al. below to see what the various electromagnetic fields correspond to.

There is a very extensive literature on these notions and their applications. Here are some further references.

To get acquainted with them and their use in electromagnetism:

For their geometric properties and application to physics in general:

Some remarks about the "natural" multivector form of the stress-energy-momentum tensor are given here:


This is just the tip of the iceberg, and it's a huge iceberg. Some forward- and backward-searches from these references will reveal many many other interesting ones.

pglpm
---

I will note that these two notions of a vector field are actually related by a concept known as flow. Essentially, a vector field generates a set of "translations" on a manifold, and similarly such a set of "translations" will define a vector field.

Flows are extremely important both in geometry generally and physics specifically. For example, these give an exceptional framework within which the canonical formalism of classical mechanics (and large swaths of quantum mechanics) can be understood. This Wiki page might get you started on the topic if you're unfamiliar, though any reasonable book covering differential geometry will discuss flows. For example, Nakahara's book has a section dedicated to them.
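A minimal numerical sketch of a flow (Python/NumPy; the vector field is my own toy example): the field $X(x,y)=(-y,x)$ on the plane generates rotations, so flowing a point for time $t$ rotates it by angle $t$:

```python
import numpy as np

# The vector field X(x, y) = (-y, x). Its flow phi_t is rotation by
# angle t: the one-parameter family of diffeomorphisms ("translations"
# along the field's integral curves) generated by X.
def X(p):
    return np.array([-p[1], p[0]])

def flow(p, t, steps=10000):
    """Integrate the flow of X for time t with classical RK4."""
    h = t / steps
    for _ in range(steps):
        k1 = X(p)
        k2 = X(p + 0.5 * h * k1)
        k3 = X(p + 0.5 * h * k2)
        k4 = X(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

p0 = np.array([1.0, 0.0])
# Flowing for time pi/2 rotates (1, 0) to (0, 1).
assert np.allclose(flow(p0, np.pi / 2), [0.0, 1.0], atol=1e-6)
```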

As for your thoughts on E&M specifically, you are correct that there are issues with defining the $E$ and $B$ fields in a Lorentz-invariant way. The correct way to do Lorentz-invariant electrodynamics is to use the vector potential: not the 3-component vector, but the 4-vector potential, whose zero component is the scalar potential and whose other three components are the 3-vector potential. For example, see this Wiki page.

The vector potential is, in essentially all respects, the correct quantity to work with if you want Lorentz invariance. For instance, it actually transforms as a 4-vector under Lorentz transformations, whereas the $E$ and $B$ fields do not (look up their transformations in any E&M book; it's not pretty). The $E$ and $B$ fields should instead be understood as components of the field strength tensor, $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$, and transform as such under Lorentz transformations.
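To see this concretely, here is a sketch (Python/NumPy; $c=1$, with the sign conventions stated in the comments, which are one common textbook choice) that boosts the field-strength tensor and recovers the standard mixing of $E$ and $B$:

```python
import numpy as np

# Convention (c = 1, metric +,-,-,-): F^{0i} = -E_i, F^{ij} = -eps_{ijk} B_k.
def F_tensor(E, B):
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [0.0, -Ex, -Ey, -Ez],
        [Ex,  0.0, -Bz,  By],
        [Ey,  Bz,  0.0, -Bx],
        [Ez, -By,  Bx,  0.0],
    ])

def boost_x(v):
    """Lorentz boost along x with velocity v."""
    g = 1.0 / np.sqrt(1.0 - v**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

E = np.array([1.0, 2.0, 3.0])
B = np.array([0.5, -1.0, 2.5])
v = 0.6
g = 1.0 / np.sqrt(1.0 - v**2)

# F'^{mu nu} = L^mu_a L^nu_b F^{ab}, i.e. F' = L F L^T
L = boost_x(v)
Fp = L @ F_tensor(E, B) @ L.T

# Read E', B' back out and compare with the textbook boost formulas:
# E'_x = E_x, E'_y = g(E_y - v B_z), E'_z = g(E_z + v B_y), etc.
Ep = np.array([-Fp[0, 1], -Fp[0, 2], -Fp[0, 3]])
Bp = np.array([-Fp[2, 3], Fp[1, 3], -Fp[1, 2]])
assert np.allclose(Ep, [E[0], g * (E[1] - v * B[2]), g * (E[2] + v * B[1])])
assert np.allclose(Bp, [B[0], g * (B[1] + v * E[2]), g * (B[2] - v * E[1])])
```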

The vector potential is also needed to write down the Lagrangian of Maxwell's equations coupled to currents/densities.

So in the end, you're right that there are issues with choosing reference frames, but as is almost always the case in physics, there's a very neat and tidy way to circumvent all those issues at the cost of dealing instead with a slightly different object (in this case, the vector potential).

---

I would carefully separate the concept of a vector from the more complex notion of a vector field: the latter presupposes the former.

I would say that there is only one definition of a vector: a member of a set whose elements satisfy all the axioms defining a vector space. See, for instance, the corresponding Wikipedia page.

Once that abstract definition is clear, one can check that the set of arrows with coinciding tails is an easy-to-visualize example of a vector space. However, one should not forget that the set of arrows is just one simple geometric example of a vector space. Physical vectors are not arrows.

Moreover, the arrows example may suggest the false definition of a vector in 3D as something having direction and magnitude. Even though this definition is often stated, it is wrong: the composition laws are an equally important part of the concept of a vector. If the composition law does not satisfy the vector space axioms, even an object equipped with magnitude and direction cannot be considered a vector. A well-known example is the case of rotations in space. It is undoubtedly possible to assign a direction to a rotation around an axis (the axis direction) and a magnitude (the angle of rotation). However, suppose we define the sum of two rotations as the resulting final rotation. In that case, we realize that this composition law is not commutative. Take a book, rotate it by $\pi/2$ around an x-axis and then by the same amount around a y-axis; then restart from the initial orientation and perform the y-axis rotation before the x-axis rotation. The final results will be different.
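The book experiment can be checked numerically; here is a minimal sketch (Python/NumPy) comparing the two orders of rotation:

```python
import numpy as np

# Rotations have a "direction" (axis) and a "magnitude" (angle), yet
# their composition is not commutative, so they fail the vector space
# axioms under this composition law.
def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

t = np.pi / 2
xy = Ry(t) @ Rx(t)   # rotate about x first, then about y
yx = Rx(t) @ Ry(t)   # rotate about y first, then about x

# The two orders give different final orientations.
assert not np.allclose(xy, yx)
```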

On the other hand, every particular example (vectors in an affine space, vectors in the tangent plane of a differentiable manifold, elements of a Hilbert space of functions, etc.) may hide the underlying unity of the concept of a vector. In my opinion, even at the elementary level, it could be possible to start with a few examples of vector spaces, including vectors quite far from the idea of an arrow (for instance, function spaces or polynomial spaces).

---

As Charlie mentioned in the comments, the mathematical definition of a vector is very precise. You begin with a set whose elements satisfy certain axioms of operations (see the Wiki page), and that set is referred to as a vector space. Elements of that set are referred to as vectors.

In fact, the vector operation doesn't need to be the usual addition, nor does the field over which the vector space is defined need to be continuous. For instance, in graph theory, the set of cycles of a graph forms a vector space over the (discrete) two-element field, with the operation being symmetric difference. All the usual concepts, such as dimensionality, still apply, albeit in a more abstract guise.
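A tiny sketch of the cycle-space example (plain Python; the particular graph is my own toy choice): two triangles sharing an edge, with symmetric difference of edge sets playing the role of vector addition over GF(2):

```python
# Cycle space of a graph as a vector space over GF(2): "addition" is
# symmetric difference of edge sets. Two triangles sharing edge (1, 2):
#   c1: 0-1, 1-2, 2-0      c2: 1-2, 2-3, 3-1
c1 = frozenset({(0, 1), (1, 2), (2, 0)})
c2 = frozenset({(1, 2), (2, 3), (3, 1)})

# "Sum" of the two triangle cycles: the shared edge cancels, leaving
# the outer 4-cycle.
outer = c1 ^ c2
assert outer == {(0, 1), (2, 0), (2, 3), (3, 1)}

# Over GF(2) every element is its own additive inverse.
assert c1 ^ c1 == frozenset()
```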

For physics (or differential geometry), it's generally assumed that whenever we talk about a "vector space", it's the tangent space of some manifold. And when the manifold is transformed via some diffeomorphism, the elements of the tangent space transform, as the name implies, "like vectors", which essentially means the transformation acts on them linearly.

PeaBrane
---

To have a vector field, you need a vector space at each point of a manifold (spatial manifold in nonrelativistic physics, or spacetime manifold in relativity). In general this structure is called a vector bundle. Most commonly in physics, one considers vectors in the tangent bundle (i.e. the bundle of tangent vectors to a manifold). Vectors in the tangent bundle relate to infinitesimal motions in the manifold (like velocity vectors), unlike vectors in an arbitrary bundle.

The electric field and magnetic field can be considered tangent vector and tangent pseudovector fields (respectively) on a spatial manifold in some reference frame. Together the electromagnetic field forms a tangent vector field on spacetime.