Linear algebra (Osnabrück 2024-2025)/Part II/Lecture 35
- Angle-preserving mappings
A linear mapping $\varphi \colon V \longrightarrow W$ between Euclidean vector spaces $V$ and $W$ is called angle-preserving if, for any two vectors $u, v \in V$ (different from $0$), the relation
$$\angle (u,v) = \angle ( \varphi(u), \varphi(v) )$$
holds.
Angles are only defined for vectors different from $0$. Therefore, angle-preserving mappings are injective. An isometry is always angle-preserving, because the norm and the angles are defined with respect to an inner product, and are not changed under an isometry. Further examples of angle-preserving mappings are the homotheties $v \mapsto \lambda v$ with a scalar $\lambda \neq 0$, see Exercise 35.1. An angle-preserving mapping maps orthogonal vectors to orthogonal vectors.
Let
$$\mathbb{C} \longrightarrow \mathbb{C} , \, w \longmapsto zw ,$$
be the $\mathbb{R}$-linear mapping that is given by multiplication with the complex number $z = a + b \mathrm{i} \neq 0$.
With respect to the real basis $1, \mathrm{i}$ of $\mathbb{C}$, this mapping is described by the real $2 \times 2$-matrix
$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix} .$$
We write this matrix as
$$\sqrt{a^2+b^2} \begin{pmatrix} \frac{a}{\sqrt{a^2+b^2}} & \frac{-b}{\sqrt{a^2+b^2}} \\ \frac{b}{\sqrt{a^2+b^2}} & \frac{a}{\sqrt{a^2+b^2}} \end{pmatrix} .$$
Therefore, we have a composition of an isometry (a plane rotation) and of a homothety with the scalar factor
$$\sqrt{a^2+b^2} = |z| ;$$
in particular, this is an angle-preserving mapping.
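The angle preservation in this example can also be checked numerically. The following minimal sketch (using Python with numpy; the value $z = 3 + 4\mathrm{i}$ and the random vectors are arbitrary illustrative choices, not part of the lecture) compares the angle between two vectors before and after applying the matrix of $w \mapsto zw$.

```python
import numpy as np

a, b = 3.0, 4.0                     # arbitrary choice, z = a + bi != 0
M = np.array([[a, -b],
              [b,  a]])             # matrix of w -> zw w.r.t. the basis 1, i

def angle(x, y):
    """Angle between two nonzero vectors, via the inner product."""
    c = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

rng = np.random.default_rng(0)
u, v = rng.normal(size=2), rng.normal(size=2)
print(angle(u, v), angle(M @ u, M @ v))   # the two angles agree
```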
Let
$$\varphi \colon V \longrightarrow V$$
denote an angle-preserving linear mapping on the Euclidean vector space $V$. Then there exists an isometry
$$\psi \colon V \longrightarrow V$$
and a homothety
$$\delta \colon V \longrightarrow V$$
such that
$$\varphi = \psi \circ \delta .$$
Let
$$d = | \det \varphi |$$
and set
$$c = d^{1/n} ,$$
where $n$ denotes the dimension of $V$. Let $\delta$ be the homothety with factor $c$. We consider the mapping
$$\psi = \delta^{-1} \circ \varphi .$$
This mapping is still angle-preserving, and its determinant is $1$ or $-1$. Because of Exercise 33.18, $\psi$ is an isometry. Since a homothety commutes with every linear mapping, we get $\varphi = \delta \circ \psi = \psi \circ \delta$, as claimed.
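The decomposition in this proof is easy to carry out numerically. A minimal sketch, assuming numpy and reusing the matrix of multiplication by $z = 3 + 4\mathrm{i}$ from the example above: the factor $c = | \det \varphi |^{1/n}$ is computed, and the rescaled matrix is checked to be orthogonal.

```python
import numpy as np

M = np.array([[3.0, -4.0],
              [4.0,  3.0]])                 # angle-preserving (mult. by 3+4i)
n = M.shape[0]
c = abs(np.linalg.det(M)) ** (1.0 / n)      # homothety factor, here 5.0
psi = M / c                                 # candidate isometry
print(c)
print(np.allclose(psi.T @ psi, np.eye(n)))  # True: psi is an isometry
```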
For an angle-preserving mapping
$$\varphi \colon V \longrightarrow W$$
between Euclidean vector spaces $V$ and $W$, not only the angles at the origin, but all angles are preserved. For points $P, Q, R \in V$, the angle of the triangle $P, Q, R$ at $Q$ coincides with the angle at $\varphi(Q)$ of the image triangle $\varphi(P), \varphi(Q), \varphi(R)$, because of
$$\angle (P - Q, R - Q) = \angle ( \varphi(P - Q), \varphi(R - Q) ) = \angle ( \varphi(P) - \varphi(Q), \varphi(R) - \varphi(Q) ) .$$
- Distances between sets
For subsets $S, T$ of a metric space $M$, their distance is defined as
$$d(S,T) = \inf_{x \in S , \, y \in T} d(x,y) .$$
We will apply this concept for normed vector spaces and for Euclidean vector spaces. For two points $P, Q$, the distance between the singleton sets $\{P\}$ and $\{Q\}$ equals $d(P,Q)$.
We will mainly work in situations where the infimum is attained, that is, where the infimum is also a minimum. This is the typical behavior for linear objects.
Let $V$ be a Euclidean vector space, $U \subseteq V$ a linear subspace, and $v \in V$. Then the orthogonal projection $p_U(v)$ is the point on $U$ that has, among all points of $U$, the minimal distance to $v$. In particular, we have
$$d(v,U) = d(v, p_U(v)) = \| v - p_U(v) \| .$$
For $u \in U$, we have, due to the Pythagorean theorem,
$$\| v - u \|^2 = \| v - p_U(v) \|^2 + \| p_U(v) - u \|^2 ,$$
as $v - p_U(v)$ and $p_U(v) - u$ are perpendicular to each other. This expression is minimal if and only if $\| p_U(v) - u \| = 0$, and this holds if and only if $u = p_U(v)$.
In this context, $p_U(v)$ is also called the foot of the perpendicular dropped from $v$ onto $U$.
Let $V$ be a Euclidean vector space, $U \subseteq V$ a linear subspace, and $v \in V$. Let $u_1 , \ldots , u_m$ denote an orthonormal basis of $U$. Then
$$d(v,U) = \sqrt{ \| v \|^2 - \sum_{i = 1}^m \langle v, u_i \rangle^2 } .$$
Because of Lemma 35.6, we have
$$d(v,U) = \| v - p_U(v) \| ,$$
and, according to Lemma 32.14, we have
$$p_U(v) = \sum_{i = 1}^m \langle v, u_i \rangle u_i .$$
The vectors $p_U(v)$ and $v - p_U(v)$ are orthogonal to each other. Therefore, using the Pythagorean theorem, we have
$$\| v \|^2 = \| p_U(v) \|^2 + \| v - p_U(v) \|^2 = \sum_{i = 1}^m \langle v, u_i \rangle^2 + d(v,U)^2 ,$$
which yields the claim.
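The formula of the lemma can be illustrated numerically. A minimal sketch, assuming numpy, with an arbitrarily chosen two-dimensional subspace of $\mathbb{R}^5$: a QR decomposition supplies an orthonormal basis of $U$, and the formula is compared with the direct distance $\| v - p_U(v) \|$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 2))           # two spanning vectors of U in R^5
Q, _ = np.linalg.qr(A)                # columns of Q: orthonormal basis of U
v = rng.normal(size=5)

p = Q @ (Q.T @ v)                     # orthogonal projection p_U(v)
d_direct = np.linalg.norm(v - p)      # distance as ||v - p_U(v)||
d_formula = np.sqrt(v @ v - np.sum((Q.T @ v) ** 2))
print(np.isclose(d_direct, d_formula))   # True
```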
Let $m \leq n$, and let $U \subseteq \mathbb{R}^n$ denote the linear subspace spanned by the standard vectors $e_1 , \ldots , e_m$. Let
$$v = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} \in \mathbb{R}^n .$$
Then, the distance of $v$ to $U$ equals
$$d(v,U) = \sqrt{ v_{m+1}^2 + \cdots + v_n^2 } .$$
The foot of the perpendicular dropped from $v$ onto $U$ is
$$p_U(v) = \begin{pmatrix} v_1 \\ \vdots \\ v_m \\ 0 \\ \vdots \\ 0 \end{pmatrix} ,$$
where the last $n - m$ coordinates are $0$.
Let $a \in V$ be a vector with $a \neq 0$, and let
$$U = a^{\perp} = \{ v \in V \mid \langle v, a \rangle = 0 \}$$
denote the linear subspace defined by $a$ as normal vector. Then, for a vector $w \in V$, the distance to $U$ equals
$$d(w,U) = \frac{ | \langle w, a \rangle | }{ \| a \| } .$$
Let $u_1 , \ldots , u_{n-1}$ be an orthonormal basis of $U$, and write
$$w = \sum_{i = 1}^{n-1} c_i u_i + c_n \frac{a}{ \| a \| } .$$
Then
$$p_U(w) = \sum_{i = 1}^{n-1} c_i u_i ,$$
and, due to Lemma 35.6, we have
$$d(w,U) = \| w - p_U(w) \| = | c_n | .$$
In conjunction with
$$\langle w, a \rangle = c_n \frac{ \langle a, a \rangle }{ \| a \| } = c_n \| a \| ,$$
this yields the result.
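A minimal numerical sketch of the hyperplane case, assuming numpy; the vectors $a$ and $w$ are arbitrary illustrative choices. The second print cross-checks the formula against the length of the projection of $w$ onto the line $\mathbb{R} a$, which equals $w - p_U(w)$.

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])         # normal vector, a != 0
w = np.array([3.0, 1.0, -1.0])

d = abs(w @ a) / np.linalg.norm(a)
print(d)                               # distance from w to the plane <x, a> = 0

# Cross-check via the projection of w onto the line R*a:
print(np.linalg.norm((w @ a) / (a @ a) * a))   # same value, here 1.0
```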
These considerations also hold for affine subspaces.
Let $E$ be a real affine space over the Euclidean vector space $V$, let $P \in E$ be a point, and let $F \subseteq E$ denote an affine subspace. In case $P \in F$, the distance of $P$ to $F$ equals $0$. In general, we write
$$F = Q + U$$
with a point $Q \in F$ and with a linear subspace $U \subseteq V$. We determine the orthogonal complement $U^{\perp}$ of $U$ in $V$. If $u_1 , \ldots , u_m$ is a basis of $U$, and $w_1 , \ldots , w_k$ is a basis of $U^{\perp}$, then there exists a unique representation
$$P - Q = \sum_{i = 1}^m s_i u_i + \sum_{j = 1}^k t_j w_j .$$
In this case,
$$P' = Q + \sum_{i = 1}^m s_i u_i$$
is the foot of the perpendicular dropped from $P$ onto $F$, and the distance of $P$ to $F$ is
$$d(P,F) = \Big\| \sum_{j = 1}^k t_j w_j \Big\| .$$
If the $w_j$ form an orthonormal basis of $U^{\perp}$, then this equals $\sqrt{ \sum_{j = 1}^k t_j^2 }$.
We want to determine, in the Euclidean plane, the distance between a point $P = (x_0, y_0)$ and a line $G$ given by an equation $ax + by = c$ (with $(a,b) \neq (0,0)$). The line has the form
$$G = Q + \mathbb{R} \begin{pmatrix} -b \\ a \end{pmatrix} ,$$
where $Q$ is any point fulfilling the equation, and $\begin{pmatrix} a \\ b \end{pmatrix}$ is a vector perpendicular to $G$. We write
$$P - Q = s \begin{pmatrix} -b \\ a \end{pmatrix} + t \begin{pmatrix} a \\ b \end{pmatrix} .$$
Therefore, the foot of the perpendicular is
$$P' = Q + s \begin{pmatrix} -b \\ a \end{pmatrix} ,$$
and the distance is
$$d(P,G) = \Big\| t \begin{pmatrix} a \\ b \end{pmatrix} \Big\| = \frac{ | a x_0 + b y_0 - c | }{ \sqrt{a^2+b^2} } .$$
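This method can be checked with a small computation. The following sketch (numpy; the line $2x - y = 3$ and the point $(4,1)$ are hypothetical data chosen for illustration) computes the foot of the perpendicular and compares the two distance expressions.

```python
import numpy as np

a, b, c = 2.0, -1.0, 3.0              # line a*x + b*y = c (illustrative)
P = np.array([4.0, 1.0])              # point (illustrative)

Q = np.array([c / a, 0.0])            # one point on the line (a != 0 here)
u = np.array([-b, a])                 # direction vector of the line
s = (P - Q) @ u / (u @ u)             # coefficient of the direction part
foot = Q + s * u                      # foot of the perpendicular

dist = abs(a * P[0] + b * P[1] - c) / np.hypot(a, b)
print(foot, dist, np.linalg.norm(P - foot))   # both distances agree
```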
Let $V$ be a Euclidean vector space, and let $F_1 = P_1 + U_1$ and $F_2 = P_2 + U_2$ denote nonempty affine subspaces with the linear subspaces $U_1, U_2 \subseteq V$. Let
$$P_2 - P_1 = u_1 + u_2 + w$$
with $u_1 \in U_1$, $u_2 \in U_2$, and $w \in (U_1 + U_2)^{\perp}$. Then the distance equals
$$d(F_1, F_2) = \| w \| ;$$
it is attained in the points $Q_1 = P_1 + u_1 \in F_1$ and $Q_2 = P_2 - u_2 \in F_2$. In particular, the connecting vector of the points where the minimal distance is attained is perpendicular to $U_1$ and to $U_2$.

We write $P_2 - P_1 = u_1 + u_2 + w$ with $u_1 \in U_1$, $u_2 \in U_2$, and $w \in (U_1 + U_2)^{\perp}$; such a decomposition does always exist; $u_1$ and $u_2$ are not uniquely determined (in case $U_1 \cap U_2 \neq 0$), but $w$ is uniquely determined. We have
$$Q_2 - Q_1 = P_2 - u_2 - (P_1 + u_1) = w ,$$
and $Q_1 \in F_1$ and $Q_2 \in F_2$. The distance between $Q_1$ and $Q_2$ is $\| w \|$. For arbitrary points $R_1 = P_1 + v_1 \in F_1$ and $R_2 = P_2 + v_2 \in F_2$ fulfilling $v_1 \in U_1$ and $v_2 \in U_2$, we have
$$R_2 - R_1 = P_2 + v_2 - P_1 - v_1 = u_1 + u_2 + v_2 - v_1 + w ,$$
where $u_1 + u_2 + v_2 - v_1 \in U_1 + U_2$ is perpendicular to $w$. Due to the Pythagorean theorem, this yields
$$\| R_2 - R_1 \|^2 = \| u_1 + u_2 + v_2 - v_1 \|^2 + \| w \|^2 \geq \| w \|^2 ,$$
that is,
$$d(R_1, R_2) \geq \| w \| .$$

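Numerically, the vector $w$ in this theorem is the component of $P_2 - P_1$ orthogonal to $U_1 + U_2$, which a least-squares solver produces as the residual. A minimal sketch, assuming numpy, with arbitrarily chosen illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
P1, P2 = rng.normal(size=4), rng.normal(size=4)
U1 = rng.normal(size=(4, 1))          # columns span U1
U2 = rng.normal(size=(4, 2))          # columns span U2

B = np.hstack([U1, U2])               # columns span U1 + U2
coeffs, *_ = np.linalg.lstsq(B, P2 - P1, rcond=None)
w = (P2 - P1) - B @ coeffs            # component orthogonal to U1 + U2
print(np.linalg.norm(w))              # the distance d(F1, F2)
```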
In the previous statement, the points where the minimum is attained are not uniquely determined; for example, think of two parallel lines in the plane. If the intersection $U_1 \cap U_2$ of the corresponding linear subspaces equals $0$, then we have uniqueness. This is the case for skew lines.
Two (affine) lines are called skew if they do not have any point in common and if they are also not parallel, meaning that their direction vectors are linearly independent. Then, these vectors generate a plane; there exists a vector perpendicular to this plane. We can compute one such vector, a normal vector, with the cross product. Let
$$G = P + \mathbb{R} u$$
and
$$H = Q + \mathbb{R} v .$$
The linear system of equations
$$Q - P = s u + t v + r w , \quad \text{where } w = u \times v ,$$
has a unique solution $(s,t,r) \in \mathbb{R}^3$. Here, $P + s u$ and $Q - t v$ are the feet of the perpendicular, where the distance of the lines is attained, according to Lemma 35.12. This distance is
$$d(G,H) = | r | \cdot \| w \| .$$
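A minimal numerical sketch of this procedure (numpy; the two skew lines, the $x$-axis and the shifted $y$-axis, are illustrative choices):

```python
import numpy as np

P, u = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])   # G = P + R*u
Q, v = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])   # H = Q + R*v

w = np.cross(u, v)                    # normal vector of the spanned plane
s, t, r = np.linalg.solve(np.column_stack([u, v, w]), Q - P)
F1, F2 = P + s * u, Q - t * v         # the two feet of the perpendicular
print(F1, F2, abs(r) * np.linalg.norm(w))   # distance: here 1.0
```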
Let
$$G = P + \mathbb{R} u$$
and
$$H = Q + \mathbb{R} v$$
be skew lines in $\mathbb{R}^3$, with direction vectors $u$ and $v$. Let $w$ be a normed vector that is perpendicular to $u$ and $v$. Then
$$d(G,H) = | \langle Q - P, w \rangle | .$$
We use Example 35.13, and consider
$$Q - P = s u + t v + r w .$$
According to Cramer's rule, we obtain, using Lemma 33.3 (5) and the property that $w$ is a linear multiple of $u \times v$,
$$r = \frac{ \det (u, v, Q - P) }{ \det (u, v, w) } = \frac{ \langle u \times v , Q - P \rangle }{ \langle u \times v , w \rangle } = \langle Q - P, w \rangle .$$
Since $\| w \| = 1$, the distance equals $| r | = | \langle Q - P, w \rangle |$.
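The formula of the lemma gives the same distance directly; here is a short check for the same illustrative lines as in the previous sketch:

```python
import numpy as np

P, u = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
Q, v = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])

w = np.cross(u, v)
w = w / np.linalg.norm(w)             # normed vector perpendicular to u and v
print(abs((Q - P) @ w))               # distance: again 1.0
```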
Let
$$G = P + \mathbb{R} u$$
and
$$H = Q + \mathbb{R} v$$
be skew lines. We want to understand the distance problem between the two lines as an extremal problem in the sense of higher-dimensional analysis. Let
$$P = \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} , \, u = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix}$$
and
$$Q = \begin{pmatrix} q_1 \\ q_2 \\ q_3 \end{pmatrix} , \, v = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} .$$
The square of the distance between the two points
$$P + s u$$
and
$$Q + t v$$
is (setting $w = Q - P$)
$$f(s,t) = \| w + t v - s u \|^2 = \| w \|^2 + t^2 \| v \|^2 + s^2 \| u \|^2 + 2 t \langle w, v \rangle - 2 s \langle w, u \rangle - 2 s t \langle u, v \rangle .$$
We interpret this expression with methods of Analysis 2. We consider the data given by the lines as fixed parameters, so that we have a real-valued expression in the two real variables $s$ and $t$, and we want to determine its extrema. The partial derivatives are
$$\frac{\partial f}{\partial s} = 2 s \| u \|^2 - 2 \langle w, u \rangle - 2 t \langle u, v \rangle$$
and
$$\frac{\partial f}{\partial t} = 2 t \| v \|^2 + 2 \langle w, v \rangle - 2 s \langle u, v \rangle .$$
If we equate these with $0$, then we obtain an inhomogeneous linear system of equations with two equations in the variables $s$ and $t$. Using Cramer's rule, we get
$$s = \frac{ \langle w, u \rangle \| v \|^2 - \langle w, v \rangle \langle u, v \rangle }{ \| u \|^2 \| v \|^2 - \langle u, v \rangle^2 }$$
and
$$t = \frac{ \langle u, v \rangle \langle w, u \rangle - \| u \|^2 \langle w, v \rangle }{ \| u \|^2 \| v \|^2 - \langle u, v \rangle^2 } .$$
If $u$ and $v$ are normed, then these expressions simplify to
$$s = \frac{ \langle w, u \rangle - \langle w, v \rangle \langle u, v \rangle }{ 1 - \langle u, v \rangle^2 }$$
and
$$t = \frac{ \langle u, v \rangle \langle w, u \rangle - \langle w, v \rangle }{ 1 - \langle u, v \rangle^2 } .$$
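The critical point can also be computed by solving the $2 \times 2$ system directly. A minimal sketch (numpy; the two skew lines are arbitrary illustrative choices), which recovers the minimal distance as the norm of $w + t v - s u$:

```python
import numpy as np

P, u = np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])   # G = P + R*u
Q, v = np.array([0.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])   # H = Q + R*v
w = Q - P

# Setting both partial derivatives to zero gives:
#    s<u,u> - t<u,v> =  <w,u>
#   -s<u,v> + t<v,v> = -<w,v>
A = np.array([[u @ u, -(u @ v)],
              [-(u @ v), v @ v]])
rhs = np.array([w @ u, -(w @ v)])
s, t = np.linalg.solve(A, rhs)
print(np.linalg.norm(w + t * v - s * u))   # minimal distance, 1/sqrt(3)
```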