Linear algebra (Osnabrück 2024-2025)/Part II/Lecture 38




Bilinear forms

Real inner products are positive definite symmetric bilinear forms. In the following, we discuss bilinear forms in general. Besides inner products, there are the Hessian forms, which are important in higher-dimensional analysis in order to determine extrema, and the Minkowski forms, which are used in the description of special relativity (see Lecture 40).


Let $K$ be a field, and let $V$ denote a $K$-vector space. A mapping

$$\langle - , - \rangle \colon V \times V \longrightarrow K , \quad (v , w) \longmapsto \langle v , w \rangle ,$$

is called a bilinear form if, for all $v \in V$, the induced mappings

$$V \longrightarrow K , \quad w \longmapsto \langle v , w \rangle ,$$

and, for all $w \in V$, the induced mappings

$$V \longrightarrow K , \quad v \longmapsto \langle v , w \rangle ,$$

are $K$-linear.

Bilinear simply means being linear in each of the two components. An extreme example is the zero form, which assigns to every pair of vectors the value $0$. It is easy to describe many different bilinear forms on $K^n$.


Let $V = K^n$, and let $a_{ij} \in K$ denote fixed numbers, for $1 \leq i , j \leq n$. Then the assignment

$$\langle v , w \rangle = \langle ( v_1 , \ldots , v_n ) , ( w_1 , \ldots , w_n ) \rangle := \sum_{1 \leq i , j \leq n} a_{ij} v_i w_j$$

is a bilinear form. In case

$$a_{ij} = 0$$

for all $i , j$, this is the zero form; in case

$$a_{ij} = \delta_{ij} = \begin{cases} 1 & \text{for } i = j , \\ 0 & \text{for } i \neq j , \end{cases}$$

we have the standard inner product (where the expression makes sense for every field; the property of being positive definite only makes sense for an ordered field). In case $n = 4$ and

$$a_{11} = a_{22} = a_{33} = 1 , \quad a_{44} = -1 , \quad a_{ij} = 0 \text{ for } i \neq j ,$$

we talk about a Minkowski form. For $n = 2$ and

$$a_{12} = 1 , \quad a_{21} = -1 , \quad a_{11} = a_{22} = 0 ,$$

this is the determinant in the two-dimensional case.
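
To make the example concrete, here is a small Python sketch (ours, not part of the lecture; the function name `bilinear_form` is our own) that evaluates a coefficient-defined bilinear form and reproduces the standard inner product, the two-dimensional determinant, and a Minkowski form.

```python
import numpy as np

def bilinear_form(A):
    """Return the bilinear form <v, w> = sum_{i,j} a_ij * v_i * w_j
    determined by the coefficient matrix A = (a_ij)."""
    A = np.asarray(A, dtype=float)
    return lambda v, w: float(np.asarray(v) @ A @ np.asarray(w))

# Standard inner product on R^2: a_ij = delta_ij.
inner = bilinear_form(np.eye(2))
print(inner([1, 2], [3, 4]))        # 11.0 = 1*3 + 2*4

# Two-dimensional determinant: a_12 = 1, a_21 = -1, a_11 = a_22 = 0.
det2 = bilinear_form([[0, 1], [-1, 0]])
print(det2([1, 2], [3, 4]))         # -2.0 = 1*4 - 2*3

# Minkowski form on R^4: diagonal coefficients (1, 1, 1, -1).
mink = bilinear_form(np.diag([1, 1, 1, -1]))
print(mink([0, 0, 0, 1], [0, 0, 0, 1]))   # -1.0
```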

An important property of a bilinear form (which inner products fulfill) is formulated in the next definition.


Let $K$ be a field, and let $V$ denote a $K$-vector space. A bilinear form

$$\langle - , - \rangle \colon V \times V \longrightarrow K$$

is called nondegenerate if, for every $v \in V$, $v \neq 0$, the induced mapping

$$V \longrightarrow K , \quad w \longmapsto \langle v , w \rangle ,$$

and, for every $w \in V$, $w \neq 0$, the induced mapping

$$V \longrightarrow K , \quad v \longmapsto \langle v , w \rangle ,$$

are not the zero mapping.

We will prove in Lemma 38.5 that, for a vector space endowed with a nondegenerate bilinear form, there is a natural bijective relation between vectors and linear forms. This holds in particular for inner products. In general, there is a strong relation between bilinear forms and linear mappings to the dual space.


Let $K$ be a field, and let $V$ denote a $K$-vector space with its dual space $V^* = \operatorname{Hom}_K ( V , K )$. Let

$$\varphi \colon V \longrightarrow V^*$$

be a linear mapping. Then

$$\langle v , w \rangle := ( \varphi (v) ) (w)$$

defines a bilinear form

$$\langle - , - \rangle \colon V \times V \longrightarrow K$$

on $V$.

Since $\varphi (v) \in V^*$, the evaluation at a vector $w \in V$ yields an element of the base field. The linearity in the second component rests on the fact that $\varphi (v)$ belongs to the dual space. The linearity in the first component rests on the linearity of $\varphi$.
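
In coordinates, this correspondence is easy to see; the following Python sketch (ours; the matrix $A$ is just an arbitrary illustration) represents $\varphi$ by a matrix $A$ with respect to the standard basis and its dual basis, so that $\varphi (v)$ is the linear form $w \mapsto (A v)^{\text{tr}} w$.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])      # matrix of a linear mapping phi: V -> V*

def phi(v):
    """phi(v) is a linear form on V, i.e. a function V -> K."""
    return lambda w: float((A @ np.asarray(v)) @ np.asarray(w))

# The induced bilinear form <v, w> = (phi(v))(w):
v = np.array([1.0, 1.0])
w = np.array([2.0, -1.0])
print(phi(v)(w))                # 5.0 = (A v)^tr w
```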



The gradient

Let $K$ be a field, and let $V$ denote a $K$-vector space, endowed with a bilinear form $\langle - , - \rangle$. Then the following statements hold.
  1. For every vector $v \in V$, the assignments

     $$V \longrightarrow K , \quad w \longmapsto \langle v , w \rangle ,$$

     and

     $$V \longrightarrow K , \quad w \longmapsto \langle w , v \rangle ,$$

     are $K$-linear.

  2. The assignment

     $$V \longrightarrow V^* , \quad v \longmapsto \langle v , - \rangle ,$$

     is $K$-linear.

  3. If $\langle - , - \rangle$ is nondegenerate, then the assignment in (2) is injective. If, moreover, $V$ is finite-dimensional, then this assignment is bijective.

(1) follows immediately from the bilinearity.
(2). Let $u , v \in V$ and $s , t \in K$. Then, for every vector $w \in V$, we have

$$\langle s u + t v , w \rangle = s \langle u , w \rangle + t \langle v , w \rangle ,$$

and this means the linearity of the assignment.
(3). Since the assignment is linear by part (2), we have to show that its kernel is trivial. So let $v \in V$ be such that $\langle v , - \rangle$ is the zero mapping. This means that $\langle v , w \rangle = 0$ for all $w \in V$. This implies, by the definition of nondegeneracy, that $v = 0$.
If $V$ has finite dimension, then $V$ and its dual space $V^*$ have the same dimension, so we have an injective linear mapping between vector spaces of the same dimension. Such a mapping is also bijective, by Corollary 11.9.


If a finite-dimensional vector space is endowed with a fixed nondegenerate bilinear form, then there exists, for every linear form $f \in V^*$, a uniquely determined vector that describes this linear form. More precisely: there exists a vector $u \in V$ such that

$$f (v) = \langle u , v \rangle$$

holds for all $v \in V$, and a vector $w \in V$ such that

$$f (v) = \langle v , w \rangle$$

holds for all $v \in V$. In this situation, $u$ is called the left gradient for $f$ with respect to the bilinear form, and $w$ is called the right gradient. For an inner product and, more generally, for any nondegenerate symmetric bilinear form (see below), the two concepts coincide, and we just talk about the gradient. For a Euclidean vector space, we formulate this relation explicitly.


Let $V$ be a Euclidean vector space, and let

$$f \colon V \longrightarrow \mathbb{R}$$

denote a linear form. Then there exists a uniquely determined vector $w \in V$ such that

$$f (v) = \langle v , w \rangle \quad \text{for all } v \in V .$$

If $u_1 , \ldots , u_n$ is an orthonormal basis of $V$, and $w = \sum_{i = 1}^n a_i u_i$, then this vector equals

$$w = \sum_{i = 1}^n f ( u_i ) u_i .$$

This follows immediately from Lemma 38.5 (3). The extra statement is clear because of

$$f ( u_j ) = \langle u_j , w \rangle = \Big\langle u_j , \sum_{i = 1}^n a_i u_i \Big\rangle = \sum_{i = 1}^n a_i \langle u_j , u_i \rangle = a_j .$$
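
As a numerical illustration (our own sketch, not part of the formal development), the formula $w = \sum_i f ( u_i ) u_i$ can be checked in $\mathbb{R}^2$ with the standard inner product, using two different orthonormal bases:

```python
import numpy as np

def gradient(f, onb):
    """Gradient of a linear form f with respect to the standard inner
    product, computed from an orthonormal basis: w = sum_i f(u_i) u_i."""
    return sum(f(u) * u for u in onb)

f = lambda v: 3 * v[0] - 5 * v[1]          # a linear form on R^2

standard = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rotated = [np.array([1.0, 1.0]) / np.sqrt(2),
           np.array([1.0, -1.0]) / np.sqrt(2)]

print(gradient(f, standard))               # [ 3. -5.]
print(gradient(f, rotated))                # [ 3. -5.]  (basis independent)

v = np.array([2.0, 7.0])
print(f(v), v @ gradient(f, standard))     # -29.0 -29.0
```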



The Gram matrix

Let $K$ be a field, $V$ a finite-dimensional $K$-vector space, and let $\langle - , - \rangle$ denote a bilinear form on $V$. Let $v_1 , \ldots , v_n$ be a basis of $V$. Then the $n \times n$-matrix

$$G = ( \langle v_i , v_j \rangle )_{1 \leq i , j \leq n}$$

is called the Gram matrix of $\langle - , - \rangle$ with respect to this basis.

In Example 38. , the matrix $( a_{ij} )_{ij}$ is the Gram matrix with respect to the standard basis of $K^n$. If the Gram matrix $G = ( a_{ij} )_{ij}$ of a bilinear form with respect to a basis $v_1 , \ldots , v_n$ is given, then one can compute $\langle v , w \rangle$ for arbitrary vectors $v , w \in V$. For this, just write $v = \sum_{i = 1}^n s_i v_i$ and $w = \sum_{j = 1}^n t_j v_j$; then we get, using the general distributive law,

$$\langle v , w \rangle = \Big\langle \sum_{i = 1}^n s_i v_i , \sum_{j = 1}^n t_j v_j \Big\rangle = \sum_{i = 1}^n \sum_{j = 1}^n s_i t_j \langle v_i , v_j \rangle = \sum_{i , j} a_{ij} s_i t_j .$$

Thus we obtain the value of the bilinear form at two vectors by applying the Gram matrix to the coordinate tuple of the second vector, and multiplying the result (which is a column vector) with the coordinate tuple of the first vector, considered as a row tuple. Put briefly,

$$\langle v , w \rangle = s^{\text{tr}} G t .$$
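
A minimal Python check of this matrix formula (the Gram matrix and coordinates are our own example data):

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])    # Gram matrix w.r.t. some basis v_1, v_2
s = np.array([1.0, 2.0])      # coordinates of v = v_1 + 2 v_2
t = np.array([4.0, -1.0])     # coordinates of w = 4 v_1 - v_2

# <v, w> = s^tr G t = sum_{i,j} a_ij s_i t_j
print(s @ G @ t)              # 8 - 1 + 8 - 6 = 9.0
```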


Let $K$ be a field, $V$ a finite-dimensional $K$-vector space, and let $\langle - , - \rangle$ denote a bilinear form on $V$. Let $v_1 , \ldots , v_n$ and $w_1 , \ldots , w_n$ be two bases of $V$, and let $G$ and $H$ be the Gram matrices of $\langle - , - \rangle$ with respect to these bases. Suppose that we have the relations

$$w_j = \sum_{i = 1}^n m_{ij} v_i$$

between the basis elements, which we encode in the transformation matrix $M = ( m_{ij} )_{ij}$. Then we have the relation

$$H = M^{\text{tr}} G M$$

among the Gram matrices.

We have

$$\langle w_r , w_s \rangle = \Big\langle \sum_{i = 1}^n m_{ir} v_i , \sum_{j = 1}^n m_{js} v_j \Big\rangle = \sum_{i , j} m_{ir} m_{js} \langle v_i , v_j \rangle = \big( M^{\text{tr}} G M \big)_{rs} .$$
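
The transformation rule is easy to verify numerically; in the following sketch (ours), the new basis is $w_1 = v_1$ and $w_2 = v_1 + v_2$:

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])    # Gram matrix w.r.t. v_1, v_2
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # columns: w_1, w_2 in v-coordinates

H = M.T @ G @ M
print(H)                      # [[2. 3.], [3. 7.]]

# Direct check of one entry: <w_2, w_2> = <v_1 + v_2, v_1 + v_2>
#   = <v_1,v_1> + 2<v_1,v_2> + <v_2,v_2> = 2 + 2 + 3 = 7.
```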



Symmetric bilinear forms

Let $K$ be a field, let $V$ be a $K$-vector space, and let $\langle - , - \rangle$ denote a bilinear form on $V$. The bilinear form is called symmetric if

$$\langle v , w \rangle = \langle w , v \rangle$$

holds for all $v , w \in V$.

As in the case of an inner product, we have again a polarization formula.


Let $K$ be a field, and suppose that its characteristic is not $2$. Let $\langle - , - \rangle$ denote a symmetric bilinear form on the $K$-vector space $V$. Then the relation

$$\langle v , w \rangle = \frac{1}{2} \big( \langle v + w , v + w \rangle - \langle v , v \rangle - \langle w , w \rangle \big)$$

holds.

Proof
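
The computation behind this (our short sketch; the official proof is referenced above) is the expansion

$$\langle v + w , v + w \rangle = \langle v , v \rangle + \langle v , w \rangle + \langle w , v \rangle + \langle w , w \rangle = \langle v , v \rangle + 2 \langle v , w \rangle + \langle w , w \rangle ,$$

using bilinearity and symmetry; solving for $\langle v , w \rangle$ requires dividing by $2$, which is why the characteristic must not be $2$.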



Let $V$ be a finite-dimensional $K$-vector space, and let $\langle - , - \rangle$ denote a nondegenerate symmetric bilinear form on $V$. For a given linear form

$$f \colon V \longrightarrow K ,$$

the uniquely determined vector $w \in V$ fulfilling

$$f (v) = \langle v , w \rangle$$

for all $v \in V$ is called the gradient of $f$ with respect to the bilinear form.

Let $K$ be a field, $V$ a $K$-vector space, and let $\langle - , - \rangle$ denote a symmetric bilinear form on $V$. Two vectors $v , w \in V$ are called orthogonal if

$$\langle v , w \rangle = 0$$

holds.

Let $K$ be a field, $V$ a $K$-vector space, and let $\langle - , - \rangle$ denote a symmetric bilinear form on $V$. A basis $v_i$, $i \in I$, of $V$ is called an orthogonal basis if

$$\langle v_i , v_j \rangle = 0$$

holds for all $i \neq j$.

For a symmetric bilinear form, it is possible, in contrast to the case of an inner product, that a vector is orthogonal to itself; for instance, with respect to the Minkowski form above, the vector $v = (1,0,0,1)$ satisfies $\langle v , v \rangle = 1 - 1 = 0$. It is even possible, at least in the degenerate case, that a vector is orthogonal to all vectors. As in the case of an inner product, there exist orthogonal bases.


Let $K$ be a field, $V$ a $K$-vector space, and let $\langle - , - \rangle$ denote a symmetric bilinear form on $V$. The linear subspace

$$V^{\perp} = \{ v \in V \mid \langle v , w \rangle = 0 \text{ for all } w \in V \}$$

is called the degeneracy space of the bilinear form.

The degeneracy space is indeed a linear subspace of $V$; see Exercise 38.13.


Let $K$ be a field, and let $V$ be a finite-dimensional $K$-vector space endowed with a symmetric bilinear form $\langle - , - \rangle$. Then there exists an orthogonal basis of $V$.

Proof
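
The linked proof proceeds by induction on the dimension. The following Python sketch (ours, purely numerical, over $\mathbb{R}$) mirrors that induction: it turns a symmetric Gram matrix $G$ into a diagonal one by simultaneous row and column operations, recording them in a matrix $M$ whose columns are the coordinates of an orthogonal basis.

```python
import numpy as np

def orthogonal_basis(G, tol=1e-12):
    """Given the Gram matrix G of a symmetric bilinear form, return an
    invertible M whose columns are coordinates of an orthogonal basis,
    i.e. M.T @ G @ M is diagonal (congruence transformations)."""
    G = np.array(G, dtype=float)
    n = G.shape[0]
    M = np.eye(n)
    for k in range(n):
        if abs(G[k, k]) < tol:
            # Try to swap in a later basis vector with <v_i, v_i> != 0.
            for i in range(k + 1, n):
                if abs(G[i, i]) > tol:
                    G[[k, i]] = G[[i, k]]
                    G[:, [k, i]] = G[:, [i, k]]
                    M[:, [k, i]] = M[:, [i, k]]
                    break
            else:
                # Otherwise replace v_k by v_k + v_i with <v_k, v_i> != 0;
                # then <v_k, v_k> becomes 2 <v_k, v_i> != 0 (char != 2).
                for i in range(k + 1, n):
                    if abs(G[k, i]) > tol:
                        G[k, :] += G[i, :]
                        G[:, k] += G[:, i]
                        M[:, k] += M[:, i]
                        break
        if abs(G[k, k]) < tol:
            continue  # v_k lies in the degeneracy space
        # Make all later basis vectors orthogonal to v_k.
        for i in range(k + 1, n):
            c = G[i, k] / G[k, k]
            G[i, :] -= c * G[k, :]
            G[:, i] -= c * G[:, k]
            M[:, i] -= c * M[:, k]
    return M

G = np.array([[0.0, 1.0], [1.0, 0.0]])   # hyperbolic plane: <v_i, v_i> = 0
M = orthogonal_basis(G)
print(M.T @ G @ M)                        # diagonal: [[2. 0.], [0. -0.5]]
```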



The vector space of bilinear forms

Let $V$ be a vector space over a field $K$, and let $\langle - , - \rangle_1$ and $\langle - , - \rangle_2$ denote bilinear forms on $V$. The sum of these two bilinear forms is defined pointwise, that is,

$$\langle v , w \rangle := \langle v , w \rangle_1 + \langle v , w \rangle_2 .$$

In the same way, for a scalar $s \in K$, the form $s \langle - , - \rangle_1$ is defined via

$$\langle v , w \rangle := s \langle v , w \rangle_1 .$$

These functions are again bilinear, see Exercise 38.23. With these definitions, we obtain a vector space structure on the set of all bilinear forms on $V$.


Let $V$ be a vector space over the field $K$. The set of all bilinear forms on $V$, endowed with pointwise addition and scalar multiplication, is called the vector space of bilinear forms.

It is denoted by $\operatorname{Bil} (V)$.

Let $V$ be a finite-dimensional $K$-vector space. For every basis $v_1 , \ldots , v_n$ of $V$, the mapping

$$\operatorname{Bil} (V) \longrightarrow \operatorname{Mat}_n (K) , \quad \langle - , - \rangle \longmapsto ( \langle v_i , v_j \rangle )_{1 \leq i , j \leq n} ,$$

which assigns to a bilinear form its Gram matrix with respect to the given basis, is an isomorphism of vector spaces.

The injectivity of the mapping follows from Lemma 16.6. The surjectivity follows from the fact that an arbitrary matrix may be interpreted as a bilinear form in the sense of Example 38. . The linearity follows immediately from the pointwise definition of the vector space structure on $\operatorname{Bil} (V)$.



Sesquilinear forms

Let $V$ and $W$ be vector spaces over the complex numbers $\mathbb{C}$. A mapping

$$\varphi \colon V \longrightarrow W$$

is called antilinear if

$$\varphi ( u + v ) = \varphi (u) + \varphi (v)$$

holds for all $u , v \in V$, and if

$$\varphi ( s v ) = \overline{s} \, \varphi (v)$$

holds for all $s \in \mathbb{C}$ and $v \in V$.

If we consider the complex vector spaces as real vector spaces, then an antilinear mapping is, in particular, a real-linear mapping. We have encountered this property already in the context of a complex inner product.


Let $V$ be a $\mathbb{C}$-vector space. A mapping

$$\langle - , - \rangle \colon V \times V \longrightarrow \mathbb{C} , \quad ( v , w ) \longmapsto \langle v , w \rangle ,$$

is called a sesquilinear form if, for all $v \in V$, the induced mappings

$$V \longrightarrow \mathbb{C} , \quad w \longmapsto \langle v , w \rangle ,$$

are $\mathbb{C}$-antilinear, and, for all $w \in V$, the induced mappings

$$V \longrightarrow \mathbb{C} , \quad v \longmapsto \langle v , w \rangle ,$$

are $\mathbb{C}$-linear.

We impose linearity in the first component and antilinearity in the second component. The opposite convention is also common.

Many concepts and statements carry over, with some minor changes, from the real to the complex situation.


Let $V$ be a finite-dimensional $\mathbb{C}$-vector space, endowed with a sesquilinear form $\langle - , - \rangle$. Let $v_1 , \ldots , v_n$ be a basis of $V$. The $n \times n$-matrix

$$G = ( \langle v_i , v_j \rangle )_{1 \leq i , j \leq n}$$

is called the Gram matrix of $\langle - , - \rangle$ with respect to this basis.

If the Gram matrix $G = ( a_{ij} )_{ij}$ of a sesquilinear form with respect to a basis $v_1 , \ldots , v_n$ is given, then we can compute $\langle v , w \rangle$ for arbitrary vectors $v , w \in V$. We write $v = \sum_{i = 1}^n s_i v_i$ and $w = \sum_{j = 1}^n t_j v_j$, and we obtain, using the general distributive law,

$$\langle v , w \rangle = \Big\langle \sum_{i = 1}^n s_i v_i , \sum_{j = 1}^n t_j v_j \Big\rangle = \sum_{i , j} s_i \overline{ t_j } \langle v_i , v_j \rangle = \sum_{i , j} a_{ij} s_i \overline{ t_j } .$$

Thus, we get the value of the sesquilinear form at two vectors by applying the Gram matrix to the complex-conjugated coordinate tuple of the second vector, and by multiplying the result (which is a column vector) with the coordinate tuple of the first vector, considered as a row tuple. Therefore,

$$\langle v , w \rangle = s^{\text{tr}} G \, \overline{t} .$$
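
In Python, the only change compared to the bilinear case is the complex conjugation of the second coordinate tuple (the example data is ours):

```python
import numpy as np

G = np.array([[1.0 + 0j, 2.0 - 1j],
              [0.0 + 1j, 3.0 + 0j]])   # Gram matrix w.r.t. some basis
s = np.array([1.0 + 1j, 2.0 + 0j])     # coordinates of v
t = np.array([0.0 + 0j, 1.0 - 1j])     # coordinates of w

# <v, w> = s^tr G conj(t)
print(s @ G @ np.conj(t))              # (8+10j)
```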


Let $V$ be a finite-dimensional $\mathbb{C}$-vector space, endowed with a sesquilinear form $\langle - , - \rangle$. Let $v_1 , \ldots , v_n$ and $w_1 , \ldots , w_n$ denote two bases of $V$, and let $G$ and $H$ denote the Gram matrices of $\langle - , - \rangle$ with respect to these bases. Suppose that the basis elements are related by

$$w_j = \sum_{i = 1}^n m_{ij} v_i ,$$

which we encode in the transformation matrix $M = ( m_{ij} )_{ij}$. Then the Gram matrices are related by

$$H = M^{\text{tr}} G \, \overline{M} .$$
We have

$$\langle w_r , w_s \rangle = \Big\langle \sum_{i = 1}^n m_{ir} v_i , \sum_{j = 1}^n m_{js} v_j \Big\rangle = \sum_{i , j} m_{ir} \overline{ m_{js} } \langle v_i , v_j \rangle = \big( M^{\text{tr}} G \, \overline{M} \big)_{rs} .$$


The set of all sesquilinear forms on a $\mathbb{C}$-vector space $V$ forms a $\mathbb{C}$-vector space. It is denoted by $\operatorname{Sesq} (V)$.



Hermitian forms

A sesquilinear form $\langle - , - \rangle$ on a complex vector space $V$ is called Hermitian if

$$\langle v , w \rangle = \overline{ \langle w , v \rangle }$$

holds for all $v , w \in V$.

A complex square matrix

$$A = ( a_{ij} )_{1 \leq i , j \leq n}$$

is called Hermitian if

$$a_{ij} = \overline{ a_{ji} }$$

holds for all $i , j$.
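
The two definitions fit together: the Gram matrix of a Hermitian form with respect to any basis satisfies $\langle v_i , v_j \rangle = \overline{ \langle v_j , v_i \rangle }$ and is therefore a Hermitian matrix. A small example (ours):

$$A = \begin{pmatrix} 2 & 1 + \mathrm{i} \\ 1 - \mathrm{i} & 5 \end{pmatrix} , \quad a_{12} = 1 + \mathrm{i} = \overline{ 1 - \mathrm{i} } = \overline{ a_{21} } ;$$

note that the diagonal entries of a Hermitian matrix are real, since $a_{ii} = \overline{ a_{ii} }$.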

