
I am currently doing research work on the inversion of geophysical data using machine learning. I have come across some research work where a Convolutional Neural Network (CNN) has been used effectively for this purpose (for example, this).

I am particularly interested in how to prepare my input and output labelled data for this machine learning application, since the input will be the observed geophysical signal and the labelled output will be the causative density or susceptibility distribution (for gravity and magnetic data, respectively).

I need some assistance and insight as to how to prepare the data for this CNN application.

Additional Explanation

Experimental setup: Measurements are taken above the ground surface. These measurements are signals that reflect the distribution of a physical property (e.g., density) in the ground beneath. For modelling, the subsurface is discretised into squares or cubes, each having a constant but unknown physical property (e.g., density).

How it applies to the CNN: I want my input data to be the measurements taken above ground. The output should then be the causative density distribution (that is, the value of the density in each cube/square).

See the attached picture (the flat top is the "above ground" surface; all other prisms represent the discretisation of the subsurface. I want to train the CNN to output a density value for each cube in the subsurface, given the above-ground measurements).


nbro
W. Oni

1 Answer


I haven't done similar work with CNNs, but I can list a couple of approaches, maybe it helps you get started.

If I understand it correctly, the question is mostly about shapes of the data, so that's what I'll focus on as well.

Option A: You can keep your input as a 2D "image" with a single channel and just use 2D convolutions to expand to the required output size, reading the output channels as depth. This could work, but it doesn't incorporate the spatial dependency along the depth dimension of the 3D output.
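As a shape sketch of Option A (all sizes here are hypothetical, and the final 2D convolution is faked with a random 1×1 weight tensor purely to show the tensor shapes; in a real model it would be something like `torch.nn.Conv2d(in_channels, D, 3, padding=1)`):

```python
import numpy as np

# Hypothetical sizes: a 32x32 grid of surface measurements and a
# subsurface discretised into D = 16 layers of 32x32 cells.
H, W, D = 32, 32, 16

# Input: one sample, one channel, NCHW layout -> (1, 1, H, W).
measurements = np.random.rand(1, 1, H, W)

# Stand-in for the last 2D conv layer: a random 1x1 "convolution"
# with D output channels, applied per pixel via einsum.
weights_1x1 = np.random.rand(D, 1)  # (out_channels, in_channels)
out = np.einsum('oc,nchw->nohw', weights_1x1, measurements)
print(out.shape)  # (1, 16, 32, 32)

# Read the channel axis as depth: out[0, k] is the predicted density
# of layer k for every (x, y) cell -> a (D, H, W) subsurface grid.
density = out[0]
print(density.shape)  # (16, 32, 32)
```

The drawback shows up in this last step: the D "depth" values at each pixel are just channels, so the convolutions never see them as spatially adjacent layers.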

Option B: You can consider your 2D input to be 3D, but with only one unit in the extra (depth) dimension, then use a few 3D transposed convolutions to reach the correct output shape. This is nice because you rely on 3D translation invariance, which is probably what you want for the densities, though it would need to be tested. In this case you would have only one channel in both the input and the output; that doesn't mean you can only use one channel inside the network, but you need to reduce back to one channel towards the end.
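A shape sketch of Option B (sizes hypothetical; a nearest-neighbour repeat stands in for the transposed convolutions, again just to show the shape flow — a real model would grow the depth axis with e.g. `torch.nn.ConvTranspose3d`):

```python
import numpy as np

# Hypothetical sizes: 32x32 measurements, target depth D = 16.
H, W, D = 32, 32, 16

# Treat the 2D input as a 3D volume with a single unit in the depth
# dimension: NCDHW layout -> (1, 1, 1, H, W).
measurements = np.random.rand(1, 1, H, W)
volume = measurements[:, :, np.newaxis, :, :]
print(volume.shape)  # (1, 1, 1, 32, 32)

# A real model would grow the depth axis with stacked 3D transposed
# convolutions, e.g. torch.nn.ConvTranspose3d with a stride > 1 in
# the depth dimension, until the depth reaches D.  A plain repeat
# along the depth axis stands in for that here.
expanded = np.repeat(volume, D, axis=2)
print(expanded.shape)  # (1, 1, 16, 32, 32)

# One input channel, one output channel: the final (D, H, W) block
# is the predicted density of every cube.
density = expanded[0, 0]
print(density.shape)  # (16, 32, 32)
```

Because the depth axis is a genuine spatial axis here, the 3D kernels can exploit adjacency between depth layers, which Option A cannot.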

Option C: You can take your input as a 2D "image" with a single channel, apply a couple of 2D convolutions to expand the number of channels, then expand the dimensions of the tensor within the neural network and continue with 3D convolutions, treating the previous channels as the 3rd spatial dimension and starting with a single channel for the 3D "image". I could imagine this working, but the transition from channels (which carry no spatial relations) to a spatial 3rd dimension feels odd, and I'm not sure about the validity of this setup.
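A shape sketch of Option C (sizes hypothetical; the 2D stage is again faked with a random 1×1 weight tensor, standing in for e.g. `torch.nn.Conv2d(1, C, 3, padding=1)`):

```python
import numpy as np

# Hypothetical sizes: 32x32 measurements; the 2D stage produces C
# channels, which are then reinterpreted as C depth layers.
H, W, C = 32, 32, 8

measurements = np.random.rand(1, 1, H, W)  # NCHW

# Stage 1: 2D convolutions expand the channel count to C (faked here
# with a random per-pixel 1x1 convolution).
weights_1x1 = np.random.rand(C, 1)  # (out_channels, in_channels)
features_2d = np.einsum('oc,nchw->nohw', weights_1x1, measurements)
print(features_2d.shape)  # (1, 8, 32, 32)

# Transition: reinterpret the C channels as a depth axis and insert a
# fresh singleton channel axis -> NCDHW = (1, 1, C, H, W).  From here
# 3D convolutions (e.g. torch.nn.Conv3d / ConvTranspose3d) would
# refine and grow the volume to the final (D, H, W) output.
volume = features_2d[:, np.newaxis, :, :, :]
print(volume.shape)  # (1, 1, 8, 32, 32)
```

The questionable step is exactly the `np.newaxis` line: the C feature channels were never spatially ordered, yet the 3D convolutions afterwards will treat them as an ordered depth axis.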