Challenge

Deep learning approaches for geobody detection on seismic data face a common hurdle: having enough training data. Many solutions generate synthetic seismic data to work around this problem. There are troves of real-world datasets that geoscientists have created over the years that could also serve as training data, but these interpretations are in formats that can be challenging to translate to deep learning approaches.
Solution

The team built a workflow that takes two surfaces (i.e. top_channel, base_channel) and the extent of the original seismic volume, and outputs an array with the same dimensions as the interpreted seismic survey, with 1's between the two surfaces and 0's everywhere else. This array can then be exported, along with the original seismic, into deep learning frameworks for training.
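The labeling step can be sketched in a few lines of NumPy. This is a minimal illustration, not the team's actual code: it assumes the two horizons are given as 2D arrays of sample indices (one value per inline/crossline trace), and the function name and signature are hypothetical.

```python
import numpy as np

def label_between_surfaces(top, base, n_samples):
    """Build a binary label volume from two horizon surfaces.

    top, base : 2D arrays (n_inlines, n_xlines) of vertical sample
        indices for the top and base horizons (assumed format).
    n_samples : vertical extent of the seismic volume.

    Returns a (n_inlines, n_xlines, n_samples) uint8 array with 1
    between the two surfaces and 0 everywhere else.
    """
    # Depth index along the vertical axis, broadcast against
    # the two surfaces to mark samples inside the interval.
    z = np.arange(n_samples)[None, None, :]
    mask = (z >= top[..., None]) & (z <= base[..., None])
    return mask.astype(np.uint8)

# Tiny example: one trace, top at sample 2, base at sample 4.
labels = label_between_surfaces(np.array([[2]]), np.array([[4]]), 6)
```

The result has the same grid dimensions as the interpreted survey, so it can be stacked alongside the seismic amplitudes and fed to a training pipeline.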
Next Steps: More export formats for different deep learning techniques, ability to add multiple horizons, ability to label multiple geobodies, and a Flask app.
Our Story

Our initial challenge was to improve a deep learning approach that Nam Pham built for detecting channels on seismic data, which so far has been trained only on synthetics, by adding labeled, real seismic data. However, we ran into the problem of how to turn an interpreted seismic volume into the labeled data array of 1's and 0's (i.e. channel and no channel) to be used for training. This problem provided the grounds for the coding that was done for the Hackathon.
Thank you to Evan Bianco, Diego Castañeda, and the Open Seismic Repository (https://www.opendtect.org/osr/).