# vit-efwi

This repository contains the dataset and codebase for ViT-EFWI, a project that reproduces the results reported in our submitted manuscript, "Improved elastic full-waveform inversion with Vision Transformer reparameterization and Recurrent Neural Network reformulation".

![vitefwi_flow](fig/vitefwi_flow.png)

Fig. 1. Schematic architecture of the proposed ViT parameterization (red box) integrated within the RNN-based EFWI framework (blue box).

![marmousi2_vit_deepwave](fig/marmousi2_vit_deepwave.png)

Fig. 2. Inverted elastic models for the Marmousi2 model using clean data: (a) Vp, (b) Vs, and (c) density obtained by Deepwave EFWI; (d) Vp, (e) Vs, and (f) density obtained by the proposed ViT-EFWI. The MSEs are marked on each plot for quantitative comparison.
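For orientation, the sketch below illustrates the general reparameterization idea in PyTorch: the elastic model parameters are produced by a neural network and are updated only indirectly, by backpropagating the data misfit through a differentiable forward operator into the network weights. This is a minimal, hypothetical example; `ParamNet`, the placeholder `forward_modeling` operator, and all shapes and constants are illustrative and do not reproduce the actual ViT architecture or the Deepwave-based RNN propagator used in `./vit-efwi.ipynb`.

```python
import torch
import torch.nn as nn

class ParamNet(nn.Module):
    """Toy convolutional network mapping a fixed latent tensor to (Vp, Vs, density)
    grids. It stands in here for the ViT parameterization described in the paper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 3, 3, padding=1),   # 3 output channels: Vp, Vs, density
        )

    def forward(self, z):
        out = torch.sigmoid(self.net(z))      # normalized to [0, 1]
        vp  = 1500.0 + 3000.0 * out[:, 0]     # rescale to rough physical ranges
        vs  =  500.0 + 1800.0 * out[:, 1]
        rho = 1000.0 + 1500.0 * out[:, 2]
        return vp, vs, rho

def forward_modeling(vp, vs, rho):
    """Placeholder for a differentiable elastic propagator (e.g. Deepwave's
    RNN-style time stepping); kept trivial so the sketch runs end to end."""
    return vp.mean() + vs.mean() + rho.mean()

nz, nx = 64, 128
net = ParamNet()
z = torch.randn(1, 1, nz, nx)        # fixed latent input to the network
d_obs = torch.tensor(4000.0)         # dummy "observed data"
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# The optimizer updates the network weights; the elastic model grids are only
# an intermediate output, which is the essence of the reparameterization.
for it in range(10):
    opt.zero_grad()
    vp, vs, rho = net(z)
    d_syn = forward_modeling(vp, vs, rho)
    loss = (d_syn - d_obs) ** 2      # data misfit
    loss.backward()
    opt.step()
```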

## Folder structure

The list of files:

```
.
├── README.md
├── data
│   ├── generate_model_polt.ipynb
│   ├── model
│   ├── observed
│   └── raw_model
├── fig
│   ├── marmousi2_vit_deepwave.png
│   └── vitefwi_flow.png
├── paper_result
│   ├── bp
│   ├── marmousi2
│   └── toy
├── result
│   └── marmousi2
├── vit-efwi.ipynb
└── vit-efwi.yml
```
- All models and the associated seismic data are generated by the Jupyter notebook `./data/generate_model_polt.ipynb`.
- All trained models used in the paper are saved in `./paper_result/`.
- All code used to generate the results in the paper is gathered into a single Jupyter notebook, `./vit-efwi.ipynb`.

## Get started

You can set up a conda environment with all dependencies as follows:

```bash
conda env create -f vit-efwi.yml
conda activate vit-efwi
```

Then open `./vit-efwi.ipynb` (for example, in Jupyter) to reproduce the results in the paper.
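As a quick check that the environment is usable, the hypothetical snippet below just imports the main dependencies and reports whether a GPU is visible. It assumes the environment provides PyTorch and Deepwave (the propagator used for the EFWI baseline in Fig. 2), which may differ from the exact contents of `vit-efwi.yml`.

```python
# Minimal environment sanity check (assumes torch and deepwave are installed
# by vit-efwi.yml; adjust if the environment file lists different packages).
import torch
import deepwave  # differentiable wave propagation used by the EFWI baseline

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```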