Commit

Merge pull request #90 from kleok/dev
Oct 2024 improves
kleok authored Oct 12, 2024
2 parents 8615f8f + d23d710 commit e4dd2db
Showing 8 changed files with 287 additions and 161 deletions.
281 changes: 155 additions & 126 deletions Floodpyapp_Vit.ipynb

Large diffs are not rendered by default.

25 changes: 10 additions & 15 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,12 +1,14 @@
# <img src="https://github.com/kleok/FLOODPY/blob/main/figures/Floodpy_logo.png" width="58"> FLOODPY - FLOOD PYthon toolbox
[![GitHub license](https://img.shields.io/badge/License-GNU3-green.svg)](https://github.com/kleok/FLOODPY)
[![Release](https://img.shields.io/badge/Release-0.7.0-brightgreen)](https://github.com/kleok/FLOODPY)
[![Release](https://img.shields.io/badge/Release-Floodpy_Oct_2024-brightgreen)](https://github.com/kleok/FLOODPY)
[![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/kleok/FLOODPY/issues)
[![Documentation](https://readthedocs.org/projects/floodpy/badge/?version=latest)](https://floodpy.readthedocs.io/en/latest/)

## Introduction

The FLOod Mapping PYthon toolbox is a free and open-source python toolbox for mapping of floodwater. It exploits the dense Sentinel-1 GRD intensity time series and is based on four processing steps. In the first step, a selection of Sentinel-1 images related to pre-flood (baseline) state and flood state is performed. In the second step, the preprocessing of the selected images is performed in order to create a co-registered stack with all the pre-flood and flood images. In the third step, a statistical temporal analysis is performed and a t-score map that represents the changes due to flood event is calculated. Finally, in the fourth step, a multi-scale iterative thresholding algorithm based on t-score map is performed to extract the final flood map. We believe that the end-user community can benefit by exploiting the FLOODPY's floodwater maps.
The Flood mapping Python toolbox (Floodpy) is a free and open-source Python toolbox for mapping non-urban flooded regions. It exploits the dense Sentinel-1 GRD intensity time series using either a statistical or a ViT (Vision Transformer) approach. Before running Floodpy, make sure you know the following information about the flood event of interest:
- Date and time of the flood event
- Spatial extent (e.g. min/max latitude and min/max longitude) of the flood event

This is research code provided to you "as is" with NO WARRANTIES OF CORRECTNESS. Use at your own risk.

@@ -36,21 +38,14 @@ traffic.

### 1.3 Account setup for downloading global atmospheric model data

Currently, FloodPy is based on ERA-5 data. ERA-5 data set is redistributed over the Copernicus Climate Data Store (CDS).
You have to create a new account [here](https://cds.climate.copernicus.eu/user/register?destination=%2F%23!%2Fhome) if you don't own a user account yet.
After the creation of your profile, you will find your user id (UID) and your personal API Key on your User profile page.
FloodPy can download meteorological data based on the ERA-5 dataset.
You have to create a new account [here](https://cds.climate.copernicus.eu/) if you don't own a user account yet.
After the creation of your profile, you will find your Personal Access Token on your User profile page.
- Option 1: create manually a ```.cdsapirc``` file under your ```HOME``` directory with the following information:
```
url: https://cds.climate.copernicus.eu/api/v2
key: UID:personal API Key
```
- Option 2: Run [aux/install_CDS_key.sh](https://github.com/kleok/FLOODPY/blob/main/aux/install_CDS_key.sh) script as follows:
```bash
chmod +x install_CDS_key.sh
./install_CDS_key.sh
```
Create manually a ```.cdsapirc``` file under your ```HOME``` directory with the following information:
```
url: https://cds.climate.copernicus.eu/api
key: Your Personal Access Token
```
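For reference, the same file can also be created programmatically; a minimal sketch (the helper name and the token value are illustrative, not part of FloodPy):

```python
import os
import tempfile

def write_cdsapirc(home_dir, token):
    """Create a .cdsapirc credentials file pointing at the new CDS API URL.

    Illustrative helper only; FloodPy just expects the file to exist
    under HOME with the url/key pair shown above.
    """
    path = os.path.join(home_dir, ".cdsapirc")
    with open(path, "w") as f:
        f.write("url: https://cds.climate.copernicus.eu/api\n")
        f.write("key: {}\n".format(token))
    return path

# Demo against a temporary directory; for real use, pass os.path.expanduser("~")
# and your actual Personal Access Token in place of the placeholder.
demo_path = write_cdsapirc(tempfile.mkdtemp(), "YOUR-PERSONAL-ACCESS-TOKEN")
print(open(demo_path).read())
```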
### 1.4 Download FLOODPY
4 changes: 2 additions & 2 deletions floodpy/Download/Download_ERA5_precipitation.py
@@ -163,9 +163,9 @@ def Get_ERA5_data(ERA5_variables:list,
df_dict={}
for ERA5_variable in ERA5_variables:

if ERA5_variable in ['longitude', 'latitude']:
if ERA5_variable in ['longitude', 'latitude', 'number']:
pass
elif ERA5_variable=='time':
elif ERA5_variable=='valid_time':
time_var=ERA5_data.variables[ERA5_variable]
t_cal = ERA5_data.variables[ERA5_variable].calendar
dtime = netCDF4.num2date(time_var[:],time_var.units, calendar = t_cal)
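The rename from `time` to `valid_time` matches the variable naming in NetCDF files served by the new CDS API. The real code decodes timestamps with `netCDF4.num2date` using the file's `units` and `calendar` attributes; as a hedged stand-in (assuming units of seconds since the epoch, which current ERA-5 files use — always check `time_var.units` in practice), the conversion can be sketched with pandas:

```python
import numpy as np
import pandas as pd

# Raw 'valid_time' values as they might appear in an ERA-5 NetCDF file,
# assuming units "seconds since 1970-01-01" (illustrative values).
valid_time = np.array([1728104400, 1728108000])

epoch = pd.Timestamp("1970-01-01")
dtime = epoch + pd.to_timedelta(valid_time, unit="s")
print(dtime)  # hourly timestamps on 2024-10-05
```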
23 changes: 21 additions & 2 deletions floodpy/Download/Query_Sentinel_1_products.py
@@ -1,6 +1,20 @@
import requests
import geopandas as gpd
import pandas as pd
from datetime import timedelta

def filter_datetimes(datetime_list, seconds_thres = 60):
if not datetime_list:
return []

filtered_list = [datetime_list[0]] # Always keep the first element

for i in range(1, len(datetime_list)):
time_diff = datetime_list[i] - filtered_list[-1] # Difference with the last kept element
if time_diff >= timedelta(seconds=seconds_thres):
filtered_list.append(datetime_list[i])

return filtered_list
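
The helper above keeps only acquisitions spaced at least `seconds_thres` apart, collapsing consecutive GRD slices of the same satellite pass into one datetime. A self-contained sketch of the same logic with illustrative times:

```python
from datetime import datetime, timedelta

def filter_datetimes(datetime_list, seconds_thres=60):
    # Same logic as in the diff: keep a datetime only if it is at least
    # seconds_thres after the last datetime that was kept.
    if not datetime_list:
        return []
    filtered = [datetime_list[0]]
    for dt in datetime_list[1:]:
        if dt - filtered[-1] >= timedelta(seconds=seconds_thres):
            filtered.append(dt)
    return filtered

# Two slices 25 s apart collapse to one; the pass an hour later is kept.
times = [datetime(2024, 10, 5, 5, 0, 0),
         datetime(2024, 10, 5, 5, 0, 25),
         datetime(2024, 10, 5, 6, 0, 0)]
print(filter_datetimes(times))
```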

def get_attribute_value(attribute_column, attr_name):
for attr_dict in attribute_column:
@@ -50,5 +64,10 @@ def query_Sentinel_1(Floodpy_app):
query_df.index = pd.to_datetime(query_df['beginningDateTime'])
query_df = query_df.drop_duplicates('beginningDateTime').sort_index().tz_localize(None)

flood_candidate_dates = query_df['relativeOrbitNumber'][Floodpy_app.flood_datetime_start:Floodpy_app.flood_datetime_end].index.values
return query_df, flood_candidate_dates
flood_datetimes = query_df['relativeOrbitNumber'][Floodpy_app.flood_datetime_start:Floodpy_app.flood_datetime_end].index.values

sorted_flood_datetimes = sorted([pd.to_datetime(flood_datetime) for flood_datetime in flood_datetimes])

filtered_flood_datetimes = filter_datetimes(sorted_flood_datetimes)

return query_df, filtered_flood_datetimes
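The window selection above relies on label slicing of a sorted `DatetimeIndex` in pandas; a small sketch with illustrative values:

```python
import pandas as pd

# A sorted DatetimeIndex can be sliced with datetime-like labels directly,
# which is how the flood window [flood_datetime_start:flood_datetime_end]
# is extracted from query_df. Dates and orbit numbers are illustrative.
idx = pd.to_datetime(["2024-10-01 05:00", "2024-10-02 05:00", "2024-10-05 05:00"])
df = pd.DataFrame({"relativeOrbitNumber": [44, 44, 146]}, index=idx)

flood_window = df["relativeOrbitNumber"]["2024-10-01":"2024-10-03"].index.values
print(flood_window)  # only the acquisitions inside the window
```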
17 changes: 8 additions & 9 deletions floodpy/FLOODPYapp.py
@@ -42,6 +42,7 @@ def __init__(self, params_dict:dict):

# Project Definition
self.projectfolder = params_dict['projectfolder']
self.flood_event = params_dict['flood_event']
self.src = params_dict['src_dir']
self.gpt = params_dict['GPTBIN_PATH']
self.snap_orbit_dir = params_dict['snap_orbit_dir']
@@ -137,11 +138,11 @@ def plot_ERA5_precipitation_data(self):
self.era5_fig = plot_ERA5(self)

def query_S1_data(self):
self.query_S1_df, self.flood_candidate_dates = query_Sentinel_1(self)
self.query_S1_df, self.flood_datetimes = query_Sentinel_1(self)

def sel_S1_data(self, sel_flood_date):
if pd.to_datetime(sel_flood_date) not in self.flood_candidate_dates:
print('Please select one of the available dates for flood mapping: {}'.format(self.flood_candidate_dates))
if pd.to_datetime(sel_flood_date) not in self.flood_datetimes:
print('Please select one of the available dates for flood mapping: {}'.format(self.flood_datetimes))

self.flood_datetime = sel_flood_date
self.flood_datetime_str = pd.to_datetime(self.flood_datetime).strftime('%Y%m%dT%H%M%S')
@@ -193,8 +194,10 @@ def calc_floodmap_dataset(self):
def calc_flooded_regions_ViT(self, ViT_model_filename, device = 'cuda', generate_vector = True, overwrite = True):
assert device in ['cuda', 'cpu'], 'device parameter must be cuda or cpu'

self.Flood_map_dataset_filename = os.path.join(self.Results_dir, 'Flood_map_ViT_{}.nc'.format(self.flood_datetime_str))
self.Flood_map_vector_dataset_filename = os.path.join(self.Results_dir, 'Flood_map_ViT_{}.geojson'.format(self.flood_datetime_str))
self.Flood_map_dataset_filename = os.path.join(self.Results_dir, 'Flooded_regions_{}_{}(UTC).nc'.format(self.flood_event,
self.flood_datetime_str))
self.Flood_map_vector_dataset_filename = os.path.join(self.Results_dir, 'Flooded_regions_{}_{}(UTC).geojson'.format(self.flood_event,
self.flood_datetime_str))

if os.path.exists(self.Flood_map_dataset_filename):
if overwrite:
@@ -211,9 +214,5 @@
else:
convert_to_vector(self)

def plot_flood_map(self):
self.interactive_map = plot_interactive_map(self)
return self.interactive_map
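
The renamed result files embed the new `flood_event` attribute; a sketch of the resulting naming scheme (event name and directory are illustrative):

```python
import os

Results_dir = "results"            # illustrative directory
flood_event = "Thessaly_2024"      # illustrative; comes from params_dict['flood_event']
flood_datetime_str = "20241005T050000"

# Mirrors the format strings used in calc_flooded_regions_ViT.
nc_name = os.path.join(Results_dir,
                       "Flooded_regions_{}_{}(UTC).nc".format(flood_event, flood_datetime_str))
geojson_name = os.path.join(Results_dir,
                            "Flooded_regions_{}_{}(UTC).geojson".format(flood_event, flood_datetime_str))
print(nc_name)
print(geojson_name)
```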



12 changes: 6 additions & 6 deletions floodpy/Preprocessing_S1_data/Preprocessing_S1_data.py
@@ -37,15 +37,15 @@ def Run_Preprocessing(Floodpy_app, overwrite):
'preprocessing_pair_primary_2GRD_secondary_2GRD.xml'),
}

# Find the S1 unique dates
S1_datetimes = Floodpy_app.query_S1_sel_df.sort_index().index.values
S1_dates = [pd.to_datetime(S1_datetime).date() for S1_datetime in S1_datetimes]
S1_unique_dates = np.unique(S1_dates)

S1_datetimes = Floodpy_app.query_S1_sel_df.sort_index().index.values
Pre_flood_indices = pd.to_datetime(S1_datetimes)<Floodpy_app.pre_flood_datetime_end
Pre_flood_datetimes = S1_datetimes[Pre_flood_indices]

# Find the dates for Flood and Pre-flood S1 images
Flood_date = pd.to_datetime(Floodpy_app.flood_datetime).date()
assert Flood_date in S1_unique_dates
Pre_flood_dates = np.delete(S1_unique_dates, np.where(S1_unique_dates == Flood_date))
S1_dates = [pd.to_datetime(Pre_flood_datetime).date() for Pre_flood_datetime in Pre_flood_datetimes]
Pre_flood_dates = np.unique(S1_dates)

S1_flood_rows = Floodpy_app.query_S1_sel_df.loc[pd.to_datetime(Flood_date): pd.to_datetime(Flood_date) + pd.Timedelta(hours=24)]
AOI_polygon = gpd.read_file(Floodpy_app.geojson_bbox)['geometry'][0]
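The rewritten baseline logic selects pre-flood dates by timestamp comparison against `pre_flood_datetime_end`, instead of deleting the flood date from the unique-date list. A sketch with illustrative acquisition times:

```python
import numpy as np
import pandas as pd

# Acquisitions strictly before pre_flood_datetime_end form the baseline
# (pre-flood) set; times here are illustrative.
S1_datetimes = pd.to_datetime(["2024-09-20 05:00",
                               "2024-09-26 05:00",
                               "2024-10-02 05:00"]).values
pre_flood_datetime_end = pd.Timestamp("2024-10-01")

pre_flood_mask = pd.to_datetime(S1_datetimes) < pre_flood_datetime_end
pre_flood_dates = np.unique([pd.to_datetime(t).date() for t in S1_datetimes[pre_flood_mask]])
print(pre_flood_dates)  # only the two September baseline dates
```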
51 changes: 51 additions & 0 deletions floodpy/Visualization/flood_over_time.py
@@ -0,0 +1,51 @@
import geopandas as gpd
import pandas as pd
import os
import matplotlib.pyplot as plt

def plot_flooded_area_over_time(Floodpy_app, Floodpy_app_objs):

colorTones = {
6: '#CC3A5D', # dark pink
5: '#555555', # dark grey
4: '#A17C44', # dark brown
3: '#8751A1', # dark purple
2: '#C1403D', # dark red
1: '#2E5A87', # dark blue
0: '#57A35D', # dark green
}

Flooded_regions_areas_km2 = {}
for flood_date in Floodpy_app_objs.keys():
# calculate the area of flooded regions
Flood_map_vector_data = gpd.read_file(Floodpy_app_objs[flood_date].Flood_map_vector_dataset_filename)
Flood_map_vector_data_projected = Flood_map_vector_data.to_crs(Flood_map_vector_data.estimate_utm_crs())
area_km2 = round(Flood_map_vector_data_projected.area.sum()/1000000,2 )
Flooded_regions_areas_km2[flood_date] = area_km2


def getcolor(val):
return colorTones[Floodpy_app.flood_datetimes.index(val)]

Flooded_regions_areas_km2_df = pd.DataFrame.from_dict(Flooded_regions_areas_km2, orient='index', columns=['Flooded area (km2)'])
Flooded_regions_areas_km2_df['Datetime'] = pd.to_datetime(Flooded_regions_areas_km2_df.index)
Flooded_regions_areas_km2_df['color'] = Flooded_regions_areas_km2_df['Datetime'].apply(getcolor)

df = Flooded_regions_areas_km2_df.copy()
# Plot the data
fig = plt.figure(figsize=(6, 5))
plt.bar(df['Datetime'].astype(str), df['Flooded area (km2)'], color=df['color'], width=0.7)

# Adjust the plot
plt.ylabel('Flooded area (km²)', fontsize=16)
plt.title('Flooded Area(km²) Over Time', fontsize=16)
plt.xticks(df['Datetime'].astype(str), df['Datetime'].dt.strftime('%d-%b-%Y'), rotation=30, ha='right', fontsize=16) # Set custom date format
plt.yticks(fontsize=16)
plt.tight_layout() # Adjust layout for better fit

# Display the plot
fig_filename = os.path.join(Floodpy_app.Results_dir, '{}.svg'.format(Floodpy_app.flood_event))
plt.savefig(fig_filename,format="svg")
# plt.close()
print('The figure can be found at: {}'.format(fig_filename))
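
The dictionary-to-DataFrame step above drives the bar colours via each date's position in `flood_datetimes`; a condensed, self-contained sketch (dates and areas are illustrative):

```python
import pandas as pd

colorTones = {0: "#57A35D", 1: "#2E5A87"}  # same palette indices as above
flood_datetimes = [pd.Timestamp("2024-10-05 05:00"), pd.Timestamp("2024-10-11 05:00")]
areas_km2 = {flood_datetimes[0]: 12.34, flood_datetimes[1]: 8.21}

# One row per flood date, coloured by that date's index in flood_datetimes.
df = pd.DataFrame.from_dict(areas_km2, orient="index", columns=["Flooded area (km2)"])
df["Datetime"] = pd.to_datetime(df.index)
df["color"] = df["Datetime"].apply(lambda d: colorTones[flood_datetimes.index(d)])
print(df)
```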

35 changes: 34 additions & 1 deletion floodpy/utils/geo_utils.py
Expand Up @@ -4,6 +4,19 @@
from shapely.geometry import shape
import rasterio
import numpy as np
import datetime
import json

colorTones = {
6: '#CC3A5D', # dark pink
5: '#555555', # dark grey
4: '#A17C44', # dark brown
3: '#8751A1', # dark purple
2: '#C1403D', # dark red
1: '#2E5A87', # dark blue
0: '#57A35D', # dark green
}


def create_polygon(coordinates):
return Polygon(coordinates['coordinates'][0])
@@ -25,4 +38,24 @@ def convert_to_vector(Floodpy_app):
gdf = gdf.loc[gdf.flooded_regions == 1,:]
gdf.datetime = Floodpy_app.flood_datetime_str

gdf.to_file(Floodpy_app.Flood_map_vector_dataset_filename, driver='GeoJSON')
#Convert GeoDataFrame to GeoJSON format (as a dictionary)
geojson_str = gdf.to_json() # This gives the GeoJSON as a string
geojson_dict = json.loads(geojson_str) # Convert the string to a dictionary

# find the color of plotting
color_ind = Floodpy_app.flood_datetimes.index(Floodpy_app.flood_datetime)
plot_color = colorTones[color_ind]
#Add top-level metadata (e.g., title, description, etc.)
geojson_dict['flood_event'] = Floodpy_app.flood_event
geojson_dict['description'] = "This GeoJSON contains polygons of flooded regions using Sentinel-1 data."
geojson_dict['produced_by'] = "Floodpy"
geojson_dict['creation_date_UTC'] = datetime.datetime.now(datetime.timezone.utc).strftime('%Y%m%dT%H%M%S')
geojson_dict['flood_datetime_UTC'] = Floodpy_app.flood_datetime_str
geojson_dict['plot_color'] = plot_color
geojson_dict['bbox'] = Floodpy_app.bbox

#Save the modified GeoJSON with metadata to a file
with open(Floodpy_app.Flood_map_vector_dataset_filename, "w") as f:
json.dump(geojson_dict, f, indent=2)

#gdf.to_file(Floodpy_app.Flood_map_vector_dataset_filename, driver='GeoJSON')
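The metadata step attaches top-level keys to the FeatureCollection before writing; under RFC 7946 such extra keys are "foreign members", which most GeoJSON readers simply ignore. A sketch with illustrative values:

```python
import datetime
import json

# A minimal FeatureCollection standing in for the gdf.to_json() output.
geojson_dict = {"type": "FeatureCollection", "features": []}

# Top-level metadata, mirroring convert_to_vector (values illustrative).
geojson_dict["flood_event"] = "Thessaly_2024"
geojson_dict["produced_by"] = "Floodpy"
geojson_dict["creation_date_UTC"] = datetime.datetime.now(
    datetime.timezone.utc).strftime("%Y%m%dT%H%M%S")

serialized = json.dumps(geojson_dict, indent=2)
print(serialized.splitlines()[0])
```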
