⚠️ This is the first operational version of the handbook, but it is still a work in progress and will be heavily updated during 2024! ⚠️

Risk assessment for heatwaves based on satellite-derived data#

Under climate change, the occurrence of heatwaves is expected to increase in Europe in the future. The main negative impacts of heatwave events are related to the overheating of urban areas, which lowers the comfort of living and causes health issues among vulnerable populations (see also: Integrated Assessment of Urban Overheating Impacts on Human Life), as well as to drought and water scarcity. Many studies and methodologies now address how the impacts of these events can be mitigated. This toolbox aims to answer simple questions that are frequently asked by local crisis management authorities, urban planners, and policymakers.

These questions are:

  1. What are the problematic areas? (most overheated areas)

  2. Who or what is exposed?

This workflow contains the following steps used to generate a risk map for your selected area:

  • Preparatory work: installing packages and creating directory structure

  • Understanding the trends in the occurrence of hot days in the climate change scenarios

  • Identifying the heat islands (areas most exposed to heat) in your selected area, based on the observed land surface temperature (from RSLab Landsat8, resolution: 30x30m)

  • Analysing the distribution of vulnerable population, based on population distribution data.

  • Creating a heatwave risk map based on a risk matrix that combines the level of exposure to heat in the area with the level of vulnerability of the population

Preparation work#

Prepare your workspace#

import zipfile
import os
from glob import glob
from datetime import datetime

import numpy as np
import xarray as xr
import pandas as pd
import rasterio
import geopandas as gpd
import rioxarray  # registers the .rio accessor used later in this workflow

import matplotlib.pyplot as plt
from ipyleaflet import Map, LayersControl, WidgetControl
import ipywidgets as widgets
from localtileserver import get_leaflet_tile_layer, TileClient
import plotly.express as px
# Set host forwarding for remote jupyterlab sessions
if 'JUPYTERHUB_SERVICE_PREFIX' in os.environ:
    os.environ['LOCALTILESERVER_CLIENT_PREFIX'] = f"{os.environ['JUPYTERHUB_SERVICE_PREFIX']}/proxy/{{port}}"

Create directory structure#

# Define the main directory for the heatwave risk workflow
workflow_folder = 'Heatwave_risk'

# Define directories for data and results within the previously defined workflow directory
data_dir = os.path.join(workflow_folder,'data')
LST_dir = os.path.join(workflow_folder,'data','LST')
pop_dir = os.path.join(workflow_folder,'data','population')
results_dir = os.path.join(workflow_folder,'results')

# Create the workflow directory and its subdirectories for data and results if they do not exist yet
for folder in [workflow_folder, data_dir, LST_dir, pop_dir, results_dir]:
    os.makedirs(folder, exist_ok=True)

Identifying overheated areas in the urban environment#

Data description and download#

Heat islands are urbanized areas that experience higher temperatures than surrounding areas. Structures such as buildings, roads, and other infrastructure absorb and re-emit the sun’s heat, thus resulting in warmer temperatures than in the natural landscapes (such as forests and water bodies). Urban areas, where these structures are highly concentrated and greenery is limited, become “islands” of higher temperatures relative to outlying areas (see also: Heat islands definition). For the identification of the heat islands from the historical data we can use these data:

The satellite-derived land surface temperature data used in this workflow is based on the Landsat 8 land surface temperature (LST) product (spatial resolution: 30x30 m; revisit interval of 8-16 days). We use this product to identify the heat islands. The LST product is available on the RSLAB website or can be calculated from the Landsat 8 imagery bands.
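
For reference, a much simplified single-channel LST calculation is sketched below. It is illustrative only: the file names are placeholders and the radiometric constants must be taken from the scene's MTL metadata file; in this workflow we simply use the ready-made RSLab product.

# Illustrative sketch only: simplified single-channel LST from a Landsat 8 band-10 GeoTIFF.
# File names are placeholders; ML, AL, K1 and K2 come from the scene's MTL metadata file.
import numpy as np
import rasterio

band10_path = 'LC08_B10.tif'        # band-10 digital numbers (placeholder file name)
emissivity_path = 'emissivity.tif'  # surface emissivity (0-1), e.g. NDVI-based (placeholder)

ML, AL = 3.342e-4, 0.1              # RADIANCE_MULT/ADD_BAND_10 from the MTL file
K1, K2 = 774.8853, 1321.0789        # K1/K2_CONSTANT_BAND_10 from the MTL file

with rasterio.open(band10_path) as src:
    dn = src.read(1).astype('float64')
    profile = src.profile
with rasterio.open(emissivity_path) as src:
    emis = src.read(1).astype('float64')

radiance = ML * dn + AL                  # top-of-atmosphere spectral radiance
bt = K2 / np.log(K1 / radiance + 1.0)    # brightness temperature [K]
wavelength, rho = 10.895e-6, 1.438e-2    # band-10 wavelength [m] and h*c/sigma [m K]
lst_celsius = bt / (1.0 + (wavelength * bt / rho) * np.log(emis)) - 273.15

profile.update(dtype='float64', count=1)
with rasterio.open('LST_singlechannel.tif', 'w', **profile) as dst:
    dst.write(lst_celsius, 1)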

For this workflow, land surface temperature data needs to be manually downloaded from the RSLab portal. To do this, you need to follow these steps:

  1. Go to the RSLab portal and draw a closed polygon around your area of interest by zooming in and clicking on the map. If you are focusing on urban areas, try to include only the urban area within the polygon.

  2. Select the time period, e.g. June to August of a specific year in which particularly high temperatures were observed. Landsat 8 data is available for 2013-2021, so it is recommended to choose a year within this period (optional: consult the local meteorological authorities for more information on past heat waves).

  3. Select your preferred Landsat source and emissivity. For the dates in the period 2013-2021, please choose Landsat8 as the source. For urban areas, it is recommended to use MODIS and NDVI-based emissivity (ASTER emissivity is preferred for natural landscapes). Refer to the documentation for more details.

  4. Click the “Calculate LST” button.

  5. The results will be presented below the map. For each LST result you will find “show” and “download” buttons. The “show” button displays the LST as a layer on the map and calculates the mean/min/max LST values for the selected image.

  6. To download the images, press “Download - All images”; this will download the “AllImages_LST.zip” file containing the .tiff images.

  7. Once downloaded, move the file to the folder specified above in the LST_dir variable (./Heatwave_risk/data/LST); a quick check for this is sketched below.
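
Before unzipping, you can quickly check that the archive ended up in the expected folder (this assumes the default archive name from step 6):

# Optional check: verify that the downloaded archive is present in LST_dir
expected_zip = os.path.join(LST_dir, 'AllImages_LST.zip')
if os.path.isfile(expected_zip):
    print(f'Found {expected_zip}')
else:
    print(f'AllImages_LST.zip not found in {LST_dir}; zip files present: {glob(f"{LST_dir}/*.zip")}')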

This code unzips all downloaded LST data:

# Loop through all files in the directory
for file in os.listdir(LST_dir):
    file_path = os.path.join(LST_dir, file)
    # Check if the file is a zipfile
    if zipfile.is_zipfile(file_path):
        # Open the zipfile
        with zipfile.ZipFile(file_path, 'r') as zip_ref:
            # Extract all contents of the zipfile into the working directory
            zip_ref.extractall(LST_dir)

The code below extracts all dates from LST data files:

# Create a list of file paths to your LST files using glob
L8list = glob(f"{LST_dir}/*LST*.tif")
# Initialize an empty list to store the datetime objects
lst_dates = []
# Loop through each filename
for file in L8list:
    # Extract the filename without the directory path
    filename = os.path.basename(file)
    # Extract the date and time part from the filename
    if "AllImages_LST" in filename:
        date_time_str = filename.split('.')[1]
        date_str = date_time_str.split('_')[0]
        time_str = date_time_str.split('_')[1]
    else:
        date_str = filename[4:12]  # Extract date part directly
        time_str = filename[13:19]  # Extract time part directly    
    # Combine date and time strings into a single string
    datetime_str = date_str + '_' + time_str    
    # Convert the combined datetime string to a datetime object
    datetime_obj = datetime.strptime(datetime_str, '%Y%m%d_%H%M%S')   
    # Append the datetime object to the lst_dates list
    lst_dates.append(datetime_obj)
lst_dates
[datetime.datetime(2013, 6, 20, 9, 34, 46),
 datetime.datetime(2013, 7, 6, 9, 34, 50),
 datetime.datetime(2013, 7, 13, 9, 41, 1),
 datetime.datetime(2013, 7, 22, 9, 34, 48),
 datetime.datetime(2013, 7, 29, 9, 41, 1),
 datetime.datetime(2013, 8, 7, 9, 34, 51),
 datetime.datetime(2013, 8, 14, 9, 41, 1),
 datetime.datetime(2013, 8, 23, 9, 34, 53),
 datetime.datetime(2013, 8, 30, 9, 41, 4),
 datetime.datetime(2014, 6, 7, 9, 32, 30),
 datetime.datetime(2014, 6, 14, 9, 38, 44),
 datetime.datetime(2014, 6, 23, 9, 32, 33),
 datetime.datetime(2014, 7, 16, 9, 38, 54),
 datetime.datetime(2014, 7, 25, 9, 32, 43),
 datetime.datetime(2014, 8, 1, 9, 39),
 datetime.datetime(2014, 8, 10, 9, 32, 53),
 datetime.datetime(2014, 8, 17, 9, 39, 5),
 datetime.datetime(2015, 6, 1, 9, 38, 8),
 datetime.datetime(2015, 6, 10, 9, 32, 5),
 datetime.datetime(2015, 6, 26, 9, 32, 10),
 datetime.datetime(2015, 7, 3, 9, 38, 26),
 datetime.datetime(2015, 7, 12, 9, 32, 22),
 datetime.datetime(2015, 7, 19, 9, 38, 36),
 datetime.datetime(2015, 7, 28, 9, 32, 27),
 datetime.datetime(2015, 8, 4, 9, 38, 39),
 datetime.datetime(2015, 8, 13, 9, 32, 32),
 datetime.datetime(2015, 8, 20, 9, 38, 46),
 datetime.datetime(2015, 8, 29, 9, 32, 38),
 datetime.datetime(2016, 6, 3, 9, 38, 41),
 datetime.datetime(2016, 6, 19, 9, 38, 45),
 datetime.datetime(2016, 7, 5, 9, 38, 54),
 datetime.datetime(2016, 7, 21, 9, 39),
 datetime.datetime(2016, 7, 30, 9, 32, 51),
 datetime.datetime(2016, 8, 15, 9, 32, 54),
 datetime.datetime(2017, 6, 6, 9, 38, 40),
 datetime.datetime(2017, 6, 15, 9, 32, 33),
 datetime.datetime(2017, 6, 22, 9, 38, 46),
 datetime.datetime(2017, 7, 8, 9, 38, 49),
 datetime.datetime(2017, 7, 17, 9, 32, 41),
 datetime.datetime(2017, 8, 2, 9, 32, 49),
 datetime.datetime(2017, 8, 9, 9, 39, 2),
 datetime.datetime(2017, 8, 18, 9, 32, 54),
 datetime.datetime(2017, 8, 25, 9, 39, 7),
 datetime.datetime(2018, 6, 2, 9, 31, 37),
 datetime.datetime(2018, 7, 4, 9, 31, 55),
 datetime.datetime(2018, 7, 27, 9, 38, 16),
 datetime.datetime(2018, 8, 5, 9, 32, 11),
 datetime.datetime(2018, 8, 12, 9, 38, 26),
 datetime.datetime(2018, 8, 21, 9, 32, 19),
 datetime.datetime(2018, 8, 28, 9, 38, 33),
 datetime.datetime(2019, 6, 5, 9, 32, 36),
 datetime.datetime(2019, 6, 12, 9, 38, 50),
 datetime.datetime(2019, 6, 21, 9, 32, 42),
 datetime.datetime(2019, 6, 28, 9, 38, 55),
 datetime.datetime(2019, 7, 7, 9, 32, 46),
 datetime.datetime(2019, 7, 14, 9, 38, 58),
 datetime.datetime(2019, 7, 30, 9, 39, 4),
 datetime.datetime(2019, 8, 8, 9, 32, 56),
 datetime.datetime(2019, 8, 15, 9, 39, 9),
 datetime.datetime(2019, 8, 24, 9, 33, 1),
 datetime.datetime(2020, 6, 7, 9, 32, 24),
 datetime.datetime(2020, 6, 14, 9, 38, 40),
 datetime.datetime(2020, 6, 23, 9, 32, 34),
 datetime.datetime(2020, 6, 30, 9, 38, 48),
 datetime.datetime(2020, 7, 9, 9, 32, 41),
 datetime.datetime(2020, 7, 25, 9, 32, 46),
 datetime.datetime(2020, 8, 1, 9, 38, 58),
 datetime.datetime(2020, 8, 10, 9, 32, 49),
 datetime.datetime(2020, 8, 26, 9, 32, 57),
 datetime.datetime(2021, 6, 1, 9, 38, 46),
 datetime.datetime(2021, 6, 10, 9, 32, 39),
 datetime.datetime(2021, 6, 17, 9, 38, 52),
 datetime.datetime(2021, 6, 26, 9, 32, 43),
 datetime.datetime(2021, 7, 3, 9, 38, 55),
 datetime.datetime(2021, 7, 19, 9, 38, 58),
 datetime.datetime(2021, 7, 28, 9, 32, 52),
 datetime.datetime(2021, 8, 13, 9, 32, 59),
 datetime.datetime(2021, 8, 20, 9, 39, 11),
 datetime.datetime(2021, 8, 29, 9, 33, 3)]

This code will create a raster stack from all downloaded data:

# Load the data and create a raster stack from all maps
with rasterio.open(L8list[0]) as src0:
    meta = src0.meta
#
meta.update(count = len(L8list))
# Save the stacked data to the data directory
with rasterio.open(f'{data_dir}/L8_raster_stack.tif', 'w', **meta) as dst:
    for id, layer in enumerate(L8list, start=1):
        with rasterio.open(layer) as src1:
            dst.write_band(id, src1.read(1))

This code calculates mean values from the LST rasters that are not significantly influenced by clouds (i.e. with at least 50% valid pixels):

# Open the raster stack
stack_path = f'{data_dir}/L8_raster_stack.tif'
raster_dataset = xr.open_dataset(stack_path)
# Count the number of pixels greater than 0 for each band
num_pixels_gt_zero = (raster_dataset > 0).sum(dim=('x', 'y'))
# Calculate the total number of pixels for each band
total_pixels = raster_dataset.sizes['x'] * raster_dataset.sizes['y']
# Calculate the proportion of pixels greater than 0
prop_gt_zero = num_pixels_gt_zero / total_pixels
# Filter bands where at least 50% of pixels are greater than 0
filtered_bands = raster_dataset.where(prop_gt_zero >= 0.5)
# Calculate the mean values for the filtered bands
mean_values_filtered = filtered_bands.mean(dim=('x', 'y'))
#mean_values_filtered = mean_values_filtered.fillna(0)
mean_values_filtered = mean_values_filtered['band_data']
mean_values_list = mean_values_filtered.values.tolist()

Classifying the LST values based on the LST temperature#

  • In this step, you reclassify the LST into 5 categories (Very low - Very high) based on the temperature. Each category contains two of the 10 reclassification values; we divide the range into 10 groups for better sensitivity in urban areas.

  • You can change the threshold values for each category:

  • Very low: < 25 °C [values 1-2]

  • Low: 25-35 °C [values 3-4]

  • Medium: 35-45 °C [values 5-6]

  • High: 45-55 °C [values 7-8]

  • Very high: ≥ 55 °C [values 9-10]

    Note: If needed, you can change the threshold values for each category to see more of a difference between the different areas in the city.

Important: The LST values represent the surface temperature, not the air temperature. The surface temperature reaches higher values than the air temperature. The ground surface temperature is an important factor for the weather as perceived by humans: on a sunny day the ground surface can be more than 10°C warmer than the air, while on clear nights, when the surface loses heat by radiation, it can be up to 10°C cooler than the air (see also: Land surface temperature vs air temperature).

# Define classes for grouping LST data
lc_bins = [20, 25, 30, 35, 40, 45, 50, 55, 60]  # Effect on the human health
lc_values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# Load the data and calculate maximum values from raster stack
L8_path = f'{data_dir}/L8_raster_stack.tif'
L8 = xr.open_dataset(L8_path)
L8 = L8.max(dim='band', skipna=True, keep_attrs=True)
L8lst2016 = L8['band_data']
lstbbox = L8.rio.bounds()

# Function to reclassify data using numpy
def reclassify_with_numpy_ge(array, bins, new_values):
    reclassified_array = np.zeros_like(array)
    reclassified_array[array < bins[0]] = new_values[0]
    for i in range(len(bins) - 1):
        reclassified_array[(array >= bins[i]) & (array < bins[i + 1])] = new_values[i + 1]
    reclassified_array[array >= bins[-1]] = new_values[-1]
    return reclassified_array

# Apply the reclassification function
lc_class = reclassify_with_numpy_ge(L8lst2016.values, bins=lc_bins, new_values=lc_values)

# Convert the reclassified array back to xarray DataArray for plotting
lc_class_da = xr.DataArray(lc_class, coords=L8lst2016.coords, dims=L8lst2016.dims)

# Plot the data
fig, ax = plt.subplots()
cmap = plt.get_cmap('Reds', 10) 
oa = lc_class_da.plot(ax=ax, cmap=cmap, add_colorbar=False)
cbar = fig.colorbar(oa, ticks=[1, 3.25, 5.5, 7.75, 10])
cbar.ax.set_yticklabels(['Very Low', 'Low', 'Medium', 'High', 'Very High'], size=10)
ax.set_xlabel('lon [deg_east]')
ax.set_ylabel('lat [deg_north]')
plt.title('Overheated areas in the area of interest', size=16, pad=10)
plt.show()
[Figure: Overheated areas in the area of interest]

Plot the observed maximum 2m air temperature together with LST temperature#

  • We plot the mean LST values per image and compare them with measured 2m air temperature data, to see for which days we downloaded the LST data.

  • This step tells us whether we downloaded data for the days with the highest air temperature.

# Create a dictionary from your data
data = {'Dates': lst_dates, 'Values': mean_values_list}
# Create a DataFrame from the dictionary

df = pd.DataFrame(data)
# Plot using Plotly Express
fig = px.scatter(df, x='Dates', y='Values', labels={'Dates': 'Date', 'Values': 'LST Mean Values'}, color='Values')
fig.update_traces(marker=dict(color='green'))  # Set dot color to green
fig.update_traces(mode='markers')  # Display as dots
# Add title and center it
fig.update_layout(title_text='LST mean values (cloud coverage<50%)',
                  title_x=0.5,  # Center the title horizontally
                  title_y=0.9)  # Adjust vertical position if needed
fig.update_xaxes(tickformat='%d-%m-%Y')
fig.show()

This plot gives you information about the mean land surface temperature values for the selected area, per Landsat 8 image. Only the days corresponding to LST images with cloud coverage lower than 50% are plotted.
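
If you also have observed daily maximum 2m air temperature for your area (e.g. from the local meteorological service), you can overlay it on the same figure to check whether the LST images indeed cover the hottest days. The snippet below is a sketch only: the file station_tmax.csv and its columns date and tmax are hypothetical placeholders for your own data.

# Optional sketch: overlay observed daily maximum 2m air temperature on the LST plot.
# 'station_tmax.csv' with columns 'date' and 'tmax' is a hypothetical local data file.
tmax = pd.read_csv(f'{data_dir}/station_tmax.csv', parse_dates=['date'])
fig.add_scatter(x=tmax['date'], y=tmax['tmax'], mode='lines',
                name='Observed max 2m air temperature', line=dict(color='orange'))
fig.update_layout(yaxis_title='Temperature [°C]', showlegend=True)
fig.show()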

Identify vulnerable population groups#

We can now use the maps of population distribution and combine these with LST maps to assess how the most populated areas overlap with the most overheated areas.

The default option in this workflow is to retrieve the population data from the WorldPop dataset.

Download the vulnerable population data#

  1. Go to the portal for accessing WorldPop data:

    • Select the WorldPop-Age and sex structures

    • Select the Location

    • Select the Formats to GeoTIFF.

  2. Download the maps for the most vulnerable groups of the population. This is done by manually downloading the relevant files. The data files have names according to the following pattern: [country code]_[sex]_[age]_2020.tif. For example, svk_m_1_2020.tif =

    • svk = Slovakia

    • m = male (please download both male and female (f))

    • 1 = 1 to 5 years of age (also download the files for ages 65, 70, 75, and 80)

    • 2020 = age structures in 2020

  3. Copy the data into the directory that was defined earlier (pop_dir variable, or ./Heatwave_risk/data/population/).
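
Before processing, you can check that all the expected files are in place. The sketch below assumes the Slovak example above (country code svk) and the age groups listed in step 2; adjust it to your own download.

# Optional check: list the expected WorldPop files for the vulnerable age groups
# (country code 'svk' follows the Slovak example above - adjust to your own area)
country = 'svk'
ages = [1, 65, 70, 75, 80]
sexes = ['m', 'f']
expected = [f'{country}_{sex}_{age}_2020.tif' for sex in sexes for age in ages]
missing = [f for f in expected if not os.path.isfile(os.path.join(pop_dir, f))]
print('Missing files:', missing if missing else 'none')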

Now we can use the below code to do the following:

  • load all the maps of the vulnerable population (based on age)

  • calculate the sum of the vulnerable population across ages and sexes

  • classify the population data into 5 groups (equal intervals)

  • plot it next to a map of overheated areas

Process population data#

First we will load the population data:

# This code loads all population data and creates a raster stack from them 
poplist = glob( f'{pop_dir}/*.tif')
#
with rasterio.open(poplist[0]) as src0:
    meta = src0.meta
#
meta.update(count = len(poplist))
#
with rasterio.open(f'{data_dir}/Population_raster_stack.tif', 'w', **meta) as dst:
    for id, layer in enumerate(poplist, start=1):
        with rasterio.open(layer) as src1:
            dst.write_band(id, src1.read(1))

  • We reclassify the vulnerable population data into 5 categories (Very low - Very high) based on the population density. Each category contains two of the 10 reclassification values; as with the LST, we classify the data into 10 groups for better sensitivity in urban areas.

# Load the population data and calculate the sum from raster stack
pop_path = f'{data_dir}/Population_raster_stack.tif'
pop = xr.open_dataset(pop_path)
pop = pop.sum(dim='band', skipna=True, keep_attrs=True)
pop = pop['band_data']
pop = pop.rio.clip_box(minx=lstbbox[0], miny=lstbbox[1], maxx=lstbbox[2], maxy=lstbbox[3])

# Function to reclassify data using numpy
def reclassify_with_numpy_gt(array, bins, new_values):
    reclassified_array = np.zeros_like(array)
    reclassified_array[array <= bins[0]] = new_values[0]
    for i in range(len(bins) - 1):
        reclassified_array[(array > bins[i]) & (array <= bins[i + 1])] = new_values[i + 1]
    reclassified_array[array > bins[-1]] = new_values[-1]
    return reclassified_array

# Calculate the number of bins (classes)
num_bins = 9
# Equal interval classification
min_value = np.nanmin(pop)  # Minimum population value
max_value = np.nanmax(pop)  # Maximum population value
bin_width = (max_value - min_value) / num_bins  # Width of each bin
pop_bins = [min_value + i * bin_width for i in range(num_bins)]  # Define bin boundaries

# Define reclassification values
pop_values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Apply the reclassification function to the population data
pop_class = reclassify_with_numpy_gt(pop.values, bins=pop_bins, new_values=pop_values)

# Convert the reclassified array back to xarray DataArray for plotting
pop_class_da = xr.DataArray(pop_class, coords=pop.coords, dims=pop.dims)

# Plot the data
fig, ax = plt.subplots()
cmap = plt.get_cmap('RdYlGn_r', 10) 
oa = pop_class_da.plot(ax=ax, cmap=cmap, add_colorbar=False)
cbar = fig.colorbar(oa, ticks=[1, 3.25, 5.5, 7.75, 10])
cbar.ax.set_yticklabels(['Very Low', 'Low', 'Medium', 'High', 'Very High'], size=10)
ax.set_xlabel('lon [deg_east]')
ax.set_ylabel('lat [deg_north]')
plt.title('Density of the vulnerable population in the region of interest', size=16, pad=10)
plt.show()
[Figure: Density of the vulnerable population in the region of interest]

Plot maps of the overheated areas next to the map of vulnerable population density#

# This plots the overheated areas and population density maps
fig, axes=plt.subplots(ncols=2, figsize=(18,6))
cmap = plt.get_cmap('RdYlGn_r', 10) 
# Plot a data
oa1 = lc_class_da.plot(ax = axes[0], cmap=cmap, add_colorbar=False)
cbar = fig.colorbar(oa1, ticks=[1, 3.25, 5.5, 7.75, 10])
cbar.ax.set_yticklabels([ 'Very Low', 'Low', 'Medium', 'High', 'Very high'], size=15)
axes[0].set_xlabel('')
axes[0].set_ylabel('')
axes[0].set_title('Overheated areas in the area of interest - exposure', size=15, pad = 10)
# Plot a data
oa2 = pop_class_da.plot(ax = axes[1], cmap=cmap, add_colorbar=False)
cbar = fig.colorbar(oa2, ticks=[1, 3.25, 5.5, 7.75, 10])
cbar.ax.set_yticklabels([ 'Very Low', 'Low', 'Medium', 'High', 'Very high'], size=15)
axes[1].set_xlabel('')
axes[1].set_ylabel('')
axes[1].set_title('Population density in the area of interest - vulnerability', size=15, pad = 10)
plt.draw()
[Figure: Overheated areas (exposure) and vulnerable population density (vulnerability)]
  • On these plots we can see the most overheated areas together with the density of the vulnerable groups of the population.

  • Both maps were reclassified into 10 groups for the computation of the final risk map, which is based on a 10+10 risk matrix.

  • The risk matrix sums the two layers, so the highest values mark the places that combine high exposure to heat with a high density of vulnerable population (the "Very low" group is the smallest, because a sum of the two classes can never equal 1); see the sketch below.

  • The risk matrix will be used in the next step for the estimation of the risk severity.
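
To make the combination explicit: both reclassified layers take values 1-10, so their sum ranges from 2 to 20, and in the final map this range is split into five equal-width risk levels matching the colour bar used below. A purely illustrative mapping of a summed score to the five labels could look like this:

# Illustrative only: map a summed score (2-20) onto the five risk labels used in the
# colour bar of the final risk map (five equal-width intervals over the range 1-20)
risk_labels = ['Very Low', 'Low', 'Medium', 'High', 'Very High']
risk_edges = [4.8, 8.6, 12.4, 16.2]  # interval breaks between the five classes

def risk_label(score):
    return risk_labels[int(np.digitize(score, risk_edges))]

for score in [2, 7, 11, 15, 20]:
    print(score, '->', risk_label(score))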

[Image: heatwave illustration]

Save the LST and population maps#

# This code saves the data to results_dir
lc_class_da.rio.to_raster(raster_path=f'{results_dir}/risk_LST.tif')
pop_class_da.rio.to_raster(raster_path=f'{results_dir}/risk_pop.tif')

Calculate the heatwave risk map#

In this step we calculate the heatwave risk map by combining exposure (LST - the areas that heat up most) with vulnerability (the density of the vulnerable population). This risk map is based on historical data, and some interpretation is still needed to translate the result into future risk (see more information on this in the conclusions).

Load data and create a raster stack#

# This code creates a raster stack from the risk_LST and risk_pop data; we need this step for the further processing of the data
S2list = glob(f'{results_dir}/risk_*.tif')
#
with rasterio.open(S2list[0]) as src0:
    meta = src0.meta
#
meta.update(count = len(S2list))
#
with rasterio.open(f'{results_dir}/risk_raster_stack.tif', 'w', **meta) as dst:
    for id, layer in enumerate(S2list, start=1):
        with rasterio.open(layer) as src1:
            dst.write_band(id, src1.read(1))

Calculate the risk#

  • We also load a vector layer of critical places selected by the city of Zilina. It is an example of how you can add a map of the places you consider vulnerable in your city based on local knowledge (hospitals, places that are overcrowded during the day, squares, etc.) and see whether they are exposed to heat.

# This code loads the critical infrastructure data, created by the Zilina municipality office
ci = f'{data_dir}/ci_features_ZA.shp'
CI = gpd.read_file(ci)
CI_WGS = CI.to_crs(epsg=4326)
# This code calculates the risk map by summing the risk_LST and risk_pop layers
risk = f'{results_dir}/risk_raster_stack.tif'
risk = xr.open_dataset(risk)
risk = risk['band_data']
risk = risk[0] + risk[1]
# Plot a data
fig, ax = plt.subplots(figsize=(12, 9))

cmap = plt.get_cmap('RdYlGn_r', 10) 
oa3 = risk.plot(ax = ax, cmap=cmap, add_colorbar=False,vmin=1, vmax=20)
cbar = fig.colorbar(oa3, ticks=[1, 5.75, 10.5, 15.25, 20])
cbar.ax.set_yticklabels([ 'Very Low', 'Low', 'Medium', 'High', 'Very high'], size=10)
ax.set_xlabel('')
ax.set_ylabel('')
plt.title('Heatwave risk level', size=19, pad = 14)

ci = CI_WGS.plot(ax=ax, color='blue', alpha=0.2)
[Figure: Heatwave risk level map with critical infrastructure overlay]

Save the risk map#

# This code saves the risk identification map in the results_dir
risk.rio.to_raster(raster_path=f'{results_dir}/Heatwave_risk_identification_map.tif')

Based on the risk identification map (above), we can identify the places most affected by heatwaves (dark red). For better visualization, we can display the results on an interactive map with ipyleaflet (below).

Plot Risk data on the interactive map#

To see these maps on an interactive (zoomable) map with an OpenStreetMap base layer, run the code below.

# This code creates a tile client from risk maps
# First, create a tile server from local raster file
riskpop = TileClient(f'{results_dir}/risk_pop.tif')
riskLST = TileClient(f'{results_dir}/risk_LST.tif')
HWRI = TileClient(f'{results_dir}/Heatwave_risk_identification_map.tif')
# This code creates ipyleaflet tile layer from that server
tpop = get_leaflet_tile_layer(riskpop, colormap='Reds', opacity=0.7, nodata=0, name='Risk population')
tLST = get_leaflet_tile_layer(riskLST, colormap='bwr', opacity=0.7, nodata=0, name='LST')
tHWRI = get_leaflet_tile_layer(HWRI, colormap='RdYlGn_r', opacity=0.7, nodata=0, name='Heat wave risk identification')
# This code plots the results on the ipyleaflet map 

# Set the size of the map
map_layout = widgets.Layout(width='1100px', height='800px')

# This code plots all loaded rasters and vectors on the ipyleaflet map
m = Map(center=riskLST.center(), zoom=riskLST.default_zoom, layout=map_layout)

control = LayersControl(position='topright')

m.add(tpop)
m.add(tLST)
m.add(tHWRI)

labels = ["Very low", "Low", "Medium", "High", "Very High"]
colors = [(0, 104, 55), (35, 132, 67), (255, 255, 191), (255, 127, 0), (215, 25, 28)]

# Create legend HTML content with custom styles (smaller size)
legend_html = "<div style='position: absolute; bottom: 2px; left: 2px; padding: 10px; " \
              "background-color: #FFF; border-radius: 10px; box-shadow: 2px 2px 2px rgba(0,0,0,0.5); " \
              "width: 75px; height: 205px; '>"

# Append legend items (labels with colored markers)
for label, color in zip(labels, colors):
    color_hex = '#%02x%02x%02x' % color  # Convert RGB tuple to hex color
    legend_html += f"<p style='font-family: Arial, sans-serif; font-size: 14px; '>" \
                   f"<i style='background: {color_hex}; width: 10px; height: 10px; display: inline-block;'></i> {label}</p>"

legend_html += "</div>"

# Create a custom widget with the legend HTML
legend_widget = widgets.HTML(value=legend_html)

legend_control = WidgetControl(widget=legend_widget, position='bottomleft')
m.add_control(legend_control)

m.add(control)

m

How to use the map: You can add or remove a layer by clicking on the layer control in the top right corner, or zoom in/out by clicking on the [+]/[-] buttons in the top left corner. We recommend first unticking all the layers and then displaying them one by one; the transparency of the layers allows you to see which areas on the OpenStreetMap base map are most exposed to heat and, in combination with the distribution of the vulnerable population, to identify which areas should be prioritized for the application of heat mitigation measures.

Conclusion for the Risk identification results#

The results of the risk workflow help you identify the places that are most exposed to the effects of heat, in combination with the map of areas with a high density of vulnerable population groups (based on age).

Together with the results of the hazard assessment workflow (example for the city of Zilina in the picture below), which gives you information about the probability of heatwave occurrence in the future, this provides a picture of the heatwave-related problems in your selected area.

Note: to generate such a graph for your own location please go to the hazard assessment workflow (e.g. EuroHEAT methodology found here).

[Image: example of heatwave hazard assessment results for the city of Zilina]

Important: There are important limitations to this analysis, associated with the source datasets:

  • The land surface temperature data is derived from Landsat 8 satellite imagery, and data of acceptable quality (without cloud cover) is only available for a limited number of days. This means that we get limited information on the maximum LST (past heatwaves are not always captured in the Landsat 8 images). The 30x30 m resolution also has its limitations, especially in densely built-up areas.

  • The WorldPop population data are available for 2020, and the distribution of the population may have changed since (and will continue to change). Moreover, WorldPop data is based on modelling of population distributions rather than on local census data, and may therefore be inaccurate. The use of local data may help to reduce this uncertainty.

9: References#

10: Authors#