Risk assessment for relative drought#
A workflow from the CLIMAAX Handbook and DROUGHTS GitHub repository.
See our how to use risk workflows page for information on how to run this notebook.
How do we assess drought risk?#
There are many different metrics to assess drought risk, which account for at least one of the risk factors: hazard, exposure and vulnerability.
This workflow quantifies drought risk as the product of drought hazard, exposure and vulnerability. The methodology used here was developed and applied globally by Carrão et al. (2016) \(^2\). The result of this workflow is a risk map showing the relative drought risk of different spatial units (i.e., subnational administrative NUTS3 regions) within a larger region (i.e., NUTS2). Regional drought risk scores are on a scale of 0 to 1, with 0 representing the lowest risk and 1 the highest. The workflow takes each risk determinant (i.e. hazard, exposure and vulnerability) and normalises it using its maximum and minimum values across all sub-national administrative regions. The results of this drought risk workflow are therefore relative to the sample of geographic regions used for normalisation. The proposed risk scale is not a measure of absolute losses or actual damage, but a relative comparison of drought risk between the input regions. The resulting data and maps can help users assess in which sub-administrative units within a jurisdiction drought risk is, or will be, higher, allowing for better resource allocation and better coordination within and between different levels of government.
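As a minimal sketch of this relative-risk logic (determinant scores below are made-up values for three hypothetical regions, not workflow results), the product and the 5-class categorisation used at the end of this workflow look like:

```python
import numpy as np

# Hypothetical normalised determinant scores for three regions (illustrative only)
dH = np.array([0.75, 0.70, 0.80])  # hazard
dE = np.array([0.90, 0.40, 0.10])  # exposure
dV = np.array([0.85, 0.95, 0.20])  # vulnerability

# Relative risk: meaningful only for comparison within this sample of regions
risk = np.round(dH * dE * dV, 3)          # 0.574, 0.266, 0.016
# Five risk classes, as used at the end of this workflow
risk_cat = np.ceil(risk * 5).astype(int)  # classes 3, 2, 1
```

Note that the product form is non-compensatory only up to a point: a region scores high overall only if all three determinants are non-negligible.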
Below is a description of the data and tools used to calculate drought exposure and vulnerability, both for the historical period and for future scenarios, and of the outputs of this workflow. The workflow for calculating drought hazard can be found in the Hazard assessment notebook.
For the future scenarios, we follow the SSP-RCP combinations used in the IPCC 6th assessment report (https://www.ipcc.ch/assessment-report/ar6/).
Expert users can find more detailed and technical explanations of the methodology in the colored text boxes.
Spatial units#
We used GeoJSON maps of NUTS2 and NUTS3 regions to define the selected spatial units, which can be downloaded from https://gisco-services.ec.europa.eu/distribution/v2/nuts/geojson/
Datasets (historic and future projections)#
In this workflow the following data is used:
Hazard data and methods#
Drought hazard (dH) for a given region is estimated as the probability of exceeding the median of regional (e.g., EU-level) severe precipitation deficits for a historical reference period (e.g. 1981-2015) or for a future projection period (e.g. 2031-2060; 2071-2100).
This workflow relies on pre-calculated drought hazard data. The workflow for calculating drought hazard can be found in the Hazard assessment notebook.
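Although the hazard layer is pre-calculated, the underlying idea can be sketched as follows (the function name and input values are illustrative, not the actual hazard-notebook code): dH is the fraction of years in which a region's severe precipitation deficit exceeds the regional median.

```python
import numpy as np

def hazard_exceedance_prob(deficits, regional_median):
    """Fraction of years whose severe precipitation deficit exceeds the
    regional (e.g. EU-level) median deficit -- a sketch of the dH definition."""
    d = np.asarray(deficits, dtype=float)
    return float(np.mean(d > regional_median))

# Illustrative yearly deficits (mm) for one region against a regional median of 50 mm
dH_example = hazard_exceedance_prob([30, 55, 70, 45, 90, 60], 50.0)  # 4 of 6 years exceed
```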
Exposure data and methods#
Drought exposure (dE) indicates the potential losses from different types of drought hazards in different geographical regions. In general, exposure data identifies and quantifies the different types of physical entities on the ground that can be affected by drought, including built assets, infrastructure, agricultural land, people, livestock, etc. (entities unaffected by drought, such as the number of cars, do not count).
Quantifying drought exposure utilizes a non-compensatory model to account for the spatial distribution of potential impacts on crops and livestock, competition for water (e.g., for industrial uses, represented by the water stress indicator), and direct human needs (e.g., for drinking water, represented by population size). More information can be found in the dropdown box below.
The algorithm expects a table in which each row represents an area of interest and each column a variable. The first column contains the codes of the areas of interest (e.g., NUTS2), which have to be identical to the codes as they appear in the NUTS2 spatial data from the European Commission.
Depending on the region of interest, other indicators may also be relevant for estimating drought exposure. We recommend that users research the most relevant factors in the region that may be exposed to drought before starting the analysis.
Note
Quantifying drought exposure uses a non-compensatory model to account for the spatial distribution of potential impacts on crops and livestock, competition for water (e.g. for industrial uses, represented by the water stress indicator) and direct human demand (e.g. for drinking water, represented by population size). We apply a Data Envelopment Analysis (DEA) to determine the relative exposure of each region to drought.
Data Envelopment Analysis (DEA) \(^5\)
Data Envelopment Analysis (DEA) has been widely used to assess the efficiency of decision making units (DMUs) in many areas of organisational performance improvement, such as financial institutions, manufacturing companies, hospitals, airlines and government agencies. In the same way that DEA estimates the relative efficiency of DMUs, it can also be used to quantify the relative exposure of a region (in this case the DMUs) to drought from a multidimensional set of indicators.
DEA works with a set of multiple inputs and outputs. In our case, the regions are described only by their indicators, so a dummy factor with the same unit value for every region (e.g. 1000) completes the model. The efficiency of each region is then estimated as a weighted sum of outputs divided by a weighted sum of inputs, where all efficiencies are constrained to lie between zero and one. An optimisation algorithm chooses the weights that give each region its highest attainable efficiency.
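As an illustration of this idea (a sketch assuming SciPy is available, not the envelopmentpy implementation used later in this workflow), the DEA with a dummy unit-valued factor reduces to one linear program per region: choose non-negative indicator weights that maximise that region's weighted score, subject to every region's weighted score being at most one.

```python
import numpy as np
from scipy.optimize import linprog

def dea_scores(indicators):
    """Relative DEA score for each region (rows = regions, columns = indicators).

    For each region o, maximise u . y_o subject to u . y_j <= 1 for every
    region j and u >= 0 (the dummy unit factor is absorbed into the
    normalisation). Scores lie in [0, 1], with 1 on the efficiency frontier.
    """
    Y = np.asarray(indicators, dtype=float)
    ones = np.ones(Y.shape[0])
    scores = []
    for o in range(Y.shape[0]):
        # linprog minimises, so negate the objective to maximise u . y_o
        res = linprog(c=-Y[o], A_ub=Y, b_ub=ones, bounds=(0, None))
        scores.append(-res.fun)
    return np.array(scores)

# Region 0 dominates region 1 on both indicators, so region 0 is on the frontier
scores = dea_scores([[1.0, 1.0],
                     [0.5, 0.5]])  # scores of 1.0 and 0.5
```

Because each region gets its own most favourable weights, a region cannot offset one very high indicator with low values elsewhere, which is what makes the model non-compensatory.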
The exposure raw data is normalized using a linear transformation, as described in Eq. 2:

Eq. 2: \(Z_i = \frac{x_i - x_{min}}{x_{max} - x_{min}}\)

where \(x_i\) is the raw value of the indicator for region \(i\), and \(x_{min}\) and \(x_{max}\) are its minimum and maximum values across all regions.
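In code, this min-max transformation (with the small floor of 0.01 that the workflow applies to avoid zero scores) can be sketched as:

```python
import numpy as np

def minmax_stretch(x, xmin, xmax, floor=0.01):
    """Linear min-max normalization to [0, 1], floored at a small positive value."""
    z = (np.asarray(x, dtype=float) - xmin) / (xmax - xmin)
    return np.maximum(z, floor)

# The minimum value maps to the floor, the maximum to 1
z = minmax_stretch([2.0, 6.0, 10.0], xmin=2.0, xmax=10.0)  # 0.01, 0.5, 1.0
```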
Vulnerability data and methods#
Vulnerability data describes the elements that make a system susceptible to a natural hazard, which vary depending on the type of hazard and the nature of the system. However, there are some generic indicators, such as poverty, health status, economic inequality and aspects of governance, which apply to all types of exposed elements and therefore remain relevant regardless of the type of hazard posing the risk.
In this workflow, the selection of proxy indicators representing the economic, social, and infrastructural factors of drought vulnerability in each geographic location follows the criteria defined by Naumann et al. (2014): the indicator has to represent a quantitative or qualitative aspect of vulnerability factors to drought (generic or specific to some exposed element), and public data needs to be freely available at the global scale.
Drought vulnerability is calculated by combining indicators for each factor (economic, social and infrastructure) for each region with a non-compensatory model, as done for exposure, and then aggregating the DEA results for the three factors to obtain a drought vulnerability (dV) score (see colored box below for more details).
The algorithm expects a table in which each row represents an area of interest and each column a variable. Each variable has to be named with a prefix according to its factor, i.e. Social_, Economic_ or Infrast_, followed by a number or the name of the variable. The first column contains the codes of the areas of interest (e.g., NUTS2), which have to be identical to the codes as they appear in the NUTS2 spatial data from the European Commission.
As for exposure, the indicators listed here are a suggestion based on the most common proxies for economic, social, and infrastructural factors of drought vulnerability in each geographic location. We recommend that users research the most relevant factors in the region that make it vulnerable to drought before starting the analysis.
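For example, a valid vulnerability input table and the prefix-based factor grouping could look like this (the column names and values are illustrative, not from the sample dataset):

```python
import pandas as pd

# Hypothetical vulnerability table following the naming convention above
vuln = pd.DataFrame({
    "NUTS_ID":          ["AL011", "AL012", "AL013"],
    "Social_ruralshr":  [0.60, 0.30, 0.80],
    "Economic_gdpcap":  [4100, 5300, 3800],
    "Infrast_roaddens": [0.45, 0.70, 0.25],
})

# Factor categories are recovered from the column prefixes (order preserved)
factors = list(dict.fromkeys(c.split("_")[0] for c in vuln.columns[1:]))
# -> ['Social', 'Economic', 'Infrast']

# The indicators of one factor can then be selected for its own DEA run
econ = vuln.loc[:, vuln.columns.str.startswith("Economic_")]
```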
Note
Quantifying drought vulnerability
Vulnerability to drought is computed with a two-step composite model that aggregates proxy indicators representing the economic, social, and infrastructural factors of vulnerability at each geographic location.
In the first step, the indicators of each factor (i.e. economic, social and infrastructural) are combined using a DEA model (see above), similarly to drought exposure. In the second step, the individual factors resulting from the independent DEA analyses are arithmetically aggregated (using the simple mean) into a composite model of drought vulnerability (dV):
Eq. 3: \(dV_i = \frac{Soc_i + Econ_i + Infr_i}{3}\)
where Soc\(_i\), Econ\(_i\), and Infr\(_i\) are the social, economic and infrastructural vulnerability factors for geographic location (or region) \(i\).
The normalization of the vulnerability indicators is also done using a linear transformation (see Eq. 2), and it accounts for the direction of each indicator's correlation with drought vulnerability. In the case of a negative correlation (e.g., GDP per capita), the normalized score is estimated as \(1 - Z_i\).
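Combining Eq. 2 with the correlation direction, the directional normalization can be sketched as follows (an illustrative helper function, not the workflow's in-place loop):

```python
import numpy as np

def normalize_indicator(x, xmin, xmax, positive=True, floor=0.01):
    """Min-max normalize an indicator; flip it when it is negatively
    correlated with vulnerability (e.g. GDP per capita)."""
    z = (np.asarray(x, dtype=float) - xmin) / (xmax - xmin)
    if not positive:
        z = 1.0 - z  # negative correlation: high raw values mean low vulnerability
    return np.maximum(z, floor)

# Rural share correlates positively with vulnerability, GDP per capita negatively
rural_share = normalize_indicator([0.0, 0.5, 1.0], 0.0, 1.0, positive=True)       # 0.01, 0.5, 1.0
gdp_per_cap = normalize_indicator([10_000, 30_000, 50_000], 10_000, 50_000,
                                  positive=False)                                  # 1.0, 0.5, 0.01
```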
Workflow implementation#
Load libraries#
Find more info about the libraries used in this workflow here
os - To create directories and work with files
urllib - To handle errors when downloading data
pooch - To download and cache the sample datasets
pandas - To create and manage data frames (tables) in Python
geopandas - Extend pandas to store and manipulate spatial data
numpy - For basic math tools and operations
plotly - For dynamic and interactive plotting
import os
import urllib.error
import pooch
os.environ['USE_PYGEOS'] = '0'
import pandas as pd
import geopandas as gpd
import numpy as np
import plotly.express as px
# READ SCRIPTS
# adapted from https://github.com/metjush/envelopment-py/tree/master used for DEA
from envelopmentpy.envelopment import *
Define working environment and global parameters#
This workflow relies on pre-processed data. The user defines the path to the data folder, and the code below creates a folder for the outputs.
# Set working environment
workflow_folder = './sample_data_nuts3/'
# Define scenario 0: historic; 1: SSP1-2.6; 2: SSP3-7.0. 3: SSP5-8.5
scn = 0
# Define time (applicable only for the future): 0: near-future (2050); 1: far-future (2080)
time = 0
pattern = "historic"
if scn != 0:
    pattern = ['ssp126', 'ssp370', 'ssp585'][scn - 1] + '_' + ['nf', 'ff'][time]
# Issue an error if the data folder does not exist, so the path can be checked
if not os.path.isdir(workflow_folder):
    raise FileNotFoundError(f"Data folder not found: {workflow_folder}")
# Create outputs folder
name_output_folder = 'outputs'
os.makedirs(os.path.join(workflow_folder, name_output_folder), exist_ok=True)
Access to sample dataset#
Load the file registry for the droughtrisk_sample_nuts3
dataset in the CLIMAAX cloud storage with pooch.
sample_data_pooch = pooch.create(
path=workflow_folder,
base_url="https://object-store.os-api.cci1.ecmwf.int/climaax/droughtrisk_sample_nuts3/"
)
sample_data_pooch.load_registry("files_registry.txt")
If any files requested below were downloaded before, pooch will inspect the local file contents and skip the download if the contents match expectations.
Load NUTS3 spatial data and define regions of interest#
NUTS3 data is available in various resolutions: 1:1M (01M), 1:3M (03M), 1:10M (10M), 1:20M (20M) and 1:60M (60M).
nuts3_resolution = "10M"
# Load nuts3 spatial data
print('Load NUTS3 map with three sample regions')
nuts = None
while nuts is None:
    try:
        nuts = gpd.read_file(
            'https://gisco-services.ec.europa.eu/distribution/v2/nuts/geojson/'
            f'NUTS_RG_{nuts3_resolution}_2021_4326_LEVL_3.geojson'
        )
    except urllib.error.HTTPError as e:
        # Retry the download on transient 503 errors
        if e.code != 503:
            raise
nuts['Location'] = nuts['CNTR_CODE'] + ': ' + nuts['NAME_LATN']
nuts = nuts.set_index('Location')
#nuts.to_crs(pyproj.CRS.from_epsg(4326), inplace=True)
# set country = 0 to map all Europe
#nuts['NUTS_ID2'] = nuts['NUTS_ID'].str.slice(0,4)
print("Choose country code from: ", nuts['CNTR_CODE'].unique())
Load NUTS3 map with three sample regions
Choose country code from: ['BG' 'CH' 'AL' 'AT' 'BE' 'DE' 'CY' 'CZ' 'DK' 'EE' 'EL' 'FI' 'FR' 'ES'
'HU' 'HR' 'LT' 'IE' 'IS' 'IT' 'NL' 'LU' 'LV' 'ME' 'MK' 'LI' 'PL' 'NO'
'MT' 'SK' 'TR' 'RS' 'SE' 'SI' 'PT' 'RO' 'UK']
Choose country code:#
ccode = "AL"
# Validate country selection and subset regions
if not (nuts['CNTR_CODE'] == ccode).any():
    print("Country code: ", ccode, " is not valid; please choose a valid country code.")
else:
    nuts = nuts.query('CNTR_CODE == @ccode')
    regions = nuts['NUTS_ID']
#print("List of nuts2: ", nuts['NUTS_ID2'].unique())
Load pre-calculated hazard#
# Load precipitation data
print("Analyzing drought hazard. This process may take a few minutes...")
print('\n')
precip_file = sample_data_pooch.fetch(f"outputs_hazards/droughthazard_{ccode}_{pattern}.csv")
precip = pd.read_csv(precip_file)
# Drop missing regions
col_subset = np.isin(regions, precip['NUTS_ID'])
regions = regions[col_subset]
output = pd.DataFrame(regions, columns = ['NUTS_ID'])
# Build output dataset
output = pd.merge(output, precip[['NUTS_ID', 'hazard_raw']], on = 'NUTS_ID')
print(output.head(3))
Analyzing drought hazard. This process may take a few minutes...
NUTS_ID hazard_raw
0 AL013 0.791
1 AL015 0.780
2 AL014 0.736
Exposure workflow#
Set global parameter#
# Set to True to print a scatter plot that evaluates the DEA results against the
# maximum exposure/vulnerability factor. The DEA score of a region should approximate,
# or be higher than, its maximum exposure/vulnerability factor.
# The evaluation is more meaningful when applied to multiple countries.
evaluateDEA = False
Load exposure data#
Exposure indicators for EU countries at the NUTS3 level are provided in the sample_data folder (file: “drought_exposure.csv”).
print("Analyzing drought exposure. This process may take a few minutes...")
print('\n')
exposure_file = sample_data_pooch.fetch(f"drought_exposure_{pattern}.csv")
exposure = pd.read_csv(exposure_file)
# Extract country-wide statistics (minimum and maximum of each variable) for stretching
exposure = exposure[exposure['NUTS_ID'].str.startswith(ccode)]  # keep only regions of the selected country
cnt_range = pd.Series(index=['min','max'],data=[exposure.min(),exposure.max()])
exposure = exposure.query('NUTS_ID in @regions')
# Normalize the exposure using a min-max stretch
cols = exposure.columns[1:]
for varname in cols:
    # Save the maximum and minimum values
    mx_exposure = cnt_range['max'][varname]
    mn_exposure = cnt_range['min'][varname]
    # Stretch values between 0 and 1
    exposure.loc[:, varname] = np.maximum((exposure.loc[:, varname] - mn_exposure)/(mx_exposure - mn_exposure), 0.01)
# Sort exposure to match the nuts['NUTS_ID'] order
sorterIndex = dict(zip(nuts['NUTS_ID'], range(len(nuts['NUTS_ID']))))
exposure['sort_col'] = exposure['NUTS_ID'].map(sorterIndex)
exposure.sort_values(['sort_col'],
ascending = [True], inplace = True)
exposure = exposure.drop(columns='sort_col')
# Show data
print('Input exposure data (top 3 rows): ')
print(exposure.head(3))
print('\n')
Analyzing drought exposure. This process may take a few minutes...
Input exposure data (top 3 rows):
NUTS_ID cropland livestock population waterstress
2 AL013 0.010000 0.010000 0.050951 0.010000
4 AL015 0.027875 0.046728 0.049342 0.184549
3 AL014 0.255774 0.107759 0.173301 0.905582
Calculate DEA and dE#
Data Envelopment Analysis (DEA) is used to quantify the relative exposure of a region to drought (dE) from a multidimensional set of indicators.
# Set DEA(loud = True) to print optimization status/details
dea_e = DEA(np.array([1.] * len(regions)).reshape(len(regions), 1),
            exposure.to_numpy()[:, 1:],
            loud=False)  # we use a dummy unit-valued factor for the input
dea_e.name_units(regions)
# Returns a list with regional efficiencies
dE = dea_e.fit()
if evaluateDEA:
    dEmax = exposure.iloc[:, 1:].max(axis=1)
    print("plot max vs DEA:")
    fig = px.scatter(
        x=list(dEmax),
        y=dE,
        title='Evaluate exposure\'s DEA',
        labels={
            "x": "Maximum exposure",
            "y": "DEA"
        }
    )
    fig.show()
output['exposure_raw'] = dE
print('>>>>> Drought exposure is completed.')
>>>>> Drought exposure is completed.
Vulnerability workflow#
Load vulnerability data#
Vulnerability indicators for EU countries at the NUTS3 level are provided in the sample_data folder (file: “drought_vulnerability.csv”).
print("Analyzing drought vulnerability. This process may take a few minutes...")
print('\n')
vulnerability_file = sample_data_pooch.fetch(f"drought_vulnerability_{pattern}.csv")
vulnerability = pd.read_csv(vulnerability_file)
# Extract country-wide statistics (minimum and maximum of each variable) for stretching
vulnerability = vulnerability[vulnerability['NUTS_ID'].str.startswith(ccode)]  # keep only regions of the selected country
cnt_range = pd.Series(index=['min','max'],data=[vulnerability.min(),vulnerability.max()])
vulnerability = vulnerability.query('NUTS_ID in @regions')
cols = vulnerability.columns[1:]
print("Define correlation's directions for the following indicators: ", list(cols))
Analyzing drought vulnerability. This process may take a few minutes...
Define correlation's directions for the following indicators: ['overall_ruralshr', 'overall_gdpcap']
# Pre-define the direction of correlation between each indicator and drought vulnerability.
# In this example:
#  - the correlation of the rural population share with vulnerability is positive (True below),
#    i.e., rural regions are more vulnerable to droughts
#  - the correlation of GDP per capita with vulnerability is negative (False below)
corelDirection = [True, False]
# Get the vulnerability factor prefixes, e.g., Social, Economic, Infrast
factorsString = list(cols.str.split('_').str[0].drop_duplicates())
# Normalize the vulnerability indicators using a min-max stretch
for varname in cols:
    # Save the maximum and minimum values
    mx_vulnerability = cnt_range['max'][varname]
    mn_vulnerability = cnt_range['min'][varname]
    # Stretch values between 0 and 1
    if corelDirection[list(cols.values).index(varname)]:
        # Positive correlation between the indicator and vulnerability
        vulnerability.loc[:, varname] = np.maximum((vulnerability.loc[:, varname] - mn_vulnerability)/(mx_vulnerability - mn_vulnerability), 0.01)
    else:
        # Negative correlation between the indicator and vulnerability
        vulnerability.loc[:, varname] = np.maximum(1 - (vulnerability.loc[:, varname] - mn_vulnerability)/(mx_vulnerability - mn_vulnerability), 0.01)
# Sort vulnerability to match the nuts['NUTS_ID'] order
sorterIndex = dict(zip(nuts['NUTS_ID'], range(len(nuts['NUTS_ID']))))
vulnerability['sort_col'] = vulnerability['NUTS_ID'].map(sorterIndex)
vulnerability.sort_values(['sort_col'],
ascending = [True], inplace = True)
vulnerability = vulnerability.drop(columns='sort_col')
# Filter the data based on the regions
row_subset = np.isin(vulnerability['NUTS_ID'], regions)
vulnerability = vulnerability.loc[row_subset, :]
# Show the data
print('Input vulnerability data (top 3 rows): ')
print(vulnerability.head(3))
print('\n')
Input vulnerability data (top 3 rows):
NUTS_ID overall_ruralshr overall_gdpcap
2 AL013 1.000000 0.010000
4 AL015 0.441375 0.454615
3 AL014 0.472398 0.947513
Calculate the vulnerability index dV#
# Calculate dV in a two-step process: a DEA per factor, then the mean across factors
d_v = []
for fac_ in factorsString:
    # For each factor category, i.e. economic, social or infrastructure, do the following:
    print(">>>>> Analyzing the '" + fac_ + "' factors")
    # Select the indicators for this factor category (NUTS_ID is not matched by the prefix,
    # so all columns of the subset are indicators and none should be dropped)
    factor_subset = vulnerability.loc[:, vulnerability.columns.str.contains(fac_)]
    dea_v = DEA(np.array([1.] * len(regions)).reshape(len(regions), 1),
                factor_subset.to_numpy(),
                loud=False)
    dea_v.name_units(regions)
    d_v_last = dea_v.fit()
    d_v.append(d_v_last)
    if evaluateDEA:
        dVmax = factor_subset.max(axis=1)
        print("plot max vs DEA:")
        fig = px.scatter(
            x=list(dVmax),
            y=d_v_last,
            title=f'Evaluate vulnerability\'s DEA ({fac_})',
            labels={
                "x": "Maximum vulnerability",
                "y": "DEA"
            }
        )
        fig.show()
# returns three lists with regional efficiencies for each factor
d_v = np.array(d_v).reshape(len(factorsString), len(regions))
#calculate dV
dV = np.nanmean(d_v, axis = 0)
output['vulnerability_raw'] = dV
print('>>>>> Drought vulnerability is completed.')
>>>>> Analyzing the 'overall' factors
>>>>> Drought vulnerability is completed.
Calculate the Risk Index for each region#
# Risk = Hazard * Exposure * Vulnerability
output['risk_raw'] = round(output['hazard_raw'] * output['exposure_raw'] * output['vulnerability_raw'], 3)
# Categorized risk and merge results with the spatial data
output['risk_cat'] = [(int(np.ceil(x * 5))) for x in output['risk_raw']]
# Keep index
nuts = nuts.merge(output, on='NUTS_ID')
nuts_idx = nuts['NUTS_ID']
nuts = nuts.set_index(nuts_idx)
Plot results#
print('\n')
print("NUTS3 regions with the highest drought risk (TOP 15): ")
print(pd.DataFrame(nuts.drop(columns='geometry')).sort_values(by=['risk_raw'],\
ascending = False)[['NUTS_ID', 'hazard_raw', 'exposure_raw',\
'vulnerability_raw', 'risk_raw', 'risk_cat']].head(15))
print('\n')
# plot risk map
x_nuts, y_nuts = gpd.GeoSeries(nuts.geometry).unary_union.centroid.xy
fig = px.choropleth_mapbox(nuts, geojson=nuts.geometry, locations=nuts.index, color='risk_cat',\
color_continuous_scale="reds", range_color = [1,5], mapbox_style="open-street-map")
fig.update_geos(fitbounds="locations", visible=False)
fig.update_layout(title="Drought Risk",
mapbox_center = {"lat": list(y_nuts)[0], "lon": list(x_nuts)[0]},
mapbox_zoom=5,
coloraxis_colorbar=dict(
title= "Risk category",
tickvals = [1, 2, 3, 4, 5],
ticktext = [1, 2, 3, 4, 5]
))
fig.show()
# plot risk components scatter plot
print('\n')
print('Explore drought risk dimensions (marker size indicates risk category): ')
print('Deselect specific countries by clicking on the country codes on the right.')
print('Select a specific country by double-clicking on it.')
fig2 = px.scatter_3d(nuts, x='hazard_raw',\
y='exposure_raw',\
z='vulnerability_raw',\
size = 'risk_cat',\
color='CNTR_CODE') # nuts.index
fig2.update_layout(
scene = dict(
xaxis = dict(nticks=6, range=[0,1]),\
xaxis_title = 'Hazard',\
yaxis = dict(nticks=6, range=[0,1]),\
yaxis_title = 'Exposure',\
zaxis = dict(nticks=6, range=[0,1]),
zaxis_title='Vulnerability',\
aspectmode = "manual",
aspectratio = dict(x = 0.9, y = 0.9, z = 0.9)),
legend = dict(title = "Country code"),
height = 700)
fig2.show()
print('\n')
output.to_csv(os.path.join(workflow_folder, name_output_folder, f'droughtrisk_{ccode}_{pattern}.csv'))
NUTS3 regions with the highest drought risk (TOP 15):
NUTS_ID hazard_raw exposure_raw vulnerability_raw risk_raw \
NUTS_ID
AL032 AL032 0.754 1.000 0.892 0.673
AL014 AL014 0.736 0.906 0.948 0.632
AL021 AL021 0.726 0.691 0.912 0.458
AL012 AL012 0.797 1.000 0.459 0.366
AL035 AL035 0.728 0.603 0.793 0.348
AL011 AL011 0.787 0.432 1.000 0.340
AL031 AL031 0.681 0.435 0.832 0.246
AL034 AL034 0.800 0.300 0.940 0.226
AL033 AL033 0.687 0.219 0.895 0.135
AL022 AL022 0.739 1.000 0.124 0.092
AL015 AL015 0.780 0.185 0.455 0.066
AL013 AL013 0.791 0.051 0.010 0.000
risk_cat
NUTS_ID
AL032 4
AL014 4
AL021 3
AL012 2
AL035 2
AL011 2
AL031 2
AL034 2
AL033 1
AL022 1
AL015 1
AL013 0
Explore drought risk dimensions (marker size indicates risk category):
Deselect specific countries by clicking on the country codes on the right.
Select a specific country by double-clicking on it.
Conclusions#
The above workflow estimates the relative drought risk of each NUTS3 region of a selected European country as the product of drought hazard, exposure, and vulnerability. It results in relative drought risk classes ranging from 1 (low risk, 0-0.2) to 5 (high risk, 0.8-1).
Users can use this workflow to compare the relative drought risk of the NUTS3 regions of the selected country within a given historical period. Furthermore, they can explore how the relative drought risk between NUTS3 regions changes under different future scenarios. However, note that because the risk category of each region is always relative to the other regions considered in the workflow (here: country level), risk categories are not directly comparable between different time periods. This means that the risk category of one region may be higher or lower compared to the other regions, but cannot be compared between, e.g., historical and future datasets. Also, please note that the workflow is not applicable to the following countries, as they are not subdivided at the NUTS3 level: Montenegro (ME), Cyprus (CY), Malta (MT), Liechtenstein (LI), Luxembourg (LU).
Contributors#
The workflow has been developed by Silvia Artuso and Dor Fridman from IIASA’s Water Security Research Group, and supported by Michaela Bachmann from IIASA’s Systemic Risk and Resilience Research Group.
References#
[1] Zargar, A., Sadiq, R., Naser, B., & Khan, F. I. (2011). A review of drought indices. Environmental Reviews, 19: 333-349.
[2] Carrão, H., Naumann, G., & Barbosa, P. (2016). Mapping global patterns of drought risk: An empirical framework based on sub-national estimates of hazard, exposure and vulnerability. Global Environmental Change, 39, 108-124.
[3] Lyon, B., & Barnston, A. G. (2005). ENSO and the spatial extent of interannual precipitation extremes in tropical land areas. Journal of climate, 18(23), 5095-5109.
[4] Carrão, H., Singleton, A., Naumann, G., Barbosa, P., & Vogt, J. V. (2014). An optimized system for the classification of meteorological drought intensity with applications in drought frequency analysis. Journal of Applied Meteorology and Climatology, 53(8), 1943-1960.
[5] Sherman, H. D., & Zhu, J. (2006). Service productivity management: Improving service performance using data envelopment analysis (DEA). Springer science & business media.