Probabilistic sensitivity measures as information value
Borgonovo, Emanuele; Plischke, Elmar
2021
Abstract
Decision makers increasingly rely on forecasts or predictions generated by quantitative models. Best practices recommend that a forecast report be accompanied by a sensitivity analysis. A wide variety of probabilistic sensitivity measures have been suggested; however, model inputs may be ranked differently by different sensitivity measures. Is there some way to reduce this disparity by identifying which probabilistic sensitivity measures are most appropriate for a given reporting context? We address this question by postulating that importance rankings of model inputs generated by a sensitivity measure should correspond to the information value for those inputs in the problem of constructing an optimal report based on some proper scoring rule. While some sensitivity measures have already been identified as information value under proper scoring rules, we identify others and provide some generalizations. We address the general question of when a sensitivity measure has this property, presenting necessary and sufficient conditions. We directly examine whether sensitivity measures retain important properties such as transformation invariance and compliance with Rényi's Postulate D for measures of statistical dependence. These results provide a means for selecting the most appropriate sensitivity measures for a particular reporting context and provide the analyst reasonable justifications for that selection. We illustrate these ideas using a large-scale probabilistic safety assessment case study used to support decision making in the design and planning of a lunar space mission.
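The abstract's central idea, that a probabilistic sensitivity measure can be read as the value of information in an optimal-reporting problem, has a well-known special case: under the quadratic scoring rule, the expected score improvement from reporting E[Y|X_i] instead of E[Y] equals Var(E[Y|X_i]), the numerator of the first-order Sobol' index. A minimal sketch of this correspondence, where the model Y = X1 + 2·X2, the binning estimator, and all names are illustrative assumptions rather than anything taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical additive model, chosen only so the exact indices are known:
# Var(X1) = 1/12, Var(2*X2) = 4/12, so S1 = 0.2 and S2 = 0.8.
n = 200_000
x1 = rng.uniform(0.0, 1.0, n)
x2 = rng.uniform(0.0, 1.0, n)
y = x1 + 2.0 * x2

def voi_quadratic(x, y, bins=50):
    """Expected reduction in quadratic-score loss from reporting E[Y|X]
    instead of E[Y]; this equals Var(E[Y|X]), estimated here by simple
    equal-probability binning of X."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    return np.average((cond_mean - y.mean()) ** 2, weights=counts)

total_var = y.var()
s1 = voi_quadratic(x1, y) / total_var  # first-order index of X1, approx. 0.2
s2 = voi_quadratic(x2, y) / total_var  # first-order index of X2, approx. 0.8
```

Normalizing the value of information by the total variance recovers the familiar first-order sensitivity index, so ranking inputs by this information value and ranking them by the Sobol' index coincide under the quadratic score; other proper scoring rules induce other measures.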
File: EJOR_2020_VOI_Sensitivity.pdf (not available)
Description: article
Type: Publisher's PDF (publisher's layout)
License: NOT PUBLIC - Private/restricted access
Size: 1.01 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.