Fighting the curse of sparsity: probabilistic sensitivity measures from cumulative distribution functions
Borgonovo, Emanuele
2020
Abstract
Quantitative models support investigators in several risk analysis applications. The calculation of sensitivity measures is an integral part of this analysis. However, it becomes a computationally challenging task, especially when the number of model inputs is large and the model output is spread over orders of magnitude. We introduce and test a new method for the estimation of global sensitivity measures. The new method relies on the idea of exploiting the empirical cumulative distribution function of the simulator output. This choice allows the estimators of global sensitivity measures to be based on numbers between 0 and 1, thus fighting the curse of sparsity. For density-based sensitivity measures, we devise an approach based on moving averages that bypasses kernel-density estimation. We compare the new method to approaches for calculating popular risk analysis global sensitivity measures, as well as to approaches for computing dependence measures that are attracting increasing interest in the machine learning and statistics literature (the Hilbert-Schmidt independence criterion and distance covariance). The comparison also involves the number of operations needed to obtain the estimates, an aspect often neglected in global sensitivity studies. We subject the estimators to several tests: first the wing-weight test case, then a computationally challenging code with up to k = 30,000 inputs, and finally the traditional Level E benchmark code.
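The abstract does not spell out the estimator, but the core idea it describes — working with empirical-CDF values of the output, which live in [0, 1], instead of the raw output — can be illustrated with a minimal sketch. The sketch below assumes a simple partition-based estimator of a Kolmogorov-Smirnov-type (CDF-based) sensitivity measure; the function names and the choice of equal-frequency bins are ours, not the paper's, and this is an illustration of the general approach rather than the authors' implementation.

```python
import numpy as np

def ecdf_transform(y):
    """Map output samples to (0, 1] via their empirical CDF (normalized ranks).

    Because ranks are insensitive to the scale of y, this step is what keeps
    the estimator stable when the output spans orders of magnitude.
    """
    n = len(y)
    ranks = np.argsort(np.argsort(y)) + 1  # ranks 1..n
    return ranks / n

def ks_sensitivity(x, y, n_bins=20):
    """Partition-based estimate of a Kolmogorov-Smirnov-type sensitivity
    measure for one input x: the sample-weighted average, over bins of x,
    of the sup-distance between the conditional distribution of the
    ECDF-transformed output and the uniform distribution (which is its
    unconditional law after the ECDF transform)."""
    u = ecdf_transform(y)
    # Equal-frequency partition of the input's range.
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    beta, total = 0.0, 0
    for j, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        if j == n_bins - 1:
            mask = (x >= lo) & (x <= hi)  # last bin includes the maximum
        else:
            mask = (x >= lo) & (x < hi)
        m = int(mask.sum())
        if m == 0:
            continue
        v = np.sort(u[mask])
        # Kolmogorov-Smirnov distance of the conditional sample to Uniform(0, 1).
        d_plus = np.max(np.arange(1, m + 1) / m - v)
        d_minus = np.max(v - np.arange(0, m) / m)
        beta += m * max(d_plus, d_minus)
        total += m
    return beta / total
```

On independent data the conditional distributions of the transformed output stay close to uniform, so the estimate is small; for an influential input they concentrate within each bin and the estimate grows toward 1 — e.g., with `y = x1 + small noise`, `ks_sensitivity(x1, y)` is large while `ks_sensitivity(x2, y)` for an unrelated `x2` is near zero.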
| File | Access | Description | Type | License | Size | Format |
|---|---|---|---|---|---|---|
| RA_2020_Final.pdf | Open access | Final version published in the journal | Publisher's PDF (publisher's layout) | Creative Commons | 3.1 MB | Adobe PDF |
| RiskAnalysis2020.docx | Not available | Acceptance letter | Attachment for Bocconi evaluation | NOT PUBLIC - Private/restricted access | 23.89 kB | Microsoft Word XML |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.