Measuring utility with diffusion models

Berlinghieri, Renato; Krajbich, Ian; Maccheroni, Fabio; Marinacci, Massimo; Pirazzini, Marco
2023

Abstract

The drift-diffusion model (DDM) is a prominent account of how people make decisions. Many of these decisions involve comparing two alternatives based on differences of perceived stimulus magnitudes, such as economic values. Here, we propose a consistent estimator for the parameters of a DDM in such cases. This estimator allows us to derive decision thresholds, drift rates, and subjective percepts (i.e., utilities in economic choice) directly from the experimental data. This eliminates the need to measure these values separately or to assume specific functional forms for them. Our method also allows one to predict drift rates for comparisons that did not occur in the dataset. We apply the method to two datasets, one comparing probabilities of earning a fixed reward and one comparing objects of variable reward value. Our analysis indicates that both datasets conform well to the DDM. Interestingly, we find that utilities are linear in probability and slightly convex in reward.
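For readers unfamiliar with how choice data pin down DDM parameters, the sketch below is a minimal illustration (not the paper's estimator): it simulates binary choices from a symmetric two-boundary DDM and then recovers the drift rate from the observed choice proportion using the standard closed-form choice probability. All function names and parameter values are hypothetical and chosen only for illustration; in the paper, drift rates reflect differences of perceived stimulus magnitudes (utilities) and are estimated jointly with the threshold from experimental data.

import numpy as np


def simulate_ddm_choices(drift, threshold, sigma=1.0, dt=1e-3, n_trials=5000, seed=0):
    """Simulate binary DDM choices: evidence starts at 0 and drifts until it
    hits +threshold (option A) or -threshold (option B).
    Returns the fraction of trials on which A is chosen."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)                       # accumulated evidence per trial
    undecided = np.ones(n_trials, dtype=bool)
    chose_a = np.zeros(n_trials, dtype=bool)
    while undecided.any():
        n_active = undecided.sum()
        x[undecided] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_active)
        hit_upper = undecided & (x >= threshold)
        hit_lower = undecided & (x <= -threshold)
        chose_a[hit_upper] = True
        undecided &= ~(hit_upper | hit_lower)
    return chose_a.mean()


def drift_from_choice_rate(p_choose_a, threshold, sigma=1.0):
    """Invert the closed-form DDM choice probability
        P(A) = 1 / (1 + exp(-2 * threshold * drift / sigma**2))
    to back out the drift rate from an observed choice proportion."""
    return sigma**2 * np.log(p_choose_a / (1.0 - p_choose_a)) / (2.0 * threshold)


if __name__ == "__main__":
    true_drift, threshold = 0.8, 1.0   # hypothetical values, for illustration only
    p_hat = simulate_ddm_choices(true_drift, threshold)
    print(f"observed P(choose A): {p_hat:.3f}")
    print(f"recovered drift rate: {drift_from_choice_rate(p_hat, threshold):.3f}")

Under this relationship, the log-odds of choosing one option are proportional to the drift rate scaled by the threshold, which is what makes drift rates, and hence subjective values, recoverable from choice proportions.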
Files in this record:

BerlinghieriKrajbichMaccheroniMarinacciPirazzini2023.pdf (open access)
Description: article
Type: Publisher's PDF (publisher's layout)
License: Creative Commons
Size: 6.75 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11565/4058236