

262 Trends in Cognitive Sciences, March 2019, Vol. 23, No. 3


Received: 10 October 2019 Revised: 21 January 2020 Accepted: 4 February 2020

DOI: 10.1002/wcs.1526

FOCUS ARTICLE

Comparison of Markov versus quantum dynamical models of human decision making

Jerome R. Busemeyer1 | Peter D. Kvam2 | Timothy J. Pleskac3

1Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
2Department of Psychology, University of Florida, Gainesville, Florida
3Department of Psychology, University of Kansas, Lawrence, Kansas

Correspondence
Timothy J. Pleskac, Department of Psychology, University of Kansas, Lawrence, KS.
Email: pleskac@ku.edu

Abstract

What kind of dynamic decision process do humans use to make decisions? In this article, two different types of processes are reviewed and compared: Markov and quantum. Markov processes are based on the idea that at any given point in time a decision maker has a definite and specific level of support for available choice alternatives, and the dynamic decision process is represented by a single trajectory that traces out a path across time. When a response is requested, a person's decision or judgment is generated from the current location along the trajectory. By contrast, quantum processes are founded on the idea that a person's state can be represented by a superposition over different degrees of support for available choice options, and that the dynamics of this state form a wave moving across levels of support over time. When a response is requested, a decision or judgment is constructed out of the superposition by "actualizing" a specific degree or range of degrees of support to create a definite state. The purpose of this article is to introduce these two contrasting theories, review empirical studies comparing the two theories, and identify conditions that determine when each theory is more accurate and useful than the other.

This article is categorized under:
Economics > Individual Decision-Making
Psychology > Reasoning and Decision Making
Psychology > Theory and Methods

KEYWORDS
confidence, decision, interference, quantum cognition, random walk

1 | INTRODUCTION

Imagine watching a murder mystery film with a friend. As you watch, you become aware of your beliefs about the guilt or innocence of a suspect. Your beliefs move up and down across time as different kinds of evidence are presented during the movie scenes. At any point in time, your friend may ask which character seems the most guilty, or ask you to evaluate the evidence with respect to a particular character's guilt or innocence. When prompted, you can express to your friend how likely it is that the suspect is guilty or innocent.

All authors contributed equally to this work.

WIREs Cogn Sci. 2020;e1526.

wires.wiley.com/cogsci

© 2020 Wiley Periodicals, Inc.

1 of 19

https://doi.org/10.1002/wcs.1526


Now imagine trying to decide whether or not to risk passing a car on a two-lane highway. As you deliberate, you become aware of your affective evaluations toward taking the risk or not. Your tendency to approach or avoid changes across time as you monitor the driving situation. At each point in time you can decide to take the risk (pass) or not (stay).

These are two different kinds of decision problems. In the first case, the decision variable being monitored is the evidence for or against a hypothesis: an inferential choice problem. In the second case, the decision variable being monitored is your preference for or against an action: a preferential choice problem. Despite their differences, both decisions seem to be based on the same or a similar process in which evidence or preference is accumulated over time to eventually trigger a choice (Dutilh & Rieskamp, 2016; Pleskac, Yu, Hopwood, & Liu, 2019; Summerfield & Tsetsos, 2012; Usher & McClelland, 2001; Usher & McClelland, 2004; Zeigenfuse, Pleskac, & Liu, 2014). What are the basic dynamics that underlie the changes in these decision variables, evidence or preference, during decision making? And how are the judgments and actions about these decision variables generated from the latent monitoring and accumulation process? This article compares and contrasts two different ways to model the decision processes underlying these choices: a classical Markov process and a nonclassical quantum process. These two approaches to modeling dynamic decision behavior have been compared on a variety of tasks and measures, including sequential decisions (Busemeyer, Wang, & Lambert-Mogiliansky, 2009a; Wang & Busemeyer, 2016b), decisions with subsequent confidence (Kvam, Pleskac, Yu, & Busemeyer, 2015) or preference ratings (Kvam, 2014; Wang & Busemeyer, 2016a), sequences of judgments and ratings (Busemeyer, Kvam, & Pleskac, 2019), and response time distributions (Busemeyer, Wang, & Townsend, 2006; Fuss & Navarro, 2013). We begin by introducing the basic ideas underlying these two theories using the evidence accumulation problem. Later in the article we consider the preference accumulation problem. In the end, we introduce a more general open-system quantum-Markov approach that incorporates elements of both frameworks to provide a more complete description of how support for different choice options changes over time.
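Before turning to the formal theories, the shared accumulation intuition can be sketched in a few lines of Python. This is a deliberately generic noisy accumulator with two response thresholds, not any specific model from the literature cited above; all parameter names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def accumulate(drift=0.1, noise=1.0, threshold=3.0, dt=0.01, max_t=20.0):
    """One trial of a generic accumulate-to-threshold process.

    Evidence x starts at 0 and changes by small noisy increments.
    Crossing +threshold triggers one response (e.g., "guilty" or
    "pass"); otherwise the trial is scored as the opposite response.
    Parameters are illustrative, not fitted values.
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("upper" if x >= threshold else "lower"), t

results = [accumulate() for _ in range(500)]
p_upper = sum(c == "upper" for c, _ in results) / len(results)
mean_rt = sum(t for _, t in results) / len(results)
```

With a positive drift the upper threshold is reached on most trials, and choice proportions and response times both fall out of the same latent process, which is the empirical leverage these dynamic models exploit.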

2 | MARKOV AND QUANTUM VIEWS OF EVIDENCE ACCUMULATION

The classical dynamical view of evidence accumulation, including the popular diffusion decision models (Ratcliff, Smith, Brown, & McKoon, 2016), asserts that a person's state of belief about a hypothesis at any single moment can be represented as a specific point along some internal scale of evidence, as illustrated in the left panel of Figure 2. This belief state changes moment by moment from one location to another on the evidence scale, carving out a trajectory as illustrated in the top panel of Figure 1. At the point in time when you are asked to report your belief, you simply read out the location on the evidence scale that existed before you were asked. That is, the report is determined by the preexisting location of the belief state. This classical view of belief change is typically formalized as a Markov process (Bhattacharya & Waymire, 1990), which describes a probability distribution over the evidence scale at each moment in

FIGURE 1 Markov and quantum random walk models generate diverging predictions for how evidence evolves over time and how measurements like decisions interact with subsequent responses (panels contrast evolution with and without measurement)


time (shown in the left panel of Figure 3). This probability distribution represents a modeler's uncertainty about the location of the state that exists for a decision-maker at any point in time, rather than the decision maker's inherent uncertainty about the location of his or her own state. A key assumption of Markov processes is that the probability distribution at the next moment in time only depends on the previous probability distribution and the dynamics of the state.
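A minimal numerical sketch of this Markov picture can use the 101-level evidence scale of Figure 3: a tridiagonal one-step transition matrix shifts probability mass up or down one level at a time, and the modeler's distribution evolves by repeated matrix multiplication. The step probabilities below are illustrative, not the parameter values used in the figure; mass that would step off the scale simply stays at the boundary.

```python
import numpy as np

n = 101                     # evidence levels 0, 1, ..., 100
p_up, p_down = 0.6, 0.4     # illustrative one-step transition probabilities

# Column-stochastic transition matrix T[j, i] = P(next level j | current i).
T = np.zeros((n, n))
for i in range(n):
    T[min(i + 1, n - 1), i] += p_up     # step up (clamped at 100)
    T[max(i - 1, 0), i] += p_down       # step down (clamped at 0)

p = np.zeros(n)
p[n // 2] = 1.0             # start with all mass at mid-scale (level 50)
for _ in range(50):         # evolve the distribution for 50 time steps
    p = T @ p               # Markov (Kolmogorov forward) update
```

Each multiplication by T implements the Markov assumption: the next distribution depends only on the current one. With p_up > p_down, the mass drifts toward higher evidence levels while spreading out, much like the successive curves in the left panel of Figure 3.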

An alternative nonclassical dynamical view of evidence accumulation asserts that your belief about a hypothesis at any single moment is not located at a specific point on the mental evidence scale. Instead, at any moment, it is superposed with some potential (called an amplitude) of realization across the scale, as illustrated by the shades of gray spread across the scale in the right panel of Figure 2. As this superposition changes, it forms a wave that flows across levels of support over time. A representation of this wave, where darker regions correspond to greater squared probability amplitudes (corresponding to a greater likelihood of observing the state in that location), is shown in the bottom panel of Figure 1. When a person is asked to report their belief, a location of the evidence must be identified from this superposed state. This collapses the wave to a specific location or range of levels of support, depending on what type of measurement (binary choice, confidence judgment, or some other response) is applied. This nonclassical view of belief change has been formalized as a quantum process (Gudder, 1979), which describes the amplitude distribution over the evidence scale across time as shown in the right panel of Figure 3 (but using squared amplitudes). Like the Markov process, it is based on the assumption that the amplitude distribution at the next moment in time only depends on the previous amplitude distribution.
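The quantum counterpart can be sketched on the same 101-level scale: each level carries a complex amplitude, the state evolves under a unitary operator U(t) = exp(-iHt) generated by a tridiagonal Hamiltonian, and response probabilities come from squared amplitudes. The Hamiltonian structure and parameter values below are illustrative assumptions, not those used to produce Figure 3.

```python
import numpy as np

n = 101
potential, coupling = 1.0, 1.0   # illustrative Hamiltonian parameters

# Tridiagonal Hamiltonian: the diagonal acts like a potential (a
# drift-like term); the off-diagonals couple neighboring evidence
# levels (a diffusion-like term).
H = (np.diag(potential * np.arange(n) / n)
     + coupling * (np.eye(n, k=1) + np.eye(n, k=-1)))

# Unitary evolution U = exp(-i H t), computed from the eigensystem of
# the real symmetric H (avoids needing scipy.linalg.expm).
t = 2.0
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi = np.zeros(n, dtype=complex)
psi[n // 2] = 1.0                # start with all amplitude at mid-scale
psi_t = U @ psi                  # evolve the superposition
probs = np.abs(psi_t) ** 2       # squared amplitudes give response probabilities
```

Unlike the Markov update, U evolves amplitudes rather than probabilities, so the state spreads in the oscillatory, wave-like fashion visible in the right panel of Figure 3; a measurement would then collapse psi_t to the observed level or range of levels.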

Both Markov and quantum processes are stochastic processes. However, the probabilities used in Markov processes (Figure 3, left panel) and the squared amplitudes used in quantum processes (Figure 3, right panel) are conceptually quite different.

 

 

 

FIGURE 2 Diagram of a state representation of a Markov and a quantum random walk model. In the Markov model, evidence (shaded state) evolves over time by moving from state to state, occupying one definite evidence level at any given time. In the quantum model the decision-maker is in an indefinite evidence state, with each evidence level having a probability amplitude (shadings) at each point in time. (Both panels span an evidence scale from 0%, 'certain left', to 100%, 'certain right'.)

FIGURE 3 Illustration of Markov (left) and quantum (right) evolution of probability distributions over time. The horizontal axis represents 101 belief states associated with subjective evidence scale values ranging from 0 to 100 in one-unit steps. The vertical axis represents probability corresponding to each evidence level. The separate curves moving from left to right represent increasing processing time (t = 0, .25, .50, .75, and 1.0 s; Markov panel: alpha = 100, beta = 150; quantum panel: drift = 300, diff = 35).