
J.B. Broekaert, et al. Cognitive Psychology 117 (2020) 101262

$$ K_W = \begin{pmatrix} -1 & \beta_W \\ 1 & -\beta_W \end{pmatrix}, \qquad K_L = \begin{pmatrix} -1 & \beta_L \\ 1 & -\beta_L \end{pmatrix} \qquad (20) $$

where β_W, β_L ≥ 0, and where in each column all rows need to add up to zero for the conservation of probability. Depending on the magnitude of β, these transition rate matrices will either increase action potential for Gamble (β > 1) or increase action potential for Stop (β < 1).11
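These two-state dynamics can be checked numerically. The following is a minimal sketch, assuming an illustrative β = 2 (not a fitted value); `scipy` is used only for the matrix exponential, and the closed form follows from K² = −(1+β)K:

```python
import numpy as np
from scipy.linalg import expm

beta = 2.0
K = np.array([[-1.0,  beta],
              [ 1.0, -beta]])          # Eq. (20) form; columns sum to zero

# Propagator T(t) = exp(Kt); since K^2 = -(1+beta) K, a closed form exists.
t = 0.7
T_num = expm(K * t)
T_closed = np.eye(2) + K * (1.0 - np.exp(-(1.0 + beta) * t)) / (1.0 + beta)
assert np.allclose(T_num, T_closed)

# Columns of T(t) sum to one, so probability is conserved.
assert np.allclose(T_num.sum(axis=0), 1.0)

# Asymptotically the state tends to (beta/(1+beta), 1/(1+beta)),
# independent of the initial (Gamble, Stop) probabilities.
p_inf = expm(K * 50.0) @ np.array([0.9, 0.1])
print(p_inf)
```

With β = 2 the stationary state is (2/3, 1/3), so the Gamble component dominates, matching the β > 1 case described above.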

The encompassing transition rate matrix for Win and Lose is given by

$$ K_{W\&L} = \begin{pmatrix} -1 & \beta_W & 0 & 0 \\ 1 & -\beta_W & 0 & 0 \\ 0 & 0 & -1 & \beta_L \\ 0 & 0 & 1 & -\beta_L \end{pmatrix} \qquad (22) $$

which acts separately on each of the subspaces, since the upper-left matrix quadrant only engages the first two (Win) probability components of a vector to produce the first two components of the output vector and, similarly, the lower-right matrix quadrant only engages the last two (Lose) probability components to produce the last two components of the output. Under such a transition matrix, Win-related and Lose-related belief changes are fully independent.

The rate of transfer, parameter β, between the vector components is made to depend on the utility, Eqs. (16), of the gamble by a logistic function

$$ \beta_W = s\,(1 + e^{-u_W(X)})^{-1}, \qquad \beta_L = s\,(1 + e^{-u_L(X)})^{-1} \qquad (23) $$

where X ∈ [.5, 14] is the parameter for the size of the payoff. The parameter s controls the sensitivity to the linear utility expression, s ≥ 0.
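As a concrete illustration of Eq. (23), the sketch below evaluates the logistic transfer rates for a hypothetical linear utility u(X) = d0 + d1·X; the intercept and slope values are illustrative only, not fitted values from the paper:

```python
import math

def beta_logistic(u, s):
    # Eq. (23): beta = s * (1 + exp(-u))^(-1), bounded in (0, s)
    return s / (1.0 + math.exp(-u))

def u_linear(X, d0, d1):
    # hypothetical linear utility of the payoff size X
    return d0 + d1 * X

s = 2.0
for X in (0.5, 4.0, 14.0):
    print(X, round(beta_logistic(u_linear(X, -1.0, 0.3), s), 3))
```

The rate stays between 0 and s, so larger utilities push the transfer rate, and hence the asymptotic Gamble probability, upward.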

The assumed cognitive process mixes the W and L beliefs under all first-stage gamble outcome conditions, but most extensively so under the Unknown first-stage outcome condition. In the latter case the uncertainty about incurring a loss or being endowed a win engenders an uncertainty about Gambling or Stopping, but also in the Known first-stage outcome gambles a mixing of Win and Lose beliefs will occur as a contextual effect. Therefore a mixing operator was implemented to cause an attention switching between Win and Lose and, concurrently, a switching of the decision between Gambling and Stopping. In practice, this operator thus transfers action potential from 'Gamble on Win' (p_WG) to 'Stop on Lose' (p_LS) and from 'Stop on Win' (p_WS) to 'Gamble on Lose' (p_LG). These two redistributions are respectively implemented by the two matrices

$$ K_{Mix} = \gamma \left[ \begin{pmatrix} -1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & -1 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \right] \qquad (24) $$

in which the first matrix causes a transfer between p_WG and p_LS and the second matrix a transfer between p_WS and p_LG. The magnitude of the mixing process is monitored by the parameter γ.

The full transition rate matrix K, which implements the cognitive process for the Win, Lose and Unknown conditions, is then composed of all four matrices together:

$$ K = \begin{pmatrix} -1-\gamma & \beta_W & 0 & \gamma \\ 1 & -\beta_W-\gamma & \gamma & 0 \\ 0 & \gamma & -1-\gamma & \beta_L \\ \gamma & 0 & 1 & -\beta_L-\gamma \end{pmatrix} \qquad (25) $$

The time evolution driving matrix for the belief-action state under the transition rate K is the transition matrix T, which is a solution of the Kolmogorov forward equation (Busemeyer & Bruza, 2012):

$$ T(t) = e^{Kt} \qquad (26) $$

The belief-action state at time t and under condition C of the initial-stage gamble outcome is then given by

$$ \psi_C(t) = T(t)\,\psi(0, C). \qquad (27) $$

From the belief-action state at the moment of decision the probability for taking the second-stage gamble is then obtained by selecting and adding the gamble components, Eq. (19). This can be operationally expressed using the selection matrix MGamble

11 For a two-dimensional Markov model with transition rate matrix, Eq. (20), the propagator T(t), Eq. (26), can easily be calculated analytically:

$$ T(t) = I + \frac{K}{1+\beta}\left(1 - e^{-(1+\beta)t}\right). \qquad (21) $$

One can verify that, independent of the initial belief-action state, the time-asymptotic state is (β/(1+β), 1/(1+β))^T. Hence for β > 1 the first component (Gamble) dominates the second (Stop), and vice versa for β < 1.


$$ M_{Gamble} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \qquad (28) $$

to produce the gamble probability

 

$$ p(\text{gamble} \mid X, \text{Cond}) = \left| M_{Gamble}\, T(\pi/2)\, \psi(0, C) \right|_1, \qquad (29) $$

where |·|_1 is shorthand notation for summing the (absolute) values of the vector components. We have fixed the time of measurement to the conventional choice t = π/2, a standard procedure that is also applied in the quantum-like model (Busemeyer & Bruza, 2012). This procedure – of setting a conventional time of measurement – is typically applied in a Markov dynamical approach in order to avoid independence of the final belief-action state from the initial belief-action state. (One can easily check this independence from initial conditions at larger time scales, Eq. (21).)
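The Markov pipeline of Eqs. (25)–(29) can be sketched end to end. The rate matrix below follows the reconstruction K = K_{W&L} + K_Mix given above, and the parameter values are arbitrary illustrative choices, not fitted values:

```python
import numpy as np
from scipy.linalg import expm

def K_full(bW, bL, g):
    # Eq. (25), component order (WG, WS, LG, LS): block Win/Lose
    # dynamics (Eq. (22)) plus the mixing matrix (Eq. (24)).
    return np.array([
        [-1.0 - g,  bW,       0.0,       g      ],
        [ 1.0,     -bW - g,   g,         0.0    ],
        [ 0.0,      g,       -1.0 - g,   bL     ],
        [ g,        0.0,      1.0,      -bL - g ],
    ])

def p_gamble(psi0, bW, bL, g, t=np.pi / 2):
    # Eqs. (26)-(29): evolve, then sum the Gamble components (WG, LG).
    K = K_full(bW, bL, g)
    assert np.allclose(K.sum(axis=0), 0.0)   # probability conservation
    psi_t = expm(K * t) @ psi0
    return psi_t[0] + psi_t[2]

psi0_U = np.full(4, 0.25)                    # uniform state, Eq. (31)
p = p_gamble(psi0_U, bW=1.5, bL=0.8, g=0.3)
assert 0.0 <= p <= 1.0
print(round(p, 3))
```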

For each of the two periods for decision making in each flow order (K-to-U and U-to-K in Fig. 2) a separate evolved belief-action state will be obtained. Each of these evolved states will differ due to their respective initial belief-action states. Therefore, even if the transition rate matrix K is the same in both flow orders and for all outcome conditions, the theoretical gamble probabilities will be different.

In the first period, the initial belief-action states under a Win and a Lose condition of the first-stage outcome are formally given by the vectors

$$ \psi_{0,W} = \left( \tfrac{\alpha}{2},\ \tfrac{\alpha}{2},\ \tfrac{1-\alpha}{2},\ \tfrac{1-\alpha}{2} \right)^T, \qquad \psi_{0,L} = \left( \tfrac{1-\alpha}{2},\ \tfrac{1-\alpha}{2},\ \tfrac{\alpha}{2},\ \tfrac{\alpha}{2} \right)^T \qquad (30) $$

where α is a weight parameter, 0 ≤ α ≤ 1. Should α = 1, then the states ψ_{0,W} and ψ_{0,L} are precisely allocated to their Win and Lose components respectively, while at the same time for both states a uniform probability to Gamble or Stop is assumed. Due to the context effect from the other gambles in the block, regulated by α, these belief-action states respectively express that Win or Lose information only partially determines the belief state in the block where the outcome is Known. This ambiguous belief-action state reflects incompletely registered information notwithstanding unambiguous Win, or Lose, information in the gamble description. It occurs because of the gamble's embedding in the mixed context of the Win-outcome and Lose-outcome block, and implements an effect of contextual anchoring which compounds information of the present gamble outcome condition with the outcome conditions of previously taken gambles within the same block.

The initial belief-action state – in the first period – on an Unknown outcome of the first-stage gamble is

$$ \psi_{0,U} = \left( \tfrac{1}{4},\ \tfrac{1}{4},\ \tfrac{1}{4},\ \tfrac{1}{4} \right)^T \qquad (31) $$

which expresses the belief-action state with uniformly weighted Win and Lose outcomes and is similarly indifferent to either the Gamble or the Stop decision due to the lack of previously experienced gambles. The state is caused by the uncertainty due to missing information on the first-stage outcome in the Unknown-outcome condition in the first period.
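The first-period initial states of Eqs. (30) and (31) are simple to construct and check; a minimal sketch with an illustrative α:

```python
import numpy as np

def psi0_W(alpha):
    # Eq. (30), order (WG, WS, LG, LS): weight alpha on the Win half,
    # uniform over Gamble/Stop within each half
    return np.array([alpha, alpha, 1.0 - alpha, 1.0 - alpha]) / 2.0

def psi0_L(alpha):
    return np.array([1.0 - alpha, 1.0 - alpha, alpha, alpha]) / 2.0

psi0_U = np.full(4, 0.25)  # Eq. (31): uniform over all four components

for psi in (psi0_W(0.8), psi0_L(0.8), psi0_U):
    assert np.isclose(psi.sum(), 1.0)  # each is a valid probability vector

# alpha = 1 allocates all belief to the stated outcome condition:
assert np.allclose(psi0_W(1.0), [0.5, 0.5, 0.0, 0.0])
```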

In the second period similar effects of context are at play, but now due to the carry-over effect the initial state in the second period will depend also on the participant’s history of gambling in the first period. The initial belief-action states for Win and Lose first-stage outcome conditions will also contain residual belief support for the opposite condition. The magnitude of the context effect will be changed by the carry-over effect

$$ \psi'_{0,W} = \left( \tfrac{\mu}{2},\ \tfrac{\mu}{2},\ \tfrac{1-\mu}{2},\ \tfrac{1-\mu}{2} \right)^T, \qquad \psi'_{0,L} = \left( \tfrac{1-\mu}{2},\ \tfrac{1-\mu}{2},\ \tfrac{\mu}{2},\ \tfrac{\mu}{2} \right)^T \qquad (32) $$

where μ is a weight parameter, 0 ≤ μ ≤ 1. The weighting parameter μ in the second period differs from α in the first period.

Because of previous exposure to Known-outcome conditioned first-stage gambles, the initial belief-action state on Unknown conditioned first-stage gambles is no longer uniform:

 

$$ \psi'_{0,U} = \kappa\, \psi_{0,W}(\mu) + (1-\kappa)\, \psi_{0,L}(\mu). \qquad (33) $$

This is a belief-action state weighted by κ, 0 ≤ κ ≤ 1, on the Win and Lose states, and is caused by a carry-over effect of the belief tendencies about the two possible outcomes of Win and Lose in the first period.

The Markov model processes the belief-action state for each outcome condition, payoff and period by evolving from the appropriate initial state. Each time a new second-stage gamble is proposed the participant will thus first regain a dedicated initial belief-action state. In our experimental paradigm, the participant is assumed to do so, Eqs. (32) and (33), for each of the five payoff values and for each of the three types of first-stage outcome condition {W, L, U}. During the experiment each participant thus produces fifteen final belief-action states which lead up to the appropriate gamble decisions according to the first-stage gamble outcome condition, payoff size and flow order.

Parametrization. The Markov model requires four dynamical parameters for the utility expression of the second-stage gambles. Two parameters – intercept and slope – for each condition of Win and Lose express the different motivational utility of the two conditions, denoted by {δ_0W, δ_1W} and {δ_0L, δ_1L}. The effect of this utility difference on the decision is controlled by the sensitivity parameter {s} in the logistic form, Eqs. (23). The 'coupled-switching' dynamics that implements the attention switching from Win to Lose and the reversal of the related Gamble or Stop decision is controlled by the strength of the mixing parameter {γ}. The context effect on


the belief-action state is implemented by the weight parameters {α, μ} on the Win and Lose states, for the first and second period respectively.

Finally, the carry-over effect from the first to the second period on the U-condition belief-action state is implemented by the weight parameter {κ}.

The Markov model therefore relies on 9 parameters to cover the process dynamics and the initial beliefs in both flow orders, both periods and all payoffs, amounting to providing theoretical values to 30 data points. In the Supplementary Materials section (SM 1) the full temporal evolution description of the belief-action state is provided for the full sample of participants who passed the attention test.

4.3. The quantum-like model

The quantum-like model applies a state vector to represent the belief-action state of the participant but instead of having probability components like in the Markov approach, it has probability amplitude components. These components can be complex valued and only lead to probabilities after taking the squared norm. In Appendix A, an elementary introduction to the application of the quantum formalism in cognition is given, which also provides an exposition of its close resemblance to the Markov formalism.

The similarity with the Markov model allows a fairly straightforward formulation of the quantum-like model that runs parallel to the previous section on the Markov model and only requires some clarification for a few distinct features.

The minimal representation of the gamble paradigm crosses the conditions for Win or Lose and the decision to Gamble or Stop. The associated belief-action state will be denoted as

$$ \psi = (\psi_{WG},\ \psi_{WS},\ \psi_{LG},\ \psi_{LS})^T, \qquad (34) $$

where the amplitude components represent the respective belief support for the first-stage gamble outcome condition combined with the action-potential for the different gamble decisions in the second-stage gamble. In the quantum-like model the probability for the participant to take the second-stage gamble is obtained by adding the modulus squared of the components for 'Gamble in the second stage and Won-first-stage belief' and 'Gamble in the second stage and Lost-first-stage belief':

$$ p(g) = |\psi_{WG}|^2 + |\psi_{LG}|^2. \qquad (35) $$

In general, since the belief-action state covers the full event space for the decisions Gamble and Stop and the categories Win and Lose, the corresponding probabilities add up to unity:

$$ 1 = |\psi_{WG}|^2 + |\psi_{LG}|^2 + |\psi_{WS}|^2 + |\psi_{LS}|^2, \qquad (36) $$

which is the normalization of the belief-action state vector. In the quantum-like model the belief-action state at the moment of decision is realized through a measurement operation. In particular, the outcome state for the decision to gamble is obtained through the corresponding projector M_Gamble for the question 'Take the second-stage gamble?' and the projector M_Stop for 'Stop the second-stage gamble?'.12

$$ M_{Gamble} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad M_{Stop} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \qquad (37) $$

Notice that formally this projection matrix is identical to the selection matrix in the Markov model, Eq. (28). The modulus square of the projected vector for the measurement ‘Take the second-stage gamble?’ then gives the gamble probability, Eq. (35).

In the quantum-like model the process of change of the belief-action state occurs at the level of the probability amplitudes. The transforming effect of incoming information is controlled by the Hamiltonian operator H. The specific parameter positions in this matrix determine the transfer of probability amplitude between the different vector components of the belief-action state. In the quantum-like case – due to the original relation of the Hamiltonian operator to the real-valued 'energy' of a system – the operator for change has to be Hermitian. This means the component H_ij, which transfers probability amplitude from vector component with index j to i, has to be the complex conjugate of the component H_ji, which transfers probability amplitude from component i to j. The Hermiticity requirement is expressed as H† = H. In the two-stage gamble paradigm the main factor of transfer in the belief-action state depends on the condition of the outcome of the initial gamble. This information will redistribute the Gamble and Stop components in the Win subspace and also the Gamble and Stop components in the Lose subspace. The transformation within the subspace of Win and the subspace of Lose requires the two respective Hamiltonian submatrices, satisfying the Hermitian condition:13

12 Note that a projector is any matrix M which is idempotent, M² = M. The projection occurs on the span of its eigenvectors. See also Appendix A for an elementary introduction to quantum modeling.

13 A two-dimensional quantum-like model with Hamiltonian matrix, Eq. (39), allows one to analytically calculate the unitary propagator U(t), Eq. (44) (Broekaert et al., 2016):

$$ U(t) = \cos\!\left(\sqrt{1+\beta^2}\, t\right) I - \frac{i\,\sin\!\left(\sqrt{1+\beta^2}\, t\right)}{\sqrt{1+\beta^2}}\, H. \qquad (38) $$


$$ H_W = \begin{pmatrix} 1 & \beta_W \\ \beta_W & -1 \end{pmatrix}, \qquad H_L = \begin{pmatrix} 1 & \beta_L \\ \beta_L & -1 \end{pmatrix} \qquad (39) $$

where β_W and β_L are real-valued. The encompassing Hamiltonian, with H_W in the upper-left matrix quadrant and with H_L in the lower-right quadrant, acts separately on the subspaces for Win and Lose:

$$ H_{W\&L} = \begin{pmatrix} 1 & \beta_W & 0 & 0 \\ \beta_W & -1 & 0 & 0 \\ 0 & 0 & 1 & \beta_L \\ 0 & 0 & \beta_L & -1 \end{pmatrix} \qquad (40) $$

This type of Hamiltonian would keep Win-related and Lose-related belief amplitudes fully independent.
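The closed-form propagator of footnote 13, Eq. (38), can be verified numerically for the two-dimensional Hamiltonians of Eq. (39). A sketch assuming an illustrative β = 1.3:

```python
import numpy as np
from scipy.linalg import expm

beta = 1.3
H = np.array([[1.0,  beta],
              [beta, -1.0]])            # Eq. (39) form; Hermitian
w = np.sqrt(1.0 + beta**2)              # since H @ H == (1 + beta^2) * I

t = 0.9
U_num = expm(-1j * H * t)               # Eq. (44)
U_closed = np.cos(w * t) * np.eye(2) - 1j * np.sin(w * t) / w * H  # Eq. (38)
assert np.allclose(U_num, U_closed)
assert np.allclose(U_num @ U_num.conj().T, np.eye(2))  # unitarity
```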

As in the Markov model, the magnitude of the transfer process will depend on the utility of the second-stage gamble. In contrast to the driving parameters in the transition rate matrix of the Markov process, Eq. (23), the driving parameters in the Hamiltonian can be positive or negative valued. More generally we could also parametrise the Hamiltonian with complex-valued parameters while assuring Hermiticity. To accommodate both signs, the parameters are modeled by a hyperbolic-tangent (version of the logistic) function of the linear utility expression:

$$ \beta_W = s\left(2\,(1 + e^{-u_W(X)})^{-1} - 1\right), \qquad \beta_L = s\left(2\,(1 + e^{-u_L(X)})^{-1} - 1\right) \qquad (41) $$

with X ∈ [.5, 14] and with scaling parameter s.
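Unlike the logistic rates of Eq. (23), the parametrization in Eq. (41) is sign-changing; algebraically it reduces to s·tanh(u/2), which the sketch below checks:

```python
import math

def beta_signed(u, s):
    # Eq. (41): s * (2 * (1 + exp(-u))^(-1) - 1)
    return s * (2.0 / (1.0 + math.exp(-u)) - 1.0)

s = 1.5
# Identity: 2/(1 + e^(-u)) - 1 == tanh(u/2)
assert abs(beta_signed(3.0, s) - s * math.tanh(1.5)) < 1e-12
# Negative utilities give negative driving parameters, positive give positive:
assert beta_signed(-2.0, s) < 0.0 < beta_signed(2.0, s)
```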

The assumed cognitive process – similar to that of the Markov model – will mix the W and L beliefs under all first-stage gamble outcome conditions. In the Unknown first-stage outcome condition the uncertainty about loss or win engenders an uncertainty about Gambling or Stopping, but also in the Known first-stage outcome gambles a mixing of Win and Lose beliefs will occur due to a contextual effect in the block. The mixing operator will cause attention switching between Win and Lose beliefs to happen concurrently with switching decisions for Gambling or Stopping. In practice the mixing Hamiltonian thus transfers action-potential from 'Gamble on Win' (ψ_WG) to 'Stop on Lose' (ψ_LS) and from 'Stop on Win' (ψ_WS) to 'Gamble on Lose' (ψ_LG). The mixing dynamics corresponds to an explorative attention switching between potential outcomes of the gamble, in which a switch between Win and Lose belief always correlates with a switch in the decision between Gamble and Stop in the second-stage gamble. These two correlated attention-switching processes are implemented by

$$ H_{Mix} = \gamma \left[ \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \right] \qquad (42) $$

in which the first matrix controls the transfer between ψ_WG and ψ_LS and the second matrix controls the transfer between ψ_WS and ψ_LG, and where γ monitors the magnitude of the mixing process.

 

 

The full Hamiltonian matrix H, which implements the cognitive process for the Win, Lose and Unknown conditions, is then composed of all four matrices together:

$$ H = \begin{pmatrix} 1 & \beta_W & 0 & \gamma \\ \beta_W & -1 & \gamma & 0 \\ 0 & \gamma & 1 & \beta_L \\ \gamma & 0 & \beta_L & -1 \end{pmatrix}. \qquad (43) $$

The temporal change of the belief-action state is produced by the unitary evolution operator U, which is itself driven by the Hamiltonian operator H. The unitary operator satisfies the Schrödinger equation (Busemeyer & Bruza, 2012), in accordance with the dynamics of quantum theory:

$$ U(t) = e^{-iHt}. \qquad (44) $$

In the quantum-like model, the belief-action state at time t and under condition C of the initial-stage gamble outcome is given by

$$ \psi_C(t) = U(t)\, \psi(0, C). \qquad (45) $$

The probability for taking the second-stage gamble under condition C is then obtained from the evolved belief-action state at the time of measurement, by projecting with M_Gamble for 'taking the second-stage gamble' and taking the modulus squared of that outcome:

$$ p(\text{gamble} \mid X, \text{Cond}) = \left\| M_{Gamble}\, U(\pi/2)\, \psi_{0,C} \right\|^2. \qquad (46) $$

The time of measurement is fixed to the conventional choice t = π/2, corresponding to the choice of measurement time in the Markov model, Eq. (29). Since the time-scale of the evolution, Eq. (45), for the cognitive realm is undefined and since no response-time observations are involved, a designated time of measurement can be fixed by convention. Notice too that both in the Markov model

(footnote continued)

In contrast to the Markov propagator, the oscillatory evolution of the quantum-like propagator requires a choice of measurement time that remains within the system’s period.


and in the quantum-like model the optimized parameter fitting will be adapted to this conventional time choice.
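Eqs. (43)–(46) can be sketched end to end in the same way as the Markov pipeline. The Hamiltonian below follows the reconstruction H = H_{W&L} + H_Mix, and the parameter values are arbitrary illustrative choices, not fitted values:

```python
import numpy as np
from scipy.linalg import expm

def H_full(bW, bL, g):
    # Eq. (43), component order (WG, WS, LG, LS)
    return np.array([
        [1.0,  bW,   0.0,  g  ],
        [bW,  -1.0,  g,    0.0],
        [0.0,  g,    1.0,  bL ],
        [g,    0.0,  bL,  -1.0],
    ])

M_gamble = np.diag([1.0, 0.0, 1.0, 0.0])      # projector of Eq. (37)

def p_gamble(psi0, bW, bL, g, t=np.pi / 2):
    # Eq. (46): p = || M_Gamble U(t) psi0 ||^2
    H = H_full(bW, bL, g)
    assert np.allclose(H, H.conj().T)          # Hermiticity, H† = H
    psi_t = expm(-1j * H * t) @ psi0
    return float(np.sum(np.abs(M_gamble @ psi_t) ** 2))

psi0_U = np.full(4, 0.5)                       # Eq. (47): amplitudes 1/2
p = p_gamble(psi0_U, bW=0.6, bL=-0.4, g=0.2)
assert 0.0 <= p <= 1.0
print(round(p, 3))
```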

For each of the two periods for decision making in each flow order (K-to-U and U-to-K) a separate final belief-action state will be obtained. These final states will differ due to their respective initial belief-action states. Since the quantum-like model uses vectors of probability amplitudes – which require modulus squaring to obtain probabilities – it is more transparent to write the 4-dimensional vectors as a tensor product of two 2-dimensional vectors, the first one for the category Win/Lose and the second one for the decision Gamble/Stop (see Supplementary Materials, Eq. (A2) for details). In the first period, the initial belief-action states on the Win, respectively Lose, outcome condition of the first-stage gamble are formally given by the vectors;14

$$ \psi_{0,W} = \begin{pmatrix} \sqrt{\alpha} \\ \sqrt{1-\alpha} \end{pmatrix} \otimes \tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \psi_{0,L} = \begin{pmatrix} \sqrt{1-\alpha} \\ \sqrt{\alpha} \end{pmatrix} \otimes \tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} $$

where α is a weight parameter, 0 ≤ α ≤ 1. Should α = 1, then these respective states are precisely allocated to the Win and Lose components, while the probability to Gamble or Stop for each of them is uniformly .5, as can easily be verified by squaring the entries in the Gamble/Stop vector. In the block with Known outcome conditions the participant is exposed to both Win and Lose outcome gambles. These two conditions create a mutual context for each gamble. The context effect will be present when α < 1 and expresses the idea that the information on the condition of the first-stage gamble is only partially integrated into the belief-action state.

The initial belief-action state – in the first period – in the Unknown outcome case of the first-stage gamble is expressed as

$$ \psi_{0,U} = \tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} \otimes \tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} \qquad (47) $$

which reveals that the belief support for Win or Lose is uniform and also the action-potential for the Gamble or Stop decision is indifferent due to lack of any prior experience with gambles. The state is caused by the uncertainty due to missing information on the first-stage outcome in the Unknown-outcome condition.

In the second period the context effect is modified by the carry-over effect, hence the initial state now depends on the block’s gamble condition as well as on the participant’s history of the first period. The initial belief-action states for Win and Lose conditions will reflect residual belief support for the opposite condition modified by the carry-over from the previous period

$$ \psi'_{0,W} = \begin{pmatrix} \sqrt{\mu} \\ \sqrt{1-\mu} \end{pmatrix} \otimes \tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \psi'_{0,L} = \begin{pmatrix} \sqrt{1-\mu} \\ \sqrt{\mu} \end{pmatrix} \otimes \tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \qquad (48) $$

Since the initial belief-action state in the second period is influenced by the first block's condition, in the Unknown condition the belief-action state will be a superposition of the two states for the possible outcome conditions W and L of the Known-outcome conditioned gambles block. In particular, the quantum formalism allows one to weight both conditions equally but also to include a relative complex phase between the two states for Win and Lose. The sign and amplitude of this phase allow for constructive or destructive interference between the two states and thus bring about a subjective tendency towards either of the known outcome beliefs:

$$ \psi'_{0,U} = \left( \psi_{0,W}(\mu) + e^{i\phi}\, \psi_{0,L}(\mu) \right) / N, \qquad (49) $$

where the normalization of the initial state requires N = \sqrt{2 + 4\sqrt{\mu(1-\mu)}\,\cos\phi}. From the quantum-like perspective, the potential for

interference of belief-action states indicates a susceptibility for an amplifying or reducing relation between beliefs. In the event of 'decoherence' between these belief-action states – by their reduction to separate contexts – interference between them will be diminished or impeded. In the second period the probability for taking the second-stage gamble under any of the conditions {W, L, U} is again obtained according to Eq. (46), by projecting the evolved belief-action state using M_Gamble, the projector for 'taking the second-stage gamble', and taking the modulus squared.
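The interference in the superposed second-period state can be made concrete. The sketch below builds the tensor-product states of footnote 14 with `np.kron` and checks the phase-dependent normalization; the μ and φ values are illustrative only:

```python
import numpy as np

def psi0_cond(w):
    # Footnote 14 form: (sqrt(w), sqrt(1-w))^T ⊗ (1, 1)^T / sqrt(2)
    win_lose = np.array([np.sqrt(w), np.sqrt(1.0 - w)])
    gamble_stop = np.array([1.0, 1.0]) / np.sqrt(2.0)
    return np.kron(win_lose, gamble_stop)

mu, phi = 0.7, np.pi / 3
psi_W, psi_L = psi0_cond(mu), psi0_cond(1.0 - mu)

raw = psi_W + np.exp(1j * phi) * psi_L         # Eq. (49), before normalizing
N = np.sqrt(2.0 + 4.0 * np.sqrt(mu * (1.0 - mu)) * np.cos(phi))
assert np.isclose(np.linalg.norm(raw), N)      # N depends on the phase phi

psi_U2 = raw / N
assert np.isclose(np.linalg.norm(psi_U2), 1.0)
```

For φ = 0 the two states interfere constructively (larger N); for φ = π the overlap 2√(μ(1−μ)) is subtracted instead, giving destructive interference.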

In the Supplementary Materials (SM 2), a graph of the time development of the probability of the decision process for the second-stage gamble shows the build-up of the gamble probabilities emerging from each of the initial belief-action states.

Parametrization. The quantum-like model requires a parametrization that closely resembles that of the Markov model. It requires the four dynamical parameters for the driving utility difference of the second-stage gamble for the two conditions of Win and Lose, namely {δ_0W, δ_1W} and {δ_0L, δ_1L}. Also the effect of the utility difference on the decision is controlled by a sensitivity

14 The tensor-product notation is used here to distinguish more easily the effect of the parameter α on the belief support in the Win/Lose evaluation. The two-dimensional vector for Win/Lose appears as the left factor of the tensor product; the right factor is the two-dimensional vector for Gamble/Stop. Both subspace vectors can be blended into the four-dimensional vector according to the usual rule

$$ \begin{pmatrix} a \\ b \end{pmatrix} \otimes \begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} ac \\ ad \\ bc \\ bd \end{pmatrix}. $$
