
Balanced Quantum-Like Model

for Decision Making

Andreas Wichert¹,² and Catarina Moreira¹,²

¹ Department of Computer Science and Engineering, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa, Porto Salvo, Portugal
andreas.wichert@tecnico.ulisboa.pt

² School of Business, University of Leicester, University Road, Leicester LE1 7RH, UK
cam74@le.ac.uk

Abstract. Clues from psychology indicate that human cognition is not only based on classical probability theory as explained by Kolmogorov's axioms but additionally on quantum probability. We explore the relation between the law of total probability and its violation, resulting in the law of total quantum probability. The violation results from an additional interference that influences the classical probabilities. Starting from this exploration we introduce a balanced Bayesian quantum-like model that is based on probability waves. The law of maximum uncertainty indicates how to choose a possible phase value of the wave, resulting in a meaningful probability value.

Keywords: Quantum cognition · Law of total probability · Probability waves · Decision making

1 Introduction

Clues from psychology indicate that human cognition is not only based on traditional probability theory as explained by Kolmogorov's axioms but additionally on quantum probability [48, 14]. For example, humans when making decisions violate the law of total probability. The emerging field that studies the corresponding models is called quantum cognition. In this work, we introduce a balanced Bayesian quantum-like model that is based on probability waves. The law of maximum uncertainty indicates how to choose a possible phase value of the wave, resulting in a meaningful probability value. We demonstrate the model and the law on several experiments from the literature concerning the prisoner's dilemma game and the two-stage gambling game. We compare the results with previous works that deal with predictive quantum-like models for decision making. The results obtained show that the model can make predictions regarding human decision-making with a meaningful interpretation.

© Springer Nature Switzerland AG 2019
B. Coecke and A. Lambert-Mogiliansky (Eds.): QI 2018, LNCS 11690, pp. 79–90, 2019. https://doi.org/10.1007/978-3-030-35895-2_6


1.1 Prisoner's Dilemma Game and Probability Waves

In the prisoner's dilemma game, there are two prisoners, prisoner x and prisoner y. They have no means of communicating with each other. Each prisoner is offered a bargain by the prosecutors: she can betray the other one by testifying against her (defect), or she can refuse the deal and cooperate with the other one by remaining silent [18].

Several psychological experiments were made assuming that the probability of prisoner x cooperating is p(x) = 0.5 and the probability of defecting is p(¬x) = 0.5. The participants of the experiment were asked three different questions.

1. What is the probability that prisoner y defects given that x defects, p(¬y|¬x)?

2. What is the probability that prisoner y defects given that x cooperates, p(¬y|x)?

3. What is the probability that prisoner y defects given that there is no information about whether prisoner x cooperates or defects? This can be expressed by

p(¬y) = p(¬y, x) + p(¬y, ¬x) = p(¬y|x) · p(x) + p(¬y|¬x) · p(¬x). (1)
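Equation (1) can be checked numerically; a minimal sketch using the conditional values from experiment (a) in Table 1 below:

```python
# Eq. (1): law of total probability with p(x) = p(¬x) = 0.5.
# Conditional values taken from experiment (a) in Table 1.
p_x, p_not_x = 0.5, 0.5
p_noty_given_notx = 0.97   # p(¬y|¬x)
p_noty_given_x = 0.84      # p(¬y|x)

p_noty = p_noty_given_x * p_x + p_noty_given_notx * p_not_x
assert abs(p_noty - 0.905) < 1e-12  # matches the classical column p(¬y)
```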

This relationship can be represented by a graph (see Fig. 1) that indicates the influence between events x and y.

Fig. 1. The causal relation between events x and y represented by a directed graph of two nodes. Note that each node is accompanied by a conditional probability table that specifies the probability distribution of that node according to its parent node. This directed graph of two nodes corresponds to a simple Bayesian network.

In Table 1, we summarise the results of several experiments from the literature concerned with the prisoner's dilemma experiment.


Table 1. Experimental results obtained in four different works of the literature for the prisoner's dilemma game. The column p(¬y|¬x) corresponds to the probability of defecting given that it is known that the other participant chose to defect. The column p(¬y|x) corresponds to the probability of defecting given that it is known that the other participant chose to cooperate. Finally, the column psub(¬y) corresponds to the subjective probability of the second participant choosing the defect action given that there is no information about whether prisoner x cooperates or defects. The column p(¬y) corresponds to the classical probability.

Experiment   | p(¬y|¬x) | p(¬y|x) | psub(¬y) | p(¬y)
(a) [19]     | 0.97     | 0.84    | 0.63     | 0.9050
(b) [17]     | 0.82     | 0.77    | 0.72     | 0.7950
(c) [3]      | 0.91     | 0.84    | 0.66     | 0.8750
(d) [10]     | 0.97     | 0.93    | 0.88     | 0.9500
(e) Average  | 0.92     | 0.85    | 0.72     | 0.8813
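The classical column p(¬y) follows from the law of total probability with p(x) = p(¬x) = 0.5, and in every experiment the subjective value psub(¬y) lies below it; this is the violation. A quick numerical check of the table:

```python
# Rows of Table 1: (p(¬y|¬x), p(¬y|x), psub(¬y)) for experiments (a)-(d).
rows = [
    (0.97, 0.84, 0.63),  # (a) [19]
    (0.82, 0.77, 0.72),  # (b) [17]
    (0.91, 0.84, 0.66),  # (c) [3]
    (0.97, 0.93, 0.88),  # (d) [10]
]
for p_dd, p_dc, p_sub in rows:
    # Law of total probability with p(x) = p(¬x) = 0.5:
    p_classical = 0.5 * p_dc + 0.5 * p_dd
    assert p_sub < p_classical  # the subjective value violates the law
```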


1.2 Two-Stage Gambling Game

In the two-stage gambling game the participants were asked whether they want to play a gamble that has an equal chance of winning, p(x) = 0.5, or losing, p(¬x) = 0.5 [18]. The participants of the experiment were asked three different questions.

1. What is the probability that they play gamble y if they had lost the first gamble x, p(y|¬x)?

2. What is the probability that they play gamble y if they had won the first gamble x, p(y|x)?

3. What is the probability that they play gamble y given that there is no information about whether they won the first gamble x? By the law of total probability this would be

p(y) = p(y|x) · p(x) + p(y|¬x) · p(¬x). (2)

In Table 2, we summarise the results of several experiments from the literature concerned with the two-stage gambling game.

2 Quantum Probabilities and Waves

Besides quantum cognition, quantum physics is the only branch of science that evaluates the probability p(x) of a state x as the modulus squared of a probability amplitude A(x), represented by a complex number,

p(x) = |A(x)|² = ‖A(x)‖² = A(x)* · A(x). (3)

This is because the product of a complex number with its conjugate is always a real number. With

A(x) = α + β · i (4)


Table 2. Experimental results obtained in three different works of the literature indicating the probability of a player choosing to make a second gamble in the two-stage gambling game. The column p(y|¬x) corresponds to the probability when the outcome of the first gamble is known to be a loss. The column p(y|x) corresponds to the probability when the outcome of the first gamble is known to be a win. Finally, the column psub(y) corresponds to the subjective probability when the outcome of the first gamble is not known. The column p(y) corresponds to the classical probability.

Experiment   | p(y|¬x) | p(y|x) | psub(y) | p(y)
(i) [20]     | 0.58    | 0.69   | 0.37    | 0.6350
(ii) [15]    | 0.47    | 0.72   | 0.48    | 0.5950
(iii) [16]   | 0.45    | 0.63   | 0.41    | 0.5400
(iv) Average | 0.50    | 0.68   | 0.42    | 0.5900
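The same check for Table 2: the classical p(y) is the average of the two conditionals, and the subjective psub(y) falls below it in every experiment:

```python
# Rows of Table 2: (p(y|¬x), p(y|x), psub(y)) for experiments (i)-(iii).
rows = [
    (0.58, 0.69, 0.37),  # (i) [20]
    (0.47, 0.72, 0.48),  # (ii) [15]
    (0.45, 0.63, 0.41),  # (iii) [16]
]
for p_lose, p_win, p_sub in rows:
    # Law of total probability, Eq. (2), with p(x) = p(¬x) = 0.5:
    p_classical = 0.5 * p_win + 0.5 * p_lose
    assert p_sub < p_classical  # subjective value again violates the law
```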


A(x)* · A(x) = (α − β · i) · (α + β · i) = α² + β² = |A(x)|². (5)
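Equations (3)-(5) can be verified directly with complex arithmetic; a minimal sketch (α = 0.6 and β = 0.8 are arbitrary illustrative values, not taken from the paper):

```python
# Eq. (3)-(5): p(x) = |A(x)|² = A(x)* · A(x) = α² + β², a real non-negative number.
alpha, beta = 0.6, 0.8
A = complex(alpha, beta)          # A(x) = α + β·i
p = A.conjugate() * A             # (α − β·i)·(α + β·i)
assert abs(p.imag) < 1e-12        # the imaginary part vanishes
assert abs(p.real - (alpha**2 + beta**2)) < 1e-12
assert abs(p.real - abs(A)**2) < 1e-12
```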

Quantum physics by itself does not offer any justification or explanation besides the statement that it just works fine [2]. We can map the classical probabilities into amplitudes using the polar coordinate representation

a(x, θ1) = √p(x) · e^(i·θ1) = A(x),  a(y, θ2) = √p(y) · e^(i·θ2) = A(y). (6)

The amplitudes represented in polar coordinate form contain a new free parameter θ, which corresponds to the phase of the wave.
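The polar mapping of Eq. (6) can be sketched as follows; the phase is a free parameter and never changes the recovered probability:

```python
import cmath

# Eq. (6): a(x, θ1) = √p(x) · e^(i·θ1).  The phase θ1 is a new free parameter;
# the modulus squared recovers p(x) regardless of its value.
def amplitude(p, theta):
    return cmath.sqrt(p) * cmath.exp(1j * theta)

p_x = 0.5
for theta in (0.0, 1.0, 2.5):
    a = amplitude(p_x, theta)
    assert abs(abs(a) ** 2 - p_x) < 1e-12  # phase leaves the probability unchanged
```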

2.1 Intensity Waves

The intensity wave is defined as

I(y, θ1, θ2) = |a(y, x, θ1) + a(y, ¬x, θ2)|² (7)

which expands to

I(y, θ1, θ2) = p(y, x) + p(y, ¬x) + 2 · √(p(y, x) · p(y, ¬x)) · cos(θ1 − θ2). (8)

Note that for simplification we can replace θ1 − θ2 with θ,

θ = θ1 − θ2

I(y, θ) = p(y) + 2 · √(p(y, x) · p(y, ¬x)) · cos(θ) (9)

and

I(¬y, θ¬1, θ¬2) = |a(¬y, x, θ¬1) + a(¬y, ¬x, θ¬2)|² (10)

with

θ¬ = θ¬1 − θ¬2

I(¬y, θ¬) = p(¬y) + 2 · √(p(¬y, x) · p(¬y, ¬x)) · cos(θ¬) (11)

and for certain phase values

I(y, θ) + I(¬y, θ¬) = 1. (12)
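Equation (9) can be evaluated numerically. A minimal sketch, using joint probabilities derived from the Table 1 averages in row (e) with p(x) = p(¬x) = 0.5 (an illustrative choice; the paper's Fig. 2 uses its own parametrisation):

```python
import math

# Eq. (9): I(y, θ) = p(y) + 2·√(p(y,x)·p(y,¬x))·cos(θ).
# Joint probabilities from Table 1 row (e): p(y|x) = 1 − 0.85, p(y|¬x) = 1 − 0.92.
p_y_x, p_y_notx = 0.15 * 0.5, 0.08 * 0.5
p_y = p_y_x + p_y_notx

def intensity(theta):
    return p_y + 2.0 * math.sqrt(p_y_x * p_y_notx) * math.cos(theta)

# At θ = π/2 the interference term vanishes and the classical value is recovered;
# the wave oscillates around p(y) with amplitude 2·√(p(y,x)·p(y,¬x)).
assert abs(intensity(math.pi / 2) - p_y) < 1e-12
assert intensity(math.pi) < p_y < intensity(0.0)
```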

In Fig. 2(a) we see two intensity waves in relation to the phase, with the parametrisation corresponding to the values of Fig. 1 and to the Table 1 values in (e).



Fig. 2. (a) Two intensity waves I(y, θ), I(¬y, θ¬) in relation to the phase, with the parametrisation corresponding to the values of Fig. 1. Note that the two waves oscillate around p(y) = 0.1950 and p(¬y) = 0.8050 (the two lines). (b) The resulting probability waves as determined by the law of balance; the bigger wave is replaced by the negative smaller one.

2.2 The Law of Balance and Probability Waves

Intensity waves I(y, θ) and I(¬y, θ¬) are probability waves p(y, θ) and p(¬y, θ¬) if:

1. they are positive,

0 ≤ p(y, θ),  0 ≤ p(¬y, θ¬); (13)

2. they sum to one,

p(y, θ) + p(¬y, θ¬) = p(y) + p(¬y) = 1; (14)

3. they are smaller than or equal to one,

p(y, θ) ≤ 1,  p(¬y, θ¬) ≤ 1. (15)

Simply speaking, the law states that the bigger wave is replaced by a smaller negative one.

Probability Waves Are Positive. Since the norm is non-negative, we can represent the quadratic form by the l2 norm,

⟨a(x, θ1) + a(y, θ2), a(x, θ1) + a(y, θ2)⟩ = ‖a(x, θ1) + a(y, θ2)‖²

and it follows that

0 ≤ ‖a(x, θ1) + a(y, θ2)‖².

Probability Waves Sum to One According to the Law of Balance.

Instead of a simple normalisation of the intensity we propose the law of balance. The interference is balanced, which means that the interference terms of p(y, θ) and p(¬y, θ¬) cancel each other out,

√(p(y, x) · p(y, ¬x)) · cos(θ) = −√(p(¬y, x) · p(¬y, ¬x)) · cos(θ¬). (16)
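A sketch of the balance condition: given a phase θ for the smaller wave, Eq. (16) fixes cos(θ¬) for the larger one so that the interference terms cancel and Eq. (12) holds. The joint probabilities below are again derived from Table 1 row (e) for illustration:

```python
import math

# Law of balance, Eq. (16): the two interference terms cancel, so that
# I(y,θ) + I(¬y,θ¬) = p(y) + p(¬y) = 1, as in Eq. (12).
# Joint probabilities from Table 1 row (e) with p(x) = p(¬x) = 0.5.
p_y_x, p_y_notx = 0.075, 0.04        # p(y,x), p(y,¬x)
p_noty_x, p_noty_notx = 0.425, 0.46  # p(¬y,x), p(¬y,¬x)

k_y = math.sqrt(p_y_x * p_y_notx)
k_noty = math.sqrt(p_noty_x * p_noty_notx)

theta = 1.2  # an arbitrary phase for the smaller wave
# Solve Eq. (16) for θ¬: cos(θ¬) = −(k_y / k_noty) · cos(θ).
theta_neg = math.acos(-(k_y / k_noty) * math.cos(theta))

I_y = (p_y_x + p_y_notx) + 2 * k_y * math.cos(theta)
I_noty = (p_noty_x + p_noty_notx) + 2 * k_noty * math.cos(theta_neg)
assert abs(I_y + I_noty - 1.0) < 1e-12  # Eq. (12) holds
```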