3.1 Principle of Entropy
For the case in which the two intervals do not overlap, the values of the waves that are closest to the equal distribution are chosen. By doing so, the uncertainty is maximised and the information about the probability wave is not lost. The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge is the one with the largest entropy [11–13]. In the case of a binary event the highest entropy corresponds to the equal distribution
H = −p(y) · log₂ p(y) − p(¬y) · log₂ p(¬y) = −log₂ 0.5 = 1 bit. (39)
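As a quick numerical check of Eq. (39), a minimal Python sketch (the function name binary_entropy and the test values are ours, purely illustrative):

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0  # limit of x*log2(x) as x -> 0 is 0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))  # 1.0 bit, the maximum (Eq. 39)
print(binary_entropy(0.2))  # ~0.722 bits, below the maximum
```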
For p(y) ≤ p(¬y), the values closest to the equal distribution are reached for θ = 0, at the ends of the interval:

p_q(y) = p(y) + 2·√(p(y,x) · p(y,¬x)) ≈ 2·p(y) (40)

and

p_q(¬y) = 1 − p_q(y). (41)
For p(¬y) ≤ p(y) the closest values to the equal distribution are

p_q(¬y) = p(¬y) + 2·√(p(¬y,x) · p(¬y,¬x)) ≈ 2·p(¬y) (42)

and

p_q(y) = 1 − p_q(¬y). (43)
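A minimal numerical sketch of the entropy principle, Eqs. (40)–(43); the joint probabilities are assumed values chosen for illustration, not taken from the paper:

```python
import math

# Assumed joint probabilities (illustrative only):
# p(y) = p(y,x) + p(y,¬x) and p(¬y) = 1 − p(y).
p_y_x, p_y_notx = 0.10, 0.12  # p(y) = 0.22 ≤ p(¬y) = 0.78

p_y = p_y_x + p_y_notx

# Case p(y) ≤ p(¬y), Eqs. (40) and (41): the interference term
# 2*sqrt(p(y,x)*p(y,¬x)) is close to p(y) when the two joint
# probabilities are of similar size (geometric ≈ arithmetic mean).
pq_y = p_y + 2 * math.sqrt(p_y_x * p_y_notx)
pq_noty = 1 - pq_y

print(f"p(y)   = {p_y:.3f}")      # 0.220
print(f"pq(y)  = {pq_y:.3f}")     # 0.439, close to 2·p(y) = 0.440
print(f"pq(¬y) = {pq_noty:.3f}")  # 0.561, closer to the equal distribution
```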
3.2 Mirror Principle
For the case where the intervals overlap,

I_y ∩ I_¬y ≠ ∅, (44)
an equal distribution maximises the uncertainty but loses the information about the probability wave. To avoid this loss without changing the entropy of the system, we use the positive interference as defined by the law of balance. When the intervals overlap, the positive interference is approximately the size of the smaller probability value, since the arithmetic and geometric means approach each other, see Eq. 26. We increase the uncertainty by mirroring the probability values. For the case p(y) ≤ p(¬y) we assume
p_q(¬y) = 2·√(p(y,x) · p(y,¬x)) ≈ p(y) (45)

and

p_q(y) = 1 − p_q(¬y). (46)
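A minimal numerical sketch of the mirror principle, Eqs. (45) and (46); as above, the joint probabilities are illustrative assumptions:

```python
import math

# Assumed joint probabilities for the overlapping case (illustrative):
p_y_x, p_y_notx = 0.14, 0.16  # p(y) = 0.30, p(¬y) = 0.70

p_y = p_y_x + p_y_notx

# Mirror principle, case p(y) ≤ p(¬y), Eqs. (45) and (46):
# the quantum-like probabilities mirror the classical ones.
pq_noty = 2 * math.sqrt(p_y_x * p_y_notx)  # ≈ p(y), Eq. (45)
pq_y = 1 - pq_noty                         # ≈ p(¬y), Eq. (46)

print(f"pq(¬y) = {pq_noty:.3f}  (mirrors p(y)  = {p_y:.2f})")
print(f"pq(y)  = {pq_y:.3f}  (mirrors p(¬y) = {1 - p_y:.2f})")
```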