

Episodic Source Memory over Distribution by Quantum-Like Dynamics – A Model Exploration

J. B. Broekaert(B) and J. R. Busemeyer

Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA
{jbbroeka,jbusemey}@indiana.edu

Abstract. In source memory studies, a decision-maker is concerned with identifying the context in which a given episodic experience occurred. A common paradigm for studying source memory is the 'three-list' experimental paradigm, in which a subject studies three lists of words and is later asked whether a given word appeared on one or more of the studied lists. Surprisingly, the sum total of the acceptance probabilities generated by asking for the source of a word separately for each list ('list 1?', 'list 2?', 'list 3?') exceeds the acceptance probability generated by asking whether that word occurred on the union of the lists ('list 1 or 2 or 3?'). The episodic memory for a given word therefore appears over distributed on the disjoint contexts of the lists. A quantum episodic memory model (QEM) was proposed by Brainerd, Wang and Reyna [8] to explain this type of result. In this paper, we apply a Hamiltonian dynamical extension of QEM to over distribution of source memory. The Hamiltonian operators are simultaneously driven by parameters for re-allocation of gist-based and verbatim-based acceptance support as subjects are exposed to the cue word in the first temporal stage, and are attenuated for description-dependence by the querying probe in the second temporal stage. Overall, the model predicts well the choice proportions in both separate-list and union-list queries and the over distribution effect, suggesting that a Hamiltonian dynamics for QEM can provide a good account of the acceptance processes involved in episodic memory tasks.

Keywords: Recognition memory · Over distribution · Quantum modeling · Word list · Verbatim · Gist

1 Familiarity and Recollection, Verbatim and Gist

Recognition memory models predict judgments of ‘prior occurrence of an event’. In recognition, Mandler distinguished a familiarity process and a retrieval - or recollection - process that would evolve separately but also additively [21]. The familiarity of a memory would relate to an ‘intra event organizational integrative

© Springer Nature Switzerland AG 2019

B. Coecke and A. Lambert-Mogiliansky (Eds.): QI 2018, LNCS 11690, pp. 63–75, 2019. https://doi.org/10.1007/978-3-030-35895-2_5


process’, while retrieval relates to an ‘inter event elaborative process’. Extending this dual process modeling work, by Tulving [26] and Jacoby [17], a ‘conjoint recognition’ model was developed by Brainerd, Reyna and Mojardin [4] which provides separate parameters for the entangled processes of identity judgement, similarity judgment and response bias. Their model implements verbatim and gist dimensions to memories. Verbatim traces hold the detailed contextual features of a past event, while gist traces hold its semantic details. In recognition tasks we would access verbatim and gist trace in parallel. The verbatim trace of a verbal cue handles it surface content like orthography and phonology for words with its contextual features like in this case, colour of back ground and text font. The verbal cue’s gist trace will encode relational content like semantic content for words, also with its contextual features. This development recently received a quantum formalisation for its property of superposed states to cope with over distribution in memory tests [8, 9, 14]. In specifically designed expermental tests it appeared episodic memory of a given word is over distributed on the disjoint contexts of the lists, letting the acceptance probability behave as a subadditive function [6, 9].

Quantum-Like Memory Models. The Quantum Episodic Memory model (QEM) was proposed by Brainerd, Wang and Reyna [8]. It assumes a Hilbert space representation in which verbatim, gist, and non-related components are orthogonal, and in which recognition engages the gist trace in target memories as well. We will provide ample detail about this model in the next section, since our dynamical extension is implemented in essentially the same structural setting. QEM was extended to generalized-QEM (GQEM) by Trueblood and Hemmer to model incompatible features of the gist, verbatim and non-related traces [25]. Underlying this is the idea that these features are processed serially, with gist preceding verbatim because it is processed faster. Independently, Denolf and Lambert-Mogiliansky have considered the accessing of gist and verbatim as incompatible process features. This aspect is implemented in an intrinsically quantum-like manner in their complementarity-based model for Complementary Memory Types (CMT) [15, 16, 20]. We previously developed a Hamiltonian dynamical extension of QEM for item memory tasks [11]. The dynamical formalism describes the time development of the acceptance decision based on gist, verbatim and non-related traces. Finally, Bruza, Kitto, Nelson and McEvoy [23] developed a semantic network approach in which the target word is adjacent to its associated terms and the network is in a quantum superposition state of either complete activation or non-activation (see also [12]).

We note that dynamical approaches to quantum-like models have been proposed previously: in decision theory by Busemeyer and Bruza [14], Pothos and Busemeyer [24], Martínez-Martínez [22], Kvam et al. [19], Busemeyer et al. [13] and Yearsley and Pothos [28]; in cognition by Aerts et al. [2]; and in perception theory by Atmanspacher and Filk [3]. An overview of quantum modelling techniques is given in Broekaert et al. [10].


2 True Memory, False Memory, Over Distributed Memory

In the conjoint process dissociation model (CPD) a sufficient parametrisation is present to capture the four distinct response patterns of true, false, over distributed and forgotten memories of the three-list paradigm. The precise identification of the type of memory for a given target requires a composite outcome for the acceptance to three lists at once (see Fig. 1 and Table 1). For instance, should a participant report "the word appeared on L1 and on L2 but not on L3" when the cue word came from list L2, then this participant clearly showed a case of memory over distribution. If, however, that same answer had been given for a cue coming from list L3, then this participant showed a case of false memory.

The participant is, however, well informed at the start that the word lists do not overlap. It therefore makes no sense to ask a conjunctive composition query probe at a single instance: multiple-yes answers would be absent and therefore no cases of over distribution could be produced. A quantum-based model for the conjunction of queries moreover requires a procedure specification for its formal representation, since measurement outcomes in quantum models are sensitive to the ordering of the measurement operators for non-compatible questions [1, 14, 24, 27]. While the projectors for list membership, Eq. 6, are commutative, the dynamical process between two measurements voids that order invariance, as we will see in the next section. The dynamical process implies that the Hamiltonian-QEM predicts different acceptance probabilities for different query orderings, e.g. p(Li? ◦ Lj?|Li) ≠ p(Lj? ◦ Li?|Li). It is therefore not possible in Hamiltonian-QEM to define a unique expression for quantities like p(¬Li? ∩ Lj? ∩ Lk?|Li) (cf. [8]) without additional information on the order of querying.
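This order dependence can be illustrated with a minimal numerical sketch. The Hamiltonian, evolution time and initial state below are made up for illustration and are not the paper's fitted model; the point is only that the list projectors commute, yet the unitary evolution between two probes makes the sequential 'yes' probabilities order dependent.

```python
import numpy as np

P1 = np.diag([1.0, 0.0, 0.0])          # projector for 'list 1?'
P2 = np.diag([0.0, 1.0, 0.0])          # projector for 'list 2?'
assert np.allclose(P1 @ P2, P2 @ P1)   # the projectors themselves commute

H = np.array([[0., 1., 0.],            # toy Hermitian generator (illustrative)
              [1., 0., 1.],
              [0., 1., 0.]])
lam, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * lam * 0.7)) @ V.conj().T   # evolution between probes

psi = np.array([0.8, 0.6, 0.0], dtype=complex)          # toy initial memory state

def p_yes_yes(Pa, Pb):
    """P('yes' to probe a, then 'yes' to probe b after evolving with U)."""
    return np.linalg.norm(Pb @ U @ Pa @ psi) ** 2

print(p_yes_yes(P1, P2))   # L1? then L2?
print(p_yes_yes(P2, P1))   # L2? then L1? -- differs from the reversed order
```

With no evolution (U = I) the two sequential probabilities would both vanish, since the lists are orthogonal; it is the intervening dynamics that produces the asymmetry.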

Li?  Lj?  Lk?  |  Li
yes  yes  yes  |  over distribution
yes  yes  no   |  over distribution
yes  no   yes  |  over distribution
yes  no   no   |  true memory
no   yes  yes  |  false memory
no   yes  no   |  false memory
no   no   yes  |  false memory
no   no   no   |  forgotten

Fig. 1. Logic of false memory, true memory and over distribution in the three-list paradigm for source memory, for a target which is a studied word from list Li. Indices [i, j, k] are permutations of [1, 2, 3]. For a distractor, which is an unstudied word from L4, all response triplets are erroneous memories, except the triple 'no' which is a correct no-memory evaluation.

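The classification logic of Fig. 1 can be restated as a small function for studied targets (an illustrative sketch, not code from the paper; the names are ours):

```python
def classify(answers, source):
    """Classify a response triplet for a studied target.

    answers: dict mapping list index -> 'yes'/'no'; source: the list the
    target actually came from.
    """
    accepted = {i for i, a in answers.items() if a == 'yes'}
    if not accepted:
        return 'forgotten'
    if accepted == {source}:
        return 'true memory'
    if source in accepted:
        return 'over distribution'   # accepted on the true source and elsewhere
    return 'false memory'            # accepted only on lists it was not on

print(classify({1: 'yes', 2: 'no', 3: 'no'}, source=1))   # → true memory
print(classify({1: 'yes', 2: 'yes', 3: 'no'}, source=2))  # → over distribution
print(classify({1: 'yes', 2: 'yes', 3: 'no'}, source=3))  # → false memory
```

The last two calls reproduce the worked example in the text: the same answer pattern is over distribution for a cue from L2 but false memory for a cue from L3.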


In the three-list experimental paradigm the query probes are kept separate – 'did the word appear on List 1' (L1?), 'did the word appear on List 2' (L2?), 'did the word appear on List 3' (L3?) and the disjunctive probe 'did the word appear on one of the lists' (L123?) – and are randomised among acceptance tasks for other words [8, 16, 18, 25].

From a classical set-theoretic perspective we can relate the acceptance probability for the disjunctive probe to the acceptance probabilities of the single probes:

|S(L1 ∪ L2 ∪ L3?)| = Σi |S(Li?)| − Σi<j |S(Li ∩ Lj?)| + |S(L1 ∩ L2 ∩ L3?)|   (1)

(for a word from a given set Lk). Then we define the unpacking factor UF as the ratio of the number of acceptance responses for the separate per-list queries to the number of acceptances of the joined-list query:

UF(k) = Σi |S(Li?)| / |S(L1 ∪ L2 ∪ L3?)|   (2)

In terms of summed acceptance probabilities, and taking into account the classical set relation Eq. (1) and some algebra, the interpretation of the unpacking factor becomes apparent:

UF(k) = [p(L1?|Lk) + p(L2?|Lk) + p(L3?|Lk)] / p(L123?|Lk)
      = 1 + [p(L1 ∩ L2 ∩ ¬L3?|Lk) + p(¬L1 ∩ L2 ∩ L3?|Lk) + p(L1 ∩ ¬L2 ∩ L3?|Lk)] / p(L123?|Lk)
          + 2 p(L1 ∩ L2 ∩ L3?|Lk) / p(L123?|Lk)   (3)

For every index value k of the target's list, the excess of UF above 1 is caused by three over distribution terms, of which the 'always accept' term is double weighted, and one false memory term of the type 'accept on all lists except the true source'. For example, when k = 1, the term p(¬L1 ∩ L2 ∩ L3?|L1)/p(L123?|L1) relates to the case where a target from L1 was not accepted on that list while it was accepted on both L2 and L3, constituting a false memory contribution to UF.

Since the lists in the experimental design are disjoint, according to classical logic the right-hand side is equal to 1. For experimental choice proportions, however, the unpacking factor turns out to be significantly larger than 1 [8]. Although, alongside the three over distribution terms, the unpacking factor always mixes in one false memory term as well, we will still use it as a measure of over distribution, besides being a correct measure of subadditivity.1

1 In principle the contributions of false memories and over distributions could be fully separated if one would also measure the acceptance probabilities for all pairwise disjunctions of lists.
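The identity in Eq. (3) can be checked numerically by treating the three separate probes as marginals of one classical joint distribution over response triplets, as in Eq. (1). The probability values below are made up for illustration, not fitted data:

```python
# p[(a1, a2, a3)] = probability of answering a1 to 'L1?', a2 to 'L2?', a3 to 'L3?'
# (1 = yes, 0 = no); an illustrative distribution for a target from some list Lk.
p = {(1, 1, 1): 0.05, (1, 1, 0): 0.06, (1, 0, 1): 0.04, (0, 1, 1): 0.03,
     (1, 0, 0): 0.52, (0, 1, 0): 0.08, (0, 0, 1): 0.07, (0, 0, 0): 0.15}
assert abs(sum(p.values()) - 1.0) < 1e-9

p_list = [sum(v for t, v in p.items() if t[i]) for i in range(3)]  # p(Li?|Lk)
p_union = sum(v for t, v in p.items() if any(t))                   # p(L123?|Lk)

UF = sum(p_list) / p_union

# Right-hand side of Eq. (3): 1 + (two-yes mass)/p_union + 2*(three-yes mass)/p_union.
rhs = 1 + (p[1, 1, 0] + p[0, 1, 1] + p[1, 0, 1]) / p_union + 2 * p[1, 1, 1] / p_union
assert abs(UF - rhs) < 1e-9

print(round(UF, 4))  # → 1.2706
```

UF exceeds 1 exactly when multiple-yes patterns carry probability mass; with all such mass removed, both sides collapse to 1, matching the classical disjoint-list prediction.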