- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
suai.ru/our-contacts | quantum machine learning
56 P. D. Bruza and P. Wittek
functional identity it is impossible to assign a random variable to represent the outcomes of the same measurement protocol in different measurement contexts.
It is a requirement that the mapping adheres to the expected normalization condition: ∀e ∈ E : Σv∈e p(v) = 1. By way of illustration, consider once again Fig. 3. This contextuality scenario has four edges. The normalization condition enforces the following constraints:
p1 + p2 + p3 + p4 = 1    (1)
q1 + q2 + q3 + q4 = 1    (2)
p1 + p2 + q3 + q4 = 1    (3)
p3 + p4 + q1 + q2 = 1    (4)
where pi, 1 ≤ i ≤ 4, and qj, 1 ≤ j ≤ 4, denote the probabilities of outcomes in the four hyperedges. A definition of contextuality can now be presented.
Definition 1 (Probabilistic contextuality; general contextuality [2]). Let X = (V, E) be a contextuality scenario and let G(X) denote the set of probabilistic models on X. X is deemed "contextual" if G(X) = ∅.
Probabilistic contextuality occurs when there is no probabilistic model p corresponding to the composite contextuality scenario X. Whether X is contextual can be decided by a linear program [2].
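The idea can be sketched in a few lines of Python. The sketch below is illustrative only and is not the linear program of [2]: it eliminates the normalization equations over exact rationals and reports an inconsistency (hence G(X) = ∅, i.e. contextual), a unique model, or "inconclusive" when the system is underdetermined and a genuine LP feasibility check would be needed. The scenario encoding (vertex and edge lists) is an assumption of this sketch.

```python
from fractions import Fraction

def normalization_system(vertices, edges):
    """Build one equation sum_{v in e} p(v) = 1 per hyperedge (augmented rows)."""
    idx = {v: i for i, v in enumerate(vertices)}
    rows = []
    for e in edges:
        row = [Fraction(0)] * len(vertices)
        for v in e:
            row[idx[v]] = Fraction(1)
        rows.append(row + [Fraction(1)])  # right-hand side is always 1
    return rows

def solve(vertices, edges):
    """Gauss-eliminate the normalization system over exact rationals."""
    rows = normalization_system(vertices, edges)
    n = len(vertices)
    pivot_cols = []
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        piv_val = rows[r][c]
        rows[r] = [x / piv_val for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
    # an all-zero coefficient row with nonzero RHS means no model exists at all
    if any(all(x == 0 for x in row[:n]) and row[n] != 0 for row in rows):
        return "contextual"
    if len(pivot_cols) < n:
        return "inconclusive"   # a real check would run an LP over [0, 1]^n
    sol = [None] * n
    for i, c in enumerate(pivot_cols):
        sol[c] = rows[i][n]
    return sol if all(0 <= x <= 1 for x in sol) else "contextual"

# Edges {a}, {b}, {a, b} force p(a) = p(b) = 1 and p(a) + p(b) = 1: inconsistent.
print(solve(["a", "b"], [["a"], ["b"], ["a", "b"]]))
# The triangle scenario has the unique model p = 1/2 everywhere: not contextual.
print(solve(["a", "b", "c"], [["a", "b"], ["b", "c"], ["a", "c"]]))
```

Returning a model here means G(X) ≠ ∅, so the scenario is not contextual in the sense of Definition 1.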
3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
One of the advantages of using a programming approach to develop probabilistic models is that experimental designs can be syntactically specified in a modular way. In this way, a wide variety of experimental designs across fields can potentially be catered for. For example, consider the situation where an experimenter wishes to determine whether a system S can validly be modelled compositionally in terms of two component subsystems A and B. Two different experiments can be carried out upon each of the two presumed components, which will answer a set of 'questions' with binary outcomes, leading to four measurement contexts. For example, one experimental context would be to ask A1 of component A and B1 of component B. In Bell scenario experiments, four measurement contexts are typically used: {{A1, B1}, {A1, B2}, {A2, B1}, {A2, B2}}. Bell scenario designs have been widely employed in cognitive psychology to test for contextuality in human cognition [3, 9, 14, 18].
One way to think about system S is that it is equivalent to a set of biased coins A and B, where the bias is local to a given measurement context. Figure 4 depicts a P-program that follows this line of thinking.
# define the components of the experiment
def A = component(A1, A2)
def B = component(B1, B2)

var P1 = context(){
  # declare two binary random variables; 0.5 signifies a fair coin toss
  var A1 = flip(0.6)
  var B1 = flip(0.5)
  # declare joint distribution across the variables A1, B1
  var p = [A1, B1]
  # flip the dual coins 1000 times to form the joint distribution
  return { Infer({samples: 1000}, p) }
};

var P2 = context(){
  var A1 = flip(0.4)
  var B2 = flip(0.7)
  var p = [A1, B2]
  return { Infer({samples: 1000}, p) }
};

var P3 = context(){
  var A2 = flip(0.2)
  var B1 = flip(0.7)
  var p = [A2, B1]
  return { Infer({samples: 1000}, p) }
};

var P4 = context(){
  var A2 = flip(0.4)
  var B2 = flip(0.5)
  var p = [A2, B2]
  return { Infer({samples: 1000}, p) }
};

# return a single model
return { model({design: 'no-signal', P1, P2, P3, P4}) }
Fig. 4. Example “Bell scenario” P-program
The Bell scenario program first defines the components A and B together with the associated variables. Thereafter, the program features the four associated measurement contexts P1, P2, P3 and P4. Finally, the line model({design: 'no-signal', P1, P2, P3, P4}) specifies that the measurement contexts are to be combined according to the no-signalling condition. The question now to be addressed is how the hypergraph semantics are to be formulated. Reference [2] provides the general semantics of Bell scenarios by means of multipartite composition of contextuality scenarios.
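The P-program notation is schematic, so as a hedged illustration of what each context computes on its own, the following Python fragment mimics the four contexts with a standard Bernoulli sampler. The helper `context`, the seed, and the dictionary encoding of joint distributions are assumptions of this sketch, not a real PPL API:

```python
import random
from collections import Counter

def context(p_a, p_b, samples=1000, seed=0):
    """Mirror one P-program context: flip two biased coins `samples` times
    and return the empirical joint distribution over the four outcomes,
    much as Infer({samples:1000}, p) would."""
    rng = random.Random(seed)
    counts = Counter((rng.random() < p_a, rng.random() < p_b)
                     for _ in range(samples))
    return {k: n / samples for k, n in counts.items()}

# The four measurement contexts of Fig. 4, with the same coin biases.
P1 = context(0.6, 0.5)   # {A1, B1}
P2 = context(0.4, 0.7)   # {A1, B2}
P3 = context(0.2, 0.7)   # {A2, B1}
P4 = context(0.4, 0.5)   # {A2, B2}
for name, P in [("P1", P1), ("P2", P2), ("P3", P3), ("P4", P4)]:
    print(name, {k: round(v, 3) for k, v in sorted(P.items())})
```

Note that nothing in this naive reading couples the contexts: the bias of A1 even differs between P1 (0.6) and P2 (0.4), so such a program could signal from B to A. Ruling that out is exactly the role of the hypergraph semantics.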
Because these semantics are compositional, syntactically specified components in a P-program can be mapped to contextuality scenarios, and their composition can then be exploited to provide the semantics of the program as a whole.
Consider the Bell scenario program depicted in Fig. 4. The syntactically defined components A and B are modelled as contextuality scenarios XA and XB respectively. The corresponding hypergraphs are depicted in Fig. 5.
Note how the variable definitions associated with the component map to an edge in a hypergraph. For example, the syntax def A = component(A1, A2) corresponds to the two edges labelled A1 and A2 on the left hand side of Fig. 5.

Fig. 5. Contextuality scenarios corresponding to the components A and B defined in the Bell scenario P-program shown in Fig. 4.

Contextuality scenarios XA and XB are composed into a single contextuality scenario XAB, which will express the semantics of the Bell scenario P-program. However, the no-signalling condition imposes constraints on the allowable probabilistic models on the combined hypergraph structure. Following Definition 3.1.2 in [2], a probabilistic model p ∈ G(XA × XB) is a "no signalling" model if:
Σw∈e p(v, w) = Σw∈e′ p(v, w),  ∀v ∈ V(XA), ∀e, e′ ∈ E(XB)
Σv∈e p(v, w) = Σv∈e′ p(v, w),  ∀w ∈ V(XB), ∀e, e′ ∈ E(XA)
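These two marginal-equality conditions can be checked directly on the distributions returned by a pair of measurement contexts. Below is a small illustrative helper; the dict encoding of outcome tuples and the fixed index of the shared measurement are assumptions of this sketch:

```python
def marginal(dist, index, value):
    """P(variable at position `index` = value) under a joint distribution."""
    return sum(p for outcome, p in dist.items() if outcome[index] == value)

def no_signalling(ctx1, ctx2, index=0, tol=1e-9):
    """True when the shared measurement's marginals agree across the two
    contexts, so the other component cannot signal through it."""
    return all(abs(marginal(ctx1, index, v) - marginal(ctx2, index, v)) <= tol
               for v in (0, 1))

# Contexts {A1, B1} and {A1, B2}: varying B's measurement must not shift p(A1).
ctx_P1 = {(0, 0): 0.2, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.3}  # p(A1=1) = 0.6
ctx_P2 = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.2}  # p(A1=1) = 0.6
print(no_signalling(ctx_P1, ctx_P2))   # True
```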
Reference [2] (p. 45) shows that not all probabilistic models of contextuality scenarios composed by a direct product are "no signalling" models. In order to guarantee that all probabilistic models of a combined contextuality scenario are "no signalling" models, the constituent contextuality scenarios XA and XB should be combined by the Foulis-Randall (FR) product, denoted XAB = XA ⊗FR XB. As with the direct product XA × XB of contextuality scenarios, the vertices of the FR product are defined by V(XA ⊗FR XB) = V(XA) × V(XB). It is with respect to the hyperedges that the FR product differs from the direct product:
E(XA ⊗FR XB) = EA→B ∪ EA←B

where

EA→B := { ⋃v∈ea ({v} × f(v)) : ea ∈ E(XA), f : ea → E(XB) }
EA←B := { ⋃w∈eb (f(w) × {w}) : eb ∈ E(XB), f : eb → E(XA) }
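To make the construction concrete, the FR product's hyperedges can be computed mechanically for two binary-measurement components like those of Fig. 5. The encoding below (each scenario as a list of edges over vertex labels) is an assumption of this sketch:

```python
from itertools import product

def fr_product_edges(edges_a, edges_b):
    """Hyperedges of the Foulis-Randall product of two scenarios.

    Each scenario is a list of hyperedges (tuples of vertex labels);
    the vertices of the product are pairs (v, w)."""
    def directed(src, dst):
        out = set()
        for e in src:
            # one edge per assignment f of an edge of `dst` to each vertex of e
            for choice in product(dst, repeat=len(e)):
                out.add(frozenset((v, w) for v, f_v in zip(e, choice)
                                  for w in f_v))
        return out
    forward = directed(edges_a, edges_b)             # E_{A->B}
    backward = {frozenset((v, w) for w, v in edge)   # E_{A<-B}, pairs flipped
                for edge in directed(edges_b, edges_a)}
    return forward | backward

# Two binary measurements per component, as in Fig. 4's program:
XA = [("a1_0", "a1_1"), ("a2_0", "a2_1")]
XB = [("b1_0", "b1_1"), ("b2_0", "b2_1")]
edges = fr_product_edges(XA, XB)
print(len(edges), all(len(e) == 4 for e in edges))   # 12 edges of four events
```

The count of 12 four-event edges matches the composite scenario of Fig. 6: the four direct-product edges plus the eight "spanning" edges contributed by the non-constant functions f.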
We are now in a position to illustrate the semantics of the P-program of Fig. 4 by the corresponding contextuality scenario depicted in Fig. 6. Observe how the FR product produces the extra edges that span the events across the measurement contexts labelled P1, P2, P3 and P4. At first these spanning edges may seem arbitrary, but they guarantee that the allowable probabilistic models over the composite contextuality scenario XA ⊗FR XB satisfy the "no signalling" condition [22]. By way of illustration, the normalization condition on edges imposes the following constraints (see Fig. 6):
p1 + p2 + p3 + p4 = 1    (5)
q1 + q2 + q3 + q4 = 1    (6)
p1 + p2 + q3 + q4 = 1    (7)
p3 + p4 + q1 + q2 = 1    (8)
where pi, 1 ≤ i ≤ 4, and qj, 1 ≤ j ≤ 4, denote the probabilities of events in the respective hyperedges. A consequence of constraints (5) and (7) is that p3 + p4 = q3 + q4. When considering the associated outcomes, this means
p(A1 = 1 ∧ B1 = 0) + p(A1 = 1 ∧ B1 = 1) = p(A1 = 1 ∧ B2 = 0) + p(A1 = 1 ∧ B2 = 1)
In other words, the marginal probability p(A1 = 1) does not differ across the measurement contexts P1 and P2 specified in the P-program of Fig. 4. In a similar vein, Eqs. (5) and (8) imply that the marginal probability p(A1 = 0) does not differ across measurement contexts P1 and P2. The stability of the marginal probabilities ensures that no signalling occurs from component B to component A. In quantum physics, the FR product is used to compose contextuality scenarios precisely because it ensures that there is no signalling between the systems. As a consequence, the operational semantics of the P-program must compute the FR product: some component hyperedges derive from measurement contexts, which have been syntactically specified in the P-program, while other edges express the no-signalling constraint. When the FR product is part of the operational semantics, it provides an underlying data structure which allows both classical and non-classical statistical correlations to be simulated [20]. For example, non-classical correlations between variables such as A1 and B1 can be produced by the P-program using standard Bernoulli samplers to produce (biased) coin flips, while the underlying hypergraph data structure constrains the sampling to allow quantum-like correlations to emerge.
To illustrate a Bell scenario experiment in human information processing, consider the information fusion model depicted in Fig. 7. S is a random variable ranging over a set of image stimuli. Human subjects must decide whether an image is trustworthy [8]. The bivalent random variables C1, C2 relate to features associated with the content of the image. For example, C1 may model the decision whether a subject deems a person portrayed in an image to be honest. Conversely, R1 and R2 are bivalent random variables that relate to representational aspects of the image. For example, R1 may model the decision whether the image has been manipulated, and R2 might model the decision whether something unexpected was perceived in the image. The latent variable γ models the decision whether the content of the image is trustworthy, and depends on the content variables C1, C2. Conversely, the latent variable ρ models the decision whether the image is deemed to be authentic, i.e., a true and accurate depiction of reality. Finally, the variable T corresponds to the decision whether the human subject trusts what they have been shown, fusing the assessments regarding the content and representational aspects of the image.
Fig. 6. Contextuality scenario of the P-program of Fig. 4. In total the hypergraph comprises 12 edges of four events. The nodes in rectangles represent events in a probability distribution returned by a given scope: P1, P2, P3, and P4. Note this figure depicts a single hypergraph. Two copies have been made to depict the spanning edges more clearly. This figure corresponds to Figure 7f in [2].
Fig. 7. Probabilistic fusion model of trust
A Bell scenario experiment considers γ and ρ as separate sub-systems (see the dashed area of Fig. 7). In terms of the framework depicted in Fig. 1, four measurement contexts are defined by jointly measuring one variable from each sub-system: M1 = {C1, R1}, M2 = {C1, R2}, M3 = {C2, R1}, M4 = {C2, R2}.
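Under the assumption that each context pairs one content measurement with one representational measurement (mirroring the Bell contexts {A1, B1}, …, {A2, B2}), the four contexts can be enumerated mechanically:

```python
from itertools import product

# gamma-side and rho-side measurements of the fusion model
content = ["C1", "C2"]          # questions about image content
representation = ["R1", "R2"]   # questions about image representation

# one measurement context per pairing, exactly as in a Bell scenario design
contexts = [tuple(sorted(pair)) for pair in product(content, representation)]
print(contexts)   # four contexts: (C1,R1), (C1,R2), (C2,R1), (C2,R2)
```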
4 Potential Applications in Quantum Physics
Probabilistic programming languages (PPLs) have already proved useful in cognitive science [16], but, to our knowledge, they have yet to be seriously taken up by quantum physics. PPLs offer quantum physicists a convenient way to specify experiments, and provide a new tool for analyzing statistical correlations based on both simulated and actual experimental results. Their potential use is not restricted to Bell scenarios.
Since any PPL is based on random variables, we can ask what exactly a random variable is in quantum physics. If we restrict our attention to a single measurement context, then, due to the normalization constraint, we can think of the measurement context as a (conditional) probability distribution over random variables, which describe the measurement outcomes. More formally, this