7.2 The Single Stage Game (SSG) Competition Model
…require that players know the QoS requirements of their opponents. Because the coexistence approach works without interworking, i.e., without information exchange, players observe the actions of their opponent player and, with the help of a prediction method, estimate the demands of their opponent player; they do not, however, have access to the QoS requirements their opponent player is attempting to achieve. The prediction method that determines the demand of the opponent player from the observations is explained in Section 7.4.
It is assumed that one common utility function is used by all players, although the individual players do not know the parameters of the utility functions of their respective opponents.
For the game approach to be applied successfully, all involved radio systems have to follow it.
It is not the intention of this approach to allow a CCHC to improve its performance against legacy stations, i.e., stations that do not follow the game approach. However, upon detecting a co-located legacy station, a CCHC models this opponent as a so-called myopic, persistent player that does not consider any utility but simply attempts to allocate its requirement. There are strategies that optimize the utility against such players, as will be shown in Chapter 9.
7.2.3.2 Preference and Behavior
The utility U^i represents the preference relation ≽^i of player i, in the sense that U^i(a) ≥ U^i(b) whenever a ≽^i b, with a, b ∈ A^i. In this case it is said that a is preferred over b: a weakly dominates b. If U^i(a) > U^i(b), then a strictly dominates b, i.e., a ≻^i b, with a, b ∈ A^i. The binary operator ≽^i can be understood as a binary relation on A^i (Debreu 1959:6).
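The way a utility function induces the dominance relations above can be sketched in a few lines of Python. The linear utility shape used here is purely hypothetical and merely stands in for the thesis's U^i:

```python
# Sketch: a utility function induces a preference relation over actions.
# The utility shape below is hypothetical and only stands in for U^i.

def utility(action):
    """Hypothetical utility: reward throughput share, penalize delay."""
    throughput, delay = action
    return throughput - 10.0 * delay

def weakly_dominates(a, b):
    # a weakly dominates b whenever U(a) >= U(b)
    return utility(a) >= utility(b)

def strictly_dominates(a, b):
    # a strictly dominates b whenever U(a) > U(b)
    return utility(a) > utility(b)

# Actions as (throughput share, delay) pairs, e.g. demands of a player
a = (0.4, 0.04)
b = (0.3, 0.04)

print(weakly_dominates(a, b))    # a yields the higher utility here
print(strictly_dominates(b, a))
```

Note that any strictly increasing transformation of the utility induces the same preference relation: only the ordering of actions, not the absolute utility values, matters for dominance.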
The behavior that a player actually follows is based on such preferences. For example, a player may attempt to maximize its utility within the current stage n by selecting a demand that optimizes its own utility. In contrast, it may prefer to select a demand that allows the opponent player to achieve a certain utility as well. Various kinds of behavior are introduced in Section 8.3.
7.2.4 Payoff, Response and Equilibrium
So far, the utility was introduced as a value that represents the QoS a player observes. Because the observed utility is dependent on the actions of all involved players, another term is introduced that highlights this dependency.
Figure 7.6: Utility U^i of a player i vs. observation (Θ_obs^i, ∆_obs^i), with (Θ_req^i, ∆_req^i) = (0.4, 0.04). There is no impact of any opponent player -i in this example; the player i observes its demands. To demand the requirement is the action that maximizes the observed utility.
The resulting utility as the outcome of an SSG at stage n is called the observed payoff V^i:

V^i(a(n)) := (a^i(n), a^{-i}(n)) → U^i(n),   a(n) ∈ A = ×_{i∈N} A^i   (7.16)

Here, a(n) = (a^i(n), a^{-i}(n)) is a vector of actions, also referred to as an action profile. The principle of an observed payoff completes the SSG model. The payoff describes what a player receives as utility for selecting an action, as a function of the demands of all involved players. It denotes the single stage payoff of player i as a function of the actions, or demands, of all involved players i, -i. Figure 7.7 shows the payoff V^i(a(n)) of player i for the utility function presented above. In this example, the same requirements as before are assumed, whereas player -i demands (Θ_dem^{-i} = 0.4, ∆_dem^{-i} = 0.04). Comparing the payoff in Figure 7.7 with the utility in Figure 7.6, the mutual impact of the allocation processes of the players can clearly be seen. The optimal action of player i, i.e., the action that maximizes the observed payoff, differs significantly from its requirement, i.e., the action that maximizes the utility when no other player demands any resources.
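As a toy illustration of the payoff definition in (7.16), the sketch below computes player i's outcome from the full action profile. The interference model and the utility are invented for illustration; they are not the analytical SSG model of Section 8.1:

```python
# Sketch of an observed payoff V^i(a) = V^i(a^i, a^-i): the QoS that
# player i actually observes depends on the demands of both players.
# The contention model below is a made-up stand-in for the thesis's
# analytical SSG model.

def observed_qos(own_demand, opp_demand):
    """Toy model: demanded throughput shares are scaled down when the
    total demand exceeds the channel capacity (normalized to 1), and
    contention by the opponent inflates the observed delay."""
    own_thr, own_delay = own_demand
    opp_thr, _ = opp_demand
    total = own_thr + opp_thr
    scale = min(1.0, 1.0 / total) if total > 0 else 1.0
    return (own_thr * scale, own_delay * (1.0 + opp_thr))

def utility(observation, requirement):
    """Toy utility: maximal when the requirement is met exactly."""
    thr, delay = observation
    req_thr, req_delay = requirement
    return 1.0 - abs(thr - req_thr) - 10.0 * abs(delay - req_delay)

def payoff(own_action, opp_action, requirement):
    # V^i(a(n)) -> U^i(n): utility of the observation that the
    # complete action profile induces
    return utility(observed_qos(own_action, opp_action), requirement)

req = (0.4, 0.04)
alone = payoff(req, (0.0, 0.0), req)       # no opponent demand
contested = payoff(req, (0.4, 0.04), req)  # opponent demands resources too
print(alone, contested)
```

With no opponent demand the player observes exactly its requirement and the payoff is maximal; once the opponent demands resources as well, the same own action yields a lower payoff.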
In contrast to decision problems, in a game each player has only partial control over the environment. The payoff of each player depends not only upon its own actions but also upon the actions of the other players. As a consequence, when demanding resources, a player must act by taking into account the possible resource demands of its opponent. This may be interpreted as a response or
reaction to the opponent's actions. A best response is the action that maximizes the player's expected payoff. A player that selects this action is in this thesis referred to as acting rationally by responding to the correct expectation about its opponent player's actions. Figure 7.7 shows the payoff V^i(a(n)) = V^i(a^i(n), a^{-i}(n)) of player i in stage n, where in this example it is assumed that the opponent player takes the action a^{-i}(n) = (0.4, 0.04). When players interact, there is another payoff V^{-i}(a(n)) for the opponent player -i, which also depends on the actions of both players. This payoff is not shown in Figure 7.7. In an SSG without history, and without information exchange between players, the players cannot estimate the upcoming actions of their opponents. What action their opponent player will take is not known to any player.
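A best response can be illustrated by a small grid search: for a fixed opponent action, select the own demand that maximizes the resulting payoff. Again, the payoff function is a hypothetical stand-in for the thesis's model:

```python
# Sketch: a best response maximizes player i's payoff given the
# opponent's action. The payoff model is a toy stand-in, not the
# thesis's analytical SSG model.

def payoff(own_thr, opp_thr, req_thr=0.4):
    """Toy payoff: the observed share shrinks when the total demand
    exceeds capacity; utility is highest when the observed share
    matches the requirement."""
    total = own_thr + opp_thr
    observed = own_thr if total <= 1.0 else own_thr / total
    return 1.0 - abs(observed - req_thr)

def best_response(opp_thr, steps=101):
    """Grid search over own throughput demands in [0, 1]."""
    candidates = [k / (steps - 1) for k in range(steps)]
    return max(candidates, key=lambda own: payoff(own, opp_thr))

# Alone, demanding the requirement is optimal; against a heavily
# demanding opponent the best response deviates from the requirement.
print(best_response(0.0))
print(best_response(0.8))
```

Against an idle opponent the requirement itself is the best response; against a heavily demanding opponent the best response in this toy model over-demands to compensate for the reduced observed share, mirroring the kind of deviation visible in Figure 7.7.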
For successful QoS support, demands must be selected, i.e., actions must be taken, that guarantee a minimum level of observed QoS, i.e., a minimum payoff, regardless of what action the opponent takes. This cautious choice leads to inefficient usage of resources. In contrast, if there is mutual knowledge about the opponents, this knowledge can be used to coordinate the actions. Such knowledge may be taken from the history of interaction in earlier games (as is assumed in this thesis), or through dedicated information exchange (for example via a common spectrum coordination channel).
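The cautious choice of a demand that secures a minimum payoff whatever the opponent does corresponds to a maximin action; under the same kind of toy payoff assumptions it can be sketched as:

```python
# Sketch: without knowledge of the opponent, a cautious player picks
# the maximin action, i.e. the demand that maximizes its worst-case
# payoff over all possible opponent actions. Toy payoff model, with
# all numbers assumed for illustration.

def payoff(own_thr, opp_thr, req_thr=0.4):
    total = own_thr + opp_thr
    observed = own_thr if total <= 1.0 else own_thr / total
    return 1.0 - abs(observed - req_thr)

grid = [k / 20 for k in range(21)]  # candidate demands 0.0 .. 1.0

def worst_case(own):
    # minimum payoff of demand `own` over every opponent demand
    return min(payoff(own, opp) for opp in grid)

maximin_action = max(grid, key=worst_case)
guaranteed = worst_case(maximin_action)
print(maximin_action, guaranteed)
```

The guaranteed payoff of the maximin action is typically well below what coordinated players could achieve, which is exactly the inefficiency of acting without mutual knowledge.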
Figure 7.7: Payoff V^i(a(n)) of player i vs. demand a^i = (Θ_dem^i, ∆_dem^i), with the same requirements as before, (Θ_req^i, ∆_req^i) = (0.4, 0.04). The player i observes neither its requirements nor its demands, due to the implications of the actions taken by the opponent player -i. Indicated is the optimal action a^{i*} = (Θ_dem^{i*} = 0.4, ∆_dem^{i*} = 0.027). In this example, the opponent player -i demands (Θ_dem^{-i} = 0.4, ∆_dem^{-i} = 0.04). The figure is based on the analytical model of the SSG, as described in Section 8.1.