19-03-2011, 10:08 AM
• inverse multiplexing
• Idea: simulate a “large” logical channel over a set of “smaller” ones (called a bundle)
• goals
• High link utilization and low fragmentation
Low bandwidth wireless links
• Tight reordering constraints
TCP doesn’t handle reordered packets well
• Adaptive scheduling
Throughput of shared wireless links is unstable over many time scales
• contributions
• Standard Inverse Multiplexing
Commonly used in ISDN, fractional T1/T3, ATM
Private links with no contention
Stable & similar channel characteristics
• Link Quality Balancing
Adapts to shared-access channels with varying capacity
Efficient bandwidth utilization
TCP-friendly reordering bound
• outline
• Scheduling techniques
Link Quality Balancing with stable links
• Adaptation
Measuring and reacting to channel variations
• Implementation results
Constant Bit Rate (CBR) Traffic
TCP flows
• known scheduling methods
• Round Robin
Does not ensure optimal link usage
Provides no bounds on delay, ordering
• Deficit Round Robin, Fair Queuing
Provide efficient link usage, but...
Require information about queue lengths
– In CDPD, queues are often buried inside the network, so this information is unavailable
Don’t provide ordering guarantees
• deficit round robin
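The Deficit Round Robin baseline discussed above can be sketched in a few lines. This is a minimal illustration, not code from the original work; the function name `drr_schedule`, the queue representation (deques of packet sizes), and the quantum value are all assumptions for the example:

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Serve packet queues with Deficit Round Robin: each round a
    non-empty queue's deficit grows by `quantum`, and it may send
    packets whose sizes fit within the accumulated deficit."""
    deficits = [0] * len(queues)
    sent = []  # (queue_index, packet_size) in service order
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0  # idle queues accumulate no credit
                continue
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                sent.append((i, pkt))
            if not q:
                deficits[i] = 0  # DRR: an emptied queue carries no deficit
    return sent

# Two 500-byte packets on queue 0, one 1500-byte packet on queue 1:
# with a 1000-byte quantum, queue 1 must wait a round to accumulate credit.
order = drr_schedule([deque([500, 500]), deque([1500])], 1000, 2)
# order == [(0, 500), (0, 500), (1, 1500)]
```

Note that the scheduler needs to inspect queue contents (`q[0]`), which is exactly the information the slides say is unavailable when queues sit inside the CDPD network.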
• fragmentation: an extreme
• weighting
• link quality balancing
• Idea: Fragment traffic in proportion to individual link throughputs
For each link, compute a relative MTU
– For the fastest link, use the optimal MTU
– On all other links, use a proportionately smaller one
Fragment packets to fill MTU-sized buckets
– Last-fragment arrival times are the same on each link
• Guarantees no inter-round reordering; the only possible reordering occurs within the same round
– Requires no information on queue lengths
– Work conserving; provides maximal link usage
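The relative-MTU idea above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the helper names `relative_mtus` and `fragment`, and the example throughputs, are assumptions:

```python
def relative_mtus(throughputs, max_mtu):
    """Scale each link's MTU in proportion to its estimated throughput;
    the fastest link gets the full (optimal) MTU."""
    fastest = max(throughputs)
    return [max(1, round(max_mtu * t / fastest)) for t in throughputs]

def fragment(packet_len, mtus):
    """Split a packet into per-link fragments, filling each link's
    MTU-sized bucket in turn, so that last fragments of a round
    finish transmitting at roughly the same time on every link."""
    frags = []
    remaining = packet_len
    i = 0
    while remaining > 0:
        link = i % len(mtus)
        take = min(mtus[link], remaining)
        frags.append((link, take))
        remaining -= take
        i += 1
    return frags

# A 19.2 kbps link and a 9.6 kbps link: the slower link gets half the MTU,
# so a 1500-byte and a 750-byte fragment take equal time on their links.
mtus = relative_mtus([19.2, 9.6], 1500)   # [1500, 750]
frags = fragment(2250, mtus)              # [(0, 1500), (1, 750)]
```

Because fragment sizes are proportional to link speed, fragments sent in the same round land together, which is what bounds reordering to within a round.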
• our approach: balancing
• measurement
• Problem: Individual link throughputs are highly variable over many time scales
• How do we measure current throughput?
Absolute values are difficult and expensive to obtain
– Without synthetic traffic, we are limited by the offered load, which may not actually drive the links to full capacity
Synthetic probes are problematic
– Without priority queuing, introducing synthetic traffic may cause loss of actual traffic
• link quality metric
• Solution: Don’t! Relative metrics suffice
Simply maintain proportional estimates
End-to-end bandwidth probing will do the rest
• But which metric?
Packet arrival times
– Theoretically ideal, but far too noisy to use in practice
Short-term throughput
– Similarly difficult to measure
Loss rates
– With bounded queues, loss rates are a rough indicator of appropriate throughput, and are easy to measure
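A per-link loss-rate estimate along these lines could be maintained with a simple exponentially weighted moving average. This is a sketch under assumptions: the function `ewma_loss` and the smoothing factor are illustrative, not from the original design:

```python
def ewma_loss(prev_estimate, lost, sent, alpha=0.1):
    """Smooth a per-link loss-rate sample into a running estimate.
    With bounded queues, loss rate is a coarse but cheap proxy for
    how far a link's current share exceeds its capacity."""
    if sent == 0:
        return prev_estimate  # no traffic this interval: keep old estimate
    sample = lost / sent
    return (1 - alpha) * prev_estimate + alpha * sample

# Starting from 0 loss, observing 1 loss out of 10 packets with alpha=0.5
# moves the estimate halfway toward the 10% sample, i.e. to 0.05.
est = ewma_loss(0.0, 1, 10, alpha=0.5)
```

The smoothing trades responsiveness for stability, which matters here because (as the evaluation slide notes) loss rates are already a delayed, coarse-grained signal.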
• feedback loop
• Invariant: Always schedule traffic so that the quality metric is identical across links
As a corollary, any deviation perceived at the receiver implies an incorrect estimate
Use the receiver’s data to periodically update the multiplexor’s scheduling proportions
End-to-end bandwidth probing should cause the weakest link to fail first and/or more often
• Links are asymmetric; measure both ways
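The feedback loop above might look roughly like this: shift scheduling weight away from links whose observed loss rate exceeds the bundle average, restoring the equal-metric invariant. The function `rebalance`, its gain constant, and the weight floor are hypothetical choices for the sketch, not the paper's actual update rule:

```python
def rebalance(weights, loss_rates, gain=0.5, floor=0.05):
    """Adjust per-link scheduling proportions from receiver feedback.
    The invariant is an identical quality metric across links, so a
    lossier-than-average link is being over-scheduled: reduce its share."""
    avg = sum(loss_rates) / len(loss_rates)
    adjusted = []
    for w, loss in zip(weights, loss_rates):
        # shrink weight when loss is above average, grow when below
        adjusted.append(max(floor, w * (1 - gain * (loss - avg))))
    total = sum(adjusted)
    return [w / total for w in adjusted]  # renormalize to proportions

# Two equally weighted links; link 0 shows 10% loss, link 1 only 2%,
# so the update shifts a little traffic from link 0 to link 1.
new_weights = rebalance([0.5, 0.5], [0.10, 0.02])
```

In a real deployment this update would run periodically in both directions, since (as noted above) the links are asymmetric.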
• cbr traffic
• tcp traffic
• evaluation & future work
• LQB handles shared wireless links well
Fragmentation is minimal
Reordering is tightly bounded
Adapts well to varying channel characteristics
• But we’d like to find a better metric
Loss rates are delayed and very coarse-grained
Perhaps filtering functions could make inter-packet arrival times usable