Cache affinity scheduling method
On shared-memory multiprocessors, two factors increase the number of cache misses. One is that several processes are forced to time-share the same cache; as a result, each process displaces the cache state built up by the previously scheduled process, producing a stream of misses as it rebuilds its own cache state. To reduce the number of misses in such workloads, processes should reuse their cached state more. One way to achieve this is to schedule each process based on its affinity to individual caches, that is, based on the amount of state the process has accumulated in a particular cache. This technique is called cache affinity scheduling.
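The idea above can be sketched in a few lines. This is a minimal, illustrative model (not any real kernel's API): each process remembers the CPU whose cache it last warmed, and a CPU looking for work prefers a process with affinity to itself before falling back to the oldest runnable process.

```python
from collections import deque

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.last_cpu = None   # affinity hint: where this process's cache state lives

class AffinityScheduler:
    """Single ready queue; dispatch prefers processes with affinity to the asking CPU."""

    def __init__(self):
        self.ready = deque()

    def enqueue(self, proc):
        self.ready.append(proc)

    def pick_next(self, cpu):
        # Prefer a process whose cached state is (likely) still in this CPU's cache.
        for proc in self.ready:
            if proc.last_cpu == cpu:
                self.ready.remove(proc)
                return proc
        # Otherwise fall back to the oldest runnable process.
        return self.ready.popleft() if self.ready else None

    def run(self, cpu):
        proc = self.pick_next(cpu)
        if proc is not None:
            proc.last_cpu = cpu  # its cache state now accumulates on this CPU
        return proc
```

For example, if process A last ran on CPU 1, then when both A and an older process B are runnable, CPU 1 will still pick A, avoiding the miss stream A would otherwise suffer rebuilding its state elsewhere.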
Cache utilisation is often very poor in multithreaded applications because of the loss of data-access locality incurred by frequent context switching. When dynamic load balancing is introduced, the problem is aggravated further: thread migration disrupts the cache contents.
Cache-affinity schedulers traditionally use per-processor run queues. A process can develop affinity to a processor, and the benefits of this locality management are well described in the literature. In present-day computers there is a wide gap between ever-faster processors and slow memory technology. Cache affinity introduces locality awareness into scheduler design, and memory-conscious scheduling also improves the performance of fine-grained thread scheduling.
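The per-processor run queue organisation can be sketched as follows. This is a hedged illustration (all names are hypothetical): a woken thread is enqueued on the queue of the CPU it last ran on, and an idle CPU steals from another queue only when its own is empty, so affinity-breaking migration happens only under load imbalance.

```python
from collections import deque

class PerCpuScheduler:
    """Per-processor run queues with work stealing as the load-balancing fallback."""

    def __init__(self, num_cpus):
        self.queues = [deque() for _ in range(num_cpus)]

    def wake(self, proc, last_cpu):
        # Enqueue near the cache state: on the queue of the CPU it last ran on.
        self.queues[last_cpu].append(proc)

    def pick_next(self, cpu):
        if self.queues[cpu]:
            return self.queues[cpu].popleft()
        # Work stealing: sacrifice affinity only to fix a load imbalance.
        for q in self.queues:
            if q:
                return q.popleft()
        return None
```

The design choice here is the trade-off the text describes: keeping threads on their home queue preserves cache contents, while the stealing path accepts the one-time miss cost of migration in exchange for keeping all processors busy.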
For more details, see:
http://portal.acm.org/citation.cfm?id=167038&dl=ACM&coll=DL&CFID=2972863&CFTOKEN=66513205
http://www.freepatentsonline.com/5784614.html