Lavoisier S.A.S.
14 rue de Provigny
94236 Cachan cedex
FRANCE

Opening hours: 08:30-12:30 / 13:30-17:30
Tel.: +33 (0)1 47 40 67 00
Fax: +33 (0)1 47 40 67 02


Canonical URL: www.lavoisier.fr/livre/autre/analysis-of-cache-performance-for-operating-systems-and-multiprogramming/descriptif_1601851
Short URL or permalink: www.lavoisier.fr/livre/notice.asp?ouvrage=1601851

The requested edition is no longer available; we offer the latest edition instead.

Analysis of Cache Performance for Operating Systems and Multiprogramming. Softcover reprint of the original 1st edition, 1989. The Springer International Series in Engineering and Computer Science, Vol. 69.

Language: English

Author:

Book cover: Analysis of Cache Performance for Operating Systems and Multiprogramming
As we continue to build faster and faster computers, their performance is becoming increasingly dependent on the memory hierarchy. Both the clock speed of the machine and its throughput per clock depend heavily on the memory hierarchy. The time to complete a cache access is often the factor that determines the cycle time. The effectiveness of the hierarchy in keeping the average cost of a reference down has a major impact on how close the sustained performance is to the peak performance. Small changes in the performance of the memory hierarchy cause large changes in overall system performance. The strong growth of RISC machines, whose performance is more tightly coupled to the memory hierarchy, has created increasing demand for high-performance memory systems. This trend is likely to accelerate: the improvements in main memory performance will be small compared to the improvements in processor performance. This difference will lead to an increasing gap between processor cycle time and main memory access time. This gap must be closed by improving the memory hierarchy. Computer architects have attacked this gap by designing machines with cache sizes an order of magnitude larger than those appearing five years ago. Microprocessor-based RISC systems now have caches that rival the size of those in mainframes and supercomputers.
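The abstract's point that small changes in cache behavior cause large changes in overall performance follows from the standard average memory access time relation (hit time plus miss rate times miss penalty). The short sketch below is purely illustrative; the cycle counts and miss rates are assumed values, not figures from the book, and simply show how a two-point change in miss rate can add a full cycle to the average cost of a reference.

    # Illustrative sketch only: hypothetical cycle counts, not data from the book.
    def amat(hit_time_cycles, miss_rate, miss_penalty_cycles):
        """Average memory access time: hit time plus the expected miss cost."""
        return hit_time_cycles + miss_rate * miss_penalty_cycles

    base = amat(hit_time_cycles=1, miss_rate=0.02, miss_penalty_cycles=50)   # 2.0 cycles
    worse = amat(hit_time_cycles=1, miss_rate=0.04, miss_penalty_cycles=50)  # 3.0 cycles
    print(f"2% miss rate: {base:.1f} cycles per reference")
    print(f"4% miss rate: {worse:.1f} cycles per reference (50% slower on average)")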
1 Introduction
1.1 Overview of Cache Design
1.2 Review of Past Work
1.3 Then, Why This Research?
1.4 Contributions
1.5 Organization
2 Obtaining Accurate Trace Data
2.1 Current Tracing Techniques
2.2 Tracing Using Microcode
2.3 An Experimental Implementation
2.4 Trace Description
2.5 Applications in Performance Evaluation
2.6 Extensions and Summary
3 Cache Analysis Techniques — An Analytical Cache Model
3.1 Motivation and Overview
3.2 A Basic Cache Model
3.3 A Comprehensive Cache Model
3.4 Model Validation and Applications
3.5 Summary
4 Transient Cache Analysis — Trace Sampling and Trace Stitching
4.1 Introduction
4.2 Transient Behavior Analysis and Trace Sampling
4.3 Obtaining Longer Samples Using Trace Stitching
4.4 Trace Compaction — Cache Filtering with Blocking
5 Cache Performance Analysis for System References
5.1 Motivation
5.2 Analysis of the Miss Rate Components due to System References
5.3 Analysis of System Miss Rate
5.4 Associativity
5.5 Block Size
5.6 Evaluation of Split Caches
6 Impact of Multiprogramming on Cache Performance
6.1 Relative Performance of Multiprogramming Cache Techniques
6.2 More on Warm Start versus Cold Start
6.3 Impact of Shared System Code on Multitasking Cache Performance
6.4 Process Switch Statistics and Their Effects on Cache Modeling
6.5 Associativity
6.6 Block Size
6.7 Improving the Multiprogramming Performance of Caches
7 Multiprocessor Cache Analysis
7.1 Tracing Multiprocessors
7.2 Characteristics of Traces
7.3 Analysis
8 Conclusions and Suggestions for Future Work
8.1 Concluding Remarks
8.2 Suggestions for Future Work
Appendices
B.1 On the Stability of the Collision Rate
B.2 Estimating Variations in the Collision Rate
C Inter-Run Intervals and Spatial Locality
D Summary of Benchmark Characteristics
E Features of ATUM-2
E.1 Distributing Trace Control to All Processors
E.2 Provision of Atomic Accesses to Trace Memory
E.3 Instruction Stream Compaction Using a Cache Simulated in Microcode
E.4 Microcode Patch Space Conservation

Publication date:

190 pages.

15.5x23.5 cm

Available from the publisher (lead time: 15 days).

105,49 €
