Computer Architecture - John L. Hennessy, David A. Patterson - Librairie Eyrolles
Computer Architecture

A Quantitative Approach

John L. Hennessy, David A. Patterson

1032 pages, published 01/07/2001 (2nd edition)

Summary

The innovation continues--in this truly significant second edition, the book that defined the way we currently think about computer architecture has been rewritten to reflect the dramatic advances in the field. In the same way the first edition revolutionized the approach to understanding computers, David Patterson and John Hennessy's landmark revision focuses on the new generation of architectures and design techniques with a view toward the future.

The technological and market changes implied by the authors' quantitative evaluation in the first edition have arrived. The examples in this contemporary new edition have been completely rewritten to reflect the new state of computing. The MIPS R4000, Intel 80x86, PowerPC, and HP PA-RISC are presented as real-world examples to illustrate and reinforce the fundamental techniques required for building machines that maximize performance while staying within cost constraints.

The acclaimed style and special features such as "fallacies and pitfalls" and "historical perspective" that made it the best-selling advanced computer book remain, but each discussion has been revised to meet the needs of today's professional.

Readers of the first edition will need this new edition, and will benefit from the increased coverage of pipelining and storage, the comprehensive presentation of caches, and new chapters on shared-memory multiprocessing and networking technology. Anyone involved in building computers, from palmtops to supercomputers, will profit from the unmatched experience and the quantitative approach that Hennessy and Patterson apply to their presentation.

New Features

  • A comprehensive comparison of all RISC architectures
  • Focuses on advanced computer architecture topics and techniques
  • Expanded coverage of pipelining--now in two chapters, "Pipelining" and "Advanced Pipelining," emphasizing superscalar and multiple-issue techniques
  • Extensively revised and expanded presentation of storage systems including I/O performance measures
  • A new chapter illustrating the breadth of design alternatives for multiprocessors
  • A new chapter on interconnect technology helps the reader understand its architectural implications
  • New examples reflect the current state of computing--MIPS R4000, Intel 80x86, PowerPC, and HP PA-RISC
  • Interactive simulator software
Contents
Foreword
Preface
Acknowledgments
1 Fundamentals of Computer Design
1.1 Introduction
1.2 The Task of a Computer Designer
1.3 Technology and Computer Usage Trends
1.4 Cost and Trends in Cost
1.5 Measuring and Reporting Performance
1.6 Quantitative Principles of Computer Design
1.7 Putting It All Together: The Concept of Memory Hierarchy
1.8 Fallacies and Pitfalls
1.9 Concluding Remarks
1.10 Historical Perspective and References
Exercises

2 Instruction Set Principles and Examples
2.1 Introduction
2.2 Classifying Instruction Set Architectures
2.3 Memory Addressing
2.4 Operations in the Instruction Set
2.5 Type and Size of Operands
2.6 Encoding an Instruction Set
2.7 Crosscutting Issues: The Role of Compilers
2.8 Putting It All Together: The DLX Architecture
2.9 Fallacies and Pitfalls
2.10 Concluding Remarks
2.11 Historical Perspective and References
Exercises

3 Pipelining
3.1 What Is Pipelining?
3.2 The Basic Pipeline for DLX
3.3 The Major Hurdle of Pipelining-Pipeline Hazards
3.4 Data Hazards
3.5 Control Hazards
3.6 What Makes Pipelining Hard to Implement?
3.7 Extending the DLX Pipeline to Handle Multicycle Operations
3.8 Crosscutting Issues: Instruction Set Design and Pipelining
3.9 Putting It All Together: The MIPS R4000 Pipeline
3.10 Fallacies and Pitfalls
3.11 Concluding Remarks
3.12 Historical Perspective and References
Exercises

4 Advanced Pipelining and Instruction-Level Parallelism
4.1 Instruction-Level Parallelism: Concepts and Challenges
4.2 Overcoming Data Hazards with Dynamic Scheduling
4.3 Reducing Branch Penalties with Dynamic Hardware Prediction
4.4 Taking Advantage of More ILP with Multiple Issue
4.5 Compiler Support for Exploiting ILP
4.6 Hardware Support for Extracting More Parallelism
4.7 Studies of ILP
4.8 Putting It All Together: The PowerPC 620
4.9 Fallacies and Pitfalls
4.10 Concluding Remarks
4.11 Historical Perspective and References
Exercises

5 Memory-Hierarchy Design
5.1 Introduction
5.2 The ABCs of Caches
5.3 Reducing Cache Misses
5.4 Reducing Cache Miss Penalty
5.5 Reducing Hit Time
5.6 Main Memory
5.7 Virtual Memory
5.8 Protection and Examples of Virtual Memory
5.9 Crosscutting Issues in the Design of Memory Hierarchies
5.10 Putting It All Together: The Alpha AXP 21064 Memory Hierarchy
5.11 Fallacies and Pitfalls
5.12 Concluding Remarks
5.13 Historical Perspective and References
Exercises

6 Storage Systems
6.1 Introduction
6.2 Types of Storage Devices
6.3 Buses-Connecting I/O Devices to CPU/Memory
6.4 I/O Performance Measures
6.5 Reliability, Availability, and RAID
6.6 Crosscutting Issues: Interfacing to an Operating System
6.7 Designing an I/O System
6.8 Putting It All Together: UNIX File System Performance
6.9 Fallacies and Pitfalls
6.10 Concluding Remarks
6.11 Historical Perspective and References
Exercises

7 Interconnection Networks
7.1 Introduction
7.2 A Simple Network
7.3 Connecting the Interconnection Network to the Computer
7.4 Interconnection Network Media
7.5 Connecting More Than Two Computers
7.6 Practical Issues for Commercial Interconnection Networks
7.7 Examples of Interconnection Networks
7.9 Internetworking
7.10 Putting It All Together: An ATM Network of Workstations
7.11 Fallacies and Pitfalls
7.12 Concluding Remarks
7.13 Historical Perspective and References
Exercises

8 Multiprocessors
8.1 Introduction
8.2 Characteristics of Application Domains
8.3 Centralized Shared-Memory Architectures
8.4 Distributed Shared-Memory Architectures
8.5 Synchronization
8.6 Models of Memory Consistency
8.7 Crosscutting Issues
8.8 Putting It All Together: The SGI Challenge Multiprocessor
8.9 Fallacies and Pitfalls
8.10 Concluding Remarks
8.11 Historical Perspective and References
Exercises

Appendix A: Computer Arithmetic, by David Goldberg, Xerox Palo Alto Research Center
A.1 Introduction
A.2 Basic Techniques of Integer Arithmetic
A.3 Floating Point
A.4 Floating-Point Multiplication
A.5 Floating-Point Addition
A.6 Division and Remainder
A.7 More on Floating-Point Arithmetic
A.8 Speeding Up Integer Addition
A.9 Speeding Up Integer Multiplication and Division
A.10 Putting It All Together
A.11 Fallacies and Pitfalls
A.12 Historical Perspective and References
Exercises

Appendix B: Vector Processors
B.1 Why Vector Processors?
B.2 Basic Vector Architecture
B.3 Two Real-World Issues: Vector Length and Stride
B.4 Effectiveness of Compiler Vectorization
B.5 Enhancing Vector Performance
B.6 Putting It All Together: Performance of Vector Processors
B.7 Fallacies and Pitfalls
B.8 Concluding Remarks
B.9 Historical Perspective and References
Exercises

Appendix C: Survey of RISC Architectures
C.1 Introduction
C.2 Addressing Modes and Instruction Formats
C.3 Instructions: The DLX Subset
C.4 Instructions: Common Extensions to DLX
C.5 Instructions Unique to MIPS
C.6 Instructions Unique to SPARC
C.7 Instructions Unique to PowerPC
C.8 Instructions Unique to PA-RISC
C.9 Concluding Remarks
C.10 References

Appendix D: An Alternative to RISC: The Intel 80x86
D.1 Introduction
D.2 80x86 Registers and Data Addressing Modes
D.3 80x86 Integer Operations
D.4 80x86 Floating-Point Operations
D.5 80x86 Instruction Encoding
D.6 Putting It All Together: Measurements of Instruction Set Usage
D.7 Concluding Remarks
D.8 Historical Perspective and References

Appendix E: Implementing Coherence Protocols
E.1 Implementation Issues for the Snooping Coherence Protocol
E.2 Implementation Issues in the Distributed Directory Protocol
References
Index

The author - John L. Hennessy

John L. Hennessy is the president of Stanford University and works with organizations such as the IEEE and the ACM. The recipient of numerous awards, he is behind several commercial applications, including the architecture of the DASH and Origin multiprocessors (Silicon Graphics). He is a co-founder of MIPS, the company that developed the first RISC microprocessor.

The author - David A. Patterson

David A. Patterson teaches at the University of California, Berkeley, where he holds the chair of computer science. He received the John von Neumann award in 2001, jointly with John L. Hennessy, for the research this book reports on. He was involved in the RISC 1 project, which gave rise to the SPARC architecture (Sun Microsystems), and in RAID.

The translator: Daniel Etiemble, an electronics engineer who holds a doctorate in computer science, is a professor at Université Paris-Sud (Orsay campus), where he teaches computer architecture. He taught for two years at the University of Toronto.

Technical details

  PAPER
Publisher(s) Morgan Kaufmann
Author(s) John L. Hennessy, David A. Patterson
Published 01/07/2001
Edition 2nd edition
Number of pages 1032
Format 18.7 x 23.5
Cover Paperback
Weight 1700 g
Interior Black and white
EAN13 9781558603721
