Parallel, distributed, and grid computing

Editors B. Buchberger
M. Affenzeller
A. Ferscha
M. Haller
T. Jebelean
E.P. Klement
P. Paule
G. Pomberger
W. Schreiner
R. Stubenrauch
R. Wagner
G. Weiß
W. Windsteiger
Title Parallel, distributed, and grid computing
Type In book
Publisher Springer
Chapter Parallel, distributed, and grid computing
Edition 1st Edition
ISBN 978-3-642-02126-8
Month June
Year 2009
Pages 333-378
SCCH ID# 908
Abstract

The core goal of parallel computing is to speed up computations by executing independent computational tasks concurrently (“in parallel”) on multiple units in a processor, on multiple processors in a computer, or on multiple networked computers, which may even be spread across large geographical distances (distributed and grid computing); it is the dominant principle behind “supercomputing” or “high-performance computing”. For several decades, the density of transistors on a computer chip has doubled every 18–24 months (“Moore’s Law”); until recently, this growth could be directly translated into a corresponding increase of a processor’s clock frequency and thus into an automatic performance gain for sequential programs. However, since a processor’s power consumption also increases with its clock frequency, this strategy of “frequency scaling” ultimately became unsustainable: since 2004, clock frequencies have remained essentially stable, and additional transistors have been used primarily to build multiple processors on a single chip (multi-core processors). Today, therefore, every kind of software (not only scientific software) must be written in a parallel style to benefit from newer computer hardware.
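The central idea of the abstract — splitting a computation into independent tasks that run concurrently on multiple cores — can be illustrated with a minimal sketch. The example below is not from the book; it is a hypothetical illustration using Python's standard `multiprocessing` module, where a sum of squares is partitioned into independent chunks handed to a pool of worker processes.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    # Independent task: sum of squares over a half-open range [lo, hi).
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Split [0, n) into one chunk per worker. The chunks share no data,
    # so they can be computed concurrently on separate cores.
    step = (n + workers - 1) // workers
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with Pool(workers) as pool:
        # Combine the independent partial results into the final answer.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    # The parallel result agrees with the plain sequential computation.
    assert parallel_sum_of_squares(n) == sum(i * i for i in range(n))
```

The speedup over the sequential loop grows with the number of cores only because the tasks are truly independent; coordination and data exchange between tasks, as the chapter title suggests, are what make parallel, distributed, and grid computing nontrivial.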