High Performance Computing Trends: New Architectures

Stone Ridge Technology's thoughts on the adoption of GPUs and the success NVIDIA has attained in promoting them over the last decade.


Posted in: High Performance Computing

Whether you favor GPUs as an HPC option or not, their adoption as an alternative hardware platform is a salient feature of the history of computing over the last decade.

A second HPC trend has been the emergence of new compute architectures, starting around 2005 when multi-core chips began to appear on the market. I refer to the IBM Cell, ClearSpeed, FPGAs, GPUs and others. I believe there was a period, when clock speeds first saturated and application performance flatlined, during which developers started looking for options beyond traditional CPU implementations. Most of these HPC options disappeared or were scaled back to special-purpose problems for one of two reasons. First, it is hard and very expensive to develop new chips, and even harder to improve them year after year. Without a large existing consumer base for the chips, an HPC offering proved difficult to sustain (e.g. ClearSpeed). The second reason I attribute to complex and unfamiliar programming models (e.g. the IBM Cell, FPGAs). As things have settled out over the last ten years, FPGAs have remained a viable solution for specialized problems where they work exceptionally well, but the most notable trend is by far the success of GPUs for general computation in HPC.

GPUs, and specifically NVIDIA GPUs, have succeeded where the others failed because NVIDIA got a few important things right. First, the success of its business does not depend on HPC. The bulk of the company's revenue comes from the large and growing graphics processing market (e.g. gaming, visual arts, film animation). Modest modifications to these mainline chips, such as adding double-precision floating point and enlarging the board memory, created the Tesla line for HPC compute. Second, NVIDIA invested time and money in CUDA, its development environment for GPU coding. CUDA is essentially C/C++ with extensions, and developers generally find it a natural extension of their skills and a low barrier to entry into GPU coding. Finally, the performance gap between GPUs and CPUs provided the incentive developers needed to try something new. Two measures of performance are dear to HPC developers: gigaflops (GFLOPS), which measures how fast the processor can execute operations once it has data, and gigabytes per second (GB/s) of bandwidth, which measures how quickly data can be moved for processing. On both counts, measured chip to chip, GPUs have maintained a substantial and growing lead over CPUs, which currently sits at about 6x. With 6x on the table for performance, a user-friendly development environment and a stable vendor behind it, GPUs gave HPC developers reason to experiment.
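The interplay between these two metrics can be sketched with a back-of-the-envelope roofline-style estimate: a kernel's attainable performance is capped by either peak compute rate or by bandwidth times its arithmetic intensity, whichever runs out first. The hardware numbers below are hypothetical placeholders chosen only to illustrate the reasoning, not vendor specifications.

```python
# Roofline-style estimate: is a kernel limited by compute (GFLOPS)
# or by memory bandwidth (GB/s)?
# All hardware figures here are hypothetical, for illustration only.

def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Attainable rate is capped by whichever resource is exhausted first:
    raw compute, or bandwidth * arithmetic intensity (flops per byte moved)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# A stencil-like kernel performing 0.25 flops per byte of data moved,
# on a hypothetical GPU (4000 GFLOPS peak, 600 GB/s)
# versus a hypothetical CPU (600 GFLOPS peak, 100 GB/s).
gpu = attainable_gflops(4000.0, 600.0, 0.25)  # 150.0 -> bandwidth-bound
cpu = attainable_gflops(600.0, 100.0, 0.25)   # 25.0  -> bandwidth-bound
print(gpu, cpu, gpu / cpu)
```

For low-intensity kernels like this one, both chips are bandwidth-bound, so the speedup tracks the bandwidth ratio (6x with these made-up numbers); only at high arithmetic intensity does peak GFLOPS become the limit. Many simulation workloads sit in the bandwidth-bound regime, which is why both metrics matter to HPC developers.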

Since 2008, hundreds of full applications have been ported to GPUs, the CUDA development kit has been downloaded millions of times, and hundreds of universities now offer GPU coding classes. Twenty percent of the top 20 supercomputers in the world have GPUs. In oil and gas, GPUs have been aggressively adopted by the seismic processing industry, where they provide a 5x or greater improvement in performance, and they have been making their way into other industry applications, most notably reservoir simulation with ECHELON, our company's GPU-based reservoir simulator. The adoption of GPUs and the success NVIDIA has attained in promoting them is one of the key stories of HPC in the last decade.

This post is one of several based on themes I presented in a keynote talk delivered in mid-September 2015 at the 2nd annual EAGE workshop on High Performance Computing in the Upstream in Dubai.

Vincent Natoli

Vincent Natoli is the president and founder of Stone Ridge Technology. He is a computational physicist with 30 years of experience in the field of high-performance computing. He holds Bachelor's and Master's degrees from MIT, a PhD in Physics from the University of Illinois Urbana-Champaign and a Master's in Technology Management from the Wharton School at the University of Pennsylvania.
