GPUs in Reservoir Simulation Software

A few months ago, on 4 May, the Italian energy company Eni issued a press release announcing a breakthrough calculation in reservoir modeling. One of the company’s high-resolution deep-water reservoir models, with 5.7 million active cells, was used to generate 100,000 realizations, each with different petrophysical properties. Eni ran all 100,000 models in 15 hours on HPC4, its industry-leading supercomputer, with each individual model simulating 15 years of production in an average of 28 minutes. The calculation was a prominent example of “GPU computing,” a growing trend in high-performance applications: instead of using HPC4’s 3,200 CPUs, it used the machine’s 3,200 NVIDIA Tesla GPUs. To achieve this, Eni partnered with Stone Ridge Technology, a US-based software company that develops and markets a GPU-based reservoir simulator called ECHELON.

The announcement is important because it demonstrates how energy companies can rapidly generate large amounts of data to support important decisions. Calculations like this were previously deemed impractical or impossible, either because the asset was considered too complex or because simulation run times were so long that the project could not be completed in time to inform a business decision. By using GPU technology for reservoir simulation, energy companies can perform complex studies in short timeframes, allowing more information to be incorporated into the decision-making process. This capability is becoming more important as energy companies continue to look for ways to establish a strategic advantage in the industry.

Reservoir simulation software like ECHELON models the subsurface flow of hydrocarbons and water in a petroleum reservoir. It allows energy companies like Eni to optimize recovery from their assets by simulating numerous ‘what-if’ scenarios for well placement and development strategies. ECHELON is unique in that it is built to run entirely on NVIDIA® Tesla® GPUs using the CUDA® platform, the same high-performance computing technology now powering the revolution in artificial intelligence, machine learning, and big data.

GPUs have traditionally been used for fast 3D game rendering, but over the past decade they have been harnessed more broadly to accelerate computational workloads in scientific computing, including areas related to oil and gas exploration and production. Seismic processing was one of the first applications where the oil and gas industry adopted GPUs, and today they are also heavily used in the machine learning workflows that have become popular within the industry. GPU technology had not been fully applied to reservoir simulation until a few years ago, when Stone Ridge Technology saw the opportunity for a technological advantage. Implementing reservoir simulation on GPUs posed a difficult challenge because the codes are more complex than many other scientific computations. Why go through the trouble of writing software for GPUs rather than following more traditional CPU development? There are three good answers: performance, compute density, and scalability. By embracing newer GPU technology, modern algorithms, and modern software design, ECHELON achieves performance, compute density, and scalability that significantly outpace CPU-based codes.

GPU Performance

Two processor metrics are of particular importance for performance. The first, FLOPS, measures how many floating-point operations a processor can execute per unit time. The second, memory bandwidth, measures how fast data can be moved into the processor. Today’s leading GPUs offer roughly 10x more FLOPS and bandwidth than CPUs in a chip-to-chip comparison. The charts below compare the evolution of these metrics on GPU and CPU over the last decade; Figure 1 illustrates the peak bandwidth and GFLOPS provided by the latest GPU and CPU machines. They show how GPU technology has dramatically grown and outpaced CPU technology in recent years, and all signs point to this trend continuing in the coming years. To take full advantage of this technology gap, software needs to be written so that all computations are completed on the GPUs. Previous industry technologies have tried to exploit GPUs by porting part of their code, creating a hybrid CPU/GPU approach. This has resulted in limited performance gains because of Amdahl’s law and the communication overhead between CPU and GPU: if, say, only 80% of the runtime is accelerated, the overall speedup can never exceed 5x no matter how fast the GPU is. Stone Ridge Technology was the first to create a simulator that runs all significant computation on the GPU. Its ECHELON reservoir simulation software uses the full capability of GPUs to offer game-changing performance, which will improve further as GPU technology advances.

Figure 1: Comparison of bandwidth and GFLOPS evolution for GPU and CPU.
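
To make the bandwidth argument concrete, the sketch below is a minimal CUDA microbenchmark (not ECHELON code) that times a simple memory-bound vector update, the kind of operation that dominates the sparse linear algebra inside a reservoir simulator, and reports the effective bandwidth achieved. The kernel name, vector size, and launch configuration are illustrative choices.

```cpp
// Minimal CUDA sketch (illustrative only): measure effective memory bandwidth
// with a memory-bound "triad" kernel, y[i] = a*x[i] + y[i].
#include <cstdio>
#include <cuda_runtime.h>

__global__ void triad(double a, const double* x, double* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 26;                    // ~67 million doubles per vector
    const size_t bytes = n * sizeof(double);

    double *x, *y;
    cudaMalloc(&x, bytes);
    cudaMalloc(&y, bytes);
    cudaMemset(x, 0, bytes);
    cudaMemset(y, 0, bytes);

    dim3 block(256), grid((n + 255) / 256);
    triad<<<grid, block>>>(2.0, x, y, n);     // warm-up launch
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    triad<<<grid, block>>>(2.0, x, y, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    // The kernel reads x and y and writes y: 3 * bytes moved in total.
    double gbps = 3.0 * bytes / (ms * 1.0e-3) / 1.0e9;
    printf("Effective bandwidth: %.1f GB/s\n", gbps);

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

On a memory-bound kernel like this, the reported figure approaches the card’s peak bandwidth, which is why the roughly 10x bandwidth gap between GPUs and CPUs translates almost directly into simulator throughput.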

Compute Density

Compute density refers to the physical hardware required to achieve a given level of performance on a particular model. Using bandwidth as a proxy for compute capability, we can compare the compute density of GPU and CPU systems. A modern GPU server node can house 8 NVIDIA Volta cards, each with about 900 GB/s of memory bandwidth, for a total of 7.2 TB/s in the compute node. By comparison, the two CPUs in a node offer about 200 GB/s, so matching the 7.2 TB/s of GPU bandwidth would require 36 nodes, or two racks of computers. This density matters in real calculations. For example, in 2017 Stone Ridge and IBM demonstrated ECHELON’s performance by running a billion-cell model on just 30 nodes in 90 minutes; other companies that have reported billion-cell calculations have typically used thousands of nodes.
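
As a back-of-envelope illustration of this density argument, the hypothetical CUDA snippet below queries each GPU’s memory clock and bus width to estimate its peak bandwidth, sums the result over the GPUs in one server, and reports how many CPU nodes of about 200 GB/s would be needed to match it. The 200 GB/s figure is the per-node CPU bandwidth quoted above; the bandwidth formula (2 x memory clock x bus width / 8) is the standard DDR estimate, not a measured value.

```cpp
// Sketch: estimate aggregate GPU memory bandwidth in one server and the
// number of ~200 GB/s CPU nodes needed to match it (illustrative only).
#include <cmath>
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    double totalGBs = 0.0;
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // Peak bandwidth (GB/s) = 2 (DDR) * memory clock (kHz -> Hz) * bus width (bits) / 8
        double gbps = 2.0 * prop.memoryClockRate * 1e3 *
                      (prop.memoryBusWidth / 8.0) / 1e9;
        totalGBs += gbps;
        printf("GPU %d (%s): ~%.0f GB/s\n", d, prop.name, gbps);
    }

    const double cpuNodeGBs = 200.0;   // per-node CPU bandwidth assumed in the text
    printf("Server total: ~%.1f TB/s, equivalent to ~%.0f CPU nodes\n",
           totalGBs / 1e3, std::ceil(totalGBs / cpuNodeGBs));
    return 0;
}
```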

Scalability

Scalability is the ability to run large models efficiently, without significant performance loss. Here again ECHELON outperforms traditional CPU solutions. Because CPU solutions must find bandwidth somewhere to achieve performance, they scavenge it from many nodes in a cluster, hence the use of thousands of nodes for billion-cell calculations. In the example above, where we matched the memory bandwidth of a single GPU node, the 36 CPU nodes would have between 16 and 32 cores each, for a total of between 576 and 1,152 cores. To accomplish the calculation, the reservoir must be divided into that many small domains, each of which has to communicate with its neighbors. The GPU solution, by comparison, would have just 8 domains, one for each GPU in the server node. Thus we have 8 domains on the GPU versus roughly 1,000 domains on CPUs solving the same problem. More domains mean more communication between domains and thus lower efficiency, so GPUs enjoy a two-order-of-magnitude advantage in domain count. There is an additional, more subtle point as well: the most efficient linear solver algorithms for reservoir simulation, e.g. algebraic multigrid (AMG), are difficult to parallelize, so they work more efficiently on GPUs, where there are two orders of magnitude fewer domains.
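
The sketch below makes the domain-count argument concrete with simple geometry, assuming (purely for illustration) a cubic billion-cell model split into equal cubic subdomains: it compares the number of halo cells each decomposition must exchange for 8 versus roughly 1,000 domains.

```cpp
// Sketch: compare communication volume (halo cells) for a cubic model of
// one billion cells split into 8 vs ~1,000 cubic subdomains (illustrative).
#include <cmath>
#include <cstdio>

// Total halo cells if 'cells' grid cells are split into 'domains' equal cubes.
// Each cube of side s exposes roughly 6*s^2 boundary cells to its neighbours.
static double haloCells(double cells, double domains) {
    double perDomain = cells / domains;
    double side = std::cbrt(perDomain);
    return domains * 6.0 * side * side;
}

int main() {
    const double cells = 1e9;                       // billion-cell model
    for (double domains : {8.0, 1000.0}) {
        double halo = haloCells(cells, domains);
        printf("%4.0f domains: ~%.0fM cells per domain, ~%.0fM halo cells (%.1f%% of model)\n",
               domains, cells / domains / 1e6, halo / 1e6, 100.0 * halo / cells);
    }
    return 0;
}
```

Under these assumptions, the 8-domain decomposition exchanges on the order of 1% of the model across domain boundaries, while the ~1,000-domain decomposition exchanges several times more, which is the communication penalty described above.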

Revolutionizing current methods

Today’s energy companies face increasingly complex and critical issues that demand a more detailed understanding of the subsurface, driven by the higher stakes of deep-ocean drilling, the increasing complexity of unconventional reservoirs, and the computational requirements of ensemble methodologies. This creates strong demand for ultra-fast, high-resolution reservoir simulation. The massively parallel GPU hardware, together with ECHELON’s careful implementation, allows scalable simulation up to billions of cells, at speeds that make it practical to simulate hundreds of ensemble realizations of large, complex models while using far fewer hardware resources than CPU-based solutions. Reservoir engineers can develop more accurate, robust, and predictive models when the time required for each iteration cycle is reduced by an order of magnitude or more. More detailed, higher-resolution models give engineers and managers a better understanding of the subsurface and the ability to make more informed decisions about how to optimize production.

The process of studying the subsurface begins with geologists building a model of what they believe the subsurface looks like and how hydrocarbons are distributed throughout it. Geologists use data from several sources, such as maps, seismic imaging, and measurements from existing wells, to create a very detailed model of the subsurface. Companies spend large amounts of money obtaining this data, and building the model usually takes several months. Unfortunately, much of this detail is usually discarded in subsequent stages of processing.

Once the geologic model is created, reservoir engineers use simulation to predict and optimize the production of hydrocarbons over time, a process that also usually takes several months. Unfortunately, because of the limitations of available simulation technology, much of the detail that the geologists paid for and captured is removed by reservoir engineers in order to achieve reasonable run times and limit the time spent waiting for simulations to finish.

The need for a fast, scalable reservoir simulator has grown in recent years as energy companies rely more heavily on compute-intensive workflows in their business decision making. One of these workflows is uncertainty quantification for prediction, which requires engineers to simulate hundreds to thousands of scenarios, each assigned a probability. It is customary in the industry to describe this uncertainty in terms of low (P90), medium (P50), and high (P10) outcomes. Because of the heavy computing power and time these studies require, engineers previously had to shorten projects by limiting the number of runs and/or the detail used in the study, which lowers the accuracy of the results and the confidence a company has in using them for business decisions.
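
As a minimal illustration of how such an ensemble is summarized, the hypothetical snippet below takes the simulated outcome (for example, cumulative production) from each realization and reports the P90, P50, and P10 values, i.e. the outcomes that 90%, 50%, and 10% of the realizations exceed. The function name and the nearest-rank percentile rule are illustrative choices, not ECHELON’s method.

```cpp
// Sketch: summarize an ensemble of simulated outcomes as P90/P50/P10,
// where Pxx is the value exceeded by xx% of the realizations (illustrative).
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// Nearest-rank estimate of the value exceeded with probability 'p' (0..1).
double exceedanceValue(std::vector<double> outcomes, double p) {
    std::sort(outcomes.begin(), outcomes.end());          // ascending
    std::size_t idx =
        static_cast<std::size_t>((1.0 - p) * (outcomes.size() - 1) + 0.5);
    return outcomes[idx];
}

int main() {
    // Placeholder ensemble: one outcome per realization (e.g. cumulative oil).
    std::vector<double> outcomes = {92.1, 104.7, 88.3, 110.2, 97.5,
                                    101.9, 95.0, 99.4, 106.8, 90.6};
    printf("P90 (low):    %.1f\n", exceedanceValue(outcomes, 0.90));
    printf("P50 (medium): %.1f\n", exceedanceValue(outcomes, 0.50));
    printf("P10 (high):   %.1f\n", exceedanceValue(outcomes, 0.10));
    return 0;
}
```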

Another compute-intensive workflow used in the industry is history matching. History matching involves calibrating the simulation model against the observed data associated with an asset. This is achieved by varying the parameters of the model over many simulations until the output of the simulation matches the historical production data available. Like uncertainty quantification, history matching requires engineers to run hundreds to thousands of simulations, each taking anywhere from a few minutes to several hours. Engineers are often forced to adopt workarounds that reduce the scope and accuracy of the project in order to complete it in a reasonable timeframe.
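
The sketch below shows the shape of such a history-matching loop in schematic form: candidate parameter sets are run through the simulator and scored by how closely the simulated production matches the observed history, and the best-scoring candidate is kept. The runSimulation stub and the sum-of-squares misfit are hypothetical placeholders standing in for a real simulator and whatever objective function a given workflow uses.

```cpp
// Schematic history-matching loop (illustrative only): score candidate
// parameter sets by their misfit against observed production history.
#include <cfloat>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Params { double permMultiplier; double aquiferStrength; };

// Placeholder for a real reservoir simulator: returns simulated production
// rates at the same time steps as the observed history.
std::vector<double> runSimulation(const Params& p, std::size_t steps) {
    std::vector<double> rates(steps);
    for (std::size_t t = 0; t < steps; ++t)
        rates[t] = 1000.0 * p.permMultiplier / (1.0 + 0.05 * t * p.aquiferStrength);
    return rates;
}

// Sum-of-squared-residuals misfit between simulated and observed rates.
double misfit(const std::vector<double>& sim, const std::vector<double>& obs) {
    double m = 0.0;
    for (std::size_t t = 0; t < obs.size(); ++t) {
        double r = sim[t] - obs[t];
        m += r * r;
    }
    return m;
}

int main() {
    std::vector<double> observed = {980.0, 930.0, 890.0, 850.0, 815.0};

    // Candidate parameter sets; a real study would use hundreds to thousands.
    std::vector<Params> candidates = {{0.9, 0.8}, {1.0, 1.0}, {1.1, 1.2}};

    double bestMisfit = DBL_MAX;
    Params best = candidates[0];
    for (const Params& p : candidates) {
        double m = misfit(runSimulation(p, observed.size()), observed);
        if (m < bestMisfit) { bestMisfit = m; best = p; }
    }
    printf("Best match: permMultiplier=%.2f aquiferStrength=%.2f (misfit %.1f)\n",
           best.permMultiplier, best.aquiferStrength, bestMisfit);
    return 0;
}
```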

Summary

The use of a fast, scalable reservoir simulator allows these studies to be completed much more efficiently while preserving the geologic detail that makes them more accurate. Energy companies can therefore be more confident in the data behind their business decisions while also saving costs by using fewer resources for the studies. The calculation done by Eni shows how energy companies can take advantage of these emerging technologies to efficiently produce large amounts of data for their decision-making process, allowing them to build a more accurate description of the subsurface and giving them a strategic advantage in acquisitions and in their positioning within the industry. GPU technology is expected to continue to advance at a rapid pace, while the reservoir analysis market is expected to grow at a compound annual growth rate of 4% through 2022.


Author
Brad Tolbert

Brad Tolbert heads up the Sales and Marketing team at Stone Ridge Technology.
