Meet Ken Esler, Stone Ridge Technology's CTO
Posted in: Employee Spotlight
Ken Esler, our CTO, joined SRT in 2010 and is based at our headquarters in Maryland. Ken holds a degree in physics from MIT and a Ph.D. in physics from UIUC, where he also did his postdoctoral research.
Emily:
After graduating from the University of Illinois Urbana-Champaign with a Ph.D. in physics, you joined a start-up called Stone Ridge Technology. What inspired you to work for an energy/tech start-up?
Ken:
During my graduate and postdoctoral work, I had the opportunity to develop two different scientific computing applications and to run them on some of the largest supercomputers in the world. I found I really enjoyed this aspect of my work, but part of me also longed to develop applications with more direct real-world impact. When Dr. Natoli reached out to offer me the opportunity to apply the skills I had developed to engineering applications, I jumped at the chance. There was additional serendipity in that the change also allowed me to relocate to Maryland, where both my parents and my wife’s parents live. At the time, I only wished Vin had called half an hour earlier; I was literally walking out of the appointment at which I had just renewed my apartment lease when he called.
Emily:
What did you do for your Ph.D., and how did that topic shape your interests and, later, your postdoc?
Ken:
During my Ph.D. studies, I worked on developing methods to compute the properties of materials from first principles, i.e., starting only from the atomic number of each atom in the material and solving the equations governing its constituent particles at the quantum level to compute its macroscopic properties. It’s believed that the amount of computation needed to solve these equations directly grows exponentially with the number of particles, which means that clever approximations need to be employed, along with supercomputers, to enable application to real materials. During this time, I found that I really enjoyed developing faster algorithms and implementing them to run efficiently on the best machines.
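To make that exponential scaling concrete, here is a back-of-the-envelope count (my illustration, with made-up numbers, not a figure from the interview): storing the full quantum state of N particles, each with d accessible local states, requires d^N amplitudes.

```latex
% Exponential growth of the many-body state space (illustrative numbers):
\dim \mathcal{H} = d^{N}, \qquad
d = 2,\; N = 100 \;\Rightarrow\; 2^{100} \approx 1.3 \times 10^{30}
\text{ amplitudes, far beyond any conceivable memory.}
```

Hence the need for the clever approximations Ken mentions.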
During my postdoctoral research, I continued in a related area of study, but with applications to solid-state materials. During this time, I first became aware of intriguing research into the use of GPUs for scientific computing. This was around 2007, when CUDA was in its infancy. I was at first a bit skeptical, but decided to give it a shot. My first experiments were very encouraging, which led me to continue and eventually port the application we were developing to CUDA. This yielded about an order of magnitude in speedup, and I’ve been hooked ever since.
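For a flavor of what such a port looks like in miniature, here is a minimal, hypothetical sketch of the data-parallel pattern behind many early scientific CUDA codes; it is not code from the application Ken ported, just the canonical one-thread-per-element idiom:

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Illustrative kernel: each thread updates one element of a large array.
// This one-thread-per-element mapping is the pattern behind many early
// scientific GPU ports.
__global__ void axpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // y <- a*x + y
}

int main() {
    const int n = 1 << 20;  // ~one million elements
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    // Allocate device buffers and copy the inputs over.
    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int block = 256;
    axpy<<<(n + block - 1) / block, block>>>(n, 3.0f, dx, dy);

    // Copy the result back and spot-check it.
    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("y[0] = %f (expected 5.0)\n", hy[0]);

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

The order-of-magnitude gains come from running many thousands of such threads concurrently; real ports also involve reworking data layouts and memory access patterns, but this basic mapping of one thread to one unit of work is the starting point.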
Emily:
What are you currently working on?
Ken:
For the past several months, I have been working with Leonardo Patacchini (SRT) and Paola Panfili (Eni) on adding to ECHELON the features needed to simulate subsurface CO2 storage. This includes extending our compositional formulation with a multiphase flash capability that allows a full compositional description of the aqueous phase. We plan to develop additional features next year in support of CCS modeling.
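For readers unfamiliar with flash calculations, here is a deliberately simplified sketch (not ECHELON code, and with made-up compositions and K-values) of the classic two-phase building block: solving the Rachford-Rice equation for the vapor fraction V, given feed mole fractions z_i and equilibrium ratios K_i.

```cuda
#include <cstdio>
#include <vector>

// Rachford-Rice residual for a two-phase (vapor/liquid) flash:
//   f(V) = sum_i z_i * (K_i - 1) / (1 + V * (K_i - 1))
// The equilibrium vapor fraction V is the root of f(V) = 0, and f is
// monotonically decreasing in V, so bisection is safe.
double rachford_rice(double V, const std::vector<double>& z,
                     const std::vector<double>& K) {
    double f = 0.0;
    for (size_t i = 0; i < z.size(); ++i)
        f += z[i] * (K[i] - 1.0) / (1.0 + V * (K[i] - 1.0));
    return f;
}

int main() {
    // Hypothetical three-component feed; values are made up for illustration.
    std::vector<double> z = {0.5, 0.3, 0.2};  // feed mole fractions (sum to 1)
    std::vector<double> K = {4.0, 1.5, 0.2};  // equilibrium ratios y_i / x_i

    // For this feed f(0) > 0 and f(1) < 0, so a root lies in (0, 1).
    double lo = 0.0, hi = 1.0;
    for (int it = 0; it < 60; ++it) {
        double mid = 0.5 * (lo + hi);
        (rachford_rice(mid, z, K) > 0.0 ? lo : hi) = mid;
    }
    double V = 0.5 * (lo + hi);
    std::printf("vapor fraction V = %.6f\n", V);

    // Phase compositions then follow directly:
    //   x_i = z_i / (1 + V * (K_i - 1)),  y_i = K_i * x_i.
    return 0;
}
```

A multiphase flash of the kind Ken describes generalizes this root-finding problem to additional phases, including the aqueous one, with equilibrium ratios that themselves depend on pressure, temperature, and composition, which is where most of the real difficulty lies.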
Emily:
What are some of the biggest challenges you face helping customers?
Ken:
Well, there’s... no, wait, I’m not allowed to talk about that. And then there’s... no, we can’t discuss that either. Joking aside, we usually can’t discuss specific technical challenges without violating non-disclosure agreements, but in general terms, our customers have approached us when their performance requirements have outgrown their current solutions. Whether they are making the jump to ensemble-based modeling as a matter of standard practice, moving models from black-oil to compositional to improve the fluid description or handle a new recovery process, seeking more resolution in their models, simply trying to reduce expenditure on hardware infrastructure and software licensing, or some combination of these, we believe we have built a very compelling solution.
Emily:
What really motivates you about Computational Sciences?
Ken:
There is an extraordinary beauty and elegance in the connection between mathematics and the mechanics of the physical world expressed in governing equations. While this connection has been understood for centuries, digital computing has made it possible to solve those equations to make reliable, quantitative predictions that have practical value. From the design of aircraft to medical imaging, computational science has allowed us to take this connection between math and physics beyond general principles to detailed application.
In the context of reservoir simulation, the computer has become a tool to “see” inside the Earth – to gain a detailed understanding of the dynamic interaction of the fluids in the subsurface. It provides a way to perform experiments, identifying opportunities and pitfalls, all with nearly immediate feedback. I find the process of building and improving this tool – adding features, improving performance, increasing usability – to be very gratifying.
Emily:
What is your take on some of the largest recent changes in tech (AI, Machine Learning, Big Data)?
Ken:
“Better to remain silent and risk being thought a fool than to speak and remove all doubt.”
Regarding machine learning, I am willing to say very little. Clearly, ML has had a transformational effect on many aspects of technology. In our field of reservoir modeling, I tend to be a little skeptical of a purely data-driven approach to forecasting in a general context, simply because sufficiently broad data covering the range of potential subsurface conditions in a given reservoir is usually unavailable. Furthermore, such approaches ignore the beautiful and elegant connection between physics and mathematics I mentioned above. Even if these methods could produce reliable predictions of future production, the black-box nature of the prediction may fail to yield the physical insight needed to understand the subsurface dynamics and make optimal development decisions. That said, physics-informed neural networks (PINNs), which constitute a sort of hybrid between conventional simulation and ML, may prove interesting, and other approaches that combine machine learning with conventional simulation may also prove very effective.
Emily:
Are you willing to make any guesses on the next big disruption?
Ken:
Regarding other disruptive technologies, I can say even less. I have no crystal ball, but I think we can recognize the trends that govern the near term and the constraints that underlie them. Around 2004, CPU clock speeds reached a plateau after decades of steady increases. Transistors continued getting smaller, but power dissipation prevented processors from running at higher frequencies. The industry soon realized that increases in performance would have to come primarily through increased parallelism combined with better algorithms. New computing architectures emerged, with GPUs at the center of the massively parallel era of HPC. While clock speeds have been almost unchanged for the last 15 years, transistors have continued to shrink, increasing core counts and allowing codes designed for this massive parallelism to continue to grow in performance.
There is a limit, however, to how small transistors can be made, and we are rapidly approaching it. We have a few more generations of process technology before we hit some fairly fundamental limits. At that point, I believe we will start to see more and more special-purpose hardware developed to increase performance, but only in areas with a market large enough to justify the engineering effort. For example, one could imagine a bespoke processor designed specifically to run reservoir simulation. It could easily run an order of magnitude faster than a software implementation, but it would cost at least hundreds of millions of dollars to design and manufacture, and the community of users is too small to amortize that cost. The technology is available, but the economics are not.
Once in a while, however, a technology developed for a market with sufficient economies of scale turns out to be applicable to one without. Such serendipity occurred when it was discovered that processors developed for gaming graphics could be harnessed to accelerate science and engineering applications. SRT was among the first companies in any field to jump into GPU computing with both feet, and we’re always on the lookout for the next advance.
Emily:
I always ask our team this at the end of the spotlight interviews, what is on your desk right now?
Ken:
How much space do you have to dedicate to this list? I’ll keep it to a random sampling of the current contents: laptop, keyboard, mouse, monitor, printouts of several articles, a phone, my passport, an expired smoke detector, an empty tape dispenser, some white sheets with scribbled equations, and a Post-It note from my youngest daughter reading, “I love you U Dab.” We’re still working on d’s vs. b’s.
Emily Fox
Emily Fox is Stone Ridge Technology's Director of Communication.