Theoretical Chemistry, Real-World Applications: Leveraging HPE High Performance Computing in Research Simulations
For decades, scientists and researchers have been trying to build a machine that can do everything from computing highly complex biomedical research simulations to flawlessly encrypting data to accurately modeling climate change. Today, however, only elements of true quantum computing have made it into practice.
Instead, the vast majority of calculations are carried out by scientists and researchers dedicated to the trial-and-error methodology at the heart of the scientific process. However many setbacks occur and new pathways emerge, the ultimate goal remains the same: to discover truths about how the world works.
And so the cyclical process of hypothesis, experimentation, analysis, publication, and critique continues. It's how we discover and share humankind's biggest scientific breakthroughs, and it's exactly what our research team does at the Institute of Theoretical Chemistry at the University of Vienna in Austria.
Supercomputing Simulations
Our work centers on photochemistry: how molecules interact with light. One current highlight is a postdoc project on photodynamic therapy, a technique studied in cancer treatment research. Ideally, when a patient receives the photoactive substance, it is distributed throughout the body but activated only by light. So, for example, if someone has a tumor, that mass is targeted with light from a laser or other irradiation.
In theory, the activated substance kills the cancer cells and does nothing to the rest of the body, which would be an amazing improvement over existing anticancer drugs. Our postdoc project is trying to find new or modified substances for photodynamic therapy that deliver better performance with fewer side effects.
As a senior scientist at the institute, I am responsible for listening to what the research team needs and then creating the best possible infrastructure for them to do their best work. But our team's workloads are different from those typically run at a big supercomputing center.
At a data center with thousands upon thousands of CPU cores, a researcher whose workload can use all of them usually finishes in a couple of hours. Our team cannot scale out across that many cores because our programs, codes, and algorithms usually don't parallelize well. We do, however, need a great deal of memory for our simulations.
Interruptions can also be a problem. Smaller workloads take only a few hours to run, which allows us to vary the parameters multiple times and experiment further. But sometimes we have workloads that run anywhere from four to six weeks, and I need to make sure that nothing happens to the machine during that time, such as a reboot for maintenance.
If a program is interrupted mid-simulation, the calculation is disrupted and the results are compromised. With larger workloads, there is very little margin for error. The researcher must be sure that all inputs are correct before starting the simulation, and the machines must not fail, or a great deal of time and resources is lost.
Theoretical quantum chemistry drives these programs. Without the right computing power and hardware to support our research, it would take far longer to reach discoveries. And because practical quantum computing is still a long way off, we have to meet our research challenges with the technology already available on the market.
Whether we are examining the photostability of our DNA or artificial photosynthesis, our research involves massive calculations and requires a staggering amount of memory. Because it is conducted at the quantum mechanical level, the work is intensive and complex. It is the type of work we can only carry out using HPE High Performance Computing solutions.
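To give a rough sense of why memory, rather than core count, is the limiting factor, here is a minimal illustrative sketch in Python (not our production code). It estimates how much memory a single dense Hamiltonian-like matrix occupies at a few hypothetical sizes and then diagonalizes a small one with NumPy; the matrix dimensions and the NumPy eigensolver are assumptions chosen purely for illustration.

# Illustrative sketch only -- not the institute's production code.
# Rough estimate of the memory one dense quantum-chemistry matrix needs,
# plus a small serial diagonalization to show the memory-bound pattern.
import numpy as np

def dense_matrix_memory_gib(dimension: int, bytes_per_element: int = 8) -> float:
    """Memory (GiB) needed to store one dense N x N matrix of double-precision floats."""
    return dimension**2 * bytes_per_element / 2**30

# Hypothetical basis sizes; real calculations derive these from the molecule and method.
for n in (10_000, 50_000, 100_000):
    print(f"N = {n:>7,}: ~{dense_matrix_memory_gib(n):8.1f} GiB for a single matrix")

# Dense diagonalization is essentially one large, memory-hungry step: it cannot be
# spread across thousands of nodes the way embarrassingly parallel jobs can.
n_demo = 2_000                        # small demo size so the example runs anywhere
h = np.random.rand(n_demo, n_demo)
h = (h + h.T) / 2                     # symmetrize, as a Hamiltonian matrix would be
eigenvalues = np.linalg.eigvalsh(h)   # serial (or lightly threaded) eigensolver
print("Lowest eigenvalue of the demo matrix:", eigenvalues[0])

The point is simply that one such matrix can already demand tens of gigabytes on a single machine, which is why a large shared-memory system matters more to us than a huge core count.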
An Inheritance That Pays Dividends: My 20-Year Relationship with HPE
I started my university career as a 20-year-old student. I've worked at universities ever since, and I've worked with HPE for almost as long. I inherited HPE hardware when I worked at a university in Berlin, and when our research group moved to the University of Vienna in 2011, we brought not only a team of researchers but our hardware as well. I would say my more than 20-year relationship with HPE is just like a marriage: it has its ups and downs, but I would never consider looking elsewhere.
I know the ins and outs of what's on the market, the companies, and the hardware they offer. HPE is one of the market leaders in this space, if not the leader, and it is one of the few companies offering the particular hardware we need to run our simulations reliably. The competitors all sell the same standard machines; in the end, it's the same hardware with the same memory and the same CPU power.
Due to several strategic acquisitions over the years, HPE’s comprehensive portfolio is second to none. I like the fact that if my researchers come to me with a request, I can almost certainly meet their needs with an existing HPE HPC solution.
We are currently using the HPE Apollo 6000 Gen9 system and the HPE Superdome Flex server, which runs both HPC and artificial intelligence (AI) workloads. We use the Superdome Flex in particular for its huge amount of memory and its extra flexibility, which lets it handle memory-hungry projects while also accommodating projects that require a smaller computational lift. The Superdome Flex gives us a favorable CPU cost for a single machine with a great deal of memory. It's robust, cost-effective, and flexible, so we often run 10 to 20 small simulations on it at the same time.
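As a simplified illustration of that usage pattern, the sketch below runs a batch of independent parameter sets side by side on one machine using Python's multiprocessing module. The simulate function, its parameters, and the worker count are hypothetical placeholders rather than our actual codes; the point is only how many small, independent jobs can share one large-memory system.

# Illustrative sketch: many small, independent simulations sharing one
# large-memory machine. The simulate() function and its parameters are
# hypothetical placeholders, not the institute's actual codes.
from multiprocessing import Pool

def simulate(params: dict) -> dict:
    """Stand-in for one small photochemistry simulation."""
    # A real job would set up a molecule, run the calculation, and return results.
    excitation_energy = 1.5 + 0.01 * params["substituent_id"]  # dummy value
    return {"substituent_id": params["substituent_id"],
            "excitation_energy_eV": excitation_energy}

if __name__ == "__main__":
    # Ten to twenty independent parameter sets, each a separate small job.
    parameter_sets = [{"substituent_id": i} for i in range(15)]

    # One worker process per job; all of them draw on the machine's shared memory.
    with Pool(processes=len(parameter_sets)) as pool:
        results = pool.map(simulate, parameter_sets)

    for result in results:
        print(result)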
We also maintain standard HPE hardware service for our standard jobs. While we could, in theory, replace our standard HPE hardware with something from a different vendor, at the end of the day I like that everything comes from the same company. You have the same management tools, you understand how the solutions work, and you know how to navigate support when you need assistance.
We use a mixture of open-source code and HPE tools. The open-source tools help us keep costs down. Despite our limited budget, it's important for us to have a specialized hardware setup. At a big supercomputing site, by contrast, there are always restrictions on run time and memory, among other things. Our own hardware complements what those supercomputing centers provide.
How HPE Hardware Keeps Us Competitive
As a publicly funded university, we do fundamental research. Our goal is to understand the chemistry, and our desired business outcome is to publish our findings. We generate knowledge and understanding of nature, and we share that knowledge with our peers, interested industry parties, and the public.
Our area of research is highly competitive, and even though many of my colleagues in the field are my friends, we still vie for grants and private funding in addition to trying to produce results quickly and get published in peer-reviewed journals. HPE technology helps us stay competitive in our field. If we didn’t have the right resources to enable our research, we wouldn’t be able to further explore photostability and photosynthesis.
We recently had a group seminar with a master's student who presented the calculations she had done over the previous month. She was essentially trying to repeat calculations a Ph.D. student had carried out 10 years ago. Back then, it took our former student three years, the entirety of her Ph.D. program, to crunch the numbers and get results. Now, this master's student could repeat the same research and get her results within a couple of months, because today we have more computational power. We can go further and be more precise. That progress is what makes working in a university environment exciting, and I take great pleasure in imagining what the next generation of researchers will be able to achieve.
Yet, even with the biggest computers in the world, we are still not able to solve some current problems. We might know a solution in theory, and our colleagues from the physics department can give us the equation, but we cannot solve it simply because we lack the computational power to do so. But hopefully, that will not always be the case.
Investing in Research Today Means Quantum Leaps Tomorrow
A public university's budget is significantly limited compared with those of our corporate counterparts, so leveraging HPE HPC solutions ensures that I can make the best computing resources available to our team today and into the future. This is especially relevant because we are not able to replace our hardware frequently. Our machines must be robust enough to help us carry out simulations that we've barely begun to explore.
Both research and technology are fast-paced fields. The possibilities of AI and deep learning were hard to imagine a decade ago, yet they are now foundational to the research being conducted today. Our computing capability keeps doubling at a steady pace; that is the hallmark of Moore's Law. And who knows: if quantum computing becomes a reality, we'll be able to do simulations we can only dream of today.
But until then, HPE is helping our team navigate the challenges of a memory-intensive, compute-heavy environment.